
🤖🔗⬆️✅ 12-Factor Agents: Patterns of reliable LLM applications — Dex Horthy, HumanLayer

πŸ“πŸ’ Human Notes

  • 🤏 Focused prompt → ✅ quality response
  • 🤖 Strength: 🗣️ natural language → ⚙️ JSON
  • 🌍 Context Engineering is everything
    • ✍️ Prompt
    • 🧠 Memory
    • 📚 RAG
    • ⏳ History
    • 🧱 Structured Output
  • 🎯 Prefer small, focused agents
  • ➡️ Agents should be stateless
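
The "natural language → JSON" strength noted above can be sketched in a few lines. This is a minimal illustration, not from the talk: `call_llm` is a hypothetical stub standing in for any real chat-completion API, and the key names are made up.

```python
import json

def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned JSON reply here.
    return '{"intent": "deploy", "target": "backend", "version": "1.2.3"}'

def extract_structured(utterance: str) -> dict:
    """Turn a natural-language request into validated JSON."""
    prompt = (
        'Extract the user\'s intent as JSON with keys "intent", "target", "version".\n'
        f"User: {utterance}"
    )
    raw = call_llm(prompt)
    data = json.loads(raw)  # fail fast if the model returns malformed JSON
    missing = {"intent", "target", "version"} - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

print(extract_structured("please ship backend version 1.2.3"))
```

Validating the parsed keys up front is what turns the model's free-form text into a contract the rest of the program can rely on.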

🤖 AI Summary

The video discusses several challenges and issues related to building reliable AI agents 🤖 and LLM applications, drawing parallels to traditional software engineering principles.

  • ⚠️ Difficulty in achieving high quality with agents [00:45]: It's challenging to get agents beyond 70-80% functionality 📈; pushing further often requires deep dives 🤿 into call stacks and prompt engineering.
  • ⚙️ Over-engineering with agents [01:00]: Not every problem requires an agent 🤖; some can be solved with simpler scripts 📝.
  • 🎭 Lack of "agentic" behavior in production agents [01:54]: Many production agents function more like traditional software 💻 than truly "agentic" systems.
  • ⏳ Challenges with long context windows [02:27]: The reliability and quality of results 📉 decrease significantly as LLM context windows grow.
  • ⚠️ Tool use being "harmful" (in a specific context) [04:26]: Treating "tool use" as a magical interaction ✨ makes it harder to reason about; it should be viewed as an LLM outputting JSON 💻 that deterministic code then processes.
  • 🔁 Naive agent loop limitations [06:21]: Simple agent loops don't work well for longer workflows 📉 due to context window issues.
  • 🐛 Blindly adding errors to context [10:54]: Appending full error messages ⚠️ or stack traces to the context can cause the agent to spin out or get stuck.
  • 🤔 Avoiding the choice between tool call and human message [12:25]: Builders often avoid deciding whether an agent's output should be a tool call or a message 💬 to a human, leading to less effective interactions.
  • 🖱️ Users needing to open multiple tabs for agents [12:13]: The current user experience often requires interacting with different agents across various tabs 📑, highlighting the need for agents to be reachable through common communication channels 💬.
  • 🏗️ Frameworks abstracting away the hard AI parts [15:48]: Current frameworks often hide the complex AI aspects of agent building 🧱; instead they should handle the other hard parts and let developers focus on the critical AI elements.
