On June 17, Temporal hosted a webinar titled Natural Language as Code: Build AI Agents with Nothing But Prose, given by yours truly. In it, we explored the role that natural language plays in the construction of AI agents (spoiler: it’s huge!), with a particular emphasis on how the descriptions you provide as part of tool definitions must align with the language used to describe an agent’s goal. We then demoed creating a new agent with only English, leveraging the Temporal sample agent framework that y’all have been hearing so much about. We even debugged our prose to get the behavior we actually wanted.
Thanks everyone for joining.
The recording of the session is available here.
And here are answers to the questions we didn’t get to before our time was up. Have any more? Please join us in the #topic-ai Slack channel.
Won’t it be very costly to have LLMs constantly being called [while the agent is running]?
The short answer is yes: it can definitely be costly, and you should weigh the cost (and other factors) against the value the LLM brings. In the webinar, we built a “hello world” app; in real life, you should definitely NOT use an LLM for that. As I said during the webinar, please do not use an LLM to add two numbers.
Are you building RAGs with these AI agents as well?
One of the key responsibilities of an AI agent is to provide all of the necessary context to the LLM so that it can generate the best answer within that context. To that end, the workflows of these AI agents, including the sample we used during the demo, align well with those for RAG. We were not, however, doing any vector embeddings of that content. You certainly could, though; in other words, you could use Temporal to build RAG pipelines and then include vector data stores as part of the context passed to the LLM.
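To make the context-assembly part of that concrete, here is a minimal, framework-free sketch. The function name and the simple character budget are mine, not from the sample framework; in a Temporal-based RAG pipeline, the vector-store query and this assembly step would each run inside activities so a failed retrieval can be retried without losing earlier progress.

```python
def build_rag_prompt(question: str, retrieved_docs: list[str], max_chars: int = 2000) -> str:
    """Assemble retrieved passages into a single prompt for the LLM."""
    context_parts: list[str] = []
    used = 0
    for doc in retrieved_docs:
        if used + len(doc) > max_chars:
            break  # stay within the model's context budget
        context_parts.append(doc)
        used += len(doc)
    context = "\n---\n".join(context_parts)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The LLM then answers within exactly the context the agent chose to provide, which is the whole point of the pattern.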
How can we use LangChain agents alongside Temporal? (See rachfop/langchain-temporal on GitHub, an integration between LangChain and Temporal.)
LangChain provides a set of abstractions that implement various parts of agentic AI applications, such as connectivity to various LLMs and sequencing of steps. Leveraging LangChain chat_models to access LLMs can be made more durable by doing so from within a Temporal activity. On the other hand, using LangChain to connect the steps in a pipeline does not provide the durability you would enjoy if you simply composed those steps in a Temporal workflow. In short, there are some valuable patterns you could implement and some less valuable ones. Keep an eye out for more content in this regard.
Can we see the workflow code?
The webinar demo leveraged the Temporal sample agent framework available in the temporal-community GitHub organization. The agentic loop is implemented as a Temporal workflow, and LLMs and remote tools are invoked through activities.
Are you planning to include more tools/examples using MCP servers as well?
The Temporal sample agent framework has support for MCP. Do have a look at the code and bring any further questions to the #topic-ai Slack channel.
I wonder if this will work with so-called small models like Qwen, which can be run locally?
Yes, absolutely. In fact, the sample agent framework includes support for running models locally via Ollama.
Can we use these agents within existing activities?
Best practices for agent-to-agent interactions are still undergoing significant change across the industry, and we ourselves are still exploring how they are best implemented using Temporal. That said, our work is currently driving us in the direction of agents implemented as workflows, invoking other agents as child workflows. Add Nexus into the mix and it gets even more interesting.
Stay tuned: we’ll share more as our conviction deepens. And if any of you are experimenting too, come share in the #topic-ai Slack channel.