I’ve read a lot on HN about how the BEAM execution model is perfect for AI. I think a crucial part that’s usually missing in LLM-focused libraries is the robustness story in the face of node failures, rolling deployments, etc. There’s a misconception about Elixir (demonstrated in one of the comments below) that it provides location transparency - it ain’t so. You can have the most robust OTP node, but if you keep an agent’s state inside a long-running process, it will go down when the node does.
Keeping clear, pure agent state between every API-call step goes a long way toward solving that - put it in Mnesia or Redis, and pick up on another node when the original is decommissioned. Checkpointing is the solution
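A minimal sketch of that idea - the `Checkpoint` module is hypothetical, and ETS stands in here for Mnesia or Redis; none of this is Jido's actual API:

```elixir
defmodule Checkpoint do
  # Hypothetical checkpoint store. The key idea: agent state is pure
  # data saved after every completed step, so a fresh process (possibly
  # on another node) can resume from it instead of starting over.

  def start do
    :ets.new(:checkpoints, [:named_table, :public, :set])
  end

  def save(agent_id, step, state) do
    :ets.insert(:checkpoints, {agent_id, step, state})
    :ok
  end

  def resume(agent_id) do
    case :ets.lookup(:checkpoints, agent_id) do
      [{^agent_id, step, state}] -> {:ok, step, state}
      [] -> :not_found
    end
  end
end
```

If the node hosting the agent dies mid-chain, a supervisor elsewhere calls `Checkpoint.resume/1` and replays from the last completed step rather than restarting the whole call chain.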
Just a heads up, some of your code samples seem to have an issue with entity escaping.
name: "my_agent",
description: "A simple agent",

https://web.archive.org/web/20260305161030/https://jido.run/
https://github.com/openai/symphony
I'm not very familiar with the space; I follow Elixir goings-on more than some of the AI stuff.
It is curious... and refreshing... to see Elixir & the BEAM popping up for these sorts of orchestration type workloads.
I’ve found the hardest part with agent frameworks isn’t model plumbing, it’s operational boundaries: how you isolate tools, enforce time/budget limits, and recover from partial failures when an agent call chain fans out.
BEAM’s supervision model feels like a genuinely strong fit for that, especially if each tool execution can be treated as a supervised unit with clear restart/escalation semantics. Curious whether you’ve seen teams default to many small specialized agents vs fewer general agents with stricter policies.
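As a rough illustration of that fit - plain OTP, not Jido's API; `ToolRunner` and the default timeout are made up for the sketch - each tool call can run as a supervised, isolated task with its own time budget:

```elixir
defmodule ToolRunner do
  # Hypothetical wrapper: each tool call runs as its own supervised task.
  # A crash or timeout kills only this task; the supervisor and the
  # calling agent process survive, and escalation policy stays explicit.

  def run(tool_fun, args, timeout_ms \\ 5_000) do
    {:ok, sup} = Task.Supervisor.start_link()

    # async_nolink: a crashing tool does not take down the caller
    task = Task.Supervisor.async_nolink(sup, fn -> apply(tool_fun, args) end)

    case Task.yield(task, timeout_ms) || Task.shutdown(task, :brutal_kill) do
      {:ok, result} -> {:ok, result}
      {:exit, reason} -> {:error, {:tool_crashed, reason}}
      nil -> {:error, :budget_exceeded}
    end
  end
end
```

For example, `ToolRunner.run(&String.upcase/1, ["hi"])` returns `{:ok, "HI"}`, while a tool that hangs past its budget comes back as `{:error, :budget_exceeded}` instead of blocking the whole fan-out.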
Agree on operational boundaries - it took a long time to land where we did with the 2.0 release
Too much to say about this in a comment, but take a look at the "Concepts: Executor" section - it digs into the model here
I just LLM-built an A2A package with a GenServer-like abstraction. However, I missed that there was already another A2A implementation for Elixir. Anyway, I decided to leave it up because the package semantics were different enough. Here it is if anyone is interested: https://github.com/actioncard/a2a-elixir
Congrats on the release!
The future is going to be wild
There’s a growing community showcase, and I have a list of private/commercial references as well, depending on your goals
Edit: for those not familiar with the BEAM ecosystem, observer shows all the running Erlang 'processes' (internal to the VM). Here are some example screenshots from one of the first Google hits I found:
https://fly.io/docs/elixir/advanced-guides/connect-observer-...
Teaser screenshot is here: https://x.com/mikehostetler/status/2025970863237972319
Agents, when wrapped with an AgentRuntime, are typically a single GenServer process. There are some exceptions if you need a larger topology.
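For anyone unfamiliar with what "a single GenServer process" means in practice, here's a bare-bones illustration in plain OTP - the `AgentServer` name and its shape are made up for the sketch, and Jido's real AgentRuntime is more involved:

```elixir
defmodule AgentServer do
  use GenServer
  # Illustration only: an "agent" as one GenServer holding pure state.
  # Each step is a synchronous call that returns the updated state.

  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def step(pid, input), do: GenServer.call(pid, {:step, input})

  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call({:step, input}, _from, state) do
    # Prepend each input to a :history list (creating it on first use)
    new_state = Map.update(state, :history, [input], &[input | &1])
    {:reply, new_state, new_state}
  end
end
```

Because the whole agent lives in one process, it shows up as exactly one entry in observer, with its mailbox and state inspectable like any other BEAM process.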
I was curious about the actual BEAM processes though, that you see via the observer application in Erlang/Elixir.
It's use-case specific though - security is a much bigger topic than just "agents in containers"
The point of Jido isn't to solve this directly - it's to give you the tools to solve it for your needs.
I used Claude to learn & refine the patterns, but it couldn’t write this level of OTP code at that time.
As models got better, I used them to find bugs and simplify - but the bones are roughly the same from that original design.
(Probably complementary but wanted to check)
https://hex.pm/packages/req_llm
ReqLLM is baked into the heart of Jido now - we don't support anything else
This agentic framework can co-exist with LangChain if that's what you're wondering.
As LLM APIs evolved, I needed more and built ReqLLM, which is now embedded deeply in Jido.
Sidian Sidekicks: Obsidian vault reviewer agents.
I think Jido will be perfect for us; it will help us organize and streamline our agent interactions, and make it clearer what is happening and which agent is doing what.
And on top of that, I get an excuse to include Elixir in this project.
Thanks for shipping.
What's old is now rebranded, reheated and new again.