I see the LLM not as the one giving direct commands, but as suggesting a path. An arbitration layer should always check whether that suggestion is safe, and if it isn’t, the system should fall back to a deterministic, well-tested behavior. That way you get flexibility without ever compromising safety.
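To make the idea concrete, here is a minimal sketch of that arbitration pattern, assuming a simple robotics-style control setting for illustration. All names here (Action, SAFE_ENVELOPE, deterministic_fallback, arbitrate) are hypothetical, not from any particular framework: the LLM only proposes an action, a deterministic check decides whether it stays inside a safety envelope, and anything outside falls back to a well-tested default.

```python
from dataclasses import dataclass

@dataclass
class Action:
    steering: float   # normalized [-1, 1]
    throttle: float   # normalized [0, 1]

# Hard limits enforced by the arbitration layer, independent of the LLM.
# (Hypothetical values chosen for illustration only.)
SAFE_ENVELOPE = {"steering": (-0.5, 0.5), "throttle": (0.0, 0.8)}

def is_safe(action: Action) -> bool:
    """Deterministic check: does the suggestion stay inside the envelope?"""
    lo_s, hi_s = SAFE_ENVELOPE["steering"]
    lo_t, hi_t = SAFE_ENVELOPE["throttle"]
    return lo_s <= action.steering <= hi_s and lo_t <= action.throttle <= hi_t

def deterministic_fallback() -> Action:
    """Well-tested default behavior: keep straight, slow down."""
    return Action(steering=0.0, throttle=0.1)

def arbitrate(llm_suggestion: Action) -> Action:
    """Pass the LLM's suggestion through only if it is provably safe."""
    return llm_suggestion if is_safe(llm_suggestion) else deterministic_fallback()

# An out-of-envelope suggestion never reaches the actuators:
print(arbitrate(Action(steering=0.9, throttle=0.4)))
# -> Action(steering=0.0, throttle=0.1)
```

The point of the design is that the LLM's output is treated as untrusted input: the only code path that can actually command the system is the deterministic arbiter, so the flexible component can be wrong without the system ever leaving its tested envelope.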