Great point; I believe safety has to be layered. The real challenge is deciding which agent is responsible for judging whether a command is safe to execute. For instance, MCP could enforce permissions, rate limits, and safe defaults, while the ROS stack could add motion constraints, watchdogs, and velocity/force caps, all backed by physical interlocks as the final safeguard.
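To make the layering concrete, here is a minimal sketch of two of those layers in Python. All names (`CommandGate`, `clamp_velocity`, the limits) are illustrative assumptions, not an actual MCP or ROS API: an MCP-side permission and rate-limit gate, followed by a ROS-side hard cap on commanded velocity.

```python
# Hypothetical sketch of two safety layers; names and limits are assumptions.
import time

class CommandGate:
    """MCP layer: permission check plus a simple rate limit."""

    def __init__(self, allowed_actions, max_per_second=5):
        self.allowed_actions = allowed_actions
        self.min_interval = 1.0 / max_per_second
        self.last_accepted = 0.0

    def permit(self, action):
        now = time.monotonic()
        if action not in self.allowed_actions:
            return False  # permission denied
        if now - self.last_accepted < self.min_interval:
            return False  # rate limit exceeded
        self.last_accepted = now
        return True

def clamp_velocity(v, v_max=0.5):
    """ROS layer: hard cap on commanded velocity (m/s), whatever the source."""
    return max(-v_max, min(v_max, v))
```

The point is that each layer enforces its invariant independently: even if the gate is bypassed or misconfigured, the velocity cap still holds, and the physical interlocks remain beneath both.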
I see the LLM not as the one giving direct commands, but as suggesting a path. An arbitration layer should always check whether that suggestion is safe, and if it isn’t, the system should fall back to a deterministic, well-tested behavior. That way you get flexibility without ever compromising safety.
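Here is a minimal sketch of that arbitration pattern, assuming a 2D workspace bound as the safety criterion. Everything here (`WORKSPACE`, `is_safe`, `fallback_path`, `arbitrate`) is a hypothetical illustration of the idea, not a real library:

```python
# Illustrative arbitration layer: validate the LLM's suggested path,
# fall back to a deterministic behavior if the check fails.
WORKSPACE = {"x": (0.0, 1.0), "y": (0.0, 1.0)}  # assumed safe bounds (m)

def is_safe(path):
    """Reject any waypoint outside the configured workspace."""
    return all(
        WORKSPACE["x"][0] <= x <= WORKSPACE["x"][1]
        and WORKSPACE["y"][0] <= y <= WORKSPACE["y"][1]
        for x, y in path
    )

def fallback_path(current_pose):
    """Deterministic, well-tested behavior: hold the current position."""
    return [current_pose]

def arbitrate(llm_suggested_path, current_pose):
    """Execute the suggestion only if it passes the safety check."""
    if is_safe(llm_suggested_path):
        return llm_suggested_path
    return fallback_path(current_pose)

# Example: a suggestion that leaves the workspace falls back to holding pose.
print(arbitrate([(0.2, 0.3), (1.5, 0.4)], (0.2, 0.3)))  # -> [(0.2, 0.3)]
```

The key design choice is that the checker and the fallback are both deterministic and testable, so the LLM's flexibility never widens the set of behaviors the system can actually execute.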