An AI acts and reasons through probabilistic methods, creating far more risk than a human with memory, emotions, and rational thinking.
We can’t trust AI with sensitive work because it consistently fails, with and without malicious intent. Whether the fault lies in attention mechanisms, reward hacking, instrumental convergence, or something else, these failure modes are very different from what causes most human mistakes.
If there's a mistake, you can't blame the computer. Which human is accountable at the end of it all? If there's liability, who pays for it?
That's where defining clear boundaries helps you design for your risk profile.
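As a minimal sketch of what a "clear boundary" might look like in practice (every name here, `ActionGate`, `SENSITIVE_ACTIONS`, is hypothetical, not a real agent framework API): sensitive actions are blocked unless a named human approves them, and every decision is logged so accountability is traceable to a person.

```python
# Hypothetical sketch of a boundary for an AI agent's actions.
# ActionGate and SENSITIVE_ACTIONS are illustrative assumptions, not a real API.

SENSITIVE_ACTIONS = {"delete_records", "send_payment", "modify_permissions"}

class ActionGate:
    """Blocks sensitive agent actions unless a named human approves them."""

    def __init__(self):
        # Records who approved what, so liability traces back to a human.
        self.audit_log = []

    def request(self, action, approved_by=None):
        if action in SENSITIVE_ACTIONS and approved_by is None:
            self.audit_log.append((action, "BLOCKED", None))
            return False  # the boundary: no human sign-off, no execution
        self.audit_log.append((action, "ALLOWED", approved_by))
        return True

gate = ActionGate()
print(gate.request("summarize_report"))                   # low-risk, allowed
print(gate.request("send_payment"))                       # sensitive, blocked
print(gate.request("send_payment", approved_by="alice"))  # a human is accountable
```

The design choice is the point: the gate doesn't make the agent smarter or safer by itself, it just guarantees there is always a human name attached to anything that can cause real damage.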
What happens if an AI agent you run causes serious damage? The best you can do is turn it off.