The real meaning of accountability is that you can fire an employee if you don't like how they work. Good news! You can fire an AI too.
It's similarly reasonable to drop a tool that's unreliable, though I don't think that's an accurate description here. Instead, they used a tool that is generally known to be unpredictable and failed to sandbox it adequately.
The cold hard fact is: LLMs are an unreliable tool, and using them without checking their every action is extremely foolish.
You mean checking every action they take outside the sandbox, I suppose? Otherwise I'd consider any attempt at letting an agent do work foolish.
At least for now.
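To make the point concrete: one common way to sandbox an agent is to gate every action it proposes through an allowlist before anything touches the real system. This is a minimal sketch of that idea, not any real agent framework's API; the command allowlist, the sandbox directory, and the function name are all assumptions for illustration.

```python
import os
import shlex
import subprocess

# Hypothetical allowlist: only these executables may run at all.
# Everything else the agent proposes is rejected, not executed.
SAFE_COMMANDS = {"ls", "cat", "grep", "echo"}

def run_agent_command(command: str, workdir: str = "/tmp/agent-sandbox") -> str:
    """Run an agent-proposed shell command only if its executable is allowlisted,
    and only inside a dedicated working directory."""
    parts = shlex.split(command)
    if not parts or parts[0] not in SAFE_COMMANDS:
        return f"BLOCKED: {command!r} is not in the allowlist"
    os.makedirs(workdir, exist_ok=True)
    result = subprocess.run(parts, cwd=workdir, capture_output=True,
                            text=True, timeout=10)
    return result.stdout

print(run_agent_command("rm -rf /"))   # blocked before it ever runs
print(run_agent_command("echo hi"))    # allowed, runs inside the sandbox dir
```

An allowlist like this is deliberately crude: it blocks by default rather than trying to enumerate every dangerous command, which is the only stance that makes sense for a tool known to be unpredictable.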
No, we are not born with all the pre-training we need. That is rather the point of education, teaching people's brains how to process information in new, maybe unintuitive ways.
And in reverse: if a person makes a series of impulsive, damaging decisions, they probably will not be able to accurately explain why they did it, because neither their brain nor their physiology is tuned to permit it.
Seems pretty much the same to me.
What do you mean by fire? And how is the accountability similar to an employee?