You haven't met certain humans. Not all humans have an internal capacity for accountability.

The real meaning of accountability is that you can fire one if you don't like how they work. Good news! You can fire an AI too.

reply
Bad news! They will not be aware that you have done this and will not care.
reply
The purpose of firing a person shouldn't be vengeance but to remove someone who is unreliable or not cost effective.

It's similarly reasonable to drop a tool that's unreliable, though I don't think that's a reasonable description here. Instead, they used a tool which is generally known to be unpredictable and failed to sandbox it adequately.

reply
The purpose of firing a person is to remove someone unreliable, but also, the person having that skin in the game makes them behave more reliably. The latter is something you cannot do with an LLM.

The cold hard fact is: LLMs are an unreliable tool, and using them without checking their every action is extremely foolish.

reply
"The cold hard fact is: LLMs are an unreliable tool, and using them without checking their every action is extremely foolish."

You mean checking every action of theirs outside the sandbox, I suppose? Otherwise I'd consider any attempt at letting an agent do some work foolish.

reply
The AI company has skin in the game which motivates them to produce reliable AIs.
reply
Can you actually sue Anthropic over this when they clearly state that AI can make mistakes and you should double-check everything it does?
reply
You can fire Anthropic. Anthropic can decide it's losing too many customers and do something about it.
reply
Doesn't seem to be working though. :(
reply
But it's still a bit more difficult to sue them for leaking your company's data.

At least for now.

reply
Don’t forget learning: humans can learn, but LLMs do not; they are trained before use.
reply
Do we? Or are we born with pre-training (all the crucial functions the brain does without us having to learn them) and a context window orders of magnitude larger than an LLM?
reply
It is remarkable how willing and eager AI boosters are to denigrate the incredible miracle of human consciousness to make their chatbots seem so special.

No, we are not born with all the pre-training we need. That is rather the point of education, teaching people's brains how to process information in new, maybe unintuitive ways.

reply
They learn on the next update :p
reply
That’s training, not learning.
reply
Yup. And eventually there will be online learning that doesn't require a formal update step. People keep treating the current implementation as an inherent feature.
reply
What does that actually mean in practice? You can yell at a human if it makes you feel better, sure, but you can do that with an AI agent too, and it's approximately as productive.
reply
I disagree. They could fire Claude and their legal counsel could pursue claims (if there were any, idk)-- the accountability model is similar. Anthropic probably promised no particular outcome, but then what employee does?

And in the reverse, if a person makes a series of impulsive, damaging decisions, they probably will not be able to accurately explain why, because neither the brain nor physiology is tuned to permit it.

Seems pretty much the same to me.

reply
> They could fire Claude and their legal counsel could pursue claims (if there were any, idk)-- the accountability model is similar.

What do you mean by fire? And how is the accountability similar to an employee?

reply
That’s a feature that other humans impose on whoever’s being held accountable. There’s no reason in principle we couldn’t do the same with agents.
reply
How would you fire an agent? This impacts the company that makes the LLM, but not the agent itself.
reply
Yep.
reply