Of course, because an LLM can’t take any action: a human being does, when they set up a system comprising an LLM and other components which act based on the LLM’s output. That can certainly be unsafe, much as hooking up a CD tray to the trigger of a gun would be, and the fault would lie with the human who did the hooking up, not with the software that ejected the CD.
Yes, LLMs can and do take actions in the world, because things like MCP (Model Context Protocol) let them translate speech into action without a human in the loop.
Many companies are already pushing LLMs into roles where they make decisions. It’s only going to get worse. The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.
Is the layoff-based business model really the best use case for AI systems?
> The surface area for attacks against LLM agents is absolutely colossal, and I’m not confident that the problems can be fixed.
The flaws are baked into the training data.
"Trust but verify" applies, as do Murphy's law and the law of unintended consequences.
It's certainly not enough to build a cheap airplane that isn't airworthy and then say "but if this crashes, that's on the airline dumb enough to fly it".
And it's most certainly not enough to put cars on the road with no working brakes while saying "the duty of safety is on whoever chose to turn the key and push the gas pedal".
Most of us do actually have to do better than that.
But apparently not AI engineers?
Maybe the fault even lies with the makers of the model, but that’s not quite clear. If you produced a bolt that wasn’t to spec and it failed, that would probably be on you.
If you thought bureaucracy was dumb before, wait until the humans are replaced with LLMs that can be tricked into telling you how to make meth by asking them to role play as Dr House.
No more so than correctly pointing out that writing code for ffmpeg doesn't mean you're enabling streaming services to redefine the meaning of "ad-free" just because you're allowing them to continue existing.
The problem is not the existence of the library that enables streaming services (AI "safety"); it's that you're not ensuring that the companies misusing the technology are prevented from doing so.
"A company is trying to misuse technology so we should cripple the tech instead of fixing the underlying social problem of the company's behavior" is, quite frankly, an absolutely insane mindset, and is the reason for a lot of the evil we see in the world today.
You cannot and should not try to fix social or governmental problems with technology.
The semantics of whether it’s the LLM or the human setting up the system that “takes an action” are irrelevant.
It’s perfectly clear to anyone that cares to look that we are in the process of constructing these systems. The safety of these systems will depend a lot on the configuration of the black box labeled “LLM”.
If people were in the process of wiring up CD trays to guns on every street corner, you would, I hope, be interested in CD-gun safety and the algorithms being used.
“Don’t build it if it’s unsafe” is also obviously not viable: the theoretical economic value of agentic AI is so big that everyone is chasing it. (Again, it’s irrelevant whether you think they are wrong; they are doing it, and so AI safety, steerability, hackability, corrigibility, etc. are very important.)
LLMs are "unreliable", in a sense that when using LLMs one should always consider the fact that no matter what they try, any LLM will do something that could be considered undesirable (both foreseeable and non-foreseeable).
You hit the nail on the head right there. That's exactly why LLMs fundamentally aren't suited to any greater unmediated access to "harmful actions" than other vulnerable tools get.
LLM input and output always need to be treated as tainted at the point of integration. There's no escaping that as long as LLMs fundamentally have a single, mixed-content input/output channel.
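To make that concrete, here's a minimal sketch (TypeScript, hypothetical tool names) of what "tainted until proven otherwise" looks like at the integration boundary: the model's proposed action is just text until it parses cleanly and names something on an explicit allow-list.

```typescript
// Minimal sketch, not a real framework: treat model output as untrusted text
// until it parses AND names a tool on an explicit allow-list.
type ToolCall = { tool: string; args: Record<string, string> };

// Hypothetical tool names, for illustration only.
const ALLOWED_TOOLS = new Set(["search_docs", "summarize_file"]);

function validateToolCall(modelOutput: string): ToolCall {
  let call: ToolCall;
  try {
    call = JSON.parse(modelOutput);
  } catch {
    throw new Error("Model output is not a well-formed tool call");
  }
  if (typeof call?.tool !== "string" || !ALLOWED_TOOLS.has(call.tool)) {
    throw new Error(`Tool "${call?.tool}" is not on the allow-list`);
  }
  // Argument-level checks (paths, URLs, size limits, ...) belong here too.
  return call;
}
```

The exact shape doesn't matter; the point is that nothing the model emits reaches a side-effecting component without passing through a check the model can't talk its way around.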
Internal vendor blocks reduce capabilities but don't actually solve the problem, and the first wave of them are mostly just cultural assertions of Silicon Valley norms rather than objective safety checks anyway.
Real AI safety looks more like "Users shouldn't integrate this directly into their control systems" and not like "This text generator shouldn't generate text we don't like" -- but the former is bad for the AI business and the latter is a way to traffic in political favor and stroke moral egos.
Of course you could do what deno and other such systems do and just deny internet or filesystem access outright, but then you significantly limit the usefulness of the AI system. Tricky problem, to be honest.
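For what it's worth, the deny-by-default idea is easy to sketch with Deno's permissions API (the agent framing here is just illustrative):

```typescript
// Unless the process was started with --allow-net / --allow-read, these
// capabilities are not granted, and any attempt to use them throws at runtime.
const net = await Deno.permissions.query({ name: "net" });
const read = await Deno.permissions.query({ name: "read" });
console.log(`network: ${net.state}, filesystem read: ${read.state}`);

if (net.state !== "granted") {
  // A compromised agent tool running here has no way to exfiltrate data over
  // the network, whatever the model was tricked into generating.
  console.log("Running with no network access.");
}
```

Running it as `deno run tool_runner.ts` (hypothetical filename) grants nothing; `deno run --allow-read=/tmp/agent-workspace tool_runner.ts` grants read access to that one directory and nothing else. The trade-off is exactly that: every permission you withhold is also a capability the agent loses.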
That is, made of pliant materials, with motors limited in force and speed. Then even if the AI inside is compromised, the harm would be limited.