The 'boy (or girl) who cried wolf' isn't just a story. It's a lesson both for the person and for the village that hears them.
Global Warming, Invasion, Impunity, and yes, Inequality
Also, the trajectories of celestial bodies can be predicted with a reasonably decent level of accuracy. Pretending societal changes can be predicted equally well is borderline bad faith.
Besides, you do realize that the film is a satire, and that the comet was an analogy, right? It draws parallels with real-world science denialism around climate change, COVID-19, etc. Dismissing the opinion of an "AI" domain expert based on fairly flawed reasoning is an obvious extension of this analogy.
I think "safety research" has a tendency to attract doomers. So when one of them quits while preaching doom, they are behaving par for the course. There's little new information in someone doing something that fits their type.
https://x.com/MrinankSharma/status/2020881722003583421
A slightly longer quote:
> The world is in peril. And not just from AI, or from bioweapons, but from a whole series of interconnected crises unfolding at this very moment.
In a footnote he refers to the "poly-crisis."
There are all sorts of things one might decide to do in response, including getting more involved in US politics, working more on climate change, or working on other existential risks.
Claude invented something completely nonsensical:
> This is a classic upside-down cup trick! The cup is designed to be flipped — you drink from it by turning it upside down, which makes the sealed end the bottom and the open end the top. Once flipped, it functions just like a normal cup. *The sealed "top" prevents it from spilling while it's in its resting position, but the moment you flip it, you can drink normally from the open end.*
Emphasis mine.
I can't really take this very seriously without seeing the list of these ostensibly "unethical" things that Anthropic models will allow over other providers.
Bring on the cryptocore.
That's why I have a functioning brain, to discern between ethical and unethical, among other things.
It's more like a hammer which makes its own independent evaluation of the ethics of every project you seek to use it on, and refuses to work whenever it judges against that – sometimes inscrutably or for obviously poor reasons.
If I use a hammer to bash in someone else's head, I'm the one going to prison, not the hammer or the hammer manufacturer or the hardware store I bought it from. And that's how it should be.
Here are some rules about dogs: https://en.wikipedia.org/wiki/Dangerous_Dogs_Act_1991
How many people do frontier AI models kill each year, in circumstances nobody would justify?
The Pentagon has already received Claude's help in killing people, but the ethics and legality of those acts are disputed. When a dog kills a three-year-old, nobody calls that a good thing or even the lesser evil.
Dunno, stats aren't recorded.
But I can say there are wrongful-death lawsuits naming some of the labs and their models. And there was that anecdote a while back about botulism from raw-garlic-infused olive oil, a search for which reminded me about AI-generated mushroom "guides": https://news.ycombinator.com/item?id=40724714
Do you count deaths by self-driving car in such stats? If someone takes medical advice and dies, is that reported the way we report people who drive off an unsafe bridge while following Google Maps?
But this is all danger by incompetence. The opposite, danger by competence, is where they enable people to become more dangerous than they otherwise would have been.
With a competent planner that has no moral compass, you only find out how bad it can be when it's much too late. I don't think LLMs pose that danger yet; even on METR's timelines, that's three years off. But I think it's best to aim for where the ball will be, rather than where it is.
Then there's LLM-psychosis, which isn't on the competent-incompetent spectrum at all, and I have no idea whether it affects people who weren't already prone to psychosis, or indeed whether it's really just a moral panic hallucinated by the milieu.
Without safety features, an LLM could also help plan a terrorist attack.
A smart, competent terrorist can plan a successful attack without help from Claude. But most would-be terrorists aren't that smart and competent. Many are caught before hurting anyone or do far less damage than they could have. An LLM can help walk you through every step, and answer all your questions along the way. It could, say, explain to you all the different bomb chemistries, recommend one for your use case, help you source materials, and walk you through how to build the bomb safely. It lowers the bar for who can do this.
[1] https://www.theguardian.com/technology/2026/feb/14/us-milita...
For the bomb example, the barrier to entry is just sourcing some chemicals. Wikipedia has quite detailed descriptions of how to manufacture all the popular bombs you can think of.
The question is, at what point does some AI become competent enough to engineer one? And that's just one example, it's an illustration of the category and not the specific sole risk.
If the model makers don't know that in advance, the argument given for delaying GPT-2 applies: you can't take back publication, better to have a standard of excess caution.
I think the two of you might be using different meanings of the word "safety".
You're right that it's dangerous for governments to have this new technology. We're all a bit less "safe" now that they can create weapons that are more intelligent.
The other meaning of "safety" is alignment - meaning, the AI does what you want it to do (subtly different than "does what it's told").
I don't think that Anthropic or any corporation can keep us safe from governments using AI. I think governments have the resources to create AIs that kill, no matter what Anthropic does with Claude.
So for me, the real safety issue is alignment. And even if a rogue government (or my own government) decides to kill me, it's in my best interest that the AI be well aligned, so that at least some humans get to live.
What line are we talking about?
You reckon?
Ok, so now every random lone wolf attacker can ask for help with designing and performing whatever attack with whatever DIY weapon system the AI is competent to help with.
Right now, what keeps us safe from serious threats is the limited competence of both humans and AI (including for removing alignment from open models), plus whatever safeties are built into the ChatGPT models specifically, and the fact that ChatGPT is synonymous with LLMs for 90% of the population.
Used to be true, when facing any competent attacker.
When the attacker needs an AI in order to gain the competence to unlock an AI that would help it unlock itself?
I wouldn't say it's definitely a different case, but it certainly seems like it should be.
There are several open-source models with no built-in (or trivial-to-escape) safeguards. Of course, they can afford that because they are non-commercial.
Anthropic can’t afford a headline like “Claude helped a terrorist build a bomb”.
And this whataboutism is completely meaningless. See P. A. Luty’s Expedient Homemade Firearms (https://en.wikipedia.org/wiki/Philip_Luty), or the FGC-9 for 3D printing.
It’s trivial to build guns or bombs, and there’s a strong inverse correlation between wanting to cause mass harm and being willing to learn how to do so.
I’m certain that _everyone_ looking for AI assistance, even with your example, would be doing so for academic reasons or sheer curiosity, or would kill themselves in the process.
“What safeguards should LLMs have?” is the wrong question. LLMs without any safeguards are an inevitability. Perhaps not among widespread commercial products, but definitely among widely accessible ones.
Perhaps it won't flip. Perhaps LLMs will always be worse at this than humans. Perhaps all that code I just got was secretly outsourced to a secret cabal in India who can type faster than I can read.
I would prefer not to bet that universities will continue to be better at solving problems than LLMs. And not just LLMs: AI has been busy finding new dangerous chemicals since before most people had heard of LLMs.
Think of it this way: the hard part of building a nuclear device is enriching the uranium. If you have that, a chimp could build the bomb.
But with bioweapons, yeah, that should be a solid zero. The ones actually doing it off an AI prompt aren't going to have access to a BSL-3 lab (or more importantly, probably know nothing about cross-contamination), and just about everyone who has access to a BSL-3 lab, should already have all the theoretical knowledge they would need for it.
a) Uncensored and simple technology for all humans; that's our birthright and what makes us special and interesting creatures. It's dangerous and requires a vibrant society of ongoing ethical discussion.
b) No governments at all in the internet age. Nobody has any particular authority to initiate violence.
That's where the line goes. We're still probably a few centuries away, but that's all the more reason to hone our course now.
Well, yeah I think that's a very reasonable worldview: when a very tiny number of people have the capability to "do what they want", or I might phrase it as, "effect change on the world", then we get the easy-to-observe absolute corruption that comes with absolute power.
As a different human species emerges such that many people (and even intelligences that we can't easily understand as discrete persons) have this capability, our better angels will prevail.
I'm a firm believer that nobody _wants_ to drop explosives from airplanes onto children halfway around the world, or rape and torture them on a remote island; these things stem from profoundly perverse incentive structures.
I believe that governments were an extremely important feature of our evolution, but are no longer necessary and are causing these incentives. We've been aboard a lifeboat for the past few millennia, crossing the choppy seas from agriculture to information. But now that we're on the other shore, it no longer makes sense to enforce the rules that were needed to maintain order on the lifeboat.
Thanks for the successful pitch. I am seriously considering them now.
I don't think that's what you're trying to convey.