As Paul Graham said in 2008: "Sam gets what he wants," "He’s good at convincing people of things. He’s good at getting people to do what he wants," and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed."
That's not quite right.
First off, I don't expect that "you used my service to commit a crime" is in and of itself enough to break a contract, so having your contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.
Second, I don't want the contract to say "if you're convicted of committing a crime using my service", I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to depend on criminal prosecutors to act before I have standing. Second, because I want to only have to meet the balance of probabilities ("preponderance of evidence" if you're American) standard of evidence in civil court, rather than needing a conviction secured under "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.
They literally asked the DoD to continue as is.
There is no safety-enforcement standing created because there is no safety enforcement intended.
It is transparently written as a purely reactive response to Anthropic’s stand, in an attempt to create a perception that they care and to reduce the perceived contrast with Anthropic.
If they had any interest in safety or ethics, Anthropic’s stand just made acting on it far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and the public as a whole.
They could collaborate with Anthropic on a common expectation, if they have a different take on safety.
The upside safety culture impact of such collaboration by two competitive leaders in the industry would be felt globally. Going far beyond any current contracts.
But, no. Nothing.
Except the legalese and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or about creating any civil legal leverage for safe use.
It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they say it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and stretched it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.
The Trump administration acts cartoonish and fickle. They can easily punish one group, then agree to work with another group on the same terms to save face, while continuing to punish the first group. It doesn't have to make consistent sense. That is exactly what they have done with tariffs, for example.
Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal, and the rest is just lawyering for the public. Internally at the DoD, I'm sure it was really about the phrase "all lawful uses". So the lawyers were able to agree to it, and the public gets this mumbo-jumbo.
I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)
"When the president does it, that means it is not illegal".
This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.
> On August 7, Nixon met in the Oval Office with Republican congressional leaders "to discuss the impeachment picture," and was told that his support in Congress had all but disappeared. They painted a gloomy picture for the president: he would face certain impeachment when the articles came up for vote in the full House, and in the Senate, there were not only enough votes to convict him, but no more than 15 or so senators were willing to vote for acquittal. That night, knowing his presidency was effectively over, Nixon finalized his decision to resign.
The contrast with how compliant the majorities in Congress are today to the whims of the White House cannot be overstated. The past decade has pretty much completely eliminated any semblance of a Republican Party that stood for anything other than the whims of Trump. Everyone either got on board or was exiled from power: the third-highest member of House leadership was driven from Congress for taking a stand on the events of January 6, whereas the senator who, in a 2016 debate, alleged that Trump's small hands implied a similar proportion for one of his less-visible body parts faded into the background for the next eight years and was rewarded with a prominent position in the cabinet this time around.
> https://en.wikipedia.org/wiki/Presidency_of_Richard_Nixon#Re...
But they won't be releasing it, they will be leasing it to DOJ and all their other customers will get the safeguarded model.
I for one do not want AI labs designating what is legally OK to do.
I much prefer the demos to take care of that.
Civilians are allowed to put conditions on working for, or supplying, the DoD or any governmental customer.
Tremendous good comes from those who are not willing to facilitate harms simply because they are legal.
Equating legal with ethical or safe makes no sense. [0]
[0] All of human history.
Shift from Nonprofit Mission to For-Profit Orientation – OpenAI was founded as a nonprofit with a charter focused on “benefit to humanity,” but under Altman it created a capped-profit subsidiary, accepted large investments (e.g., from Microsoft), and critics (including Elon Musk in a 2024 lawsuit) argue this departed from that original mission. A federal judge allowed Musk’s claim that Altman and OpenAI broke promises about nonprofit governance to proceed to trial.
Nonprofit Control Reorganization Drama (2023) – In November 2023, the original nonprofit board cited a lack of transparency and confidence in Altman’s candor as a reason for firing him. He was reinstated days later after investor and employee pressure, highlighting internal conflict over governance and communication.
Dust-Up Over Military Usage Policies – OpenAI initially had explicit public policies restricting AI use in “military and warfare” contexts, but those clauses were reportedly removed quietly in 2024, allowing the company to pursue Department of Defense contracts — a turnaround from earlier language that appeared to preclude such use.
Statements on Pentagon Deal vs. Prior Positioning – In early 2026, Altman publicly said OpenAI shared safety “red lines” (e.g., prohibiting mass surveillance and autonomous weapons) similar to some competitors, but hours later OpenAI signed a deal to deploy its models on classified military networks, leading critics to argue this contradicts earlier positioning on limits for military use.
Regulation Stance Shifts in Congressional Testimony – Altman has advocated for strong regulation of AI in some public settings but in later congressional hearings opposed specific regulatory requirements (like mandatory pre-deployment vetting), aligning more with industry concerns about overregulation — a shift in tone compared with earlier support of regulatory frameworks.
Nobody is prosecuting the DoD with non-laws here. But one company is using their legal right to refuse to facilitate great harms.
> Not rely on the goodness of Sam Altman.
(Who said anything about that? Where did that come from?)
Nobody wants to rely on Altman!
For anything. But it would be better if he would stand up for safety, instead of undermining it.
Your logic is backwards.
If we don’t want to rely entirely on a centralized government, one increasingly interested in giving its leaders unfettered power, with all three branches increasingly willing to bend our laws and grant it impunity, then a widespread civilian culture of upholding safety, by many and all actors, is a necessity.
The latter is always a necessity. But the risks of power consolidation, with the help of AI, are rising.