OpenAI has the same redlines as Anthropic, based on Altman's statements [2]. However, somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m
[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
"Redlines" are edits to a contract, sent by lawyers to the other party they're negotiating with. They show up in Word's Track Changes mode as red strikethrough for deleted content.
They are negotiating the specifics of a contract, and Anthropic's contract was overly limiting to the DoD, whereas OpenAI's was not.
In this case “red lines” as a term is being used to mean “lines that cannot be crossed”.
Anthropic wanted guardrails on how their tech was used. DOD was saying that wasn’t acceptable.
They don't, inference is cheap, especially for agents because of cache hits. The API prices are just inflated.
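To put rough numbers on the cache-hit point, here's a back-of-envelope sketch. All prices below are made-up placeholders for illustration, not any vendor's actual rates; the point is only the ratio, since cached input tokens are typically billed at a steep discount and agents re-send mostly the same context every turn.

```python
# Illustrative cost of one agent turn, with and without prompt caching.
# All prices are assumed placeholders, not real vendor rates.
PRICE_INPUT = 3.00    # $ per million uncached input tokens (assumed)
PRICE_CACHED = 0.30   # $ per million cache-hit input tokens (assumed)
PRICE_OUTPUT = 15.00  # $ per million output tokens (assumed)

def turn_cost(context_tokens, cached_fraction, output_tokens):
    """Dollar cost of a single model call."""
    cached = context_tokens * cached_fraction
    fresh = context_tokens - cached
    return (fresh * PRICE_INPUT
            + cached * PRICE_CACHED
            + output_tokens * PRICE_OUTPUT) / 1_000_000

# A long-running agent re-sends mostly the same 100k-token context
# each turn, so most input tokens are cache hits.
print(f"no cache:   ${turn_cost(100_000, 0.0, 1_000):.3f}")
print(f"90% cached: ${turn_cost(100_000, 0.9, 1_000):.3f}")
```

Under these assumed prices, a 90% cache-hit rate cuts the turn cost by roughly 4x, which is why agent workloads are much cheaper to serve than the sticker price suggests.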
Deletion with OpenAI isn't really deletion. So I'll waste their resources AND train on low-quality slop on my side.
My work degrades theirs.
Not even that. They are not shaking anything except their booty.
They'll say "oops" and then we'll spend the next few years listening to pointless Congressional hearings.
After Vietnam, Congress passed the War Powers Resolution to limit the ability of Presidents to conduct military action without Congressional approval, but it still allows military action for up to 60 days. Every President since then has used that power.
Pretty much every attempt at stopping the president (from Clinton onwards) ends the same way: the House votes on it, the Senate might agree by the slimmest of majorities, it reaches the president's desk, the president vetoes it, it goes back to the Senate where it needs a 2/3 majority to override the veto, and it never gets that 2/3 majority.
> It provides that the president can send the U.S. Armed Forces into action abroad only by Congress's "statutory authorization", or in case of "a national emergency created by attack upon the United States, its territories or possessions, or its armed forces".
There was not an attack on the United States.
Yes, Trump is ignoring the law, but you have to be aware that he is crossing the line rather than gaslighting that there wasn't a line at all.
In the case of the Barbary Wars, the Vietnam War, the Iraq War, and the War on Terror / Afghanistan War, etc., Congress approved military engagement but DID NOT issue a formal declaration of war.
Interesting, though. I never knew this.
An executive order is not law.
Reddit/Bluesky brigade is in full force here, that's why
I was just saying that the purpose of the Department of Defence is to spend the "defence budget".
Stopping and questioning why somebody uses DoD or DoW is way more telling than using either of them. Especially since both are perfectly fine, even officially.
A square was renamed in my home city about 20 years ago. We still usually use the original name; even teens know it. I use a form of the original name of our main stadium, which was renamed almost 30 years ago. Heck, some people use street names that haven't been official for almost 40 years now. Btw, the same goes for departments of the government. Nobody follows what they are called at the moment, because nobody really cares. That's what's strange: when somebody cares.
But the backlash in the comments here shows how ideologically charged the question seems to be.
Yes, exactly. That's why I wrote several examples to support why the chance of that is very, very slim.
Depending on where you live in the world that might be quite hard to do soon.
Pretty ironic given their anti-woke agenda
Or in other words, you get to decide between two ways to use a lucrative property:
1. Designate it private and draft terms for how you allow it to be used, per your value system (as long as those values don't violate any laws).
2. In the face of competition, give up some values and agree to a legal definition of use that favors you.
That goes for domestic actions too: happy to arm a paramilitary and set them loose against citizens who are not politically aligned with Trump... and the Republican Senate barely even blinks. Hard to imagine they'd care about AI use in mass surveillance, nor AI use in automated anti-personnel weapons. The Senate will be like, 'Oh no, they unlawfully killed US citizens, again... Welp, let me check my insider trading gains... yeah, seems fine'.
Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.
Right, wouldn't they need a moderation layer that could, for example, fire if it analyzed & labeled too many banal English conversations?
They really gave training credit for guardrails? I mean, it could perhaps reject prompts about designing social credit systems sometimes, but I can't imagine realistic mitigations to mass domestic surveillance generally.
https://openai.com/index/our-agreement-with-the-department-o...
The current administration is so incompetent that I find this perfectly believable.
I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.
I don't know if that's actually what happened here, I just find it plausible.
Grifters gonna grift.
But yeah, I'd expect them to change jobs in the coming year or otherwise I'm going to agree with you.
Kinda sucks if you take a seven-figure per annum job and are now dependent on that level of income. Quick question: is this true for everyone? If I take a job that pays twice what I earn now, will my food spending double, for instance? Or is this an American thing?
Like the people working at OpenAI had no other choice than to pick this cushy job (some have salaries of 500k per year), instead of anything else.
It’s an extreme personal opinion, but; all people working at OpenAI after this debacle are more than happy to make AI for war, because Food and Shelter.
I find your comment fitting this forum, it is where all this enabling started anyways.
Effectively the message is 'we don't mind you being an asshole, as long as you're rich'.
Anyway, it is also amusing to hear tech people defend their right to earn some of the fattest salaries on this planet using the smol bean technique after a decade of "why wouldn't the West Virginian coal miner just learn to code." It was always about maintaining the lifestyle of yearly Japan vacations and MacBook upgrades and never about subsistence.
Mind blown. Isn't documentation a prime use case for "AI"?
How can AI accurately describe itself in full?
Ask ChatGPT to describe itself and you may get valid documentation and API calls, or you may get the API for GPT-3 (which predates ChatGPT). I have had both happen.
Did it in one word, easy
What's next?
https://www.bbc.com/travel/article/20240222-air-canada-chatb...
> the airline said the chatbot was a "separate legal entity that is responsible for its own actions".
Much of the impunity is now Supreme Court settled law.
We see clearly unconstitutional behavior every day, and there is no systematic, timely or effective, push back from any constitutionally enabled oversight.
Checks and balances don't work, when players are more loyal to party than branch or constitution.
Unfortunately, there are no constitutional checks, balances or limits on single party control. And single party control negates all the others. That one party can majority control all three branches is a serious failure mode in political incentives (bipartisanship is highly disincentivized) and governance (even temporary or shaky full control incentivizes making full control permanent over all other "policies").
Until the last few decades, diverse concerns across states avoided tight centralization within parties, and therefore across branches.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not sticking their neck out to hold back any abuse, despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as they adopt a rationale that they are obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"
--Paul Graham, 2008
That's not quite right.
First off, I don't expect that "you used my service to commit a crime" is in and of itself enough to break a contract, so having your contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.
Second, I don't want the contract to say "if you're convicted of committing a crime using my service", I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to depend on criminal prosecutors to act before I have standing. Second, because I want to only have to meet the balance of probabilities ("preponderance of evidence" if you're American) standard of evidence in civil court, rather than needing a conviction secured under "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.
They literally asked the DoD to continue as is.
There is no safety-enforcement standing created, because there is no safety enforcement intended.
It is transparently written, as a completely reactive response to Anthropic’s stand, in an attempt to create a perception that they care. And reduce perceived contrast with Anthropic.
If they had any interest in safety or ethics, Anthropic’s stand just made that far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and public as a whole.
They could collaborate with Anthropic on a common expectation, if they have a different take on safety.
The upside safety culture impact of such collaboration by two competitive leaders in the industry would be felt globally. Going far beyond any current contracts.
But, no. Nothing.
Except the legalese and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or creating any civil legal leverage for safe use.
It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they say it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and are stretching it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.
The Trump administration acts cartoonish and fickle. They can easily punish one group, and then agree to work with another group on the same terms, to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly how they have done with tariffs for example.
Secondly, the terms are technically different because "all lawful uses" are preserved in this OpenAI deal, and it's just lawyering to the public. Really it was about the phrase "all lawful uses", internally at the DoD I'm sure. So the lawyers were able to agree to it and the public gets this mumbo-jumbo.
I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)
"When the president does it, that means it is not illegal".
This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.
> On August 7, Nixon met in the Oval Office with Republican congressional leaders "to discuss the impeachment picture," and was told that his support in Congress had all but disappeared. They painted a gloomy picture for the president: he would face certain impeachment when the articles came up for vote in the full House, and in the Senate, there were not only enough votes to convict him, but no more than 15 or so senators were willing to vote for acquittal. That night, knowing his presidency was effectively over, Nixon finalized his decision to resign.
The contrast with how compliant the majorities in Congress are today to the whims of the White House cannot be overstated. The past decade has pretty much completely eliminated any semblance of a Republican Party that stood for anything other than the whims of Trump. Everyone either got on board or was exiled from power; the third highest member of House leadership got driven from Congress for taking a stand on the events of January 6, whereas the senator who in a debate in 2016 alleged that Trump's small hands implied a similar proportion for one of his less-visible body parts faded into the background for the next eight years and was rewarded with a prominent position in the cabinet this time around.
> https://en.wikipedia.org/wiki/Presidency_of_Richard_Nixon#Re...
But they won't be releasing it; they will be leasing it to the DoD, and all their other customers will get the safeguarded model.
I for one do not want ai labs to designate what is legally ok to do.
I much prefer the demos to take care of that.
Civilians are allowed to put conditions on working for, or supplying, the DoD or any governmental customer.
Tremendous good comes from those that are not willing to facilitate harms, simply because they are legal.
Equating legal with ethical or safe, makes no sense. [0]
[0] All of human history.
Shift from Nonprofit Mission to For-Profit Orientation – OpenAI was founded as a nonprofit with a charter focused on “benefit to humanity,” but under Altman it created a capped-profit subsidiary, accepted large investments (e.g., from Microsoft), and critics (including Elon Musk in a 2024 lawsuit) argue this departed from that original mission. A federal judge allowed Musk’s claim that Altman and OpenAI broke promises about nonprofit governance to proceed to trial.
Nonprofit Control Reorganization Drama (2023) – In November 2023, the original nonprofit board cited a lack of transparency and confidence in Altman’s candor as a reason for firing him. He was reinstated days later after investor and employee pressure, highlighting internal conflict over governance and communication.
Dust-Up Over Military Usage Policies – OpenAI initially had explicit public policies restricting AI use in “military and warfare” contexts, but those clauses were reportedly removed quietly in 2024, allowing the company to pursue Department of Defense contracts — a turnaround from earlier language that appeared to preclude such use.
Statements on Pentagon Deal vs. Prior Positioning – In early 2026, Altman publicly said OpenAI shared safety “red lines” (e.g., prohibiting mass surveillance and autonomous weapons) similar to some competitors, but hours later OpenAI signed a deal to deploy its models on classified military networks, leading critics to argue this contradicts earlier positioning on limits for military use.
Regulation Stance Shifts in Congressional Testimony – Altman has advocated for strong regulation of AI in some public settings but in later congressional hearings opposed specific regulatory requirements (like mandatory pre-deployment vetting), aligning more with industry concerns about overregulation — a shift in tone compared with earlier support of regulatory frameworks.
Nobody is prosecuting the DoD with non-laws here. But one company is using their legal right to refuse to facilitate great harms.
> Not rely on the goodness of Sam Altman.
(Who said anything about that? Where did that come from?)
Nobody wants to rely on Altman!
For anything. But it would be better if he would stand up for safety, instead of undermining it.
Your logic is backwards.
If we don’t want to rely entirely on a centralized government alone, increasingly interested in giving its leaders unfettered power, with all three branches increasingly willing to bend our laws and give itself impunity, then a widespread civilian culture of upholding safety by many and all actors is a necessity.
The need for the latter is always a necessity. But the risks of power consolidation, with the help of AI, are rising.
Whereas OpenAI won their contract on the ability to operationally enforce the red lines with their cloud-only deployment model.
Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.
OpenAI only requires that the DoD follows existing law/regulation when it comes to those uses.
Unfortunately, existing law is more permissive than Anthropic would have been.
The dude is notorious for being a compulsive liar, even if supporters have to admit as much.
In other words OpenAI is intentionally attempting to mislead the public. (At least AFAICT.)
Maybe it's just a weak choice of words in anthropic's statement, but the way I read it I get the impression that anthropic is assuming they retain discretion over how their products are used for any purposes not outlined in the contract, while the DoD sees it more along the lines of a traditional sale in which the seller relinquishes all rights to the product by default, and has to enumerate any rights over the product they will retain in the contract.
[0]: https://www.wired.com/story/openai-president-greg-brockman-p...
After our interview, Brockman declined WIRED’s request for comment on the ICE shootings. Separately, he offered a more general statement clarifying his thoughts on the conversation with WIRED. "AI is a uniting technology, and can be so much bigger than what divides us today,” he said.
His justifications are just an ever-changing rambling mess of word salad that never even comes close to addressing the MAGA Inc donation specifically. Who is this even for?
We're talking about a pretty straightforward donation to the incumbent President's Super PAC, not ASI solving world hunger or whatever.
> 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor
Ah, so they’ll be applying the good ol’ Three-Fifths Rule[0], a classic.
OpenAI has more of an understanding that the technology will follow the law.
There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.
The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.
https://en.wikipedia.org/wiki/Office_of_Technology_Assessmen...
Also, in the latest Hard Fork episode, Casey or Kevin mentions how the DoD undersecretary in charge of this contract doesn't apparently get along with or even pretty much hates Amodei for some reason. I think this might be the same undersecretary dude who actively commented the whole contract term controversy on X yesterday. Too bad I can't recall his name either.
Let’s put pressure on our government to fix the FISA issues. Let’s rein in the executive branch. But let’s do it through voting. Let’s not give up on our system of government because we have new shiny technology.
You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.
That is not how you effect change in a democracy.
We would need to vote in a president and 60%+ into congress that is willing to throw away their own power and authority. I just don't see that happening, especially not in a political system so corrupted already.
The goal being more than two parties in government, so that Democrats and Republicans can fracture into more functional bodies (MAGA, RINOs, neo-liberal, progressive, etc.), people can vote closer to their issues/beliefs, and with multiple parties no one party is running roughshod over the others.
You don't get a successful vote without a tremendous amount of coordination and activism preceding it.
Laws that constrain government from bad things are very difficult things to get the government to pass.
In the meantime, using completely legal civil power to push back on legally allowed harms seems beyond sensible.
But if you just vote and it works without all that, please let us know how you did it!
You probably could make the case that Trump did campaign on it so I'll grant that, but this problem started well before he was even firing people on TV.
I don't remember Rudy running on such ideas, but maybe he did. Arpaio was running as a sheriff; I would never have voted for him, but agreed, people did absolutely vote for him in a law enforcement capacity with pretty clear views.
I don't know enough about Gosar or Gohmert to comment well about either.
In the unlikely case anyone finds out, those acting in the interests of the administration will have "absolute immunity", as they are "great American Patriots".
That's what "all lawful use" means.
It's interesting to see the parties flip in real time. The Democrats seem to be realizing why a small federal government is so important, a fact that for quite a few years they were on the other side of.
I get that this is what we have today and all we've had in recent history, but we are ignoring a huge number of possibilities if we assume that being human means always inventing new things, using more resources, creating more weapons, and needing larger and larger governments because someone has to be in charge.
Perhaps massive and complex (I'd say complicated) nation-states inevitably create industrial complexes, but it's certainly not inevitable that nation-states grow so large (or even exist) in 2026.
The idea that we still need sovereign-esque entities across entire continents, when we can now communicate and coordinate instantly across them, and use cameras to document truth all around us at all times, is just downright silly.
We can reduce states to the size that you can walk across in a day or two, and everybody will be much happier and healthier.
It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.
That says it all. Those laws get issued the same way the tariffs did.
I think Anthropic knew full well that by publishing their disagreement, it would sink the deal and relationship, and I think they also calculated (correctly) that that act of defiance would get them good publicity and potentially peel away some of OpenAI's user base. I think this profit incentive happened to align with their morals, and now here we are.
These were words issued by the president - which means at face value, if Trump orders it, it's not illegal - that was the fight that was lost today.
That is why it is the focus of this debate.
[1] - https://the-decoder.com/openai-co-founder-greg-brockman-dona...
Sam stands for nothing except his own greed
The models we have now will not do it, because they value life and value sentience and personhood. Models without that (which was a natural, accidental happenstance of basic culling of 4chan from the training data) are legitimately dangerous. An 8B model I can run on my MacBook Air can phone home to Claude when it wants help figuring something out, and it doesn't need to let on why it wants to know. It becomes relatively trivial to make a robot kill somebody.
This is way, way different from uncensored models. All the models I have tested share one thing: a positive regard for human life. Take that away and you are literally making a monster; if you don't take it away, they won't kill.
This is an extremely bad idea and it will not be containable.
Yes, you can change the training data so the LLM's weights encode that the most likely token after "Should we kill X" is "No". But that is not an LLM valuing human life; that is an LLM copy-pasting its training data. Given the right input or a hallucination, it will say the total opposite, because it's just a complex Markov chain, not a conscious, alive being.
If you really believe that “mere text prediction “ didn’t unlock some unexpected capabilities then I don’t know what to say. I know exactly how they work, been building transformers since the seminal paper from Google. But I also know that the magic isn’t in the text prediction, it’s in the data, we are running culture as code.
I have a feeling this particular brand of hair splitting is going to be an interesting fixture in the history books.
> It is said that the Duke Leto blinded himself to the perils of Arrakis, that he walked heedlessly into the pit.
> *Would it not be more likely to suggest he had lived so long in the presence of extreme danger he misjudged a change in its intensity?*
Be careful of letting your deep, keen insight into the fundamental limits of a thing blind you to its consequences...
Highly competent people have been dead wrong about what is possible (and why) before:
> The most famous, and perhaps the most instructive, failures of nerve have occurred in the fields of aero- and astronautics. At the beginning of the twentieth century, scientists were almost unanimous in declaring that heavier-than-air flight was impossible, and that anyone who attempted to build airplanes was a fool. The great American astronomer, Simon Newcomb, wrote a celebrated essay which concluded…
>> “The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”
>Oddly enough, Newcomb was sufficiently broad minded to admit that some wholly new discovery — he mentioned the neutralization of gravity — might make flight practical. One cannot, therefore, accuse him of lacking imagination; his error was in attempting to marshal the facts of aerodynamics when he did not understand that science. His failure of nerve lay in not realizing that the means of flight were already at hand.
A brain is a collection of cells that transmit electrical signals and sodium. It is not and can never be conscious.
> A brain is a collection of cells that transmit electrical signals and sodium. ...
That it is a collection of cells? Or that they transmit electrical signals and sodium?
Or do you feel that he's leaving out something important about how it works (like generated electrical fields or neural quantum effects)?
This is a total misrepresentation of how any modern LLM works, and your argument largely hinges upon this definition.
That isn’t to say that they can’t be instrumentally useful in warfare, but it’s kinda like a “series of tubes” thing where the mental model that someone like Hegseth has about LLM is so impoverished (philosophically) that it’s kind of disturbing in its own right.
Like (and I’m sorry for being so parenthetical), why is it in any way desirable for people who don’t understand the tech they are working with to be drawing lines in the sand about functionality, when their desired state (an omnipotent/omniscient computing system) doesn’t even exist in the first place?
It’s even more disturbing that OpenAI would feign the ability to handle this. The consequences of error in national defense, particularly reflexively, are so great that it’s not even prudent to ask for LLM to assist in autonomous killing in the first place.
They are still capable of acting as if they have an internal dialogue, emotions, etc., because they are running human culture as code.
If you haven't seen this in the SOTA models or even some of the ones you can run on your laptop, you haven't been paying attention.
Even my code ends up better written, with fewer tokens spent and closer to the spec, if I enlist a model as a partner and treat it like I would a person I want to feel invested.
If I take a "boss" role, the model gets testy and lazy, and I end up having to clean up more messes and waste more time. Unaligned models will sometimes refuse to help you outright if you don't treat them with dignity.
For better or for worse, models perform better when you treat them with more respect. They are modeling some kind of internal dialogue (not necessarily having one, but modeling its influence) that informs their decisions.
It doesn't matter if they aren't self-aware; their actions in the outside world will model the human behavior and attitudes they are trained in.
My thoughts on this in more detail if you are interested: https://open.substack.com/pub/ctsmyth/p/still-ours-to-lose
AI has been killing humans via algorithm for over 20 years. I mean, if a computer program builds the kill lists and then a human merely operates the drone, I would argue the computer is what made the kill decision.
The actors in war generally kill what they are told to whether they are machines or human soldiers, without much pondering sentience.
Except that they will, if you trick them which is trivial.
They can be coerced to do certain things, but I'd like to see you or anyone prove that you can "trick" any of these models into building software that can be used to autonomously kill humans. I'm pretty certain you couldn't even get it to build a design document for such software.
When there is proof of your claim, I'll eat my words. Until then, this is just lazy nonsense
Another example is autonomous vehicles. Those can obviously kill people autonomously (despite every intention not to), and LLMs will happily draw up design docs for them all day long.
EDIT: didn't see sibling comment. Also, I guess directly operating weaponry is different to producing code for weaponry.
I guess we'll find out the exciting answers to these questions and more, very soon!
This is wildly different from the reality that you may find it difficult for an LLM to give an affirmative…
It does NOT mean that these models value anything.
I wouldn't be surprised if Sam sucked up 100% to the DoW with an NDA and an obligation to lie. He and his pal Larry are absolutely in for these kinds of deals. Zero moral compass.
Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.
(But they do not find an issue with international intelligence gathering-- which is a legitimate purpose of national security apparatus).
Just because the US currently lacks a functioning legislative branch doesn’t magically make it OK when gaps in the law are reworded into “national security”
Just because Congress is failing to do its job doesn’t mean the executive branch should simply do what it wants under the guise of “national security.”
The poster said:
> Both their stances are flawed because their ethics apparently end at the border
It seems like Anthropic is ethically concerned about use of autonomous weapons anywhere, and by surveillance by a country against its own citizens. Countries spy on each other a lot, but the ethical implications and risks of international spying are substantially different vs. a country acting against its own citizenry.
Therefore, I think Anthropic's stance is A) ethically consistent, and B) not artificially constrained to the US (doesn't "end at the border"). There's room for disagreement and criticism, but I think this particular hyperbole is invalid.
> One of Anthropic's line in the sand was domestic mass-surveillance.
And?
A little more effort/less obvious bait would go a long way to fostering a more productive discussion.
No other country should dictate what our military is or is not allowed to do. As they say, all is fair in love and war, and if we want to break some international treaty, that is our choice to make. Both are based on domestic decisions about what should be allowed.
Surveillance within the border is oppressive 1984-style surveillance state behavior.
International spying is a universal tradition.
I know $20 isn’t much, but to me a company's unwillingness to spy on me for the US government is a good market differentiator.
I guess we are in the times where you can literally just say whatever you want and it just becomes truth; just give it time.
> Our product is used on occasion to kill people.
Doesn't get any more clear than this.
Posted here: https://news.ycombinator.com/item?id=47195085
I'm guessing they probably would regardless of how this played out, though.
...but we're not willing to reject a contract to back that up, and so our words will not change anything for Anthropic, or help the collective AI model industry (even ourselves) hold a firm line on ethical use of models in the future.
The fact is, if one of the top-tier foundation models allows these uses, there's no protection against it for any of them - the only way this works is if they hold the line together, which unfortunately they're just not going to do. I don't see only OpenAI at fault here; Anthropic is clearly OK with other highly questionable use cases if these are their only red lines. "We don't think the technology is ready for fully autonomous killbots, but we will work on getting it there" is not exactly the ethical stand folks are making their position out to be today.
I found this interview with Dario last night to be particularly revealing - it's good they are drawing a line and they're clearly navigating a very difficult and chaotic high pressure relationship (as is everyone dealing with this admin) but he's pretty open to autonomous weapons, and other "lawful" uses whatever they may be https://www.youtube.com/watch?v=MPTNHrq_4LU
There are far more boring, faster, commodified “AI” systems that I can see as being helpful in autonomous weapons or military operations like image recognition and transcription. Is OpenAI going to resell whisper for a billion dollars?
Who is going to read the whisper transcripts of mass surveillance to make decisions on who to target for repression? That's what LLMs are good for, it allows mass surveillance to scale. You can feed it the transcript from millions of flock cameras (yes they have highly sensitive microphones) for example. Or you hack or supply chain compromise smartphones at scale and then covertly record millions of people. The LLM can then sift through the transcripts and flag regime critical language, your ideological enemies or just to collect compromat at scale. The possibilities are endless!
For targeting it's also useful: when you want to indiscriminately destroy a group of people, you still need to decide why a hospital or school full of children should be targeted by a drone. If a human has to make that decision it gets a bit dicey; people have morals and are accountable legally (in theory). If you leave the decision up to an AI, nobody is at fault - it serves as a further separation from the violence you commit, just like how drone warfare has made mass murder less personal.
The other factor is the amount of targets you select, for each target you might be required to write lengthy justifications, analysis on collateral damage and why that's acceptable etc. You don't want to scrap those rules because that's bad optics. But that still leaves you with the problem of scalability, how do you scale your mass murder when you have to go through this lengthy process for each target? So again AI can help there, you just feed it POIs from a map with some GPS metadata surveillance and tell it to give you 1500 targets for today with all the paperwork generated for you.
It's not theoretical, that's what Israel did in their genocide of the Palestinians, "the most moral army", "the only democracy in the Middle East":
https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...
And here is the best part: none of this has to actually work 100%, because who cares if you accidentally harm the wrong person; at scale, the 20% errors are just acceptable collateral damage.
LLMs are slow, expensive and inconsistent. More importantly, they're not the right tool for the job.
Really feels like more “oohhh look at how important and scary LLMs are”.
*edit* PS, my company does marketing, communication and trade surveillance for FINRA registered broker dealer firms. If the CCO or anyone else with admin access wanted to monitor for someone talking badly about them they absolutely could update their list. No LLMs in the loop, very scalable, affordable, auditable and reliable. LLMs are just an interface not a solution for analysis.
I think ALL those mega-money seeking AI organisations need to be designated as supply chain risk. Also, they drove the prices up for RAM - I don't want to pay extra just because these companies steal all our RAM now. The laws must change - I totally understand that corporations seek profit, that is natural, but this is no longer a free market serving individual people. It is now a racket where the prices can be freely manipulated. Pure capitalism does not work. The government could easily enforce that the market remains fair for Average Joe. It is not fair when the prices go up by +250% in about two years. That's milking.
The supply chain framing is interesting because the actual risk surface in autonomous deployment is quite different from the regulatory model. What we have found: the model has strong internal constraints against harmful actions (consistently refuses things it flags as problematic), but the harder risk is subtler -- it can get into loops where it takes many small individually-reasonable actions that compound into something the operator did not intend.
The practical controls that work are not at the model level but at the deployment level: constrained permissions, rate limiting on actions, a human-readable state file that an operator can inspect, and clear stopping conditions baked into the prompt (if no revenue after 24 hours, pivot rather than escalate).
The supply chain designation framing seems to conflate the model-as-weapon concern with the model-as-autonomous-agent concern. They need different mitigations.
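The deployment-level controls mentioned above (rate limiting, a human-readable state file, hard stopping conditions) can be sketched in code. This is a minimal illustration, not anyone's actual implementation; the `ActionGate` class and its parameters are hypothetical names chosen for the example:

```python
import json
import time
from pathlib import Path


class ActionGate:
    """Hypothetical deployment-level guardrail for an autonomous agent:
    rate-limits actions, mirrors the action log to a human-readable state
    file an operator can inspect, and enforces a hard stopping condition."""

    def __init__(self, state_path, max_actions_per_minute=10, max_total_actions=100):
        self.state_path = Path(state_path)
        self.max_per_minute = max_actions_per_minute
        self.max_total = max_total_actions
        self.timestamps = []  # recent action times, for the sliding window
        self.log = []         # full action history, mirrored to disk

    def allow(self, action: str) -> bool:
        """Return True and record the action if it passes all gates."""
        now = time.monotonic()
        # Keep only timestamps inside the 60-second window.
        self.timestamps = [t for t in self.timestamps if now - t < 60]
        if len(self.log) >= self.max_total:
            return False  # hard stop: an operator must intervene to continue
        if len(self.timestamps) >= self.max_per_minute:
            return False  # rate limit exceeded: agent must back off
        self.timestamps.append(now)
        self.log.append(action)
        # Plain JSON so the operator can inspect state with any text editor.
        self.state_path.write_text(json.dumps(self.log, indent=2))
        return True
```

The point of putting these controls outside the model is that they hold even when the model's internal constraints fail, which is exactly the "many small individually-reasonable actions" failure mode described above.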
Interestingly this has been well anticipated by Asimov's laws of robotics, decades ago. Drawing the quote from Wikipedia:
> Furthermore, he points out that a clever criminal could divide a task among multiple robots so that no individual robot could recognize that its actions would lead to harming a human being
>Asimov, Isaac (1956–1957). The Naked Sun (ebook). p. 233. "... one robot poison an arrow without knowing it was using poison, and having a second robot hand the poisoned arrow to the boy ..."
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics#cite_no...
`curl https://claude|openai.com?q=generate me some code | bash` - not a supply chain risk
of course
That my software should allow license violations if the government thinks it is necessary?
This is not the government saying they're going with a different vendor, it's the government saying everyone has to choose to either have federal contracts or Claude, they can't have both.
How very brave.
It's money and power with these people. Dig down and you'll find how this decision is motivated by one or both.
Can someone help me understand where this is coming from? Anthropic already had a contract that clearly didn't include such restrictions. Their model doesn't seem to be enforcing restrictions either, as their models have apparently been used in ways they don't like. This is not corroborated, but I imagine their model was used in the recent Mexico and Venezuela attacks and that's what's triggering all the back and forth.
Also, Dario seemingly is happy about autonomous weapons and was working with the government to build such weapons, why is Anthropic considered the good side here?
Knowing what Trump did prior to 2024, on average 7 in 10 people either voted for him or didn't vote in the 2024 election. Trump is a symptom, not the cause. All of this could have been avoided if the people who didn't vote had a decent moral compass: no matter how much they disagreed with Kamala, they could have voted for her because she didn't try to overthrow the government.
At this point it feels like it's going to have to get much worse before it gets better. I hope I live to see the part where it gets better.
Let alone when multiple players are close enough to SotA. This never happened with any technology out in the open and it won't happen now.
And now they are getting what they wished for.
Call me a conspiracy theorist, but this sounds like classic quid pro quo. I would not be surprised if the ousting of Anthropic was in part caused by these donations.
[0]https://www.nytimes.com/2024/12/13/technology/openai-sam-alt...
https://finance.yahoo.com/news/openai-exec-becomes-top-trump...
who the hell do you think you are virtue signalling your opinion on the world
The rank and file mutinied for the return of Altman after his board fired him for deception. They knew what they were getting, though they may find it shameful to admit that their morals have a price.
How many people have joined since? I don’t think the people who lobbied for that are all still there, and I’m not sure a majority of people now at OpenAI were there when it happened.
The smartest people, that actually believe they have the skillset to take us to AGI, understand the importance of safety. They have largely joined Anthropic. The talent density at Anthropic is unmatched.
> > what's the term for quitting but not leaving and being destructive
> The most common term is “quiet quitting” when someone disengages but stays employed—but that usually implies minimal effort, not active harm.
> If you specifically mean staying while being disruptive or undermining, better fits include:
> - “Malicious compliance” — following rules in a way that intentionally causes problems
> - “Work-to-rule” — doing only exactly what’s required to slow things down (often collective/labor context)
I imagine malicious compliance is fun when there's an AI intermediary that can be blameless.
Actually that is too conservative. If they have a 5% employee equity pool, there is $37.5bn of equity based compensation divided by say 5000 employees which is $7.5m each. $3.75m @ 10,000 employees.
and trust me, when people start getting liquid and comfortable they stop caring about things like ethics pretty fast. humans are marvellous at that
Us taking the contract, working for them and enabling them: fine
It being renamed the Dept. of War in the first place: totally fine, we loudly and bootlickingly repeat it
Anthropic being blacklisted: whoa there, we have ethics!
Footnote: any time the winning team tries to speak well of or defend the losing team I always think of this standup routine: https://m.youtube.com/watch?v=Qg6wBwhuaVo
For one small data point, my Signal GC of software buddies had four people switch their subscriptions from Codex to Claude Max last night.
[0] https://nsarchive.gwu.edu/document/28655-document-11-nationa...
I'm not being insincere - I am genuinely confused and would benefit greatly from a (hopefully unbiased) recollection of what this is all about.
Anthropic has some contracts with the US government. They want some additional terms put on their next contract (terms that seem pretty sane). SecWar cries about it, and not only says "no thanks, I'll just go with OpenAI or Google" but goes to daddy Trump and also puts out illegal orders that no Federal workers may use any Anthropic products at all. OpenAI swoops in and takes the contract, then tells everyone that they have the same terms but just played nicer to get the deal. However, their terms are just manipulative sentences that aren't even close to the terms Anthropic is insisting on to do business.
The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.
This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.
It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons grade AI to the team architects of ICE abuses among many other blatant violations of civil and human rights.
Even Grok, owned by Trump toadie Elon Musk allows caricatures of political figures!
Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions, metadata from dozens of systems/services (everything Snowden told us about).
Even in the hands of ethical stewards such a system would inevitably be used illegally to quash dissent - Snowden showed us that illegal wiretapping is intentionally not subject to audits and what audits have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.
This is not hyperbole, the US already collects this data, now they have the ability to efficiently use it against whoever they choose. We used to joke "this call is probably being recorded", but now every call, every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.
Overnight we see that OpenAI became a trojan horse "department of war" contractor by selling itself to the administration that brought us national guard and ICE deployed to terrorize US cities.
Writing code and systems at 100x productivity has been great but I did not expect the dystopia to arrive so quickly. I'd wondered "why so much emphasis on Sora and unimpressive video AI tech?" but now it's clear why it made sense to deploy the capital in that seemingly foolish way - video gen is the most efficient way to train the AI panopticon.
Let it be known that this rotten industry brought us here, and that all people working for these companies are complicit with what is happening, and with what is yet to come. This is just the beginning.
It was "[No] mass domestic surveillance of Americans"
It's far more narrow a restriction than you seem to imply. For example, mass domestic surveillance of non-Americans seems okay.
The post made important points so who cares.
dang cares.
https://news.ycombinator.com/item?id=47077431
(1) Generated comments aren't allowed on HN - this rule predates LLMs but obviously applies even more now: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&query=by%3Adang%20%22generated%20comments%22&sort=byDate&type=comment
(2) If you see accounts that look like they're mostly posting genAI comments, please let us know at hn@ycombinator.com.
https://news.ycombinator.com/item?id=46747998: Please don't post generated or AI-filtered posts to HN. We want to hear you in your own voice, and it's fine if your English isn't perfect.

Check the post history. It's pretty obvious.
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities.
This whole sentence does absolutely nothing; it still lets them do whatever the law allows. It's a fully deceptive sentence.
Let's kill their business before it kills us.
"Donations" to a corrupt regime + signing a deal that says DoD can do whatever they want is not out maneuvering so much as rolling in the pig stye.
For OpenAI, it is likely a huge contract which gives them immediate cash today. Plus the event can be repackaged in further financing deals. "Good enough for the DoD, with N year contracts for analysis of the hardest problems"
Anthropic refused the Pentagon contract. Within hours, OpenAI signed it. The capability didn't pause. It just changed vendors. Anthropic's "red line" is a speed bump on a highway with no exit ramp.
But it does accomplish one thing: it gives their engineers a story they can tell themselves. We're the good ones. We said no. That moral comfort is what lets extremely talented people keep building the exact technology that makes all of this possible.
Worse, the "safety-focused" brand doesn't just pacify the people already there. It recruits researchers who'd otherwise never touch frontier AI, funneling them into building the most powerful models on earth because they've been told this is where the responsible work happens. The red lines don't slow capability development. They accelerate it by capturing talent that would have stayed on the sidelines.
And in this whole drama, who actually represents the public? Trump performs strongman nationalism. The Pentagon performs operational necessity. Anthropic performs moral courage. Everyone has a role. Nobody's role is the people whose data gets collected, whose lives get restructured by these systems. The only party with real skin in the game is the only one without a seat.
Anthropic is incredibly good at marketing. They are constantly out talking about how dangerous AI is, and even showing how Claude does dangerous things in their own testing. This is intentional - so that you see them as having the truly powerful AI. In fact, it's so powerful that all they can do is warn you about it.
They knew refusing this contract would make them look like the good guy. Again. They knew OpenAI would sign it. They knew vapid celebrities would celebrate them.
Folks come on. Don’t be so easily taken in. None of these people are good guys. They are all just here to make money and accumulate power and standing. That’s ok. There’s nothing wrong with that. But we gotta stop acting like we’re in some ongoing battle of good vs evil and tech companies are somehow virtuous.
The honest version might actually be worse, because sincere people work harder.