[1]: https://www.wired.com/story/openai-staff-walk-protest-sam-al...
I think what you are missing is their annual comp with two commas in it.
https://calebhearth.com/dont-get-distracted
Don't get distracted
You underestimate how many top AI scientists are perfectly okay with building autonomous weapons systems and are not ashamed of it.
Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
Now note that many L7+ at OpenAI are making $10 million+ per year.
I sincerely doubt that's true. I hope it's not. $1m is a lot of money, but I find it hard to believe most people would be willing to indiscriminately kill a large number of people for it.
> Me, and 99% of HN readers, will gladly pull the trigger to release a missile from a drone if we are paid even just US$1,000,000/year.
I will respond with a personal, related story. I was living in Hong Kong when "democracy fell" in the late 2010s / early 2020s. It was depressing, and I wanted to leave. (I did later.) I was trying to explain to my parents (and relatives) why most highly skilled foreign workers just didn't care. I said: "Imagine you told a bunch of people in 1984 that they could move to Moscow to open a local office for a wealthy international corporation and get paid big money, like 500K+ in today's dollars. Fat expat package is included. How many people would take it? Most."

Another point, completely unrelated to my previous story: since the advent of pretty good LLMs starting in 2023, when I watch films with warfare set in the future, it makes absolutely no sense that soldiers are still manually aiming. I'm not saying it will be like Terminator 2 right away, but surely the 19-22 year old operator will just point the weapon in the general direction of the target, and AI will handle the rest. And yet, we still see people manually aiming and shooting in these scenarios. Am I the only one who cringes when I see this? There is something uncanny valley about it, like seeing a character in a film using a flip phone post-2015! Maybe directors don't want to show us the ugly truth of the future of warfare.
Other universes take it further - Warhammer 40k often features combatants fighting with melee weapons. Rule of cool and all that.
One of the few works that at least attempts to get this right is the Culture series where it's remarked on several different occasions that anything over some threshold of computing power has AGI built into it (but don't worry you're totally free, just ignore the hall monitor in all of your devices).
A better way to say it is that you can always find a cheap sellout; at least then the morally damned cannot claim equality of belief.
I think those are not really comparable to OpenAI employees who leave, but that only underlines your point more:
Leaving OpenAI is not like death. In fact, most of the employees will have an easy time finding a new job, given the résumé value of having worked at OpenAI. It is nowhere near actual martyrdom.
Shit, I wonder if I still have any of those ‘tres commas club’ t-shirts lying around?
Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.
One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.
That isn't what many of us are challenging here. We're not concerned about OpenAI's ethics because they agreed to work with the government after Anthropic was mistreated.
We're skeptical because it seems unlikely that those restrictions were such a third rail for the government that Anthropic got sanctioned for asking for them, but then the government immediately turned around and voluntarily gave those same restrictions to OpenAI. It's just tough to believe the government would concede so much ground on this deal so quickly. It's easier to believe that one company was willing to agree to a deal that the other company wasn't.
Well… TACO.
But we all know how desperate OpenAI is for money. It's the weakest link in the bubble, quite frankly: burning billions, a failure with Sora, and without much of an economic moat either.
The DOD giving them billions for a deal feels like a huge carrot on a stick, and a wink-wink (let's have autonomous killing machines) that invites the skepticism you, I, or most of this community would share.
For what it's worth, I don't appreciate Anthropic as a whole (I still remember the week-old thread where everyone pushed back on Anthropic for trying to see user data through the API during the whole Chinese-models thing), but I give credit where it's due. The enemy of my enemy is my friend, and at the moment it seems that OpenAI might be friendlier than Anthropic to a DOD that wishes to create autonomous killing machines and mass surveillance systems, which is sci-fi-level dystopia.
Until they volunteer evidence that the deal is being misdescribed or that it won't be enforced, you can honestly say that you haven't seen any. What a convenient position!
You're conflating the Trump administration and their fascist tendencies with all US government. You want to work for fascists if you get paid well enough. You can admit that on here.
"we will comply with US law" The problem is, the US government does not actually comply with US law.
My point was that I don't think money (whether from VCs or from taking jobs from other massive AI employers) should be as important an issue to them, at least IMO.
1. Department of War broadly uses Anthropic for general purposes
2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons
3. Anthropic disagrees and it escalates
4. Anthropic goes public criticizing the whole Department of War
5. Trump sees a political reason to make an example of Anthropic and bans them
6. The entirety of the Department of War now has no AI for anything
7. Department of War makes agreement with another organization
If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or if it was seen as an unproven use case of unknown value compared to the more proven value from the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, and 2) now unable to do so with Anthropic specifically because of the political kerfuffle.
I imagine they'd rather not compromise, but if none of the AI companies are going to offer it to them, there's only so much you can do as a short-term strategy.
Like, they haven't paid me a bribe? That seems to be the only "politics" at play in Trump's head.
But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.
One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.
Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.
This one is very easy. Trump has a well established pattern of making a loud statement to make it appear he didn't lose, even when he did.
https://x.com/sama/status/1876780763653263770
If so, I believe the lawsuit is still going on. I'm personally withholding judgment on him on this matter since I don't know the details.
But it's easy to criticize and judge him on other stuff he's said in public.
openai can deploy safety systems of their own making
from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model the military needs a legal opinion before they can use the tool, or they might misuse it by accident
this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model
- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.
- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.
- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities
Speaking to people's better angels as if it has a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.
I have two qualms with this deal.
First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.
Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.
Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.
[0] https://x.com/sama/status/2027578652477821175
[1] https://x.com/UnderSecretaryF/status/2027594072811098230
Government: "Anthropic, let us do whatever we want"
Anthropic: "We have some minimal conditions."
Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"
OpenAI: "Uh well I guess I should ask for those conditions"
Government: blasts Anthropic into the sun "Sure whatever, those conditions are okay...for now."
By taking the deal with the DoW, OpenAI accepts that they can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed?
It looks like Anthropic wanted to be able to verify the terms of their own volition, whereas OpenAI was fine with letting the government police itself.
From the DoD's perspective, they don't want a situation where a target is being tracked and then the screen goes black because an Anthropic committee decided this is out of bounds.
So the government's stance is: "We already have laws and procedures in place; we don't want, and can't have, a CEO also be part of those checks."
I don't think this outcome would have been any different under a normal blue government either. Definitely with less mud slinging though.
Government's not free to say, "We'll blow up your business with a false accusation if you don't give us the terms we want (and then use defence production act to commandeer the product anyway)". How much more blatantly authoritarian does it get than that?
While I don't live in the US, I could imagine the US government arguing that third party doctrine[0] means that aggregation and bulk-analysis of say; phone record metadata is "lawful use" in that it isn't /technically/ unlawful, although it would be unethical.
Another avenue might also be purchasing data from ad brokers for mass-analysis with LLMs which was written about in Byron Tau's Means of Control[1]
[0] https://en.wikipedia.org/wiki/Third-party_doctrine
[1] https://www.penguinrandomhouse.com/books/706321/means-of-con...
DoD is now trying to strongarm Anthropic into changing the deal that they already signed!
`yes | killbot -model openai`

I'm not accusing the above commenter of deception; I'm merely saying reasonable people are skeptical. There are classic game theory approaches to addressing cooperation failure modes. We have to use them. Apologies if this seems cryptic; I'm trying to be brief. If it doesn't make sense, just ask.
I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.
But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or any point in the future.
To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)
Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.
Agreed, the moral stance is saying no to the DoD and the US government.
In that case, what on earth just happened?
The government was so intent on amending the Anthropic deal to allow 'all lawful use', at the government's sole discretion, that it is now pretty much trying to destroy Anthropic in retaliation for refusing this. Now, almost immediately, the government has entered into a deal with OpenAI that apparently disallows the two use cases that were the main sticking points for Anthropic.
Do you not see something very, very wrong with this picture?
At the very least, OpenAI is clearly signaling to the government that it can steamroll OpenAI on these issues whenever it wants to. Or do you believe OpenAI will stand firm, even having seen what happened to Anthropic (and immediately moved in to profit from it)?
> and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples)
If OpenAI leadership sincerely wanted this, they just squandered the best chance they could ever have had to make it happen! Actual solidarity with Anthropic could have had a huge impact.
Hegseth's tweet strongly alluded to this, and the general terms of the agreement are not public; only the hot-button ones are.
The two things Anthropic refused to allow are mass surveillance and autonomous weapons, so why do _you_ think OpenAI refused them too and still did not get placed on the exact same list?
It's fine to say "I'm not going to resign. I didn't even sign that letter," but thinking that OpenAI can get away with not developing autonomous weapons or mass surveillance is naive at the very best.
Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to, the Patriot Act and others see to that.
Anthropic was making the limits contractually explicit, meaning the executive branch could change the line of lawfulness and still couldn't use Anthropic models for mass surveillance. That is where they got into a fight and that is where OpenAI and others can claim today that they still got the same agreement Anthropic wanted.
You, and your colleagues, should resign.
It would be better if everyone stopped doing business with OpenAI so these employees lose their stock value.
But of course neither of these things will happen.
Obviously nothing is going to make Teddy quit his cushy OpenAI job.
So, can you please draw the line when you will quit?
- If the OpenAI deal allows domestic mass surveillance
- If OpenAI allows the development of autonomous weapons
- If OpenAI no longer asks for the same terms for other AI companies
Correct?
If so, then if I take your words at face value:
- By your reading non-domestic mass surveillance is fine
- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved
- The day that OpenAI asks for the same terms for other AI companies and if those terms are not granted then that's also fine, because after all, they did ask.
I have become extremely skeptical when I see people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line, but I find it fascinating nonetheless, so if you could humor me and clarify, I'd be most obliged.
Edit: I don’t work at OpenAI or in any AI business and my neck is on the chopping block if AI succeeds… like a lot of us. Don’t vilify this guy trying to do what’s right for him given the information he has.
The evidence seems to overwhelmingly point in the opposite direction.
It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.
If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.
It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?
What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?
And you believe the US government, let alone the current one will respect that? Why? Is it naïveté or do you support the current regime?
> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.
So your logic is your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven’t lied yet in this case (for lack of opportunity), you’ll wait until the harm is done and then maybe quit?
I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.
Know that when things go wrong (not if, when), the blood will be on your hands too.
There's a big difference between "the government won't use our tools for domestic surveillance" (DoW/DoD/OpenAI/etc) and "we won't allow anyone to use our tools to support domestic surveillance by the government" (Anthropic)
Hegseth and the current Trump admin are completely incompetent in execution of just about everything but competent administrations (of both parties) have been playing this game for a long time and it's already a lost cause.
What is your red line?
I don't mean this in any way to be rude, and I apologize if it comes across as such, but believing it won't be used in exactly this way is just naive. History has taught us this lesson again and again and again.
OpenAI agrees to be put in the same position as Anthropic.
It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?
There's surely no way that's actually what you believe...
I do not know, but I would not be very optimistic about those new terms.
Someone might just create a spawn of OpenAI with a tag and do all the stuff there...
There is not much of a guarantee, I think.
Standing up for what's right often is not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.
I can't tell you what to do but I hope you make the right decision.
And the US Military is forbidden from operating on US soil, but that didn't stop this administration from deploying US Marines to California recently.
You're fooling yourself if you think this administration is following any kind of rule.
You're being purposefully naive if you trust any government, and especially this government, to behave legally or ethically.
Y’all are developing amazing technology. But accept reality and drop whatever sense of moral righteousness you’re carrying here. Not because some asshole on the internet says so, but for your own mental health.
I think it's wrong to ask someone to resign, but acting as if there is no issue here is debating in bad faith.
Or Sam bribed the government to do this, which is also entirely possible.
If you think that means your company isn't going to be involved in lethal autonomous weapons and mass domestic surveillance... I don't really know what to tell you. I doubt you really believe that. Obviously you will be involved in that and you are effectively working on those projects now.
> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons
Your understanding is entirely wrong. At least stop lying to yourself and admit that you are entirely fine with working on evil things if you are paid enough.
Is it really worth the long-term risk being associated with Sam Altman when the other firms would willingly take you and probably give you a pay bump to boot?
It doesn't make sense to me why anyone would want to associate themselves with Altman. He is universally distrusted. No one believes anything he says. It's insane to work with a person who PG, Ilya, Murati, Musk have all designated a liar and just general creep.
Defending him or the firm's actions instantly makes you look terrible, like you'll gladly accept the "elites vs. UBI recipients" future his vision propagates.
Shame on you people. What a disgusting vision.
One got characterized as a supply chain risk; so much for OpenAI getting the same treatment.
And even that being said, I could be wrong, but as I remember it, OpenAI and every other company had basically accepted all uses; it was only Anthropic which said no to these two demands.
And I think this whole scenario only became public because Anthropic refused; the deal could have been done quietly if Anthropic had wanted.
So OpenAI taking the deal now doesn't change the fact that, to me, it looks like they can always walk it back, and the optics are horrendous for OpenAI, so I am curious what you think.
What I am thinking, on the other hand, is: why would OpenAI come out and say, "hey guys, yeah, we are going to feed autonomous killing machines"? Of course they are going to try to keep it secret right before their IPO. You're an employee, and you mention walking out of OpenAI, but with the current optics it seems that you and other OpenAI employees are more willing to stay because the evidence isn't out yet. To me, as others have pointed out, it looks like slowly boiling the frog.
OpenAI gets to have its cake and eat it too, but I don't think there's a free lunch. I simply don't understand why the DOD would make such a fuss about Anthropic's terms being outrageous and then sign the same deal with the same terms with OpenAI, unless there's a catch. Only time will tell how wrong or right I am.
If I may ask, how transparent is OpenAI from an employee's perspective? Just out of curiosity: would you as an employee be informed if OpenAI's top leadership (Sam?) decided that the deal gets changed and the DOD gets to have autonomous killing machines? Would you, or the general public, learn about it if the deal is done through secret back channels? Snowden showed that a lot of secret court deals were not made available to the public until he blew the whistle, but not everything gets whistleblown, so I am genuinely curious to hear your thoughts.
In my mind the only people left are those who are there for the stocks.
But they did.
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
You learned this where?
You should have said this.
> https://x.com/UnderSecretaryF/status/2027594072811098230
Thank you.
Who decides these weighty questions? Approach (1), accepted by OAI, references laws and thus appropriately vests those questions in our democratic system. Approach (2) unacceptably vests those questions in a single unaccountable CEO who would usurp sovereign control of our most sensitive systems.
Amodei is the type of person who thinks he can tell the US government what they can and can’t do.
And the US government should have precisely none of that, regardless of whether they’re red or blue.
I don't think that's the case. Amodei is worried that AI is extraordinarily capable, and our current system of checks and balances is not adequate yet to set the proper constraints so the law is correctly enforced. Here's an excerpt from his statement [1]:
> Powerful AI makes it possible to assemble this scattered, individually innocuous data into a comprehensive picture of any person’s life—automatically and at massive scale.
Let's do this thought exercise: how long would it take you, using Claude Code, to write some code to crawl the internet and find all the postings of the HN user nandomrumber under all their names on various social media, and create a profile with the top 10 ways that user can be legally harassed? Of course, Claude would refuse to do this, because of its guardrails, but what if Claude didn't refuse?

[1] https://www.anthropic.com/news/statement-department-of-war
You see, Obama droned more combatants than anyone else before or after him, but he always left a legal paper trail and went by the book (except perhaps in some cases; search for Anwar al-Awlaki).
One can argue whether the rules and laws (secret courts, proceedings, asymmetries in court processes that severely compress civil liberties… to the point they might violate other constitutional rights) are legitimate, but he operated within the limits of the law.
You folks just blurt "me ne frego" ("I don't care") like a random Mussolini and think you're being patriotic.
SMH
> And the US government should have precisely none of that, regardless of whether they’re red or blue.
This is a pretty hot take. "You can't break the law and kill people or do mass surveillance with our technology." Fuck that, the government should break whatever laws and kill whomever they please?
I hope you A: aren't a U.S. citizen, and B: don't vote.
If I'm selling widgets to the government and come to find out they are using those widgets unconstitutionally and to violate my neighbors' rights, you can be damn sure I'm going to stop selling the government my widgets. Amodei said that Anthropic was willing to step away if they and the government couldn't come to terms; instead of acting like adults and letting them, the government decided to double down on being the dumbest people in the room, act like toddlers, and throw a massive fit about the whole thing.
No. Altman said human responsibility. Anthropic said human in the loop.
> And Sam’s wording all but confirms that OpenAI’s agreement defers to DoD policies and laws (which a defense contract cannot prescribe), and effectively only pays lip service to the two exclusions.
All but confirmed was not confirmed.
To your second comment, it was clear enough to me to be the most plausible reading of the situation by far.
We state what we think the situation is all the time, without explicitly writing “I think the situation is…”.
>A defense official said the Pentagon’s technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month: If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic’s Claude AI system to help shoot it down?
>It’s the kind of situation where technological might and speed could be critical to detection and counterstrike, with the time to make a decision measured in minutes and seconds. Anthropic chief executive Dario Amodei’s answer rankled the Pentagon, according to the official, who characterized the CEO’s reply as: You could call us and we’d work it out.
>An Anthropic spokesperson denied Amodei gave that response, calling the account “patently false,” and saying the company has agreed to allow Claude to be used for missile defense. But officials have cited this and another incident involving Claude’s use in the capture of Venezuelan leader Nicolás Maduro as flashpoints in a spiraling standoff between the company and the Pentagon in recent days. The meeting was previously reported by Semafor.
I have a hunch that Anthropic interpreted this question to be on the dimension of authority, when the Pentagon was very likely asking about capability, and they then followed up to clarify that for missile defense they would, I guess, allow an exception. I get the (at times overwhelming) skepticism that people have about these tools and this administration but this is not a reasonable position to hold, even if Anthropic held it accidentally because they initially misunderstood what they were being asked.
https://web.archive.org/web/20260227182412/https://www.washi...
Seems not unlikely that Anthropic was manipulated into this position for purposes of invalidating their contract.
Missile detection and decision to make a (nuclear) counterstrike are 2 different things to me but apparently the department of war wants both, so it seems not "just" about missile detection.
Anthropic, with its current war chest, is supposedly employing lawyers who are misunderstanding the Department of War? That is considered the likelier possibility, am I understanding this correctly?
I'm sorry but lol
What a joke. I suggest folks read up on the very poor performance of US ICBM interceptor systems. They're barely a coin flip, in ideal conditions. How is Claude going to help with that? Push the launch interceptor button faster? Maybe Claude can help design a better system, but it's not turning our existing poor systems into super capable systems by simply adding AI.
Probably also got assurances about a bailout when OpenAI collapses.
My bet is that what the DoW wants is pretty clearly tied to mass surveillance and kill-bots. Altman is a snake.
It's like the one honest thing they've done
But the executive-order-driven name change is just another bit of illegal/extra-legal/paralegal behavior by the administration that, every time we just nod along, eats away at the constitutional structure of our government. So don't go along with it.
"Changing it back" is completely ahistoric.
Or perhaps, maybe, just a little maybe, DoW is getting absolutely excited about mass surveillance and kill-bots?
I didn't have much of an opinion of Altman before but now I think he's a grifting douche.
And they are crossing the picket line, which honestly I was sure they would do, though I did expect it to take a bit longer.
This is too transparent even for sama.
This is going to end up being interpreted as "well, the president signed off on the operation. See - there's a human in the loop!" - isn't it?
You could recoup your investment in a year by collecting toll. Expedited financing available on good credit!
Well, some may voluntarily leave, some will be actively poached (by Anthropic, perhaps), and some, I suppose, will stay in their jobs because leaving isn't an easy decision to make.
Anyone who chooses to stay shouldn’t have signed the letter. What’s the point of doing it if you’re not going to follow through? If you signed the letter and don’t leave after the demands aren’t met, you’re a liar and a coward and are actively harming every signatory of every future letter.
https://www.theguardian.com/world/2026/feb/21/tumbler-ridge-...
Have we been watching the same Trump admin for the last year? That sounds exactly like something the government would do: pointlessly throw a fit and end up signing a worse deal after blowing all its political capital.
Sometimes money is more attractive than morality. So I guess money is the answer here.
Do you mean the same OpenAI that has a retired U.S. Army General & former director of the NSA (Gen. Nakasone) serving on its board of directors?
So they used Anthropic’s own words to cover a power play, or pulled on relationships to see if they could get Anthropic to balk at it.
um, easy -- everyone has a price. Some of the most highly-paid workers on the planet work there.
Pay me $5M/yr and there are a LOT of things I wouldn't do for $300k.
They'd create the autonomous military robots themselves for that check.
The morals were just there while it was easy virtue signaling.
Same for almost all Google, Facebook, etc. Prove me wrong, please.
Honestly, the best thing that could happen is for someone to come up with a new UI (think claw...like) that everyone starts using instead. A very cute, well-integrated system that just works for everyone, has a free tier, and has something the others don't have.
> Do you expect that to work?
Many years ago, Tim O'Reilly (of book publishing fame) knew Apple would one day become really big, even though they were a small, niche player in the "PC" space at the time (2000s). How did he know? By watching what the 'alpha geeks' were doing: the folks who didn't just use tech, but worked at companies that were inventing the future. They were the ones whom friends and family asked for advice. And the alpha geeks (at the time) were switching to Mac OS X and telling their friends and family about it.
* https://www.oreilly.com/tim/archives/rationaledge_interview....
* https://www.wired.com/2006/05/tim-says-watch-alpha-geeks/
There's a good chance that if you're on HN, you're the person in your non-techie social group that many others ask for advice. You can potentially sway many people by your example and your advice.
There is more to this story behind the scenes. The government wanted to show power and control over our companies and industries. They didn’t need those terms for any specific utility, they wanted to fight “woke” business that stood up to them.
Supposedly OpenAI had the same terms as Anthropic (according to SamA). Maybe they offered it cheaper and that’s why they agreed. Maybe it’s all the lobbying money from OpenAI that let the government look the other way. Maybe it’s all the PR announcements SamA and Trump do together.
"We put them into our agreement" is strange framing in Altman's tweet. It makes me think the agreement does mention the principles, but doesn't state them as binding rules the DoD must follow.
I don't necessarily think he's lying, but there's so much obvious incentive for him to lie here (if only because his employees can save face).
https://www.stilldrinking.org/stop-talking-to-technology-exe...
He doesn't even need to be lying, the comment is vague and contains enough loopholes that it could be true yet meaningless. I explained some that I noticed here: https://news.ycombinator.com/item?id=47190163
He said human responsibility. Anthropic said human in the loop.
And Anthropic reportedly refused to say that any lawful purpose would be allowed.
But regardless of the moral implications, will this improve America’s position on the global stage or further undermine it?
I can also interpret this as Sam and the administration supporting accelerationism while Dario is more measured and wishes to slow things down.
Ultimately, I don't know how much the specific reasons matter. Pete Hegseth must be removed from office, OpenAI must be destroyed for their betrayal of the US public, that's all there is to it.
2) Trump’s son in law (Kushner) has most of his net worth wrapped up in OpenAI.
If true (too lazy to check, but I'll honestly take your word for it), this should probably be bigger news. Not because outright corruption at the highest level of the US Government constitutes news anymore, but because it puts the Government's fight against Anthropic (and presumably other potential OpenAI competitors) in a new light.