> it's very hard to see how anyone could look at what just happened

I think what you are missing is their annual comp with two commas in it.

reply
When the genius of Upton Sinclair and Russ Hanneman comes together so eloquently.
reply
This. For that check they'll be building the autonomous robots themselves, saying "they're food delivery robots, that's not a gun, that's a drink dispenser!"
reply
For today's lucky ten thousand, this essay was previously featured on HN

https://calebhearth.com/dont-get-distracted

Don't get distracted

reply
Back in 1960, US early-detection systems mistook the moon for a massive nuclear first strike with 99.9% certainty. With a fully autonomous system, the world would have burned.
reply
> they're food delivery robots, that's not a gun, that's a drink dispenser!"

You underestimate how many top AI scientists are perfectly okay with building autonomous weapons systems and are not ashamed of it.

I, and 99% of HN readers, would gladly pull the trigger to release a missile from a drone if we were paid even just US$1,000,000/year.

Now note that many L7+ at OpenAI are making $10 million+ per year.

reply
> I, and 99% of HN readers, would gladly pull the trigger to release a missile from a drone if we were paid even just US$1,000,000/year.

I sincerely doubt that's true. I hope it's not. $1m is a lot of money, but I find it hard to believe most people would be willing to indiscriminately kill a large number of people for it.

reply
deleted
reply

    > I, and 99% of HN readers, would gladly pull the trigger to release a missile from a drone if we were paid even just US$1,000,000/year.
I will respond with a personal, related story. I was living in Hong Kong when "democracy fell" in the late 2010s / early 2020s. It was depressing, and I wanted to leave. (I did, later.) I was trying to explain to my parents (and relatives) why most highly skilled foreign workers just didn't care. I said: "Imagine you told a bunch of people in 1984 that they could move to Moscow to open a local office for a wealthy international corporation and get paid big money, like 500K+ in today's dollars. Fat expat package included. How many people would take it? Most."

Another point completely unrelated to my previous story: Since the advent of pretty good LLMs starting in 2023, when I watch films with warfare set in the future, it makes absolutely no sense that soldiers are still manually aiming. I'm not saying it will be like Terminator 2 right away, but surely the 19-22 year old operator will just point the weapon in the general direction of the target, and AI will handle the rest. And yet, we still see people manually aiming and shooting in these scenarios. Am I the only one who cringes when I see this? There is something uncanny valley about it, like seeing a character in a film using a flip phone post-2015! Maybe directors don't want to show us the ugly truth of the future of warfare.

reply
I don't cringe because it's for dramatic/narrative effect. It's the same reason the crew of the Enterprise regularly beam into dangerous locations rather than sending a semi-autonomous drone. Or that despite having intelligent machines their operations are often very manual, as it is on many science fiction shows. The audience (if they think about it) realises this is not realistic and understands that the vast majority of our exploration would be done by unmanned/automated vessels. But that wouldn't be very interesting.

Other universes take it further - Warhammer 40k often features combatants fighting with melee weapons. Rule of cool and all that.

reply
1,000,000 ? lol gimme 200,000 and I'm your trigger puller
reply
deleted
reply
How many?
reply
As many as are at OpenAI about a month from now.
reply
True that - everybody has a price.
reply
I mean, this is not actually true, and the statement justifies and vindicates those who do sell out by implying that of course anyone would. There are countless martyrs for religion, politics, and other causes.

A better way to put it: you can always find a cheap sellout. At least then the morally damned cannot claim that everyone would have done the same.

reply
You mean like all of the religious leaders who are actively supporting and defending a three-times-married adulterer? You’ll have to excuse my skepticism of the morality of “the moral majority”.
reply
Religion is and always has been about control… it strikes me as exceedingly naive to be surprised that the church is backing a pedophile. Have you literally ever read any history of any kind?
reply
I am the last person to be surprised at the corruption of any large organization.
reply
The world needs a nuclear war to eliminate 99% of human life and just start over.
reply
Same answer the last ten thousand edge lords who said this got: you first.
reply
but you're part of the 1%, right?
reply
Either that or a cockroach.
reply
And all the survivors die from radiation? This must be a joke
reply
Let's be real: one comma is enough for most Americans to flee their own humanity.
reply
deleted
reply
Hey, with expected stock payout - tres commas!

Shit, I wonder if I still have any of those ‘tres commas club’ t-shirts lying around?

reply
One explanation is that this is effectively a quid pro quo, given Brockman’s enormous financial support of the current president.
reply
Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.
reply
I agree with your assessment, but given the past behaviour of this administration I wouldn't be shocked to discover that the real reason is "petulance".
reply
It’s obvious retaliation, and will be struck down by the courts.
reply
Maybe, within the next 5 years.
reply
I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.
reply
Your ballooned unvested equity package is preventing you from seeing the difference between “our offering/deal is better” and “designated a supply chain risk, with all companies who do business with the government threatened to stop using Anthropic or be similarly dropped” (which goes well past what the designation permits). It’s easier being honest.
reply
The supply chain risk stuff is bogus. Anthropic is a great, trustworthy company, and no enemy of America. I genuinely root for Anthropic, because its success benefits consumers and all the charities that Anthropic employees have pledged equity toward.

Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me. I can see arguments on both sides and I acknowledge it’s probably impossible to eliminate all possible bias within myself.

One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public so that people can judge for themselves, without having to speculate about who’s being honest and who’s lying.

reply
>Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.

That isn't what many of us are challenging here. We're not concerned about OpenAI's ethics because they agreed to work with the government after Anthropic was mistreated.

We're skeptical because it seems unlikely that those restrictions were such a third rail for the government that Anthropic got sanctioned for asking for them, but then the government immediately turned around and voluntarily gave those same restrictions to OpenAI. It's just tough to believe the government would concede so much ground on this deal so quickly. It's easier to believe that one company was willing to agree to a deal that the other company wasn't.

reply
I’m skeptical because while I can totally believe that the deal presently contains restrictive language, I can totally believe that OpenAI will abandon its ethical principles to create wealth for the people who control it. Sort of like how they used to be a non-profit that was, allegedly, about creating an Open AI, and now they’re sabotaging the entire world’s supply of RAM to discourage competition to their closed, paid model.
reply
Not "asking for them"; insisting that the already-agreed-to terms be respected.
reply
> It's just tough to believe the government would concede so much ground on this deal so quickly.

Well… TACO.

reply
Exactly this. Looks like we reached the same conclusion. I really am inclined to believe that OpenAI, given that it's IPO'ing (soon?), would be absolutely decimated, and employees would be leaving left and right, if it proclaimed that, yes, OpenAI is selling the DOD autonomous killing machines.

But we all know how desperate OpenAI is for money: it's frankly the weakest link in the bubble, burning billions, having failed at Sora, and without much of an economic moat either.

The DOD giving them billions for a deal feels like a huge carrot on a stick and a wink-wink ("let's have autonomous killing machines"), deserving the skepticism that you, I, or most of this community would share.

For what it's worth, I don't like Anthropic wholesale (I still remember the week-old thread where everyone pushed back on Anthropic for trying to see user data through the API during the whole Chinese-models thing), but I give credit where it's due. The enemy of my enemy is my friend, and at the moment it seems that OpenAI is friendlier than Anthropic to a DOD that wishes to create autonomous killing machines and mass surveillance systems, which is sci-fi-level dystopia.

reply
We all know who's lying... the guy whose track record is constant lying... your boss.
reply
Ouch, but true - he is the Elon of AI.
reply
Isn’t Elon the Elon of AI?
reply
> One thing I hope we can agree on is that it would be good if the contract (or its relevant portions) is made public

Until they volunteer evidence that the deal is being misdescribed or that it won't be enforced, you can honestly say that you haven't seen any. What a convenient position!

reply
> Whether Anthropic’s clear mistreatment means that all other companies should refrain from doing business with the US government isn’t as clear to me.

You're conflating the Trump administration and their fascist tendencies with all US government. You want to work for fascists if you get paid well enough. You can admit that on here.

reply
Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.
reply
Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it”
reply
Never try to convince someone of something they're paid to not believe.
reply
OpenAI should not be agreeing to any contract with DOD under these circumstances of Anthropic being falsely labeled a supply chain risk.
reply
The problem is, the vague safeguards are not worth anything.

"We will comply with US law" - but the US government does not actually comply with US law.

reply
That’s not evidence. You’re effectively saying “trust me bro” without a shred of proof to back up your claims.
reply
deleted
reply
deleted
reply
As an OpenAI employee, quitting wouldn't be a problem, as you have a much higher chance of being successful after quitting than anyone else. You could go to any VC and they would fund you.
reply
This isn't even close to true. VCs aren't silly, and it's not the 2010-2015 days of free money any more. Having a big company on your resume is not enough to land your seed round. You need a product, traction, and real money revenue in most cases.
reply
Oh no, principles with a price... what will they think of next. Obviously principles only matter when there is a price attached.
reply
I mean, even if that's the case, Facebook was offering $100 million packages just a few months ago, even poaching from OpenAI, and I do think these employees will always have an easier time getting a decent job offer from major companies in general. They may or may not make the same money, but I do think their morals have to be priced in as well.
reply
Getting a job offer is very different to raising a funding round.
reply
Yes, I agree. I don't know the current VC market, so I'm not going to comment on that, but my point was that OpenAI employees would still be considerably well off even if they switched jobs.

In other words, I don't think money (whether from VCs or jobs at other massive AI employers) should be as important an issue to them, at least IMO.

reply
I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:

1. Department of War broadly uses Anthropic for general purposes

2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons

3. Anthropic disagrees and it escalates

4. Anthropic goes public criticizing the whole Department of War

5. Trump sees a political reason to make an example of Anthropic and bans them

6. The entirety of the Department of War now has no AI for anything

7. Department of War makes agreement with another organization

If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or if it was seen as an unproven use case of unknown value compared to the more proven value from the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, and 2) now unable to do so with Anthropic specifically because of the political kerfuffle.

I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.

reply
Well, at least we know now that the Department of War is less capable than before. All because the big man shit his pants while Anthropic was in view.
reply
>5. Trump sees a political reason

Like, they haven't paid me a bribe? That seems to be the only "politics" at play in Trump's head.

reply
Nah, they just respectfully said no to his face, which prompted him to make a big threat display and post another message with caps and exclamation marks on social media.
reply
It's all a test of loyalty, crucial for fascist regimes.
reply
That is pretty optimistic. I hope it is true, and just a misunderstanding.

But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.

reply
These people are drunk on power. They have been running around dictating things to everyone so for someone to push back is pretty novel _and_ it will inspire (I hope) other people to push back.
reply
And unless GP has a security clearance, they can't know for sure what OpenAI is allowing on classified networks.
reply
Yeah, agreed. I probably wasn't going to delete my OpenAI account (à la the link that is also being upvoted on HN); it just seemed like a hassle vs. simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with manmade horrors, I think it's probably important to vote with our feet.
reply
> while another agrees to the same terms that led to that

One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.

reply
Are you saying that everything so far in this administration has been 100% rational?
reply
> one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that

Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.

reply
Or corruption, in which Trump/Hegseth are getting a kickback from OpenAI, but giving the money to Anthropic would be "worthless" to them.
reply
>or there's another reason for the loud attempt to blacklist Anthropic

This one is very easy. Trump has a well established pattern of making a loud statement to make it appear he didn't lose, even when he did.

reply
And Sam is a habitual liar.
reply
He literally just got community-noted for lying. So much for a non-profit CEO, or whatever it is now.
reply
And an abuser, but they keep covering that one up.
reply
Are you talking about his sister?

https://x.com/sama/status/1876780763653263770

If so, I believe the lawsuit is still going on. I'm personally withholding judgment on him on this matter since I don't know the details.

But it's easy to criticize and judge him on other stuff he's said in public.

reply
Anthropic has nothing but a contract to enforce what counts as appropriate usage of their models. There are no safety rails; they disabled their standard safety systems.

OpenAI can deploy safety systems of their own making.

From the military perspective this is preferable because they just use the tool - if it works, it works, and if it doesn't, they'll use another one. With the Anthropic model, the military needs a legal opinion before they can use the tool, or they might misuse it by accident.

This is also preferable if you think the government is untrustworthy. An untrustworthy government may not obey the contract, but it will have a hard time subverting safety systems that OpenAI builds or trains into the model.

reply
Huh, that's an interesting and new perspective. I'd love to know what you mean by safety systems, and what OpenAI can do that Anthropic can't.
reply
Source?
reply
This is entirely nonsense.

- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.

- The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.

- OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities

reply
> Cope and cognitive dissonance
reply
There's a critical mass of Trump Derangement Syndrome in SV, as this site exemplifies almost daily. The amount of vitriol and hatred spewed here is not healthy, nor are those who spew it. It kills rational debate and nuance, and leads to foolish choices, like someone cutting off their nose to spite their face, as the old saying goes.
reply
The president of the United States sets the tone: hatred without reason or explanation is the way the system works now. Belligerence and power are the currency.

Speaking to people's better angels as if it has a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.

reply
They aren’t the same terms. You are clearly an enemy bot or an uneducated fool. OpenAI has agreed to mass surveillance of those who are not Americans; Anthropic refused. OpenAI’s only term was a restriction that the surveillance not target Americans.
reply
You believe whoever said that?
reply