It is a great day for both America’s national security and AI leadership that two of our leading labs, OAI and xAI have reached the patriotic and correct answer here
He's an administration official openly cheerleading his team. This should be characterized as the insider perspective/spin, not a neutral analysis of the relevant facts.

Nothing in the quoted text comes anywhere close to the realm of justifying the retaliatory actions.
“You are impinging on my freedom to force you to participate in activities you have expressly indicated it is against your will to engage in! You bully! I am such a victim!”
https://xcancel.com/SecWar/status/2027507717469049070?s=20
This is emblematic of the entire current administration. It is as disappointing as it is unsurprising.
(Just in case anyone was wondering, I live in Israel)
Conversely, I’m glad that we’re looking a little further than that, and are worried about what happens after this missile exchange. After living through an endless “global war on terror” that gave us the biggest mass-surveillance-enabling act, it’s hard not to dismiss “it’s just until the end of this war, and we promise it’ll end well!”
According to Anthropic, their terms have been in their contract from the beginning. The only decision they made recently is not to be strong-armed into renegotiating their contract to allow things they don't want to allow. I don't see how that's a bad thing.
What’s the difference between a company not building something that’s fit for purpose for fighting a war (like a nursery refusing to build land mines), and thus not being a qualified supplier to the Government for conducting military operations, vs. being tarred with the “supply chain risk” brush? The former seems uncontroversial; the latter seems petty and retaliatory. “Supply chain risk” designations are for companies that you would do business with but might be compromised by the enemy, like when a supplier agrees to provide the DoW grenades, but the grenades could be intentionally defective such that they detonate prematurely in the soldier’s hand.
Besides, as an Israeli, imagine a world in which the manufacturers of Zyklon B refused to sell Hitler their product for the purposes of gassing human beings. It might not have prevented the Holocaust, but at least maybe impeded it a little.
Apropos of this controversy, this story appeared yesterday: 31 years after the Balkan wars, Croatia has finally eliminated its last land mine: https://glashrvatske.hrt.hr/en/domestic/croatia-declared-fre...
Honestly, if the Holocaust happened today, we would probably get 10% of comments here trying to defend "both sides". Some people have a need to defend every side, even when one of those sides is calling for them to be murdered.
1. We've seen government lawyers write memos explaining why such-and-such obviously illegal act is legal (see: torture memo). Until challenged, this is basically law.
2. We've seen government change the law to make whatever they want legal (see: Patriot Act).
3. We've seen courts just interpret laws to make things legal
A contractor doesn't realistically have the power to push back against any of these avenues if they agree to allow anything legal.
(At the risk of triggering Godwin's Law, remember that for the most part the Holocaust was entirely legal - the Nazis established the necessary authorization. Just to illustrate that when it comes to certain government crimes, the law alone is an insufficient shield.)
So the question is: do you trust the government to effectively govern its own use of AI? Or do you trust Anthropic's enforcement of its TOS?
Does the qualifier "domestic" for mass surveillance mean that OpenAI allows the use of its models for whatever isn't "domestic"?
> ... Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force ...

If his characterization of the agreement is correct, which I will not believe and you should not believe until a trustworthy news outlet publishes the text, I suppose this would convince me that Hegseth does not literally plan to build a Terminator for democracy-ending purposes. There's a lot of inexcusable stuff here regardless, but perhaps merely boycotting OpenAI and the US military would be a sufficient response if this all checks out.
It seems like you chose to immediately disbelieve it.
> until a trustworthy news outlet publishes the text
If you've found one of these, let me know. I'm still looking...
> If you've found one of these, let me know. I'm still looking...
I do not assume, and I would recommend that you do not assume, that there is such a thing as a text of the contract. It's much easier to lie about contents of documents that don't actually exist yet. Then you can craft the text in response to public feedback, writing it down in early March and telling people that it's totally a copy of what was agreed to on February 27.
As a corollary, you should be skeptical of any purported text that is not widely published soon. If there is indeed such a contract, and it says what Altman claims, he will desperately want to ensure that his employees have read a "leak" of the text by Monday morning.