i once ran into someone in london in 2023 who was doing their thesis on AI regulation. they had essentially ended up doing a case-study on sam. their honest non-academic conclusion (which they shared quietly) was that they were absolutely terrified of sam altman.
fear is one of those signals we ought to listen to more often
It’s well established that belligerents can use mines to separate the tactical decision of deploying them for area denial from the split-second lethal decision (if one can stretch that definition) to detonate in response to a triggering event.
Dario’s model prohibits using AI to decide between an enemy combatant and an innocent civilian (even if the AI is bad at it, it is still better than just detonating anyway); Sam’s model inherits the notion that the “responsible human” is the one who decided to mine that bridge, and the AI can make the kill decision.
How is that fundamentally different from a future war where an officer decides to send a bunch of drones up, but the drones themselves make the lethal choice of enemy/ally/non-combatant engagement without any human in the loop? ELI5 why we can’t view these as smarter mines?