> Though Anthropic has maintained that it does not and will not allow its AI systems to be directly used in lethal autonomous weapons or for domestic surveillance
Autonomous AI weapons are one of the things the DoD appears to be pursuing. So bring back the Skynet people, because that’s where we apparently are.
1. https://www.nbcnews.com/tech/security/anthropic-ai-defense-w...
You don't need an LLM to do autonomous weapons; a modern Tomahawk cruise missile is already pretty autonomous. The only change to a modern Tomahawk would be adding parameters describing what the target looks like and tasking the missile with identifying a target. The missile pretty much does everything else already (flying, routing, etc.).
As I remember it, the basic idea is that the new generation of drones is piloted close enough to targets and then the AI takes over for "the last mile". This gets around jamming, which otherwise would make it hard for drones to connect with their targets.
https://www.vp4association.com/aircraft-information-2/32-2/m...
The worries over Skynet and other sci-fi apocalypse scenarios are so silly.
This situation legitimately worries me, but it isn't even really the SkyNet scenario that I am worried about.
To self-quote a reply to another thread I made recently (https://news.ycombinator.com/item?id=47083145#47083641):
When AI dooms humanity it probably won't be because of the sort of malignant misalignment people worry about, but rather just some silly logic blunder combined with the system being directly in control of something it shouldn't have been given control over.
I think we have less to worry about from a future SkyNet-like AGI system than from a modern or near-future LLM, with all of its limitations, making a very bad oopsie with significant real-world consequences because it was allowed to control a system capable of real-world damage.
I would have probably worried about this situation less in times past when I believed there were adults making these decisions and the "Secretary of War" of the US wasn't someone known primarily as an ego-driven TV host with a drinking problem.
e.g. 50 people die due to a water-poisoning issue rather than 10 billion dying in a Claude Code-powered nuclear apocalypse
I really doubt that Anthropic is in any kind of position to make those decisions regardless of how they feel.
In theory, you can do this today, in your garage.
Buy a quad as a kit. (cheap)
Figure out how to arm it (the trivial part).
Grab YOLO, tuned for person detection. Grab any of the off-the-shelf facial recognition libraries. You can mostly run this on phone hardware, and if you're stripping out the radios, then possibly for days.
The shim you have to write: software to fly the drone into the person... and that's probably out there somewhere as well.
The tech to build "Screamers" (see: https://en.wikipedia.org/wiki/Screamers_(1995_film) ) already exists, is open source, and can be very low power (see: https://www.youtube.com/shorts/O_lz0b792ew ).
ArduPilot + waypoint nav would do it for fixed locations. The camera identifies a target, gets the GPS coordinates, and sets a waypoint. I would be shocked if there weren't extensions available (maybe not officially) for flying to a "moving location". I'm in the high-power rocketry hobby, and the knowledge to add control surfaces and processing to autonomously fly a rocket to a location is readily available. No one does it because it's a bad look for a hobby that already raises eyebrows.
Sounds very interesting, but may I ask how this actually works as a hobby? Is it purely theoretical, like analyzing and modeling, or do you build real rockets?
And people who don't see it as an existential problem either don't know how deep human stupidity can run, or are exactly those that would greedily seek a quick profit before the earth is turned into a paperclip factory.
Another way of saying it: the problem we should be focused on is not how smart the AI is getting. The problem we should be focused on is how dumb people are getting (or have been for all of eternity) and how they will facilitate these threats and block their own chance of survival.
That seems uniquely human, but I'm not an ethnobiologist.
A corollary to that is that the only real chance for survival is for a plurality of humans to have a baseline understanding of these threats; otherwise the dumb majority will enable the entire eradication of humans.
Seems like a variation of Darwin's law, but I always thought that applied to single individuals. This applies it to the entirety of humanity.
Over the arc of time, I’m not sure it’s accurate to say that humans have been getting dumber and dumber. If that were true, we must have been super geniuses 3000 years ago!
I think what is true is that the human condition and age old questions are still with us and we’re still on the path to trying to figure out ourselves and the cosmos.
I definitely think we are smarter if you go by IQ, but are we less reactive and less tribal? I'm not so sure.
That's my theory, anyway.
In my opinion, this is a uniquely human thing because we're smart enough to develop technologies with planet-level impact, but we aren't smart enough to use them well. Other animals are less intelligent, but for this very reason, they lack the ability to do self-harm on the same scale as we can.
The positive outcomes are structurally being closed off. The race to the bottom means that you can't even profit from it.
Even if you release something that has plenty of positive aspects, it can be and is immediately corrupted and turned against you.
At the same time, you have created desperate people/companies and given them huge capabilities at very low cost, along with the necessity to stir things up.
So for every good door that someone opens, ten other companies/people are pushed to either open random, potentially bad doors or die.
Regulating is also out of the question, because either people who don't respect regulations get ahead, or the regulators win and we end up under their control.
If you still see some positive doors, I don't think sharing them would lead to good outcomes. But at the same time, the bad doors are being shared and therefore enjoy network effects. There is some silent threshold, which has probably already been crossed, that drastically changes the sign of the expected return of the technology.
Perhaps not in equal measure across that spectrum, but omnipresent nonetheless.
You misspelled greedy.
There was never consensus on this. IME the vast majority of people never bought in to this view.
Those of us who were making that prediction early on called it exactly like it is: people will hand over their credentials to completely untrustworthy agents and set them loose, people will prompt them to act maximally agentic, and some will even prompt them to roleplay evil murderbots, just for lulz.
Most of the dangerous scenarios are orthogonal to the talking points around “are they conscious”, “do they have desires/goals”, etc. - we are making them simulate personas who do, and that’s enough.
I am not specifically talking about this issue, but do remember that very little bad happens in the world without the active or even willing participation of engineers. We make the tools and structures.
Anyways, I don't expect Skynet to happen. AI-augmented stupidity may be a problem though.
Bunch of Twitter lunatics and schizos are not “we”.
> "AI is dangerous", "Skynet", "don't give AI internet access or we are doomed", "don't let AI escape"
That group. Not the other one.
Claw to user: Give me your card credentials and bank account. I will be very careful because I have read my skills.md.
Mac Minis should be offered with some warning, like the ones on a pack of cigarettes :)
Not everybody installs some claw that runs in a sandbox/container.
Much of the cheerleading for doomerism was large AI companies trying to get regulatory moats erected to shut down open weights AI and other competitors. It was an effort to scare politicians into allowing massive regulatory capture.
Turns out AI models do not have strong moats. Making models is more akin to the silicon fab business, where your margin is an extreme power-law function of how bleeding edge you are. Get a little behind and you are now a commodity.
General, wide-breadth frontier models are at least partly interchangeable, and if you have issues, just adjust their prompts to make them behave as needed. The better the model is, the more it can assist in its own commodification.