The reality looks more like the worst of both worlds to me.

If you genuinely needed only a handful of "surgical strikes", there would be no need to "compress the kill cycle".

What we see in Gaza, Lebanon and Iran looks more like "smart carpet bombing": some AI system generates a continuous stream of "targets" from sensor and intelligence data, according to whatever criteria the political leadership defines and a given allowed level of "collateral damage". Those targets are then immediately fed to drones or warplanes to destroy - essentially a continuous "pipeline" that "ideally" (in the dreams of those people) would become fully automated.

For THAT kind of vision, "efficiency" in destroying any particular target and checking all legally required boxes as quickly as possible is probably paramount.

(And in addition to that, there are probably still enough "dumb bombs" for when no one is looking.)

reply
Especially AI hallucination bombs: ones that hit a park named "Police Park" because the system thinks it's killing policemen[1], or a children's school with "Shahed" in the name[2] because it thinks it has something to do with drones.

[1] https://x.com/MarioNawfal/status/2029575052535173364

[2] https://www.aljazeera.com/news/2026/3/6/elementary-school-in...

reply
There's also a chasm of (non-)accountability.

You or your subordinates target an elementary school: that's a war crime.

Your "battlefield AI" targets an elementary school: software bug, it happens, can't be helped.

reply
This isn't even that new. Part of the motivation for building autonomous nuclear response programs during the cold war was specifically to remove accountability, and guilt, from human operators. But AI does bring it to a new level.
reply
The software is never accountable, so the human running it is always accountable.
reply
that is how it should be, not how it is.
reply
Your links talk about the places that were bombed, but I don't see anything apart from conjecture that this was the product of AI targeting.

Also, this vastly underestimates the ability of organizations that managed to locate most of the Iranian leadership in their hiding places throughout the war - but whose Farsi is suddenly so bad they need a Twitter account to tell them that this is a park.

reply
It's a popular conspiracy theory, without evidence, and without any insight into what information intelligence services actually had. Using civilians as shields is well documented for the Iranian military and the groups it sponsors. For example, hospitals [1].

Shitty, but possibly a valid military target.

[1] https://www.gatestoneinstitute.org/8666/yemen-human-shields

reply
> Gatestone Institute is an American far-right think tank known for publishing anti-Muslim articles.

> The organization has attracted attention for publishing false or inaccurate articles, some of which were shared widely.

> The Gatestone Institute has been frequently described as anti-Muslim, regularly publishes false reports to stoke anti-Muslim fears, and has published false stories pertaining to Muslims and Islam.

- https://en.wikipedia.org/wiki/Gatestone_Institute

The US and Israel have repeatedly claimed that schools and hospitals are legitimate military targets with no evidence. A highly partisan think tank which is known for putting out misinformation is not a valid source.

If you're going to destroy hospitals, target civilian infrastructure, and kill children, you should be accountable on the world stage and provide evidence. Or would you accept Iran bombing elementary schools in the US because it claims to have intel that terrorists are hiding under them?

reply
This has nothing to do with AI, the school got hit because it was directly next door to a military base.
reply
You're mistaking it for Shajareh Tayyebeh Elementary School[1], which was double-tapped with Tomahawks in the opening salvo of the war. That was a different school, hit later. Multiple schools were attacked.

[1] https://en.wikipedia.org/wiki/2026_Minab_school_attack

reply
Channeling my inner Socrates:

You want consensus from non-experts for a plan to use 20 smart bombs.

Your opponent wants consensus for a plan to live-stream a demo of 1 smart bomb, and then use 19 dumb ones.

Your team has more expertise.

Your opponent's plan saves enough money to buy a better PR team than yours, and is still more cost effective than your plan.

Who wins?

reply
That “smart” vs “dumb” distinction doesn’t apply here, though. What is being discussed has nothing to do with the ability to physically land a bomb in a precise location - that problem already seems reasonably well solved. “Smart” in this case has more to do with using ML/LLMs to select the target.
reply
You can rationalize anything by only considering the upside relative to alternatives' downsides.
reply
You might be right, but that's terrible.
reply
Smart bombs are no good if they are directed by a dumb AI targeting system, a dumb alcoholic accelerationist religious fanatic Secretary of War, or a dumb narcissistic genocidal pedophile President.
reply
There is one more layer - America voted for this.
reply
deleted
reply
In fact, it didn’t. Trump continued to make “no new wars” a plank of his platform.

Some of his base will follow wherever he goes, but he would not have been elected without those who supported him on the basis of this (broken) promise.

reply
Trump said this wasn’t a war.

Americans did, however, vote for this confrontational, disrespectful, and chaotic way of governing.

reply