upvote
No, not really. There was a real wolf, and the person disturbed the operation.

"South Korean police have arrested a man for sharing an AI-generated image that misled authorities who were searching for a wolf that had broken out of a zoo in Daejeon city.

The 40-year-old unnamed man is accused of disrupting the search by creating and distributing a fake photo purporting to show Neukgu, the wolf, trotting down a road intersection"

reply
But there are real wolves when shepherding too. That’s why crying wolf has any power.

To cry wolf is to say there’s a wolf here when it’s actually located elsewhere. The AI photo said there was a wolf at a certain intersection when it was actually located elsewhere.

In fact crying wolf is doubly appropriate because it means disturbing an operation looking for a wolf.

reply
Crying wolf is normally starting the operation while there isn't a wolf.

This is misdirection while there is a wolf

Similar but different

reply
That's completely pedantic, and besides, it's false because there literally wasn't a wolf where he faked the photo in the first place
reply
Crying wolf is crying for help when there is no danger, not when there is danger just at a different place.

That's not pedantic, that's the meaning of the idiom.

reply
If you stipulate that everyone must be relaxing at the time, sure. But the core concept of crying wolf is IMO simply a false alert with no particular constraints placed on those responding. I think in this case it simultaneously qualifies as crying wolf as well as misdirection.
reply
deleted
reply
But this isn't a false alert. The alert is real, people just got misdirected.
reply
It was a false alert in that particular place. I doubt those residents who were alerted had felt like they were previously in immediate danger.
reply
This is real life; there's always a danger, just at a different place.
reply
what if the real criers of wolves were the sheeple we misled along the way?
reply
deleted
reply
le reddit mentality
reply
The biggest difference now is that the wolf is actually being sought in order to protect him¹ from the crowd of super-predators in town, so they can "give him a calm environment for recovery".

¹ Following the pronoun choice used in the fine article here.

reply
what an incredibly dumb thread this is. OP pointed out something amusing and it's being ruined by completely useless pedantry
reply
Welcome to HN, I guess
reply
If this was America there would be 20 think pieces in the Atlantic about how AI is ruining our culture and no one would get arrested.
reply
> the person dusturbed the operation

Did they? The article says it's unclear as to their intent.

> Authorities did not specify if the man had intentionally sent the photo to authorities during their search or simply shared it online.

reply
Intent or not, it did disturb the search, as it misled. And how can one imagine not disturbing a search when posting a wrong location?
reply
There was a real wolf in "The Boy Who Cried Wolf", too.
reply
The fable was always relevant; afaic it is still part of the curriculum. It's also a nice illustration of how LLMs screw up everything they touch - and please don't serve me the old "guns don't kill people - people kill people" argument over this.
reply
Guns' primary purpose is to kill. The primary purpose of genAI (image generation goes beyond the scope of LLMs) is not to mislead; it is used successfully by millions of people for purposes that are in no way nefarious, including valuable contributions to fields like medicine.

As with most important advances, like plastics, nuclear power, diesel engines, synthetic fertilizers, computers and the internet, both good and bad things came out of it.

It is like saying that plastics screw up everything they touch, for example when a plastic part replaces a more durable metal part, while forgetting that plastics are everywhere in our lives, often without a suitable replacement material.

reply
:) Wow, you are getting ahead of yourself, aren't you. LLMs are dangerous tools that any moron nowadays has access to. They can fabricate images of wolves roaming the streets, hallucinate fake arguments that sound really convincing, and even coach people into committing suicide, as you have probably heard in at least a dozen recent cases. I can't quite see the comparison you are making. It's not like you have a nuclear reactor, or whatever other dangerous technology you wanted to lump in with it, at your fingertips, do you? That's because those other dangerous technologies are carefully managed.

So now follow where I am taking this; I'll explain it really simply. Guns are easily accessible to people in large parts of the US, so some people will use guns to kill other people. Sometimes it's an accident, like kids playing with daddy's gun and shooting their sibling. Some people argue that guns should be restricted, as that would reduce such accidents and incidents. But other people say "guns don't kill people - people kill people". Now LLMs are just as dangerous a technology, accessible to most anyone not just in the US but around the world, and easier to use: anyone with a basic command of language and the ability to clack on a keyboard can "use" them. To the point that some people not only harm others, like this Korean champ, but also themselves, like those people who were goaded into suicide.

My point was, and it should not have been that hard to see, that your argument is precisely of the "guns don't kill people" variety. If the chatbots that we have pompously resigned ourselves to calling "artificial intelligence" make mistakes 30-40% of the time, and we use them to verify information, they are dangerous and should not be allowed for purposes like misleading the public. Because that is dangerous.

Now, in your small little selfish world, maybe they are "everywhere", meaning you can offload your thinking to them, and maybe you even use them to write emails and summarise other people's emails so you don't completely drown in your boring office job. But that does not mean you should compare them to anything you listed above. Those small "benefits" do not make up for the overall shittiness of this so-called technology.
reply
> It's also a nice illustration of how LLMs screw up everything they touch

And you'll be shocked what the kids have been doing with databases and API calls

reply
Is there a reason you felt the need to slip this non sequitur in your reply?
reply
I am not sure, but it probably isn't because I wanted to sound smart by using smart sounding words :)
reply
deleted
reply
> somebody has been (rightfully or wrongfully) arrested for literally ‘crying wolf’?

Willfully diverting limited public service resources that might otherwise be assigned to saving someone's life or health?

Practically a social DoS

reply
Yeah, I really don't see the difference from false bomb threats.
reply