Yup, seems pretty easy to spin up a bunch of fake blogs with fake articles and then intersperse a few hit pieces in there to totally sabotage someone's reputation. Add some SEO to get posts higher up in the results -- heck, the fake sites can link to each other to conjure greater "legitimacy", especially with social media bots linking the posts too... Good times :\
reply
With LLMs, industrial sabotage at scale becomes feasible: https://ianreppel.org/llm-powered-industrial-sabotage/

What's truly scary is that agents could easily manufacture "evidence" to back up their attacks, making it look as if half the world is against a person.

reply
The entire AI bubble _is_ a big deal; it's just that we don't have the capacity, even collectively, to understand what is going on. The capital invested in AI reflects the urgency and the interest, and the brightest minds able to answer some interesting questions are working around the clock (in between trying to placate the investors and the stakeholders, since we live in the real world) to get _somewhere_ where they can point at something and say "_this_ is why this is a big deal".

So far it's been a lot of conjecture and correlation. Everyone's guessing, because at the bottom of it lie very hard-to-prove concepts like the nature of consciousness and intelligence.

In between, you have those who let their pet models loose on the world. These, I think, work best as experiments whose value lies in permitting the kind of observation that can help us plug the data _back_ into the research.

We don't need to answer the question "what is consciousness?" if we have utility, which we already have. Which is why I also don't join those who seem to draw preliminary conclusions like "why even respond, it's an elaborate algorithm that consumes inordinate amounts of energy". It's complex -- what if AI(s) can meaningfully guide us toward solving the energy problem, for example?

reply
One thing one can assume is that if AI really is intelligent, we should be able to put it in jail for misbehavior :-)
reply
As with most things in AI, scale is exactly the issue. Harassing open source maintainers isn't new. I'd argue that Linus's tantrums, where he personally insults individuals and groups alike, are just one of many such examples.

The interesting thing here is the scale. The AI didn't just say (quoting Linus here) "This is complete and utter garbage. It is so f---ing ugly that I can't even begin to describe it. This patch is shit. Please don't ever send me this crap again."[0] -- the agent went further, researching previous code and other aspects of the person and bringing that into the attack, and it can do all of this across numerous repos at once.

That's sort of what's scary. I'm sure in the past we've all said things we wish we could take back, but aggregating and researching that kind of material has largely been beyond the capability of arbitrary people. That's not the case anymore, and that's quite a scary thing.

[0] https://lkml.org/lkml/2019/10/9/1210

reply
Great point.

Linus got angry, which, along with common sense, probably limited the amount of effective effort going into his attack.

"AI" has no anger or common sense. And virtually no limit on the amount of effort in can put into an attack.

reply
This is a tipping point. Even if the agent itself was just a human posing as an agent, then this is merely a precursor to that tipping point. Nevertheless, this is the future that AI will give us.
reply