Most people who aren't in AI see plain as day how everything AI touches is turning into the digital equivalent of flimsy IKEA furniture. The main selling point of AI so far is that it makes things cheaper to produce while still looking good at a glance.
"The thing I used to like costs the same or more but is now cheaper quality and worse, and they think I'm dumb enough not to notice" really isn't a selling point, but it is pretty much the universal western post-2008 experience, and nothing quite embodies this transformation like AI.
But yeah, you also have all the AI CEOs chewing the scenery like Jeremy Irons in the DnD movie which really hasn't done the image of AI any favors either.
There are at least some redeeming features of AI, but I think it's become a scapegoat for larger unsolved problems with the economy that it happens to touch. It's even used that way deliberately, e.g. to justify layoffs that would otherwise signal to investors that a company isn't doing as well as they'd like you to think.
I really love this comparison. Everyone bitches about Ikea, but at the end of the day unless you're rich as fuck then "buying new furniture" means either Ikea or some other shop that adopted exactly the same business model, because we all know that the price/quality ratio is unbeatable. Ikea furniture can easily outlive you as long as you pick the correct product for your use case. "I put my fat ass on a dining table that's explicitly marketed for light distributed load and it broke in half, boo-hoo Ikea bad" like no shit, if you need a table you can stand on then choose one with extra support beams, Ikea has these too. "But if you disassemble and reassemble Ikea it falls apart" okay cool but the cost of transporting old furniture to your new house is often higher than just buying new furniture anyway. Not to mention that the chances that your old furniture will match your new house are pretty much zero.
This translates to engineers not being able to grasp the concept of "good enough," where the end user doesn't care about quality improvements beyond a certain threshold. Cue the audiophiles remaining perplexed to this day why nobody uses 24-bit FLAC.
That's my personal impression of the anger. It's not so much Luddite anger; it's like Clippy anger and millennial anti-Boomer anger mixed together.
It's like a twist on the Turing test, where some humans can't tell the difference between a human and a computer, but others can, and they tend to be younger on average. The Turing test ironically ends up telling you more about the person taking the test.
Silicon Valley’s leaders have been one-upping each other on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who read through that bullshit appear to merrily go along with the apparent suicide pact.
Most Gen Z, it appears, can also see through the bullshit. But about a third of them taking the message sincerely seems par for the course, and as you said, I wouldn’t assume it’s just aversion to change.
What I can't decide, for Anthropic, OpenAI, and xAI, is if the part which is BS is that they don't take the doom risk seriously at all*, or if the BS is that despite taking it seriously they think they are best placed to actually solve the doom. Or both.
With Meta, at least, it is obvious they don't even understand the potential of AI, for good or ill.
Google and Microsoft seem to be treating it as normal software, with normal risks. If they have doom opinions, they are drowned out by all the other news going on right now.
* xAI obviously doesn't care about reputational risk, porn, trolling, propaganda, but this isn't the same question as doom.
Where did you get this notion? Did you hallucinate it?
Thirty-one percent being smaller than half.