(www.waltonfamilyfoundation.org)
Sue me, I have that right.
I still haven't found a single person willing to go to the movies, and watch an AI movie. If it wasn't made by a person, there is no 'personal'-ity to it. It's just bland.
Eventually things will slow and slide back to thoughtful first, crapload second.
The last 27 Marvel movies might as well have been written by AI; plenty of people have been to see those.
I feel like a lot of the stuff my nieces listen to is AI music. It's like a hodgepodge of popular songs with little rhyme or reason. Very 'sloppy', but if they like it....
It's hard for me to confirm if they really are AI or not. But I'm willing to bet that (random Roblox game they're interested in today) == heavily AI made. Maybe there's some real human effort here or there but I have heavy suspicions.
Didn't we all start as kids listening to music that is so formulaic it could just as well be AI-generated? A subset of people iteratively refines their music taste, starts listening to everything from bebop to obscure Canadian hardcore bands, and learns to recognize quality in music.
But I am of the opinion that AI slop is displacing a lot of would-be beginner musicians and making it even harder for them to break out.
For better or worse, a lot of beginner artists were relying on people like my nieces and their classmates clicking on their music and sharing it for Spotify $$$.
When I started in tech, at the dawn of the internet, it was an exciting field full of hope and the promise to empower and enrich the lives of people. Tech now is largely the opposite.
Enshittification is making things progressively worse. Tech companies are creating systems and tools with dark patterns abounding to ensure you no longer own anything, are under constant surveillance, and populations at large are manipulated through the magic of propaganda and illusory truth. Even the productivity gains are perversely used not to give people more time through fewer work days/hours but instead to give them more work. People are losing their connection to others and the world around them.
Everyone tends to focus on Orwell’s 1984, but I find Fahrenheit 451 to be the more prescient book. I used to be annoyed by the book people’s choice to leave society and wait for it to collapse so they could help rebuild. In my mind, they should have been mounting a resistance. Fair to say I understand the book people’s perspective much more now.
So at least they are quite happy during winter.
And they were all right.
I speculate it has a lot to do with surveillance capitalism. It's the same type of tactics that were used for things like the banning of marijuana, or the supposed health merits of cigarettes: fear mongering and lying so a few robber barons can profiteer.
I think AI is useful. I also think it was rolled out haphazardly, similar to how people used to gargle radioactive isotopes or slather them on as aftershave so others could profit quickly. There are so many issues with the technology that the press won't even cover them yet, because we all have to play stupid until trends emerge to report on; otherwise billionaire defense contractors will send their figurative or possibly literal hit squads after us. We have to wait for the tumors to grow and the jaws to fall off before society will remember "maybe we shouldn't be slapping radioactive stuff all over ourselves so some wealthy white dude gets wealthier."
The future of ai is in small local models people pay 0 dollars to upgrade or use. Anything else is meritless exploitation and destruction. That's why the US will lose. Reality has a liberal bias. Tough pill for ai libertarians to swallow. So they mud fling.
Interesting results regardless, when they compare the shift from 2025 to 2026.
I love the cognitive dissonance.
Even in the best case scenario where the generated wealth will be distributed, and somehow we will be able to keep them in check (unlikely), what would be the point of life in a world where machines can best us at everything?
And all the benefits that brings. Not just in raw economic terms, but in quality of (family, community, recreational, commercial, ecological, medical) life.
Kind of hard to imagine it will suck if another order-of-magnitude leap along that long line happens.
The world is changing quickly. Our most coveted defining traits - our minds - are under attack. This is a technology that seeks to replicate your thought processes and critical thinking and then to execute it at machine speeds.
If you think this is like the industrial revolution, you're actually right. We're still replacing animals with machines. But now we are the animals.
Anything other than a serious discussion about UBI or a post-labour economy is a joke. This is technology that aims to displace most of us.
A bit of a tangential anecdote from my dad, who is a retired biologist. He was one of the first in his department to use a computer in the 1970s, and he wrote programs to do tedious calculations that previously had to be done by hand and took days of human labor. Even a 1970s computer could finish the calculations with his programs in a few minutes.
His boss, an older tenured professor, could not believe that 'these damn computers' could possibly be right. Doing the same calculations in a few minutes? Impossible. So for a few weeks (or months, I forget), he redid by hand all the calculations done on the computer, to prove that the computer must be wrong.
One day he comes to my dad and says "can you show me how to use one of these computers?"
The main social problem with automation in general was that less intelligent people have been left behind, as only boring physical tasks are left for them to do, and people don't generally want to give up the prospect of an office job and go back to destroying their bodies.
At some point frontier AI will only be worthwhile to use for super highly intelligent and motivated AI researchers, which is a tiny part of the population.
May I also add that this isn't just (or at all) about intelligence.
I'm lucky enough to be at a company where I have a large budget in terms of what I can spend in tokens. This gives me an enormous advantage over someone who is just as intelligent as me and who has the same experience as me minus the interaction I have with LLMs.
In this case the crucial difference is not intelligence; it's that I found myself in the right place to be able to move up, whereas a lot of people who are otherwise like me didn't get that opportunity, through no fault of their own.
People tend to attribute their successes to their own merit and their failures to happenstance, but if we're honest with ourselves the real world has a lot of randomness in it.
I guess cynicism is trendy.
It's not an anomalous sense of cynicism, hundreds of thousands of people are looking at their options and feeling hopeless. I'm glad I am not in that camp. The reason I'm not is because I was born sooner than they were. I don't blame them at all, it's looking a lot like the generation after them is cannon fodder if things trend the way they are now.
This is the problem to fix. Taking your anger out on AI is the dumbest, most shortsighted thing.
AI is fundamentally the automation of labor, and we can all see the incredible fruits of similar past leaps in capability.
Structure your society for a post-labor world. Don't halt the progress that has dramatically improved the human condition. To do so is a disservice to the species and all future humans - concretely, your own loved ones and especially your children.
Of course no one sees it as a collective achievement when the announcements are aimed at either scaring people about how even the team behind them is worried about releasing it or for CEOs to replace workers.
Artemis II, at least in the States, was an example of people genuinely feeling collective achievement. There is absolutely no reason this AI moment couldn't be that. Instead, though, the companies involved have explicitly chosen fear and capital as their marketing tools. We should be seeing this as an incredible time, but those involved do not want us to, and they plan to keep the spoils for themselves, so we shouldn't.
It is a completely coherent position to like most technological progress, but at the same time be critical of some uses of ML/AI.
You are just making straw men here by suggesting that people who are critical of AI are critical of all technology.
AI psychosis is real and the billionaires who own the AI chatbots know this.
Sell NVIDIA!!!
31% seems remarkably high. Here we seem to be running up against the limitations of statistics. It is hard to interpret whether this is a scared-and-anxious sort of anger, or whether something AI-related is actually making them angry. I might have been lucky in my experiences, but generally when people get angry there is a reason other than "things are changing".
That's my personal impression of the anger. It's not so much luddite anger; it's like Clippy anger and millennial anti-Boomer anger mixed together.
It's like a twist on the Turing test, where some humans can't tell the difference between a human and a computer, but others can, and they tend to be younger on average. The Turing test ironically ends up telling you more about the person taking the test.
Most people who aren't in AI see plain as day how everything AI touches is turning into the digital equivalent of flimsy IKEA furniture. The main selling point of AI so far is that it makes things cheaper to produce while still looking good at a glance.
"The thing I used to like costs the same or more but is now lower quality and worse, and they think I'm dumb enough not to notice" really isn't a selling point, but it is pretty much the universal Western post-2008 experience, and nothing quite embodies this transformation like AI.
But yeah, you also have all the AI CEOs chewing the scenery like Jeremy Irons in the D&D movie, which really hasn't done the image of AI any favors either.
There are at least some redeeming features of AI, but I think it's become a scapegoat for a lot of things it touches that are also larger unsolved problems with the economy. It's even used that way deliberately, e.g. to motivate layoffs that would otherwise signal to investors that a company isn't doing as well as they'd like you to think.
Silicon Valley’s leaders have been one-upping each other on messaging to the public that they’re building a doomsday device. And then, bewilderingly to the outside, all of us who read through that bullshit appear to merrily go along with the apparent suicide pact.
Most of Gen Z, it appears, can also see through the bullshit. But about a third of them taking the message sincerely seems par for the course, and as you said, I wouldn’t assume it’s just aversion to change.
What I can't decide, for Anthropic, OpenAI, and xAI, is if the part which is BS is that they don't take the doom risk seriously at all*, or if the BS is that despite taking it seriously they think they are best placed to actually solve the doom. Or both.
With Meta, at least, it is obvious they don't even understand the potential of AI, for good or ill.
Google and Microsoft seem to be treating it as normal software, with normal risks. If they have doom opinions, they are drowned out by all the other news going on right now.
* xAI obviously doesn't care about reputational risk, porn, trolling, propaganda, but this isn't the same question as doom.
Where did you get this notion? Did you hallucinate it?