I feel like taking in GenAI content, even if it makes me laugh, probably does something bad to my brain. It looks like real life, but the physics is just wrong in ways that range from obvious to very subtle. I don’t want to feed my brain videos of things that look photorealistic but do not depict reality, that just seems foolish somehow.

Like, imagine if you watched a bunch of GenAI videos of cars sliding on ice from the driver’s perspective. The physics is wrong, and surely it’s going to make you a worse driver because you are feeding your internal prediction engine incorrect training data. It’s less likely that you’ll make the right prediction in real life when it counts.

reply
Do you feel the same about special effects in professionally produced media?
reply
I was thinking about this while typing. I don’t really care about classically animated content; it’s generally not trying to be indistinguishable from real life and I don’t feel like my brain trains on it.

But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.

Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury).

reply
If it worked this way, we could get good at golf by watching TV, write songs by listening to the radio, or do math by watching 3b1b. But it doesn't work that way; we don't learn like that, for better or worse.
reply
That's not a great comparison. People absolutely do learn by watching, especially when they do so actively.

Your counter-examples have the property that most of what you need to learn is absent from the media being watched, which makes the observation "obviously" true. But they ignore the impact media has when it's properly combined with other information. To compare fairly with the mental models being discussed, you'd have to actually weigh the effects you're writing off as negligible. And when it comes to something like a world model, which we've only ever learned by observation and which doesn't rest on much specialized knowledge, those effects might be much more impactful.

reply
I agree with rogerrogerr, and your comparisons don't make sense to me. Getting good at complex motions and understanding theory are far different from building a simple model of cause and effect in the real world.

Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.

reply
But you do get good at driving by playing realistic driving games.
reply
To your point about cars - such an expectation could well save your life now that there are so many EVs on the road. You do not want to hang out in one of those after a collision. Regardless, I agree that it's probably a bad idea to instill defective mental models in people.
reply
Eh, the stats don't seem to support EVs being terribly explosion-prone either. In comparison to gas cars, maybe, but both are very safe in absolute terms. They're harder to extinguish if they do catch fire, but I think if I came upon a fresh accident with no immediate signs of a battery fire (airbags smoke; that's normal), I would still leave the victim seated in the car until someone trained shows up.

Sure, be ready to get them out, and if they're trapped and it's going to be a while until fire crews show up, start working on that. But my mental model is that for any road-legal car that is not currently on fire, there is a higher chance you'll cause harm by rashly moving a victim than that the victim will suddenly be consumed by an enormous Hollywood-style conflagration.

reply
The likelihood, or lack thereof, is not the problem. My mental model might be off because it largely isn't based on EVs, but I've seen plenty of videos of e-bikes and cheap lithium batteries more generally going up in flames, and I don't think it's at all comparable to a pool or stream of gasoline catching fire. The issue is how rapidly it develops, since it doesn't require an external oxidizer, exactly like a firework.
reply
Media has warped people's mental models of what car wrecks are like at different speeds, being stabbed, being shot, drowning, seizures, falls from different heights, falls into water, giving CPR, when it is/isn't appropriate to give CPR, appropriate responses to natural disasters, etc.
reply
When I watch a film, I know it is fiction and special effects. But most fake AI-generated videos are being passed off as real on social media. It is exhausting (and increasingly difficult) to analyze every video on my feed to try to figure out if it's real.
reply
I feel like people do sometimes have a warped sense of reality from consuming too much media, e.g. porn.
reply
Not OP, but if I'm being honest, I don't feel that way until I see a film whose special effects are limited to mise en scène and matte paintings; then I always have this overwhelming feeling that we're all missing out.

Films on film using in camera effects are still made on occasion but they’re art films for niche audiences.

But we’ll never get another Ben Hur. And that doesn’t sit well with me even if society can’t yet fully explain why.

reply
I'm not OP, but I do get annoyed by bad car physics in movies.

The worst offenders: brake sounds that don't correlate with the car's movement, engine sounds that don't match its acceleration, nonsensical deceleration while braking, and steering-wheel motion that doesn't correspond to the car's actual steering.

reply
Yes, I think consuming too much media, and creating too little is bad for the brain.
reply
Special effects make most people think they can jump farther, or from higher up, than they actually can. And most people think that all cars explode in massive fireballs.
reply
Effort makes a great deal of difference for me. The effort itself, the fact that it's there.

I am willing to suspend disbelief for Terminator 1, even when it's clearly a doll's head in the shot.

But it is insulting to feed slop to your audience; it shows you didn't even try.

I have actually seen one slop video that I kinda enjoyed. It was obvious that great effort had gone into the script and the details, just as it was obvious it wasn't being passed off as the real thing.

reply
Are there energy consumption differences between CGI and AI?
reply
We also need to take into account that CGI only consumes energy when a particular video is actually created.

"AI" consumes energy before the user has even started (during training).

That is on top of the comparison for each particular case.

reply
Right idea, but the application is incorrect.

Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output, and represent the up front cost for the producer.

Both a movie and a language model can cost tens or hundreds of millions of dollars to produce.

In both cases additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with GPUs for LLMs. These are also upfront (capex) costs.

At consumption time, the movie requires some additional resources per viewing, whether via a theater or streaming. Likewise, an LLM consumes some resources at inference time. These are opex. In both cases, the marginal cost of consumption/inference is quite low.
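To make the capex/opex framing concrete, here's a minimal amortization sketch. Every number and the `energy_per_use` helper are invented for illustration; they are not real measurements of either industry.

```python
# Back-of-envelope amortization sketch. All figures are made up
# for illustration; none are real measurements.

def energy_per_use(upfront_kwh, per_use_kwh, num_uses):
    """Energy attributable to one use once the upfront
    (training / render-farm) cost is spread over all uses."""
    return upfront_kwh / num_uses + per_use_kwh

# Hypothetical CGI-heavy movie: big render bill, cheap playback.
movie_kwh = energy_per_use(1_000_000, 0.1, 10_000_000)    # 0.2 kWh/view

# Hypothetical generative model: bigger training bill,
# small but nonzero cost per generation.
model_kwh = energy_per_use(10_000_000, 0.5, 100_000_000)  # 0.6 kWh/use

print(f"movie: {movie_kwh:.1f} kWh/view, model: {model_kwh:.1f} kWh/generation")
```

Under these made-up numbers the upfront cost stops dominating once the output is consumed many times; where the real comparison lands depends entirely on actual measurements, which this sketch does not claim to have.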

reply

  > Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output
I did not say anything about consumption of the output. Maybe you misread what I wrote; it is about energy consumption.

  > Both a movie and a language model can cost
But we weren't comparing the cost of the movie to the cost of a language model.

  > can cost tens or hundreds of dollars
But we weren't talking about dollars; we were talking about energy.

We're clearly exploring different questions.

reply
And that energy costs money, both at the training/cgi stage and at the inference/consumption stage. It's not even an externality.

CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.

reply

  > CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
I literally laughed out loud after reading this.

I can't believe you're stretching this far in good faith.

But if you are, well, you certainly have a unique perspective.

reply
That's just empty consumption; there's nothing that makes art great in algorithmically generated content except at the shallowest of levels. I mean no disrespect, but it is extremely sad and all too indicative of the instrumental reasoning of the industrial milieu. It's about two steps above marrying a sex doll.
reply
[dead]
reply
There's such a fascinating divide on this.

I am 100% with you. I didn't ever _use_ Sora, but some of it trickled down to me (mostly through Instagram reels). I think it's amazing that we have such great new tools to express ourselves, and that we are trying out new platforms, paradigms, and approaches.

Is there money involved? Absolutely, but I don't fault companies for trying to earn their keep.

It 100% takes work to use these tools in the right way to make something funny. Ask an LLM to make them on their own and they'll hardly evoke laughs (I'm sure that'll change too, though).

reply
Yes, I don’t doubt that there was some very high quality human-moderated output. The point is that you likely can’t accurately distinguish the human-moderated output from the entirely generated slop (especially as it’s being trained and refined on the rest of the content), and so what chance does the average non-technical person have?

Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.

reply
> created by people with a great sense of humor

The real problem with AI slop is not the AI. It's the people. It's always the people.

The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).

Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.

With the amount of early-2000s-style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 clickbait; it just got crowded out by ever more sophisticated forms.

reply
There were some genuinely very, very funny videos made on there. A lot of slop, but some definite nuggets of gold.
reply
[flagged]
reply