Over time we're probably going to see some really broad and strong use cases of AI, but in the case of social media or generative content, we have to be a lot more thoughtful about it. And I'm glad that they're shutting down this app, as much as it's great to see innovation and technology and to see how far it's pushed. I prefer to see it when someone like Google does it, because they're really doing it from the standpoint of "this has broad applications to something like simulation or training." Not whatever OpenAI was doing, which honestly just doesn't feel very truthful. I feel like they say one thing and do something else, or say one thing while the agenda is something else. And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth, even if it only benefits one other person to hear it.
The addictive toxic content will go the way of tobacco and explore new markets.
Back in 2010, around 11% of the population of Indonesia was connected to the internet. Currently it's closer to 80%, largely via mobile phones. That's approximately 200 million new users.
Nigeria and Pakistan are going through the same change, just started later.
Since 2016 India alone added more users than the mentioned countries combined.
That's a lot of first generation users. More than the entire western population.
Short form video is a special kind of crack. I see even old people getting hypnotized by it. And even worse, they're terrible at determining if something is AI.
Which is usually back to back with the thought that in bygone times "the human mind used to be cleaner / healthier / smarter and it was slowly destroyed by modern living"
There's not that much difference between our behavior and that of a chicken fixated on the chalk line in front of it.
Not to say it's a hallucination, but, by modern standards, if this were publicly funded research, it seems like it would have been a gross violation of ethics or other non-technical criteria. Interested to see how people think of it in later years, e.g., now.
In a sufficiently isolated population, you get the same effect from a sound-making greeting card, or a battery powered light and/or sound toy from a carnival.
And for what it's worth, tomorrow they won't miss whatever “indistinguishable from magic” thing, so no harm done.
// grew up near such areas
Is it?
I have the impression GenAI deteriorates the internet both from a content and tech perspective.
Bots that waste your time because they don't work well or because they are pushing an agenda, and low quality content that floods social media from people who want to make a quick buck.
GitHub and AWS became increasingly unstable. X, Instagram, and WhatsApp are suddenly sprinkled with subtle bugs.
Everything just got faster and we got more of it, but none of it is good anymore, because everyone tries to replace 90% of their work with GenAI instead of maybe starting at 10-20% and then adding more once you're sure it works.
I just have the feeling that it doesn't get the job done anymore.
I hope we will see the rise of alternatives.
I think it factors into why public perception is increasingly anti-AI. It'd be one thing if people were losing jobs, but on the other hand, their daily chores were done by a robot. Instead, people are losing (or fearing losing) their jobs, while increasingly having to fight with AI chatbots for customer support and similar cost-center use cases.
It's like AI is the "high fructose corn syrup" of tech. Nobody's arguing the output is better--it's just a lot cheaper and faster to get there, so that's its legacy. Making things cheaper and worse.
Saves the company a ton of money
I am not convinced. Nobody is making money, every player is losing money hand over fist.
Take Uber as an example: yes they've raised prices to become profitable, but not to the insanely profitable levels they could if they had a true monopoly. People will stay on Uber when the competition is still at a roughly equivalent price, but will switch if Uber raises its prices enough.
Uber Eats is different, since it's a three-sided market where the cost is paid by the restaurant rather than the user.
AI looks like it's going to be more like Uber the car service. Claude can charge $200/month, but charging $2000/month seems unlikely to work. I'm sure many would be willing to pay $2000/month if they had no alternative, but there are alternatives.
I like to call this the "Yahoo Effect"
Some of that is seeking to kill competitors before they can get established. That's normal and has been around for generations, if not since trading was invented.
But most of what we've seen during the "enshittification age" has been to burn money until you achieve a critical mass of users. However, this only really applies to social platforms where the point of it is communicating with people you know. That's the lock-in. You convinced Grandma to join Bookface and now you feel bad leaving if she doesn't leave at the same time, and more importantly, who wants to join Google Square if nobody else uses it?
That's not going to work for AI platforms.
What I do see potentially working is one method that email platforms use to lock in users: having tons of data you can't export/migrate. If you spent lots of time training your AI by feeding it your data, that's going to make it harder to leave.
So far none of them have capitalized on this (probably due to various technical reasons) but I expect it to start eventually.
Coincidentally, bringing your own address that can be migrated away is somewhere between impossible and expensive.
https://www.zoho.com/mail/zohomail-pricing.html
A few DNS hosting companies still bundle in a few free email mailboxes with registration costs but that is becoming more rare.
So it is stated, but is it actually true? I am not convinced.
Besides, it's not as if they can suddenly stop training models; the moment you do that, you've spelled a death sentence for profitability, because Google and open source will very quickly undercut a 15-year break-even timeline.
I'll believe it when I see it.
OpenRouter makes it easy to use them, just add credits to your account.
I thought this was common knowledge to anyone looking to use an inference API, but it seems it isn't. Well, even AWS is in this business with Bedrock.
Because few people really care much about the commodity hosting world. They're not making waves, they're just packaging things made by others for a low-ish cost. They're also not very consumer-focused, as they're a bit lower level than what most people prefer to think about. It doesn't mean they don't exist or that they're not profitable though, just not headline-reaching numbers in the end.
Right, but I think a lot of these use cases aren't replacing any jobs, because it wasn't anyone's job. It's just a little polish on existing work (did spell correction in Word kill jobs?) or the stuff that voice assistants have been promising for 10 years.
That alone is huge, if they let go of their egos about putting the entire white-collar class out of work.
You will have an agent like your SEO expert; this agent will be able to use common tools like Google SEO, Facebook SEO, etc., and you will teach it how you want it to do its 'job'.
You will have a way of delivering your requirements to it; it will run in the background, might ask for feedback, but will otherwise do things similar to whatever person was doing them before.
There might be some transition phase, like verifying the real person's output against the agentic AI's, then moving over to validation only, until the agent is on average as good as a human. Then the human will be gone.
Agentic AI will take basic support tasks first (it's actually already doing this), then more complicated things, etc.
For this we need an ecosystem, aka the agentic AI platform: the interconnect between agents and tools. This stuff is currently getting built by someone, one way or another.
At scale we need more capacity, and these agents will also cost more than a $20 subscription.
But if you have a, let's say, SAP agent, it will be built once, trained once, and then used by everyone. Instead of a person using an HR system or billing system, the agent will bridge the gap between data and system.
Who will be held responsible when an AI agent messes up the HR system and the company is exposed to losses due to a mistake? Who is going to be responsible when your SEO agent overspends?
Ultimately, it's going to be you most likely, because I can't see AI firms taking this responsibility.
You might argue that right now it also falls on the employer, since employees are rarely held responsible for genuine mistakes, even if it ends in disaster. However, you have a lot of agency over what an employee is doing. Their motivation is generally correlated with doing well, because past success ensures future career growth.
An AI agent has no such incentives. The AI company will just charge you some minimal fee to provide the service, and if it messes up, will wash their hands of responsibility and tell you that you should've been more careful in using it.
I dislike Taleb for various reasons, but using AI agents is basically the definition of a fragile system. It works 99% of the time, lulling people into this sense of security where they can just offload all their work very conveniently. And then 1% of the time (or 0.01% of the time), it ends in utter disaster, which people are very bad at dealing with.
Encoding more rules, more precise rules, and alerting a human when the system thinks something is off. Like a salary increase of 20% gets flagged automatically. A revenue drop of x% too.
It could even go so far that the maker of these systems will insure you for their use.
It just needs to be cheaper than all the humans in the loop, and once you've trained it, you can copy it unlimited times. That's the scaling effect of software, applied to tasks where we'd otherwise need to train a human again and again.
It could also be agent systems which do this. Like a company building and designing the HR USA Healthcare agent specialized in SAP HR. Another one for HR Brazil Healthcare agent specialized in another HR software.
Humans are really expensive, and you have to train them regularly, every single one of them.
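A minimal sketch of the rule-based guardrails described above: every change an agent proposes is checked against hard thresholds, and anything crossing one gets routed to a human. The field names and thresholds here are illustrative assumptions, not anyone's actual system.

```python
# Hypothetical guardrail layer for agent-proposed changes. A change is a
# dict like {"field": "salary", "pct_change": 25}; thresholds are made up.
def flag_for_review(change: dict) -> bool:
    rules = [
        # Salary increases of 20% or more need human sign-off.
        lambda c: c.get("field") == "salary" and c.get("pct_change", 0) >= 20,
        # Revenue drops of 10% or more likewise.
        lambda c: c.get("field") == "revenue" and c.get("pct_change", 0) <= -10,
    ]
    return any(rule(change) for rule in rules)

print(flag_for_review({"field": "salary", "pct_change": 25}))  # True
print(flag_for_review({"field": "salary", "pct_change": 3}))   # False
```

The point of keeping rules this dumb is that they fail loudly and predictably, unlike the agent they're supervising.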
In this case, maybe not enough to offset the costs; or maybe it just wasn't addictive enough. But it's still early days.
I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.
I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.
It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).
I was talking to other people re: the difference between code and other domains. Code is, for the customer, what it does, not how it does it. That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability, but it doesn't really matter to the customer if the code does the thing it's supposed to do.
Many things like entertainment products don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters - dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, sound track, editing, etc.
For short-format, low-stakes stuff like online ads, though, AI slop probably does work.
Same for say making a power point. LLMs can quickly spit out a passable deck I am sure. For a lot of BS job use cases, that's actually probably fine. But if it is the key element of a sales pitch, really it's just advanced auto-formatting/complete, and the human element is still the most important part. For example I doubt all the AI startups are using AI generated sales pitches when they go to VC for funding.
A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.
Same with power point - you make a power point so that everyone knows this decision was made by the type of people who make power points. A txt file and a png would have gotten the job done.
Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.
What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.
OpenAI just proved you cannot burn money indefinitely.
I like the framing of trying explosive things to escape the pull of gravity. When applied to rockets, it means a lot of stuff blowing up, which again seems apt.
They're not, they just already have the habit formed with the place they go to do that. Ultimately anything worth seeing on sora will be reposted to Tiktok.
Having Disney on their side was def quite a smart/interesting move.
At least per one interview, they definitely had resource issues last year and teams had to fight for compute. It can easily be that Sora was always deprioritized and they realized it didn't make sense to spend that much capacity when it meant not being able to push their main model.
It reeks so much of desperation. They know they are running out of goodwill and money at breakneck speed. They are just flailing and throwing shit against the wall to see if anything sticks.
So they need to be able to do image generation, for which they need image data. They also need to be able to analyze videos for more and better training data, e.g. learning from or teaching their models with YouTube and other sources.
So they have image generation, an image dataset, and a video dataset. It's not far-fetched at all, or desperate, to leverage this base to play around with video generation.
And despite how much money they burn, for a company that size, trying out video generation wasn't that high a goalpost.
I'm really surprised by their move, and can only imagine that the progress of models from Google and Anthropic pulled their teeth, and they no longer want to invest the compute (not the money), preferring to use it for their main models.
Nano Banana created a lot of noise.
But the reasoning of Gemini 3.1 Pro is really, really good. It's hard to describe how good it became. I do not see the same quality from OpenAI. OpenAI, though, is also super fast in response, a lot faster than just a few months ago.
For example: a German guy misused a word while describing an advantage of having a silencer. OpenAI just said it was nonsense; Gemini suggested it was a typo and that he meant to write something else (Gemini was correct).
It could also be that we are in a moat between "why is AGI not here yet" and "we need to build now the agentic platform stuff, that takes time".
Gemini Pro is definitely slower than OpenAI, and I don't know if it's because I use the pro version of Gemini but not of OpenAI. But it could also be that OpenAI has to work on subagents, because Gemini definitely uses subagents and I was not able to find a source that OpenAI is doing this too.
Obviously caveat emperor but there are a lot of real world scenarios like this.
I think Anthropic and OpenAI are trying to be all cool and Apple-y with their branding, but these use cases are just tools getting work done. Most normal people don't need or want AGI, or even AI slop videos. They just want their invoicing system to just f-ing work for a change.
Nobody ever really solved making CRUD apps easier through better frameworks. So now we have a tool to spit out framework gunk, and suddenly everyone can have their own app.
s/emperor/emptor
I hope your friend's company spends $20K to harden the deployment of the new app so it doesn't become a deep liability.
The best part is that they'll get popped because of it and have zero clue. Anyone building on any frontier provider currently, with little background in software, is creating all kinds of new liabilities that didn't exist before.
In a school district where I live the IT department developed a password distribution app using Gemini on Google App Script (they didn't even need this part), sent out links with B64 encoded JSON that included: student name, student email, parent email and student password. Yet, when I found it and told them all the ways that it was technically a breach in our state they ran to their 2-bit "cyber security experts" and "legal". They were far more concerned with CYA than understanding the hole they dug themselves. And all of the advice they got back was that it wasn't a breach. They claimed their DPA with Google protected them. I explained how email works and they just ignored me, likely because in our state they are bound by GDPA and won't ever engage in a legitimate conversation via email.
The kicker here is they pay for an IDP with built-in mechanisms for password resets (that was the reason for building this: to reset students passwords). One of their cyber security "experts" (a lone guy who has zero credentials from what I found) told them that password resets using the IDP was "not recommended". When pressed on that they were, again, silent.
LLMs are creating a huge mess for people now empowered to go well beyond their capabilities and understanding. It's a second coming of the golden age of shitty software that's riddled with even the most basic of security flaws.
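To make the password-app story above concrete: base64 is an encoding, not encryption, so a link carrying a base64-blob of JSON exposes everything in it to anyone who sees the link. This is a hypothetical payload of the kind described, not the district's actual format:

```python
import base64
import json

# Illustrative payload resembling what the comment describes was put
# in the link. All values here are made up.
payload = {
    "student_name": "Jane Doe",
    "student_email": "jdoe@example.edu",
    "parent_email": "parent@example.com",
    "password": "hunter2",
}
token = base64.b64encode(json.dumps(payload).encode()).decode()

# "Decoding" requires no key or secret whatsoever:
leaked = json.loads(base64.b64decode(token))
print(leaked["password"])  # hunter2
```

Any mail server, proxy, or shoulder-surfer along the way can do the same one-liner, which is why this counts as disclosure rather than protection.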
Either way, the instability of this industry due to the insane amounts of cargo culting every time <insert big thing> comes along has made me really question whether I want to stick around.
I hear you but at least as my bud described it, the software that most of the timber mill industry uses is buggy as hell, crashes all the time, and makes mistakes. One would wonder if even the licensed software is hardened.
Ironically, starting your response with this guarantees a lot of people won't read it. It's the same as going on reddit and starting a reply with, "Nobody will see this but", and hoping that people try to prove you wrong by reading and commenting on it. I stopped after the first sentence. People really have to stop with the clickbait vomit way of writing.
Considering the large million plus view counts I see AI slop getting on FB and YouTube I'm not seeing this behaviour play out.
If people don't read because the text is an unreadable mess, none of the points get across.
A long time ago on the myspace forums there was this slightly weird but also very wise and smart person who wrote without any punctuation or paragraphs, ever. Although they were generally liked and part of the community, I think I was the only person who read every single one of their comments in full, religiously, once I realized how insightful they were, and I was richer for it. I could have told them the obvious, how their posts differ from most others on the forums; and they would have posted with less joy and maybe less overall, that would have been it.
[...] do not ye after their works: for they say, and do not.
For they bind heavy burdens and grievous to be borne, and lay them on men's
shoulders; but they themselves will not move them with one of their
fingers.
But all their works they do for to be seen of men [...]
> And again, I don't know how helpful it is to comment like this, but I feel like if you understand the truth then you should speak the truth even if it only benefits one other person to hear it. [...] they seeing see not; and hearing they hear not, neither do they understand.
That man was later nailed to a plank for literally no reason. Nothing is new under the sun.
After those first two weeks though, we just… didn’t use it again. The novelty wore off and there wasn’t anything really to bring us back. That was the real downfall of Sora.
There will be (or is, I'm behind the times / not on the main social networks) an undercurrent or long tail of AI generated videos, the question is whether those get enough engagement for the creators to pay for the creation tool.
The AI art I have seen creatives produce is far beyond anything I have been able to come up with. We're not at the point yet where you can just prompt "Make me a video that is visually stunning and captivating" and get something cool.
ah, but what a persona that would be if you were a Kai's Power Tools settings menu!
.. such as? What's the "Mona Lisa of AI art"? Is there, like, a gallery? Awards?
TikTok and social media is a strange mix of both, people posting response videos to everything.
Personally, I've stopped subscribing to Spotify, YT music, etc because the slop from Suno is good enough to replace mainstream music or whatever lofi playlist. It's free, it's good enough, and it's not grating to hear after a few days of that favorite song.
The video slop can well replace TikTok and Reels. Make educational content about your hometown. Explain how to throw an uppercut.
But I guess the desire to create something that others would consume is also different from the desire to simply create.
This is a vocaloid break up song: https://youtu.be/9pQR4a5sisE
The first isn't bad by any means. There are a million break-up songs and that's one of the best sad ones. Most are just... angry? Blaming? Empowering? They work fine. They sell records. Many have a billion views.
But the second one, even with the clunky translation, strikes somewhere deeper. It's written by someone who had enough time ruminating on a break up. The ending hits a little harder, because break up songs are about endings.
Both are sincere, but the first feels more formulaic. I'm inclined to think the first one is the soda.
I feel Suno leans towards this group of songwriters and poets who have something to say. Sora doesn't.
The musician in me just shed a tear
Hopefully AI outcompeting humans at slop sparks a renaissance of humans creating truly beautiful human artwork. And if it doesn't, then was anything of value truly lost?
I get my modern music from Bandcamp. If you can't find good stuff to listen to, that's a 'you' problem.
What are you talking about? There’s lots of modern music that’s not corporate slop and that’s absolutely great. Never in history was access to great music as easy as it is now.
From wikipedia: Many Daft Punk songs feature vocals processed with effects and vocoders including Auto-Tune, a Roland SVC-350 and the Digitech Vocalist. Bangalter said: "A lot of people complain about musicians using Auto-Tune. It reminds me of the late '70s when musicians in France tried to ban the synthesiser. They said it was taking jobs away from musicians. What they didn't see was that you could use those tools in a new way instead of just for replacing the instruments that came before. People are often afraid of things that sound new."
It's a neat tool for genuine creators, and a crutch for people interested in slop.
So I quit riding the overpriced subway altogether and now consume AI-generated subway imagery and soundscapes for free; they are just good enough to feed my passion for boring tunnels.
Some ego-bloated edgelords had the nerve to tell me that there are, like, other modes of transportation, but I honestly find their high-horse elitism despicable. Damn morons.
I wonder what OP categorises as 'mainstream'. As a classical musician this breaks my heart.
There are exceptions though. FUKOUNA GIRL by STOMACH BOOK, for example. AI can't come close to replicating something like this. Not the cover art, not the off-key voices, not the relatable part of the lyrics. I don't believe this is a top #100 song, though it certainly is popular.
There is a fundamental issue of trust here. Facebook has me tagged as history nerd so I get to see those slop videos. They are fun, but always superficial and often plainly wrong. So unless the slop comes from a known, trustworthy source, the educational element is simply not there.
For throwing an uppercut it's even more important, if you follow wrong slop instructions you can end up breaking your wrist or fingers.
You wouldn't care to order the food as I personally like it -- might be too spicy (or too bland) for your taste.
Suno songs are overtuned for personal preference in the same way.
^ this is important.
Otherwise you may very well be missing anything really surprising or novel.
See for example https://www.programmablemutter.com/p/after-software-eats-the... , an experience report of NotebookLM where
> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.
On the other hand, Google might not have done much to upgrade the podcast feature since then.
Sometimes I'll take deep research output and listen to it too that way.
This somewhat makes the whole NotebookLM thing less useful, but still.
Having said that I absolutely hate the audio format, I only used it when I had to drive or when I swam lanes. But these days I do neither.
For example, I can give it 8 papers on best practices in online marketing, it will turn it into a 20 minute podcast.
There are errors, but also with real podcasters.
Or before! Either is mandatory to actually learn the content.
Those 100 videos probably cost $100+ for them to create. Did you pay them $100+? (not a criticism, just a re-framing)
24/7 titillation is boring
And this is the challenge that these tools have - they have to have a free tier to get people to explore it, but unless they can make it a habit, those people will never upgrade to a paid subscription.
I have no figures, but if I'm being optimistic, these freemium subscription services have 10% conversion rate at best; can that 10% pay for the other 90%? For a lot of services that's a yes, but not for these video generators which are incredibly compute intensive.
I'm sure there's a market for it, but it's not this freemium consumer oriented model, not without huge amounts of investments. Maybe in 5-10 years, assuming either compute becomes 10-100x cheaper / more available, or they come up with generators that run cheaper.
There's some market for b2b I'm sure, but as a consumer facing product it's tough to see how it could ever come close to paying for itself.
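The freemium question above ("can that 10% pay for the other 90%?") comes down to simple arithmetic. Every number below is a made-up assumption purely to show the shape of the problem:

```python
# Back-of-envelope freemium economics for a compute-heavy generator.
# All figures are illustrative assumptions, not real data.
users = 1_000_000
conversion = 0.10            # assume 10% of users pay
price = 20.0                 # assumed monthly subscription, $
cost_per_user = 5.0          # assumed monthly compute cost per user,
                             # paid or free, $

revenue = users * conversion * price   # 2,000,000
compute_cost = users * cost_per_user   # 5,000,000
shortfall = revenue - compute_cost
print(shortfall)  # -3000000.0
```

With an ad-supported text service the per-user cost term is tiny and the math flips positive; for video generation, where free users burn real GPU time, the cost term dominates unless compute gets dramatically cheaper.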
I think this is starting to play out.
When I personally see a blog post which didn't need an image, but still does have an AI-slop image banner, I mentally check out. I might have Claude summarize it, or (more likely) just skip it altogether.
Essentially you are watching the same videos over and over subconsciously
Procgen has a niche, but it never became ubiquitous, because for most people exploring a nice hand-made intentional environment is better.
I think people attach to other people more than “AI”. When there isn't a narrative “person” behind the content, it is way less interesting.
> This is the right question but hard to answer in practice ...
> The brownfield vs greenfield split is the real answer to ...
> The babysitting point is the one people keep glossing over ...

First it looked like it was crazy inventive, good at writing snappy dialogue, and in general a very good font of ideas.
Then the same concepts, turns of phrase, story ideas kept reappearing, and I kinda soured on the concept.
I haven't done it in a while, but that kind of usage really shows the weakness of LLMs: if you keep messing with its generations, editing what it made, then as the context length keeps increasing, it's more and more likely it goes into dumb mode, where it feels like talking to GPT-3, constantly getting confused, contradicting itself, etc.
Sometimes people want to paint, sometimes people want a painting.
To have wonderful time with their mom… I bet they had absolutely zero interest in the act and process of making silly videos.
Read the main comment out loud to yourself while imagining it’s someone sitting at a table at a pub.
Now imagine someone turning to this person in the pub, and speaking the subsequent comment, word for word.
No seriously, try it out.
Your reply is more interesting. Hence my (albeit maybe snarky) chiming in. So the original comment does end at a very specific app/sora related conclusion. "Sora didn't keep us coming back."
If I may amend your scenario: imagine this bar is actually in the center of SF or across the street from Open-AI or whatever. We're on HN discussing a post on X about Sora.
The appeal to humanity is not wrong. My point is more let's keep the connection with that humanity in relation to AI, to Sora, to what's going on in this forum.
You didn't at least puff a little ack through your nostrils for that one?
Sora was the first product OpenAI shipped where I felt that fell into that second category, and for that I was very disappointed. You have all those GPUs, and the most incredible technology in the world, and the most brilliant engineers, and all you can think to do with them is to make an app that just makes meme videos? I mean, c'mon!
Still, I am mystified by how rapidly Sora went from launch to shutdown. Does anyone have any guess what happened there? Even if Sora wasn't a spectacular success, it seems to me like subsequent model improvements could have moved the needle - shutting it down so soon seems premature. I mean, what if this is the equivalent of making ChatGPT with GPT 3?
i recently used gpt for the first time in several months (i'm a daily claude user) and didn't find this at all. it is most certainly trying to pull you into engagement with how it ends each response. "if you want, i could tell you about this thing that's relevant to what you are discussing and tease just enough so that you addictively answer yes"
Not about Sora, but about ChatGPT. I felt the same way for quite a while until I noticed that its response pattern has changed, apparently aiming for higher engagement. Someone aggressively pursued a metric.
At some point, ChatGPT started leaving annoying cliffhangers in its every response, like "Do you want me to share a little-known secret of X that professionals often use?" Like, come on!
To me it seems it was "Disney gets shares and we get to use their characters in Sora".
Even if Sora breaks even, why would you gift Disney stock? It's not like they actually gave $1B to OpenAI.
I think if you had to foot the bill for generating a bajillion gigabytes of slop with no real utility, you wouldn't be too mystified.
They showed off their technology and proved it was impressive. That's all it had to do.
I'm curious if you still feel this way about current iterations of ChatGPT? It seems like it's now primed to engagement bait the user, especially when used through the web UI. You can ask it a simple question with a straight forward answer and it will still try to get you to follow up with more.
> What is the minimum thickness for Shimano M8100 disc brake rotors?
> For Shimano XT M8100-series rotors (like RT-MT800 / RT-MT900 commonly used with M8100 brakes), the minimum thickness is 1.5 mm. If the rotor measures 1.5 mm or thinner, Shimano says it should be replaced.
> (a bunch of pointless details in bullet points)
> If you want, tell me the exact rotor model (e.g., RT-MT800, RT-MT900, size), and I can confirm the spec for that specific one and what typical wear looks like.
The entire query could have been answered with "1.5mm". The "if you want" follow ups are so annoying.
I really thought he wasn't like the previous generations of tech leaders - as you mentioned OpenAI (with him in charge) seemed to be genuine about making a product that could improve people's lives.
He'd go on podcasts and quite convincingly talk about how ChatGPT could prevent real world harm like suicide, and possibly even contribute to helping disease too.
Then they drop this and it just doesn't gel. So much of what they've done since has just doubled down on the Zuck-esque scumminess and greed too.
Part of me still sees Dario as genuine in the way that Sama seemed back in 2024, but I'm sure once he has enough investor pressure he'll cave the same way too.
He is a con man. Of course he’s charming and convincing, that’s how he ended up where he is. But he’s just as full of it as Musk when he was waxing lyrical about saving the world and going to Mars. They lie very convincingly.
I think his board fight within OpenAI, where he essentially lied to the board, his obsession with retinal-scanning everyone for his biometric cryptocurrency (Worldcoin), and how he left Y Combinator are all evidence that he's not very heroic. Most cringe to me is that he and many others seem aware that what they are doing is corrosive and harmful to society on some level, as Altman has admitted to having a bunker somewhere around Big Sur [0]. Which… WTF.
[0] https://www.newyorker.com/magazine/2016/10/10/sam-altmans-ma...
Not too familiar with that history, but he still is listed as a courtesy credit/reviewer at the end of PG's blog entries, so I assume he didn't have too much of a bad exit?
This is a conflict of interest, and I think a very obvious one. He tried to have it both ways and was forced to choose in the end. I think putting himself in that situation rather than resigning up front to pursue his OpenAI ambitions says a lot about his character.
It could prevent suicide, maybe, but we know that it does cause suicides, at least in some cases. Seems like a poor value proposition.
The thing he does is convince investors to give him billions of dollars to build what he wants. Where exactly does that leave us?
To me, this just came off as pathetic. It hasn't solved anything and there's no reason to believe it ever will. The whole question is completely pointless except to put the idea in viewers' heads that ChatGPT will soon revolutionize science, with no actual substance behind it. It's not even a question; there's only one possible answer. He's holding the guy verbally hostage just to manipulate dumb viewers.
So anyway that's the only memorable clip I've seen of Sam Altman, and based on that alone, fuck that guy.
Altman's reaction was very telling of the kind of person he is, just immediately lashing out at Gerstner in a childish way, asking if Gerstner wanted to sell his shares because he could find a buyer in no time.
It was a pathetically immature reaction. I wouldn't expect that from any kind of professional, much less someone who has held the positions Altman has and now sits at the top of the leadership of a company sucking up hundreds of billions in investment.
Apart from that clip there's also the whole saga of sama @ Reddit, full of lies, deceptions, and the same kind of immature attitude peppered across Reddit itself.
After glazing OpenAI and Sam personally for 45 minutes straight. But as soon as Sam was questioned in the slightest, he exploded.
I suspect they promised synthetic movies but it quickly became clear that they were never going to be able to deliver on this.
Slick fifteen second lulz-clips, sure, but I don't think they can make several of them consistent enough to fit into a larger video narrative without the audience finding it jarring and incoherent.
Perhaps legal at Disney also concluded that the output wouldn't be possible to copyright, which is their core business.
My guess is they over-committed server/energy resources, since they were generating ~30 images per frame for each second of video, with results that might be discarded and tried again.
Now that energy costs are increasingly less predictable because of the war, they're prioritizing what is sustainable. Willing to blow up the $1 billion Disney deal for Sora, because that's a popular IP that would have increased discarded server time.
Might be why the latest Iran propaganda video could be created in PowerPoint: https://bsky.app/profile/rachelbitecofer.bsky.social/post/3m...
(This sort of question, and the Grok sexual abuse, is why I'd like to see mandatory invisible watermarks on generated images/video)
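For what it's worth, the simplest way to picture an invisible watermark is least-significant-bit embedding. Here is a toy Python sketch (the function names are invented for illustration; real provenance schemes, such as robust perceptual watermarks or C2PA-style signed metadata, are designed to survive compression and cropping, which this does not):

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least-significant bit of the first len(mark)*8 bytes."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the LSB
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read back `length` watermark bytes from the LSBs."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    return bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )

# Round-trip on a dummy 8x8 grayscale buffer
image = bytes(range(64))
tagged = embed_watermark(image, b"AI")
assert extract_watermark(tagged, 2) == b"AI"
```

A scheme this naive breaks under any re-encode or resize, which is exactly why a mandated watermark would need to be robust and standardized rather than ad hoc.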
Most people serious about this stuff usually have their own pipelines.
These are open weight models, so you can fine tune them on Lego content… But presumably they already have enough training data since they were made by Chinese companies who don’t give a shit about Western IP rights.
I'd like to know what self hosted models they've been using, if any, and who provided them, trained on Lego IP.
Not a great look that either the teams responsible for Sora didn't know this was coming or the decision was so brash that things changed overnight.
In practice people would just generate the videos with the app then post them on regular social media in which case OAI would not get the ad revenue for that
It's the age-old "your product is just a subset of another product".
The other one is TV/cinematic ads. For a 30-second clip, expect to pay an agency $5-10k. Within a couple of days, I can make a video ad for maybe $50 in API costs. Cost of production is so crazy in marketing.
Obviously this is under the assumption that AI is good enough to do either of those things. Which it isn't so far; the best I've gotten is B-roll shots to stick together for an ad.
Most people do not care about the technology, and frankly they don't want to know about it. They want great experiences. That's it.
Technologists seem to have a reallyyyy hard time getting it.
Not every place has LEGO incest porn… or whatever the kids are into these days.
1. There's an AI-based virtual girlfriend industry that mixes text and images
2. There's an AI-based virtual boyfriend industry that is essentially all text (and not always distinguishable from the normal chat models)
3. There's a much shadier AI-based "undress this specific woman" industry
https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...
revenge porn or deepfakes in general are hugely harmful to people.
in the german-speaking world there's a scandal right now about a husband creating deepfakes of his wife, https://www.hollywoodreporter.com/movies/movie-news/christia...
> One fake video, which she claims was sent to 21 men, depicted her being gang-raped
i think you're taking this topic lightly because you just assume that it's not a big deal. try to keep in mind that people's mental health and with this their life is at stake.
as with lots of things, the problem is not the tech itself, but the existence of men. it's not all men, but it's usually men. not sure how we'll solve this issue.
Yes, revenge porn is very effective at causing harm, even though it can be generated.
No, because 'plausibly deniable' has never worked for social consequences and shame.
Yeah, marketing. Which is a huge market...
It's not just dirty talk. It's a whole new paradigm in verbal filth.
On the topic of sora, though: current models are astounding. I watched a clip of Leonidas, Aragorn, William Wallace, Gandalf etc. all casually riding into a generic medieval town together, and if you showed that to me a few years ago, it would have seemed like magic. We're not far off from concerts featuring only dead artists, and all video and image testimony becoming unreliable. Maybe Sora was a victim of timing or mismanagement, because I don't see how this isn't still a seismic shift in the entertainment industry.
This is a "seismic shift" in the sense of the Big One hitting California. The knock-on effects of trust erosion caused by AI are going to be huge and potentially unrecoverable.
I've no doubt that content creators outside of social media were using it as well, either for their brand or other video work.
Yes we see AI reels all over the place, but that's not only what it was used for
I guess you haven't watched hours of AI cat videos cheating on their husbands with bulls, or Lemons having babies with strawberries and fighting over custody of the child. It's absurd, it's stupid and I know it's a waste of time but I have to admit that it amuses me. I'm quite sure there are millions like me that just want some downtime to relax at the end of the night and end up watching slop like this.
It was legitimately fun until the IP guardrails came up and we couldn't do anything with the characters and culture we know.
If you look at US top videos on YouTube any given day, 40-60% of the videos are IP-based. Star Wars, Nintendo, Marvel, music, etc.
I'd rather eat poison
Big IP is strong arming OpenAI, Suno, and all the rest.
It'll be interesting to see whether creators at the bottom of the pyramid can create new brands and IPs fast enough to offset the inability to use corporate IP.
I also think the lawyers at the MPAA, RIAA, gaming industry, etc. will ultimately require all of social media to install VLMs to detect if their properties are being posted. Forget generation - that's hard to squash - they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses to their characters and music. We'll see cable TV era "blackouts" when a social network has to renegotiate their IP license.
People really wanted to use Sora for about a week. After the app/model debuted, they lost the ability to generate IP within the first week. The interest faded almost immediately. The same thing happened with Seedance 2.0.
People want to generate IP.
edit: clarity
It sets a precedent for those creators to now also hold these companies responsible. That's not a bad thing under the current legal system.
Also, seeing genuinely original creations made with AI assistance is much more interesting to me.
The great disappointment about how all of this is marketed is that what AI should be good at doing, enhancing a tiny budget, is all but forgotten. I don't want a video of Pikachu fighting Doctor Strange; I want some weirdo's fantastical horror movie that he could never get financed, but was able to green-screen and use AI to generate everything. I don't want a goofy top-40 country song full of silly lyrics; I want musicians to use AI to generate new sounds as part of composition.
In the same way that there's a difference between vibe coding and using a coding assistant...
As a onetime semi-pro musician, with decades of live performance and sound design experience:
I would rather burn my beloved instruments publicly and pee on the fire.
Integrating AI with existing tools to improve productivity is harder and requires effort and investment...
Could you use the bullshit machines to generate sounds that were nuanced, musical, and original, with enough time and effort?
Maybe. I'm not sure original is something they can do, but it's not totally implausible.
I would strongly recommend learning to use other tools for that purpose, instead of feeding the plagiarism monstrosities.
I understand your entire world model is shaped by your past and that this machine is changing the fundamentals.
As an outsider to music, I'm excited that I have access to something I previously did not through the use of Suno and other tools. I'm excited that I can come in and just try things and not hit a skill wall or quality barrier that would cause me to quit with the limited time and effort a working adult has. It's something I've wanted to do for a long time, but just never had the time for.
Attempting to learn costs thousands of hours before you can even start to feel good about it, and I don't have that time. Life is short and I'm already thinking about the end.
I used to be sympathetic to folks with your view, but now that programming and engineering are impacted by this - I'm in the crosshairs too. I'm subject to the same forces.
I've decided I love this tech even more. Claude Code is a tool, just like all of these other tools.
This rising tide of capabilities is so awesome. This is the space age stuff I dreamed about as a kid, and it's real and tangible.
So no, I won't restrict myself to your set of pre-approved tools. I'm going to have fun and learn my way.
And it is fun.
You can keep having fun the way you like to. What other people do shouldn't be ruining the fun you have, and if it is, then you should reevaluate why you do it.
Taking away the precision, control, and serendipity afforded by modules and cables, or a programming language, and telling me "Just describe what you want and the plagiarism machine will spit out whatever correlates with that description on average" would destroy everything I love about synthesis.
> It'll be interesting to see whether creators at the bottom of the pyramid can effectively create new brands
The problem is, to create a brand, you need to be able to protect it against rivals either ripping you off, or diluting it.
The same mechanism that protects "big" IP also protects everyone else, even the small players.
> they'll go directly to Instagram, TikTok, YouTube, and Reddit and force them to obtain licenses
They already do that for music. But the issue is this: if we want culture, we need to find a way to pay for it. Is it possible for a bunch of mates to make enough money to live on playing in a local band? Not really. They can only really make money if they have either a viable local gigging scene or a large enough online following to sell merch/Patreon.
The big IP merchants were quite keen on videogen, because they sense that it's possible to cut out the expensive artists. If they can avoid paying actors, writers, and artists, it's way more profitable for them. This is part of the reason AI hasn't been hit with the Napster ban hammer.
I think the other thing to remember is that creating good IP is hard; you can't really just pull it out of your arse after 5 minutes. The original seed takes a long time to refine, test, and evolve. Even the half-arsed sequels require work.
Media like YouTube isn't consolidating because that's what people want, it's because that's what YouTube and IP holders want. They want death to people like Boxxy, and they want you to watch VEVO instead.
Or the novelty wore off in about a week, and then after that it also became harder to generate videos of baby yoda at Westboro Baptist Church protests
If you consider how the reading, audio, and video you consume either builds or degrades your capabilities and character, as the food or poison you consume either builds or degrades your physical health, then [looking at US top videos on YouTube any given day] literally IS taking poison for your mind.
Depending on the poison and the dosage, eating the poison for your body instead may be the lesser of the two evils.
Where can I get this data?
I find all of it lame and cringe, so I downvote all of that. However stuff still sneaks by…
https://variety.com/2025/digital/news/youtube-trending-page-...
Bummer. It used to be at:
https://www.youtube.com/feed/trending
So last year, these were the top videos:
https://web.archive.org/web/20250324155132/https://www.youtu...
There's this, but it's nowhere near as good as seeing the actual videos:
It's not an exaggeration to say that this is how millions of people use Facebook. It might be not how most HNers use it, but create a new account and you will be absolutely funneled toward prolific producers of video-based AI slop.
But the problem is that FB and Tiktok (and to a smaller extent, YT Shorts) have cornered the AI video doom scroll market, and no one really seemed to be inclined to use Sora and related models for anything more creative. Which probably made it not worth subsidizing.
Sora (whatever that means) was one of the most astounding demos I've probably ever seen (ChatGPT was more gradual).
The shock and awe of rendered AI video blew my mind.
Yes months later everyone can do it and is bored by it and has strong opinions about what is right for society or not.
But it was a monumental piece of tech, and I personally (clearly incorrectly) think the top comments should be appreciative of the release and its impact.
Personally I think the lack of nudity destroyed the adult market. But I don't know enough, tbh.
So far that’s been exactly it. Now AI generated videos are primarily used to scam, deceive, and ragebait.
I really don't see the argument for this tech being any kind of good, unless you think moving into an era where you cannot trust any image or video is somehow a neutral outcome, AND are happy about the people who are in control of this tech. Which I guess captures a larger part of the HN crowd than I'd hoped.
GenAI has presented tangible proof of such risks and is forcing society to reevaluate the way we trust evidence. In my eyes, it serves as an opportunity to move our foundations of trust away from reliance on the good will of random authorities and toward something more objective.
Also, I haven't really seen anyone celebrating the large corporations who control AI tech. Could be simply the people I'm involved with, but most AI enthusiasts I've seen are more about, at the least, open-weights AI models.
If you are autistic, I feel that it causes you to see reality more accurately than most here on this thread.
The impact of easy AI generated video is a less certain and less secure world. You can't trust your eyes anymore because of how fast and easy it is to fake video and moments. You can't trust communications with someone because how easy it is to impersonate them over video and voice. Scams involving tools like this are already running rampant and it will only get worse. The sheer level of distrust these tools have unleashed into the world makes me wish they never existed. They have burned millions (billions?) of dollars on this when that money would have been better served going to the creators whose work they stole to build it. It's rotten.
As we've seen from Grok, building a system for producing non-consensual nude images of other people will get the legal and PR hammer brought down on you fairly quickly. It's just an incredibly unethical thing to do.
I also use ChatGPT as my default search engine and to help me learn Spanish.
But image generation and video generation were a nice parlor trick; they weren't useful for me except for making icons for diagrams.
But like you said, porn makes money, and there are people who pay $300 a month for Grok to generate AI porn.
Did you just make that up?
Grok barely makes "M-rated" nudity, let alone porn. Musk recently claimed it can do "R-Rated content", but his post got a community note saying otherwise.
Grok has gotten a lot stricter about video from uploaded images. But it is still able to make realistic X-rated porn from AI-generated images it creates.
There are various jailbreaks that have been working for the longest time and still work; from just a brief look, half of them involve "anime borders" and "transparent anime watermarks" over videos.
https://www.forbes.com/sites/martinadilicosa/2026/01/09/grok...
Which is what I would hope would happen, but they're probably fine not thinking about the consequences of their actions, looking at their 7-figure salaries.
Me: damn that’s cool …………AAAAAHHH HELP ME
Doesn't matter if you agree that would happen, the analogy is valid - you're essentially admitting that you're ignoring the negative impacts of the tech for the sake of how impressive it is.
I have said about 3 times I am solely judging tech by how impressive it is technically.
I have no idea who you are arguing with.
Nothing exists in a vacuum and the way technologies affect people living in the world is a fundamentally important aspect of the technology itself. To ignore them would be like celebrating a cool new engine design but overlooking the fact that it has a tendency to explode and kill everyone in the car. If the primary effect of a technology is human suffering, then it isn't cool!
It was a party trick. I can't remember the last time I touched it. That's what SORA is, or was.
There were social games that used it as a feature, and it was fun when it worked, but it had to be disabled soon as it drained the battery so fast.
Coding is where the money is. https://news.ycombinator.com/item?id=46432791#46434072
That narrative will implode like Sora later this year.
Then of course the hype collapsed, and now even the use cases where VR shines are deemed a flop. But no, it's exceptionally good at simulation (racing/flight) and at visualising complex designs while doing 3D design.
I see the same with generative AI and LLMs. It's really good at programming. It's definitely good at making quick art drafts, or even final ones for those who don't care too much about the specifics of the output. I use it a lot for inspiration.
But it's not good at everything it's being sold as. Just like in the VR craze, they're dragging it by the hair into use cases where it has no business being. A lot of these products are begging to die.
For example, an automation tool driven by natural language. For that it's a disaster: it's inconsistent and constantly confuses itself. It's the reason openclaw is a foot bazooka. It's also not great at meeting summaries, especially those where many speakers are in a room on the same microphone.
I don't think AI will disappear, but a realignment to the use cases where it actually adds value? Yes, I hope that happens soon.
It is astonishingly poor at this. My intuition was that it should be good at it (it's basically a translation problem, right? And LLMs are fundamentally translation systems), but the practical results are so poor: not just mis-identifying speakers (frequently saying PersonX responded to PersonX), but drawing completely opposite conclusions from what was actually said.
I'm genuinely intrigued as to what approaches have been taken in this space and what the "hard problem" is that is stopping it being good.
Generating pointless AI videos for pocket change or ad revenue is a loser in comparison.
However, I don't know a single developer who pays "thousands of dollars a month", not sure how you'd end up like that.
The top down push for AI is in line with the age old traditions of replacing highly skilled and highly compensated trade workers with automation. The writing is on the wall if folks care to look; many just don't want to. This has happened 1000 times before and it'll keep happening in the name of "progress" in capitalist systems for as long as there are "inefficiencies" to "resolve." AI is meant as our replacement, not as an extension of our skill as it happens to align with today.
It's increasingly obvious that the next phase in the evolution of the average programmer role will be as technical requirements writer and machine-generated-output validator, leaving the actual implementation outsourced to the machine. Even in that new role, there is no secret sauce protecting this "programmer" from further automation. Technical product managers eventually fall to automation too, given enough time and money poured into automating the translation of fuzzy, under-specified ideas into concrete bulleted requirements, where they can simply review the listed output, make minor tweaks, and hit "send" to generate the list of Jira-like units of work to farm out to a fleet of agents wearing various hats (architect, programmer, validator, etc.).
The above is very much in progress already, and today I'm already spending the majority of my time reviewing the output of said AI "teams", and let me tell you: it gets closer and closer to "good enough" week by week. Last year's models are horse shit in comparison to what I'm using today with agentic teams of the latest frontier models (Opus 4.6 [1m] currently, with some Sonnet.)
Maybe we're at a plateau and the limitations inherent in GenAI tech will be insurmountable before we get to 100% replacement. But it literally won't matter in the end as "good enough" always prevails over the perfect, and human devs are far from perfect already.
I have been producing software (at FAANG scale) for several decades now, and I've been closely monitoring GenAI systems for coding specifically. Even just a few months ago these systems would give me a verbose, meandering sprawl of methods and logic with the actual deliverables from the prompt scattered throughout, sometimes even with clear disregard for the requirements laid out, or "cheating" on validation by disabling tests or writing ones that don't actually do anything useful. Today I'm getting none of that. I don't know what changed, but I somehow get automated code with good separation of concerns, following best practices and proven architectural patterns. Sure, with a bunch of juniors let loose with AI you still get garbage, but that's simply a function of poor delegation of work units. Giving the individual developer and the AI too much leeway in the scope of changes is the bug there. Division of work into small enough units is the key, and always has been, for the de-skilling portion of automating away skilled human labor. We're just watching Marxist theory on capitalist systems play out in real time in a field generally thought to be "safe." It certainly won't be the last.
So a good PM running 1-3 teams, will only need 1-3 agentic ai teams instead.
No they aren't. Any decently skilled human blows them out of the water. They can do better than an untrained human, but that's not much of an achievement.
No, by far not. I'm by all accounts a “decently skilled human”, at least if we go by our org, and it blows anyone out of the water with some slight guidance.
And the most important part: it doesn’t get tired, it doesn’t have any mood swings, its performance isn’t affected by poor sleep, party yesterday or their SO having a bad day.
Modern models like Opus / Gemini 3 are great coding companions; they are perfectly capable of building clean code given the right context and prompt.
At the end of the day it’s the same rule of garbage in -> garbage out, if you don’t have the right context / skills / guidance you can easily end up with bad code as you could with good code.
Even with years as a principal engineer at a company with high coding standards and engineering processes?
Step 2: win back public trust by firing Sam Altman or dropping defense contracts or something else I can’t think of.
I also wonder if they got the $1B from Disney? Was that even a paid for deal? Or just another "announced" deal? Every article I found doesn't mention anyone signing any paperwork - which seems to be typical of AI journalism these days. Every AI deal is supposedly inked but if you dig deeper, all you find are adjectives like proclaimed, announced, agreed upon.
This did happen once. 3 people were laid off, I think directly based on things I said to drive the completion of some automation. That was the last time I ever measured something in man-hours to make a point. I’ll never do it again. That was over 12 years ago.
If anything software engineers have spawned in uncountable numbers of jobs that never would've existed before, is what my intuition tells me.
I never understood what this app was about. TikTok (and I would argue most modern social media platforms) isn’t really about sharing things with friends, it’s about entertainment. Most people watch TikToks and YouTube videos because they are entertaining. Beyond the initial 2-3 minutes of novelty, what do AI generated videos really have to offer when there is no shortage of people making professional, high quality content on competing platforms?
I don't know where they got September from; Sora launched in Feb 2024[0] which was a bit before people had become tired of awful AI-generated content. There was real belief that people would be willing to spend all day scrolling a social network with infinite AI-generated content. See the similar hype with Suno AI, which started a whole "musicians are obsolete" movement before becoming mostly irrelevant.
I think Sora 2 produced quite good videos, at least of a certain type. It was very good at producing convincing low-resolution cellphone footage. Unfortunately you had to have a very creative mind to get anything interesting out of it, as the copyright and content restrictions were a big "no fun allowed" clause, which contributed to its demise. Everything on the main Sora page was the same "cute animals doing something wholesome and unexpected" video.
My "favorite" part was how the post-generation checks would self-report. E.g., it was impossible to make a video of an angry chef with a British accent, because Sora would always overfit it to Gordon Ramsay and then flag its own generated video after it was created!
[0] https://news.ycombinator.com/item?id=39386156 - only one mention of "AI slop" in the entire thread, though partial credit goes to "movieslop".
> In February 2024, OpenAI previewed examples of its output to the public,[1] with the first generation of Sora released publicly for ChatGPT Plus and ChatGPT Pro users in the US and Canada in December 2024[2][3] and the second generation, Sora 2, was released to select users in the US and Canada at the end of September 2025.
[0] https://en.wikipedia.org/wiki/Sora_(text-to-video_model)
For example, early TikTok had the Boss Walk.
Sora had no big content trends that split into many micro-trends within some established ~universe.
If I see an AI video and my options to participate are… prompt another AI video? What’s the point
I think they are in serious trouble, especially with the size of their cash burn. Their planned IPO could easily turn out to be their WeWork moment where the bottom suddenly falls out on the valuation if they cannot make their operation look more like a real business before investors lose confidence.
Will be interesting to see.
ChatGPT is an interesting product - I like it for certain things - but after last year's PR scramble almost all the news out of OpenAI is a disappointment, with hovering hints of retrenchment.
Want to hear the one TRICK most people forget when doing X...?
Kind of insulting to lump Google in with xAI? Like, is anyone even using xAI other than backwater government agencies?
xAI doesn't have "content moderation" around adult content, so that usage is quite popular.
https://www.hollywoodreporter.com/business/digital/openai-sh...
I feel like they are sailing into a red ocean with what look more like copycat tactics than innovation (e.g., Codex v Claude Code; Astral v Bun)
As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).
Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.
total disagree.
if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.
Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes, rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.
The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.
If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"
The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.
and others. There are free-to-use tools as well.
I really don't think that using that term is appropriate when there's a multi-billion-dollar American mega-corporation involved in the activity in question.
No it didn't; OpenAI had control.
Saying Sora democratised video generation is like saying that landlords democratised home ownership.
- sora was not great at making what you asked
- i probably got 3 good videos out of 100 gens
- every video that was good needed editing outside of sora (and therefore could not be shared within sora)
just my experience
I’ve given it different levels of open-endedness ("give this flow chart an aesthetic like this mechanical keyboard," or "generate an SVG of this graphic from a 70s slide show"), but it never looks quite like what I have in mind.
In the end, I think you only use this stuff to generate images if you’re prepared to accept whatever comes out on approximately the first try.
When it does, it's more likely to be something popular and unoriginal, where the data is dense, and less likely to be something inventive and strange.
I wish we could use something like a simple DSL rather than English prose to work with these models, in order to have some real precision to describe what we want.
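Purely as a thought experiment, such a DSL might look like structured fields compiled into a deterministic prompt string, so the same spec always produces the same request text. Everything here (`ImageSpec`, its fields, the compilation) is invented for illustration, not any real tool's API:

```python
# Hypothetical sketch of a tiny "prompt DSL": a structured spec compiled
# into a deterministic prompt string. Not a real tool's API.
from dataclasses import dataclass

@dataclass
class ImageSpec:
    subject: str
    style: str
    palette: str
    aspect: str = "16:9"

    def to_prompt(self) -> str:
        # Deterministic compilation: same spec -> same prompt text.
        return (f"{self.subject}, rendered in {self.style} style, "
                f"palette: {self.palette}, aspect ratio {self.aspect}")

spec = ImageSpec(subject="flow chart of a build pipeline",
                 style="1970s slide show",
                 palette="muted earth tones")
prompt = spec.to_prompt()
```

The point is precision: unlike prose, the spec has named slots, so changing the palette cannot accidentally change the subject.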
That will likely happen in specialized fields. We can already see tools like Figma, Miro, and others that generate functional-ish frontend components in full TypeScript with corresponding styles (which are also selectable and configurable in the interface). They're not quite as free-form, since they load their own base framework and components to ensure consistency, sanity, error-checking, etc., but even then they generate usable, modifiable components that you can engage with precisely in your normal DSL.
For video, this likely exists, or is being worked on as we speak. All specialized domain tools will go towards this model to allow those domain experts to use the tools with the precision they expect AND the agentic gains we already take for granted.
My experience with AI image generation is similar, although with a higher success rate (depending on how accurate you want the result to be); but indeed, filtering is a major part of the process.
A lot of YouTube content is really talk, so it was easy to create Sora videos as video content while you talked over them.
However, its failure was that it watermarked everything. WTF? Leonardo didn't do that. Neither did other models. So while video gen was excellent, you always had these ridiculous floating watermarks.
They probably see how much Anthropic is absolutely crushing them in developer mind share (see, people who buy tokens) and want a piece.
So strange that they fell behind after leading the charge on video from Will Smith spaghetti through the spectacular launch of Sora.
Turns out anyone can get that look by appending “like an Octane render”
Beyond that, Kling and Hailuo quickly surpassed them on product, and OpenAI never even attempted text-to-3D, as if they are entirely uninterested in rich media.
OpenAI reminds me more of Meta than any other company. They’re both pioneering in their space and yet are mere commandeers (not innovators) when it comes to technology and importantly end user products.
They’ll also be extremely valuable, like Meta due to their ad product and ever-growing user base over the next 10 years, and I guess by focusing on code they plan to capture a segment of the developer market à la React or Swift.
Will OpenAI release a language or framework? An IDE? I bet the chat paradigm stays for the ad product and aging user base (lol) while the exciting innovation will happen in code automation and product development - an area they are not really experts in.
There are so many video-gen models out there, and given the cheaper Chinese models I’m not surprised they closed this down. Beyond the initial push, any marketing around video gen has always featured the Kling or Higgsfield models. There was just never a reason to use Sora.
Just because one thing is a lesser/different kind doesn't mean we can't also be vigilant about it as well.
> RIP to one of the most evil products I've seen come out of the tech industry in my lifetime.
I'm saying Sora isn't even in the top 100 of most evil products out of the tech industry.
There's nothing inherently evil about a knife. Standing outside of a high school and handing a knife to every kid walking in is pretty evil though.
Yes, literal weapons are bad, too. But that's not the current topic.
Disney Exits OpenAI Deal After AI Giant Shutters Sora
https://www.hollywoodreporter.com/business/digital/openai-sh...
A source familiar with the matter tells The Hollywood Reporter that Disney is also exiting the deal it signed with OpenAI last year, in which it pledged to invest $1 billion in the company and agreed to license some of its characters for use in Sora.
“As the nascent AI field advances rapidly, we respect OpenAI’s decision to exit the video generation business and to shift its priorities elsewhere,” a Disney spokesperson said. “We appreciate the constructive collaboration between our teams and what we learned from it, and we will continue to engage with AI platforms to find new ways to meet fans where they are while responsibly embracing new technologies that respect IP and the rights of creators.”
Also "exit the video generation business" seems somewhat notable, suggesting they're not just planning to launch a different video-gen product to replace Sora?

I used to think they were pretty clever, but with this news and other recent ones (the Jony Ive project cancelled, Stargate scaled down significantly, their models inflating token use on purpose) they just seem schizo.
Idk if it’s because I set codex to xhigh reasoning, but even then it still seems way higher than Claude. The input/output ratio feels large too, e.g. I have a codex session which says ~500M in / ~2M out.
It used to give me precise answers, "surgical" is how I described it to my friends. Now it generates a lot of slop and plenty of "follow ups". It doesn't give me wrong answers, which is ok, but I've found that things that used to take 3-4 prompts now take 8-10. Obviously my prompting skills haven't changed much and, if anything, they've become better.
This is something that other colleagues have observed as well. Even the same GPT5.4 model feels different and more chatty recently. Btw, I think their version numbers mean nothing, no one can be certain about the model that is actually running on the backend and it is pretty evident that they're continuously "improving" it.
Just that they took down some "io" mentions because of some trademark dispute with a third party "iyo".
1. OpenAI killing off their own products aggressively, taking a page from Google’s book. (I think the way you meant it)
2. Products/companies that no longer exist because OpenAI, or AI in general, made them obsolete. (My first instinct when reading it)
What would you place here anyways? Chegg and Stack Overflow?
Weil's now heading "AI for Science": https://www.pymnts.com/personnel/2025/openais-chief-product-...
* It was (assumedly) expensive to run.
* It was not good enough for customers to seriously pay for.
* There were too many content restrictions for it to be fun for most people.
The issue is that Sora ended up getting the short end of the stick: by generating the footage, it became the primary target of complaints. Meanwhile, they were forced to remove the videos, but people simply took those videos and uploaded them to random social media platforms like Twitter, TikTok, or YouTube, which ended up hosting the content while being much less of a target, since the content wasn’t generated there.
Honestly, I think the only way forward will be to wait for local models to become good enough so that you can run something like Sora locally and generate whatever you want.
Sora had all of the downsides, and attracted all of the scrutiny. Local-first is definitely the way.
i think it's clear cloud hosted is the actual future, which people have predicted for decades. it will never make financial sense to duplicate what you can get for cheap, because it's oversubscribed, with economies of scale and "if we let this run idle it's losing us money" pressure, for hardware found in a datacenter.
this has been the case for a long while now, and will increasingly be so as data centers buy up all the everything.
With open models, you have multiple providers competing on inference speed, quality, and price, leading to healthier market without lock-in.
I actually thought the Sora app was promising at launch, at least on paper, but it seems like they failed to keep people's attention long term. With the failure of Sora i don't think they have good options left.
Never once did I bother to browse videos made by others on Sora itself. I wonder if anyone did.
Then they killed Dall-E 2 and my credits vaporized.
Anybody found themselves in the same situation? What have you done?
If it cost too much and others can do it cheaper, that looks bad from both fronts.
also, for a company carrying "open" in its name that pretends to still remember its origins, they could at least open source the projects they sunset…
Offerings like Kling and ByteDance are considered much better.
This sounds like there would be some kind of revenge, but I struggle to imagine any kind of consequence. Did you have something in mind?
We learned two things from this debate:
1. What most people hated was actually just “bad CGI”. Good CGI went entirely unnoticed.
2. A generation of people were raised with CGI present in almost every form of professional media (i.e. not social media). They didn’t have a preference for practical effects because the content they consumed didn’t really use them.
I expect the same thing to happen here. I don’t think many people want to consume AI-generated content exclusively (as Sora’s app attempted). However, I expect AI-generated content to continue to improve in quality until it’s used as a component in most media we consume. You and I will eventually stop noticing it, kids will be raised with it as normal, and the anti-AI millennial/Gen X crowd will age out of relevance.
Or, it's a clear signal that AI video is too expensive as a consumer product and/or not quite yet at a quality bar that the average person finds acceptable.
I think someone could have looked at computer graphics and SFX circa the '80s and decided that they would always pale in comparison to practical effects. And yet..
It's an annoying trope, but this is the worst and most expensive (at this quality level) that these models will ever be.
But it was largely fun to try to transgress against the limitations. Who could trick the AI to generate something outlandish and ridiculous.
Maybe it achieved its objective?
So whatever reason they say to shut this down, it was more important than 1B investment.
It says a lot about the current economy that consumers have no money. Will companies just stop making consumer products?
Yes. I have noticed that it is close to impossible to get good deals on flights, hotels, or even good discounts online. Sellers have all the information about consumers that they need to maximize their profit and extract the maximum amount from us. Dynamic pricing makes it a personalized experience, so I personally pay the maximum I possibly can.
No room to get a fair price anymore.
Let’s be real: OpenAI is circling the drain.
The company with the fraudster serial-liar CEO who said he was gonna spend a trillion dollars can’t keep a video service alive right after signing a $1 billion deal with Disney?
What kind of a joke is that?
This is a company that has blown its opportunity twiddling around with zero product. They still just run a plain chatbot interface with zero moat and zero stickiness.
There’s no “pivot” for a company that is in this deep.
I'm no fan of Altman or OpenAI, it's a pretty shady company and I am suspicious of their books, but this was a great demonstration of the uselessness of boards and how out of touch they are with the business they are supposed to be supervising. It's really rare to find an effective board, primarily they sit like a House of Lords enjoying ceremonial perks and a stipend in exchange for holding a few meetings a year.
But now that the deal is off, I'm sure their legal team will attempt to once again change copyright law in their favor.
Disinfo AI videos and the Coca Cola Christmas ad have also really soured my expectation of genuinely positive creative uses of video gen for the next couple years until more improvements are made, and I start seeing stuff go viral for being good instead of just being weird. I am still surprised that sora never had the grok problem of generating csam or seemingly anything along those lines.
I can appreciate that the technology and research behind Sora could be helpful for many things, but I do not see anything good coming out of the consumer facing application.
Sora was a perfect example of using a lot of compute to generate video -> we need a lot of GPUs -> a lot of RAM -> energy and land.
I am predicting the RAM shortage will soften in the next 6 months, though not by much, because the war in the Middle East will have an additional impact for some time.
I think OpenAI had a brief delusion that it could become some huge social networking app. The app was heavily modeled after TikTok.
And two at Meta[2]: "A rogue AI agent at Meta took action without approval and exposed sensitive company and user data to employees who were not authorized to access it"
"director of alignment at Meta Superintelligence Labs, described a different but related failure in a viral post on X last month. She asked an OpenClaw agent to review her email inbox with clear instructions to confirm before acting. The agent began deleting emails on its own."
Even Elon Musk has shared the wisdom to proceed with caution! [3]
1. https://dev.to/tyson_cung/amazon-lost-63m-orders-after-ai-co... 2. https://venturebeat.com/security/meta-rogue-ai-agent-confuse... 3. https://x.com/elonmusk/status/2031352859846148366
Any platform which focuses on AI-generated videos is doomed.
sir, have you seen tiktok?
https://finance.yahoo.com/news/openai-sora-app-struggling-st...
I don't do design, or make videos, or ask AI for legal or medical advice, because I lack the skill and understanding of these fields. Dunning-Kruger still applies...
There is interesting "AI" content out there, clearly the person(s) behind it put some thought into it and had a vision.
Sure, I can write the screenplay and Veo will generate it for me. But I don't have experience in video creation/production, so it is difficult for me to write good prompts that generate engaging video.
Maybe. OpenAI shuttering Sora is in line with them shifting focus towards B2B sales, instead of B2B2C or B2C.
Interestingly, Aditya Ramesh, who iirc was the Sora 1 lead, is now "VP of Robotics" at OpenAI per his Twitter bio: https://x.com/model_mechanic
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
There's a web interface as well.
I had thought this would be combined with OpenAI launching a set top box where you could talk to an AI avatar. Disney IP could have been skins to sell people for their AIs.
Hustling just to barely stay afloat or drown means no time to compete with our own output.
America is a financially engineered joke regurgitating its own recent history, collapsing like an LLM trained on its own output. The rich are not even pretending it's "a free country" anymore: they have enough wealth for however many years most of them have left to live, and having seen the apathy to their own plight keeping the average person in their lane, they don't fear the public.
It’ll all collapse as they generationally churn out of life, and the Millennials on down, with zero skills but "data entry into a computer," will be holding an empty bag, taking orders from foreign nations that bought up all the American businesses we built.
The cost must have been a key reason for the shutdown.
End is near.
Better for OAI to spend their human and compute resources on something else.
The desire for something "new", for a Mildly Ethical product, killed off the most obvious path to success - to actually just make TikTok+AIGC, or in the present, Douyin+Seedance2.
The network effects of the other two platforms are too strong, and a value prop of “watch similar videos but they’re all AI” is not strong for consumers.
Also, say what you want about AI slop, but I was on Sora a lot for a few weeks and there was a real explosion of creativity there. It felt new and exciting, and creators were engaging with each other and sharing feedback and tips. I generated a ton of videos and surprised myself with a flurry of creative ideas.
There didn't seem to be any marketing for it. Like I can't even remember an ad for it or any content creator type of person pushing Sora actively.
To get access to Sora I believe you needed to be on a paid plan?
It's really difficult to get user generated content going when it's behind a paywall.
It's also hard to tell if this means that openai is in trouble, or if this is just a badly managed product that deserved to be killed. With the negative sentiment on openai, folks might think the former.
On a more serious note, it could be a sign of a more powerful and general model being developed/released in the near future, that would include Sora capabilities. Or AI-doomers were right, and this sunset is one of the proofs for them.
A record-speed descent into AI slop. Is this what everything turns into when content creation becomes easy? What's happening here, exactly?
OpenAI is bleeding money faster than they can afford to and they are literally running out of people that they can go to for more. They need to stop the bleeding.
"...the AI company exits the video generation business."
"OpenAI, led by CEO Sam Altman, is not getting out of the AI video business [...], of course... "
I hate journalism.
From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.
The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.
https://marginalrevolution.com/marginalrevolution/2025/04/o3...
Commercial labs rely on weak terms like AGI or strong AI or whatever else because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is probably especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more akin to looking back at ourselves through a mirror.
I'm currently reading "Master and his Emissary," and one of my early takeaways is how narrow our definition of intelligence is, and how real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.
Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.
If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?
At a higher level of intelligence than many humans, current experience suggests
We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.
Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?
And it's probably for the best not to look too closely at how we treat animals or the justifications we use for it.
Also, being able to problem solve and being able to suffer are two different things and in my opinion completely separable. You can have one without the other.
Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?
— https://www.businessinsider.com/openai-discontinues-sora-vid...
So yeah, focusing on world models
1) the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.
2) google and specialized video-only startups are simply doing a much better job than they were.
This risks generalizing to audio and text which would make most LLMs usage unsustainable. I guess time will tell what actually goes through the strainer, long term.
Fixed that for you :-)
https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...
> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.
At least they were able to recognize their mistake and course correct.
So OpenAI has done the right thing as a startup here, gotten lots of training data, and observed lots of user behavior that they can now apply going forward.
The Sora models, on the other hand, aren’t going anywhere, and I believe OpenAI will continue to invest in them. They’re getting better and better, just like Google’s Veo, which is quite good at generating videos as well.
Using Codex and agent skills, it’s actually quite easy to generate a storyboard and then have a list of shots in that storyboard. Then generate videos from those storyboard stills, and then finally assemble those individual video files into a final movie file using something like ffmpeg. It's also very easy to create a voiceover with TTS and even simple music using ChatGPT Containers (aka the python tool).
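The assembly step described above can be sketched with ffmpeg's concat demuxer, assuming the shot clips share a codec. All file names and the voiceover path are placeholders; for safety the snippet only builds the ffmpeg argument lists rather than executing them:

```python
# Sketch: stitch storyboard shot clips into one movie via ffmpeg's concat
# demuxer, then mux in a TTS voiceover track. File names are hypothetical.
from pathlib import Path

def build_concat_list(clips, list_path="shots.txt"):
    """Write the file list the concat demuxer expects (one `file '...'` per line)."""
    Path(list_path).write_text("".join(f"file '{c}'\n" for c in clips))
    return list_path

def ffmpeg_commands(clips, voiceover="voiceover.mp3", out="final.mp4"):
    """Return the two ffmpeg invocations as argument lists (not executed here)."""
    list_path = build_concat_list(clips)
    # Pass 1: concatenate the clips without re-encoding.
    concat = ["ffmpeg", "-f", "concat", "-safe", "0",
              "-i", list_path, "-c", "copy", "silent.mp4"]
    # Pass 2: mux the voiceover over the concatenated video.
    mux = ["ffmpeg", "-i", "silent.mp4", "-i", voiceover,
           "-c:v", "copy", "-map", "0:v:0", "-map", "1:a:0",
           "-shortest", out]
    return concat, mux

concat, mux = ffmpeg_commands(["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"])
```

In a real pipeline you would hand each list to `subprocess.run`; `-c copy` keeps it fast because nothing is re-encoded, which works only when every shot was generated with the same codec and resolution.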
This will 'democratize' (ha ha, for people with money obvi) a lot of video creation going forward. Against all wisdom, I am actually quite bullish on this technology, especially in the hands of young people. They are very creative and have lots of stories to share.
Necessary disclaimer as usual around the ethics of how these models were created: all the AI companies have totally ripped off artists in service of creating these models. I wish something would be done about that but I'm not holding my breath. No politician seems to want to touch it.
This may well be a needed reprioritization in the face of resource constraints, but it ain't a masterful Xanatos gambit.
Agree, and didn't intend to imply that. This is just a good startup move that gets a big headline because it's OpenAI. Other startups around the world do the same thing all the time.
It’s quickly become the modern day equivalent of Comic Sans, WordArt, and the default clipart illustrations included in Word ‘98.
Perhaps most people are absolutely devoid of any taste of what makes art? I dont know.
That said, there are still people with exceptional aesthetic sensibilities in the tech field, obviously. They're just largely not in this space.
I had a lot of fun using Sora and got a lot of laughs with absurd videos of me in various situations.
But like everyone else, I kind of got it out of my system after a couple weeks. Not to mention that my family got sick of seeing them. And so my usage collapsed to zero. And that seems to have also been the pattern writ large.
But this kind of flash-in-the-pan dynamic is devastating for a product with this kind of profile, which requires insane amounts of compute hardware to serve while also having no short-term monetization path.
Meta could afford to invest in IG Reels even when it was burning money and costing them a fortune for hardware because it was building up what turned out to be sustainable usage patterns which persisted long after the initial spending ramp.
It’s basically impossible to effectively monetize anything that’s not sustainable on the order of multiple years.
A subscription-based model would see excessively high churn that would be ruinous to the economics, and also advertisers wouldn’t be interested either, for the obvious reasons.
So why couldn’t this work? I don’t think that it was because the models weren’t good enough or that the depictions weren’t realistic or lifelike enough. I still marvel at some of the better outputs I was able to get from Sora.
I think the fundamental problem that Sora faced is actually much broader and more general, and it comes down to the basic Pareto math of any content generation or creative app, which is that 95%+ of the users just want to passively consume content from the 5% or less that actually wants to generate it (and is capable of making anything that other people want to watch).
It was really dismal to see the repetitive, trite ideas that 99% of users generated in the public feed. Just the same few dumb jokes and things they copied from other users.
Or putting themselves in a scene with their favorite fictional or cartoon characters or whatever, which of course got banned pretty quickly for copyright issues.
Most people are not creative and don’t have a lot of original, interesting ideas. So that means that the vast majority of the content is always going to come from a vanishingly small number of creators in a power law distribution.
And those super-creators aren’t going to want to be limited to a simple text-based interface that can only generate for 10 seconds at a time with no continuity and where large portions of things you might want to try are strictly forbidden.
They’ll instead gravitate to more customized solutions for power users that regular users would find as overwhelming to use as AutoCAD.
And that’s what you’re seeing now with all the new viral AI slop videos that are made by a handful of creators who have figured out the workflows and are pumping out the worst junk you can imagine that gets people to click and watch.
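The power-law point above is easy to illustrate numerically; the creator count and exponent here are assumptions chosen for the sketch, not measurements of any platform:

```python
# Toy illustration: under a Zipf-like power law, a small head of creators
# accounts for most of the output. N and alpha are assumed, not measured.
N = 10_000                                  # creators
alpha = 1.2                                 # assumed power-law exponent
output = [1.0 / (rank ** alpha) for rank in range(1, N + 1)]
total = sum(output)

# Share of all output produced by the top 5% of creators.
top_5pct_share = sum(output[: N // 20]) / total
print(f"top 5% of creators produce {top_5pct_share:.0%} of the output")
```

With these assumed parameters the top 5% produce the large majority of everything; steeper exponents concentrate it further, which is the dynamic the comment describes.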
Anyway, RIP Sora; it was fun while it lasted. Thanks, Sam, for blowing a few hundred million bucks so we could get some laughs.
> We’re saying goodbye to the Sora app. To everyone who created with Sora, shared it, and built community around it: thank you. What you made with Sora mattered, and we know this news is disappointing.
We’ll share more soon, including timelines for the app and API and details on preserving your work. – The Sora Team
So I agree with you, but it also makes me wonder what they're even selling when the IPO happens (supposedly as early as late summer 2026). Data centers? Partnerships with the government?
After placing my hand on the red-hot stove, aren't I super smart for now removing my hand?
That is, hiring Meta execs who focus on gaming numbers with no care or sensibility for product.
Wild really. Well done Sam.
For a litmus test of your perspective, try using Sora. Try to make a video that makes someone genuinely laugh. Sora doesn't prompt itself. Human creativity and humor are still required.
Sure, it was moderated to heck, like all models attempting to avoid PR disasters (see Grok), but, just as with Youtube and broadcast TV, there's still a corporate friendly surface area that excludes porn, gore, etc, that people can enjoy. And yes, people like different things.
Like, imagine if you watched a bunch of GenAI videos of cars sliding on ice from the driver’s perspective. The physics is wrong, and surely it’s going to make you a worse driver because you are feeding your internal prediction engine incorrect training data. It’s less likely that you’ll make the right prediction in real life when it counts.
But I think I do have similar feelings about special effects. A difference is that special effects tend to depict scenarios very outside of the envelope of normal experience, so probably not very damaging if my model of “what does a plane crash look like” is screwed up.
Though some effects probably are damaging - how many people subconsciously assume cars explode when they are in an accident? A poor mental model of the odds of a car exploding could cause you to make poor real-life decisions (like moving someone out of a wrecked car in a panic instead of waiting for EMS, risking spine/neck injury)
Your counter-examples have the property that most of the things you need to learn are absent from the media being watched, leading to an observation which is "obviously" true, but they ignore the impact of media on a journey properly incorporating other pieces of information. To compare to the mental models being discussed, you'd have to actually consider effects you're writing off as negligible, and when it comes to something like a world model which we've only learned by observation and which doesn't have a lot of additional specialized knowledge those effects might be much more impactful.
Most people can’t explain the physics they see, but they can deduce enough to be able to predict the effects of physical actions most of the time.
Sure, be ready to get them out, and if they’re trapped and it’s going to be a while until fire shows up start working on that. But my mental model is that for any road legal car that is not currently on fire, there is a higher chance you’ll cause harm by rashly moving a victim than that a victim will be suddenly consumed by an enormous Hollywood style conflagration.
Films on film using in camera effects are still made on occasion but they’re art films for niche audiences.
But we’ll never get another Ben Hur. And that doesn’t sit well with me even if society can’t yet fully explain why.
The worst offenders are brake sounds not correlating to the car movement, engine sounds not correlating to the car's acceleration, nonsensical car deceleration while braking, and steering wheel not correlating to car steering.
I am willing to suspend disbelief for Terminator 1, even if it is clear that it's the head of a doll in the shot.
But it is insulting to feed slop to your audience; it shows you didn't even try.
I have actually seen one slop video that I kinda enjoyed; it was obvious that great effort was put into the script and details, just as it was obvious it wasn't being passed off as the real thing.
"AI" consumes energy before the user has even started (during training).
That is on top of comparison for each particular case.
Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output, and represent the up front cost for the producer.
Both a movie and a language model can cost tens or hundreds of millions of dollars to produce.
In both cases additional infrastructure is needed for efficient usage: movie theaters or streaming platforms for movies, and data centers with the GPUs for LLMs. This is also upfront (capex) costs.
At consumption time, the movie requires some additional resources, per viewing, whether it's a movie theater or streaming. Likewise, an llm consumes some resources at inference time. These are opex. In both cases, the marginal cost for inference/consumption is quite low.
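The capex/opex split in the analogy can be made concrete with a toy amortization; every figure here is invented purely to show the arithmetic shape, not an estimate of real energy use:

```python
# Toy sketch: amortize a large upfront (capex) energy cost over many uses,
# then add the per-use (opex) cost. All numbers are made up for illustration.
def energy_per_use(upfront_kwh: float, per_use_kwh: float, uses: float) -> float:
    """Energy attributable to one use, including its share of the upfront cost."""
    return upfront_kwh / uses + per_use_kwh

# Hypothetical figures, not measurements:
llm_query  = energy_per_use(upfront_kwh=1e9, per_use_kwh=0.3, uses=1e10)  # training amortized
movie_view = energy_per_use(upfront_kwh=1e6, per_use_kwh=0.1, uses=1e8)   # CGI render amortized
```

The structural point of the analogy is the same in both columns: once the upfront cost is spread over enough consumptions, the marginal figure is dominated by the per-use (opex) term.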
> Model training is similar to the creation of the cgi for the movie. Both happen before anyone consumes the output
I did not say anything about consumption of the output. Maybe you misread what I wrote; it is about energy consumption.

> Both a movie and a language model can cost

But we weren't comparing the cost of a movie to the cost of a language model.

> can cost tens or hundreds of dollars

But we weren't talking about dollars; we were talking about energy. We're clearly exploring different questions.
CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
> CGI renders do use a lot of electricity relative to playing back the movie for individual viewers. It's perfectly analogous.
I literally laughed out loud after reading this. I can't believe you're stretching this in good faith.
But if you are - well, you certainly have a unique perspective.
I am 100% with you. I didn't ever _use_ Sora, but some of it trickled down to me (mostly through Instagram reels). I think it's amazing that we have such great new tools to express ourselves, and that we are trying out new platforms, paradigms, and approaches.
Is there money involved? Absolutely, but I don't fault companies for trying to earn their keep.
It 100% takes work to use these tools in the right way to make something funny. Ask an LLM to make them on its own and they'll hardly evoke laughs (I'm sure that'll change too, though).
Then, when they start ratcheting the slop ratio up (likely under the justification of keeping up with declining creator engagement), the consumers get more and more adjusted to a pure-slop feed, until bingo you have a direct line into the midbrain of millions of consumers/voters/parents/employees/serfs.
The real problem with AI slop is not the AI. It's the people. It's always the people.
The clickbait has started fooling people more than before, with the latest videos being halfway believable (except for the circumstances of the videos).
Technology enables the most malicious and self-interested, and systems need to be adjusted to not reward that, or users need to become wise to it.
With the amount of early 2000's style clickbait ads still around, I'm not sure we ever vanquished Web 1.0 style clickbait, it just got crowded out by ever more sophisticated forms.
The percentage of AI videos over the internet will certainly not decrease after Sora is gone.
The question is when will Chinese coding models have their Seedance moment and squeeze Opus/Codex out of market. It weirdly feels impossible and inevitable at the same time.
It's much easier to make Qwen animate Tank Man than it is to make any Western model generate indigenous people dancing, because, cough cough, naked skin is baaaaad. Except the Musk one, which will nonetheless be affected by all the copyright mess.
Then it became synonymous with slop: lowest-common-denominator content made without care, instead of a tool enabling people willing to put in varying levels of skill, expertise, and effort, the way coding models did.
If you want a video of a dancing cat, sure, you can get that. But if you want an orange tabby doing the moonwalk or the robot, that's a lot harder. You'll have to generate dozens of videos and fine-tune prompt incantations before you get what you want, if you ever do before you hit a rate limit or get frustrated. If you want something specific and unique and interesting, you still need to put in a lot of effort. Therefore, most videos that people actually make and share are pretty generic.
I think most art models have subtle tells and limitations similar to textual LLMs too, just a little harder to recognize. Certain ideas and imagery will be easier to generate and more likely to fill in the gaps of your prompt. The technology is fascinating compared to the nothing that we had before, but it still has real limitations - try to get it to generate an Italian plumber wearing a red hat that isn't Mario, for example.
All that to say, the trend towards low effort, repetitive, and uncreative results is inherent in the medium. Most users will prompt for a generic dancing cat and get something resembling a cat doing something that resembles a dance and that will flood social media. The few people going for a more creative and specific artistic view will be frustrated by the constant rolling of dice, and if they do make something they work hard on, it will be drowned out by the low effort slop posts. And if you're frustrated by those limitations and want to make something intentional, then you'll eventually gravitate towards Photoshop or Blender where you can actually craft the exact thing you want.
These models do not really "democratize art", they just make it really easy to generate visually interesting noise. Once the novelty wears off, the limitations are apparent. Art has always been democratized anyway - Blender and Krita are free, and pencils are cheap.
The existence of inoffensive use cases doesn't invalidate anything OP is saying, that's just a natural human reaction to overexposure of a technology.
In the span of less than 2 years, pretty much everywhere I look has been inundated with zero-effort spam, manipulated imagery, etc that has had a net-negative impact on my life. Even if it may also be helpful for a small business making a flyer or whatever without actively making my life worse, that doesn't really move the needle on my overall attitude.
> manipulated imagery
And we thought iPhone camera videos were bad... (they were (and are), though)

It’s so dumb that Zuck and Elmo want to inject^H^H^H^H^H^Hrecommend content into these people’s feeds while they’re checking in on their nieces and nephews and local book clubs.
- You're making an unsubstantiated claim
- personally targeting someone you don't even know
- in order to celebrate the presumed success of a mass fraud?
Novels, cinema, television, comic books, etc.
They were all considered careless skill-free slop at some point.
For an app to suggest a personal relationship with you is ridiculous.
Which makes me wonder whether these companies actually dogfood their own tools with this sort of stuff? Was this announcement written by ChatGPT? Honestly, I would find either answer to be a little concerning in its own way. It's either vaguely insulting to their customers or showing a lack of faith in their own product.
it reads as "we want to tell you that what you made with sora mattered, but we all know it didn't".
I find myself increasingly nostalgic for the Clinton era. I am not at all sure I will enjoy the version of fuckedcompany that gets vibe coded when this bubble pops.
Is it happening? :) /s
Sora had to be shut down because it was the clearest, most consequential demonstration that OpenAI’s models are running way, way ahead of their ability to align/jail them effectively.
If you end up with nothing in aggregate for the chances you pay for, you're a loser. Not in a pejorative sense, just as a fact, you lost.
If you come out with more than nothing, in aggregate, you're a winner, in the same objective sense.
Probably controversial. Eh.
That story can’t be true
What happens if you turn a "human-level" intelligence off? Did you kill someone?
AGI is a pipe dream - and moreover it's not even something that anyone actually wants.
You seem to be mixing up intelligence and consciousness. Not only does intelligence exist outside of humans, and even mammals, but it exists outside of brains and even neurons. For example, slime molds have fascinating problem solving abilities: https://www.nature.com/articles/nature.2012.11811
It is clear that whatever we are...creating/growing with LLMs, it is very unlike human intelligence, but it is nonetheless some type of intelligence.
And obviously if such a system existed, the benefits (and risks) would be enormous, though the risks are smaller if you control it vs someone else, which is why every company is racing towards it.