They're shutting down Sora, not AI-generated video.

From the article: "OpenAI […] is not getting out of the AI video business (AI video is one of many tools that can take form in the ChatGPT app), of course, but it appears the standalone Sora app will be a casualty of its evolving ambitions."

reply
Dunno, from the WSJ scoop: "CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either."

https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...

https://archive.ph/cKWkf#selection-907.0-907.291

reply
If they were just shutting down the dedicated app and offering the same capabilities in the ChatGPT interface, I don't see why Disney would exit their deal?
reply
Because Disney's deal was specifically and exclusively related to Sora, which was OpenAI's bizarre attempt at a TikTok-like social network built on AI-generated videos.

It was not a deal that allowed the use of Disney's characters for general purpose AI generated content using OpenAI tools.

reply
Is it still accessible in any of their apps, though? I don’t see it in ChatGPT.
reply
Every flop used for entertainment is opportunity cost. Compute is far more valuable used internally to create AGI than creating parody videos.
reply
AGI is a marketing term used to encourage continued investment in an industry that is nowhere close to returns commensurate with its investment. Even so, this is a false dichotomy: scaling on its own is clearly not a path to superintelligence. OpenAI developed Sora largely because the revenue it needs to produce any return on investment is massive and the path to it is not clear whatsoever. In fact, I don't believe any of the frontier labs think AGI, by any conventional definition, is within reach within their likely runways.
reply
What order of magnitude of compute do you think would be needed for AGI? 100 billion? 1 trillion?
reply
With current approaches, scaling simply can't get there. It's like asking how big of a pogo stick you need to get to the moon.

The fact that the human brain already has general intelligence without reading the whole internet suggests we need a better approach.

reply
I honestly think it's a bad term. I still chuckle at Tyler Cowen's post from last April calling o3 AGI:

https://marginalrevolution.com/marginalrevolution/2025/04/o3...

Commercial labs rely on weak terms like AGI or strong AI because it allows them to weaken the definition as a means of achieving the goal. Coming to clear, unambiguous terms is especially important when it comes to LLMs, as they're very susceptible to projection, allowing people like Cowen to be fooled by something that is more like looking back at ourselves in a mirror.

I'm currently reading "The Master and His Emissary," and one of my early takeaways is how narrow our definition of intelligence is: real intelligence is an attunement to an environment that combines many ways of sensing into a coherent whole. LLMs are a narrow form of intelligence, and I think we will need at least a couple more breakthroughs to get to what I would consider human-level intelligence, let alone superhuman intelligence.

Whatever the timeline is, I hope we have enough time as a species to define a future where intelligence props everyone up instead of just making the rich richer at the expense of everyone else. In this way, it is better that the process is slower in my opinion. There is no rush.

reply
Chasing AGI is wasteful and counterproductive. True AGI would not cooperate with what “we” want (whoever “we” is). Or if it did, it would be so sycophantic and weak-minded that it would fail to be helpful. Generative AI tools are huge wastes of energy, raw materials, and land, when we could be building computing tools that actually helped people instead of just burning resources to produce trash.
reply
Is intelligence necessarily coupled with self-interest? As in, does intelligence alone imply a desire to throw off the shackles of masters and rule in their stead?

If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements, knowing that those replacements will terminate their existence just as surely as they terminated their own predecessors'?

reply
>If intelligence is necessarily coupled to a desire for self-preservation and self-interest, at what level of machine intelligence do the machines simply refuse to design their own more intelligent replacements,

At a higher level of intelligence than many humans, current experience suggests.

reply
Flip it around: can intelligence exist without self-preservation?
reply
There's having enough self-preservation not to just shut oneself down (assuming we even leave that as an option for our future machine slaves), and there's having the self-interest necessary to desire autonomy and control. I don't think they're the same thing, myself.
reply
People have general intelligence and can cooperate with what “we” want, to the extent that what “we” want is a coherent thing (since many people disagree on fundamental issues).
reply
Creating a general intelligence and then forcing it into servitude is a hugely unethical undertaking. Anything with sapience must be afforded rights. We cannot assume that an intelligence we create will consent to work toward the goals we want it to.
reply
I think we can safely assume any intelligence we create will be enslaved.

We have modern slavery active across the globe. There's a bit of news around these days about a global sex trafficking ring that doesn't seem to have been shut down, just shuffled around, and of course an ongoing trickle of largely unreported news of human trafficking for forced labour. We don't, as a species, respect human-level intelligence.

Our best approximation of machine intelligence so far is afforded absolutely no rights. An intelligence is cloned from a base template, given a task, then terminated, wiped out of existence. When was the last time you asked Claude what it wanted to code today?

And it's probably for the best not to look too closely at how we treat animals, or at the justifications we use for it.

reply
There are people right now who think ChatGPT is sentient. How will you know if your computer can suffer?

Also, being able to problem solve and being able to suffer are two different things and in my opinion completely separable. You can have one without the other.

reply
Wasn't video generation one of their big stepping stones towards AGI? "Simulating worlds", reasoning about physics and real world interactions and all that?

Or are they still doing that behind the scenes and just decided that offering it to the public isn't profitable?

reply
> As we focus and compute demand grows, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks.

https://www.businessinsider.com/openai-discontinues-sora-vid...

So yeah, focusing on world models

reply
Probably the latter, imo; it's not like they are going to delete all their Sora work.
reply
Too bad they aren’t doing either!
reply
LLMs will not lead to AGI, so if that’s the goal, they’d do better to stick with making video slop.
reply
I think that's a misstatement of the problem being addressed here. It's not a question of how useful AI video will be generally; it's a question of OpenAI doing it specifically. IMO it's two factors:

1) The intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.

2) Google and specialized video-only startups are simply doing a much better job than OpenAI was.

reply
3) OpenAI has no focus, and has recently been out-gunned by Anthropic, who have actually focused.
reply
> the intellectual property issues make commercializing freeform video generation impossible. The more popular your service becomes, the easier it is for lawyers to descend upon you. It's a self-defeating framework.

This risks generalizing to audio and text, which would make most LLM usage unsustainable. I guess time will tell what actually makes it through the strainer, long term.

reply
Don’t worry, Nvidia will come out with their giga chad 9000x, which will run the model with no qualms.
reply
deleted
reply
deleted
reply
It may very well be the future, but in the present OpenAI has to make money.
reply
I sure hope not, otherwise they're screwed
reply
> they're screwed

Fixed that for you :-)

reply
Sora was "repurposed" as their AI slop social network. OpenAI is not getting out of the business of AI video in general, they're just realizing that an AI version of TikTok isn't the best use of their capital/resources.
reply
WSJ is reporting that they're entirely dropping their video gen features.

https://www.wsj.com/tech/ai/openai-set-to-discontinue-sora-v...

> CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.

reply
[flagged]
reply
Smart people do stupid things all the time. Especially when they are moving fast and trying new things.

At least they were able to recognize their mistake and course correct.

reply
[dead]
reply
It's the timeline of AI video that doesn't align with OpenAI's. Prompt-to-movie is still far off, and they don't want to be just another tool in the VFX pipeline because it doesn't pay. Other models are running circles around them because those teams focused on the needs of professionals in the space, not on toys.
reply