It's not a great definition, but it's not a terrible one either. For an AI system to be able to do all, or even most, of the jobs in an economy, it has to be well rounded in a way it still isn't today, meaning: reliability, planning, long-term memory, physical-world manipulation, etc. A system that can do all of that well enough to do the jobs of doctors, programmers, and plumbers is generally intelligent in my view.
reply
> It's not a great definition but it's also not a terrible one either. For an AI system to be able to do all or even most of the jobs in an economy

That's not the definition they have been using. The definition was "$100B in profits". That's less than the net income of Microsoft. It would be an interesting milestone, but certainly not "most of the jobs in an economy".

reply
Yeah, I think this is more coherent than people realize. Economically relevant knowledge work consists of tasks that humans find cognitively demanding. Otherwise they wouldn't be valued in the first place.

It ties the definition to economic value, which I think is the best definition that we can conjure given that AGI is otherwise highly subjective. Economically relevant work is dictated by markets, which I think is the best proxy we have for something so ambiguous.

reply
It's maybe somewhat nice conceptually, and certainly a useful addition - but the $100 billion profit figure mentioned elsewhere is not the right metric.

And then I think coming up with the right metric is just as subjective in this field as the technological question itself.

reply
> Economically relevant knowledge work is things that humans find cognitively demanding. Otherwise they wouldn't be valued in the first place.

Deep scientific discoveries are also cognitively demanding, but are not really valued (see the precarious work environment in academia).

Another point: a lot of work is rather valued in the first place because the work centers around being submissive/docile with regard to bullshit (see the phenomenon of bullshit jobs). You really know better, but you have to keep your mouth shut.

reply
Was there a better way than setting an arbitrary $100b threshold?

e.g. average cost to complete a set of representative tasks

reply
Yeah, I'm sure there could be a better metric, if the metric's purpose were to track progress toward the AGI target rather than to do business based on it (and so hammer the metric into the shape of a "realistic goal").
reply
> They redefined AGI to be an economical thing

Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.

reply
Around the end of 2024, it was reported that OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
reply
> OpenAI and Microsoft agreed that for the purposes of their exclusivity agreement, AGI will be achieved when their AI system generates $100 billion in profit

Wow. Maybe they spelled it out as aggregate gross income :P.

reply
Yeah, seems like this was stage-setting for them to exit. They were already trying to break the deal then. So I feel like the lawyers will find a way to bend whatever they need to get out of the deal.
reply
Companies that have created "AGI":

Apple, Alphabet, Amazon, NVIDIA, Samsung, Intel, Cisco, Pfizer, UnitedHealth, Procter & Gamble, Berkshire Hathaway, China Construction Bank, Wells Fargo, ...

reply
Those were all achieved by "GI".
reply
For some definition of "Artificial", this holds perfectly.

A self-running massive corporation with no people that generates billions in profit, no matter what you call it, would completely upend all previous structural assumptions under capitalism

reply
So no human on Earth is intelligent by that metric.
reply
> So no human on Earth is intelligent by that metric.

That's a relevant aspect of the AGI concept.

reply
It’s a system that generates $100 billion in profit. [0]

[0] https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

reply
Are there inflation markers included?
reply
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity

From: https://openai.com/charter/

reply
All humanity will benefit, but some humanity will benefit more than others.
reply
I am highly skeptical "all" of humanity will benefit, and many will experience extreme negatives.

if you think drone targeting in Ukraine is scary now, wait until AGI is on it...

ditto for exploiting vulns via mythos

reply
Marketing
reply
I'm so confused: why was I downvoted for answering the question that was asked?
reply
Because 1) your answer had nothing to do with the question, and 2) you quoted a slogan that reality has proven false.
reply
Are you illiterate? Do you not know how Hacker News threads work, or what?

I responded to the question quoted below, you dumb fuck. Can you figure out basic website navigation, or is that too complex for you?

> They redefined AGI to be an economical thing
>
> Huh. Source? I mean, typical OpenAI bullshit, but would love to know how they defined it.

reply
The question was about their redefinition of AGI in economical terms for which others provided links, not the one from their (obviously fake) mission statement.

BTW, I didn't downvote you (I hate it; if many people downvote a comment, it becomes harder to read), I was just trying to explain why others did. On second thought, my comment was wrong: your answer was related to the question, it just wasn't the intended one.

reply
> They redefined AGI to be an economical thing Huh. Source?

I don't think your original comment deserved to be downvoted. (Calling someone illiterate, on the other hand, does.)

But the "it" I was asking about was "AGI" as "an economical thing." You technically correctly answered how OpenAI defines AGI in public, i.e. with no reference to profits. But it did not address the economic definition OP initially alluded to.

For what it's worth, I could have been clearer in my ask.

reply
Yeah, I deserve to be downvoted for that last message, no doubt about that, lol.

But originally I was just trying to be helpful by quoting their charter on what they consider "agi" now.

reply
AGI is when the capitalists are not forced to share their profits with the intelligentsia.
reply
Translation: IPO.
reply
Here's the sauce you requested: [0]

"OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits."

Given that the definition of AGI is beyond meaningless, it is clear that the "I" in AGI stands for IPO.

[0] https://finance.yahoo.com/news/microsoft-openai-financial-de...

reply
Please reveal the “scientific” definition of AGI.
reply
When we are having serious conversations about AI rights, and shutting off a model plus its harness is as impactful as a death sentence. (I'm extremely skeptical, given the scale of compute/investment needed to produce the models we have, good as they are, that our current LLM architecture gets us there, if there is even somewhere we want to go.)
reply
It makes sense though. Humans are relevant to the economy based on their ability to perform useful work. If an AI system can perform work as well as or better than any human, then with respect to "anything any human has ever been willing to pay for", it is AGI.

I don't get why HN commenters find this so hard to understand. I have a sense they are being deliberately obtuse because they resent OpenAI's success.

reply
It doesn’t though; AGI has far greater implications than doing the mundane work of today. Actual AGI would self-improve, and that in itself would change literally every single aspect of human civilization. Instead, we are talking about replacing white-collar jobs.
reply
An AGI that can do all that would also necessarily be able to do all white-collar work. That latter definition I'd consider a "soft threshold" that would be hit before recursive self-improvement, which I imagine would follow soon after.

The current estimate of the time between the two is fairly small, bottlenecked most likely by compute constraints, risk aversion, and the need to implement safeguards. Metaculus puts it at about 32 months:

https://www.metaculus.com/questions/4123/time-between-weak-a...

reply
Sure, but that’s like saying we’re close to infinite life because we’ve extended our life expectancy.

I don’t really buy into the ”one part equals another” reasoning; we are very quick to make those assumptions, but the results are usually far from the science fiction promised. Batteries and self-driving cars come to mind, and organic or otherwise exotic storage technologies, all ”very soon” for multiple decades.

It’s very possible that white collar jobs get automated to a large degree and we’ll be nowhere closer to AGI than we were in the 70’s, I would actually bet on that outcome being far more likely.

reply
I think AGI by that definition (ability to self-improve) is closer than many people think largely because current models are very close to human intelligence in many domains. They can answer questions, derive theorems, write code, navigate websites, etc. All the work that current AI research scientists do is no more than these general information processing tasks, scaled up in terms of creativity, long-term coherence, sensitivity to bad/good ideas over the span of a larger context window, etc.

The leap between Opus 4.7/GPT 5.5 and what would be sufficient for AGI seems smaller than the leap between the invention of the Transformer model (2017) and today. So by a very conservative estimate, I think it will take no more time from now to an AI model as smart as any human in all respects than it took to get from then to now (so by 2035). I think it will be shorter, though, because the amount of money being put into improving and scaling AI models and systems is 100000x greater than it was in 2017.

reply
Not to worry, humanoid, generally useful robots are only a few years away.
reply