At the very least, Ilya Sutskever genuinely believed it, even when they were just making a DOTA bot, and not for hype purposes.
I know he's been out of OpenAI for a while, but given his role and how long he was there, his thinking likely trickled down into the company's culture, so I don't think it's all hype.
Grand delusion, perhaps.
Definitely interesting to watch from a human-psychology perspective, but there is no real content there, and there never was.
The stuff around Mythos is almost identical to O1: leaks to the media that AGI had probably been achieved, and anonymous sources inside the company saying this is very important and talking about the LLM as if it were human. This has happened multiple times before.
So just understand there's a lot of us “insane” people out there, and we're making really insane progress toward the original 1955 AI goals.
We’re going to continue to work on this no matter what.
1) True believers 2) Hype 3) A way to launder blatant copyright infringement
True believers are scary and can be taken advantage of. I've played DOTA since 2005, and beating pros is not enough to justify belief in AGI. I get that the learning is more indirect than a deterministic decision tree, but the scaling limitations and the gaps in the kinds of knowledge that can be ingested make AGI a pipe dream for my lifetime.
Seems more like an incredibly embarrassing belief on his part than something I should be crediting.
He doesn't need to be right, but it's not crazy at all to look at superhuman performance in DOTA and think that could lead to superhuman performance at general human tasks in the long run.