Karpathy's 0% own code / infinite possibility feeling makes sense for someone who can evaluate correctness at a glance across most domains. For that person, AI is a genuine multiplier. For someone earlier in their career, or working outside their strongest domain, the review overhead often consumes the writing savings entirely.
The 4-hour sleep thing is the tell. That's not productivity; that's stimulation. The slot-machine feeling of "what will the AI build next" is real, and it's orthogonal to whether the output is actually useful.
The issue, obviously, is that AI isn't that. It's a simulation of that which often fails...
Ted Chiang's Understand comes to mind.
Wait, why are you doing it? If you're already that far, have the AI agent do that for you as well.
The point is, none of this stuff just happens. I would have to be involved in all of it, and the more the AI takes on, the more I need to do in terms of guidance and oversight.
Is this what I want my life to be? It sounds absolutely awful.