Moreover, if you spend that much on tokens, that sounds like a skill issue, and you may be creating a lot of technical debt. I don't see how anyone has the brain capacity to handle such enormous codebases.
(Sidebar: there is a prediction that the traditional roles of Designer, Product Owner, and Programmer are disappearing and converging into a single specialized role (the Claude Code blog has a post about this), and I feel there is truth in it.)
So, my run rate right now is ~$4,200 per year, but I won't be surprised if it goes up; it depends on several factors.
I cannot imagine productively spending $250k/year on LLM coding. You'd need some kind of massive tree of agents reviewing each other's work, and I think even then you'd struggle to keep them on task and sanity-checked. However, I don't make $500k a year, so what do I know...
The catch is that token spend and quality aren't correlated the way you'd expect. Low-spend months when I'm directing carefully and reviewing every diff tend to produce better code than high-spend months where I'm letting agents run longer chains. The expensive runs generate more code, not necessarily better code.
Jensen's $250k figure only makes sense if you're running dozens of parallel agents continuously. Most engineers are doing something more like augmented pairing. The unit economics are actually pretty good at $100-200/month per person. Beyond that you're hitting diminishing returns unless you've built actual agent infrastructure to parallelize and verify the work.
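To put those numbers side by side, here's a back-of-the-envelope comparison using only the figures mentioned above (the $250k/year figure, the $100-200/month range, and my ~$4,200/year run rate); the variable names are just for illustration.

```python
# Back-of-the-envelope cost comparison (figures from the discussion above).

PAIRING_MONTHLY_LOW, PAIRING_MONTHLY_HIGH = 100, 200  # $/month, "augmented pairing"
HEAVY_ANNUAL = 250_000                                # $/year, the $250k figure
MY_ANNUAL = 4_200                                     # $/year, my current run rate

heavy_monthly = HEAVY_ANNUAL / 12   # what $250k/year works out to per month
my_monthly = MY_ANNUAL / 12         # my run rate per month

print(f"$250k/year is about ${heavy_monthly:,.0f}/month")
print(f"That's {heavy_monthly / PAIRING_MONTHLY_HIGH:.0f}x to "
      f"{heavy_monthly / PAIRING_MONTHLY_LOW:.0f}x typical augmented-pairing spend")
print(f"My run rate is about ${my_monthly:,.0f}/month")
```

So the $250k figure is roughly two orders of magnitude above the per-person spend where the unit economics still look good, which is why it only pencils out with serious parallel-agent infrastructure.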