> If you haven’t spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement
Seems to me like if this is true, I'm screwed whether I want to "embrace" the "AI revolution" or not. No way my manager's going to approve me blowing $1,000 a day on tokens; they budgeted $40,000 for our team to explore AI for the entire year.
And from a personal perspective I'm screwed too, because I don't have $1,000 a month in the budget to blow on tokens, thanks to pesky things that also demand financial resources, like a mortgage and food.
At this point it seems like damned if I do, damned if I don't. Feels bad man.
I don't think you need to spend anything like that amount of money to get the majority of the value they're describing here.
Edit: added a new section to my blog post about this: https://simonwillison.net/2026/Feb/7/software-factory/#wait-...
I built a tool that writes (non-shit) reports from unstructured data to be used internally by analysts at a trading firm.
It cost between $500 and $5,000 per day per seat to run.
It could have cost a lot more but latency matters in market reports in a way it doesn't for software. I imagine they are burning $1000 per day per seat because they can't afford more.
Another skill, called skill-improver, tries to reduce a skill's token usage by finding deterministic patterns in it that can be scripted, then writes and packages the script.
Putting them together, the container-maintenance thingy improves itself every iteration, validated with automatic testing. It works perfectly about 3/4 of the time, another half of the time it kinda works, and fails spectacularly the rest.
It’s only going to get better, and this fit within my Max plan usage while coding other stuff.
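For the curious, the core of the skill-improver idea is conceptually simple. A hypothetical sketch (the log format, fields, and paths here are made up for illustration, not the actual implementation):

```python
# Scan a skill's run logs for tool calls that were identical across every
# run (i.e. deterministic), and package them as a plain script so tokens
# aren't spent re-deriving them each time.
import json
from pathlib import Path

def deterministic_steps(log_dir: str, min_runs: int = 3) -> list[str]:
    """Return shell commands that appeared verbatim in every logged run."""
    runs = []
    for log in Path(log_dir).glob("*.jsonl"):
        cmds = []
        for line in log.read_text().splitlines():
            rec = json.loads(line)
            if rec.get("tool") == "bash":
                cmds.append(rec["command"])
        runs.append(cmds)
    if len(runs) < min_runs:
        return []
    # Keep only commands present in every run, in first-run order.
    common = set(runs[0]).intersection(*map(set, runs[1:]))
    return [c for c in runs[0] if c in common]

def package_script(cmds: list[str], out: str = "deterministic_prefix.sh") -> None:
    Path(out).write_text("#!/bin/sh\nset -e\n" + "\n".join(cmds) + "\n")

steps = deterministic_steps("logs/container-maintenance")
if steps:
    package_script(steps)
```

Everything captured this way runs for free from then on, which is where the token savings come from.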
If the tokens that need to attend to each other are on opposite ends of the code base, the only way to make that happen is to read in the whole code base and hope for the best.
If you're very lucky, you can chunk the code base in such a way that the chunks pairwise fit in your context window and you can extract the relevant tokens hierarchically.
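Concretely, the lucky case is a pairwise map-reduce over the code base. A rough sketch, where llm_extract stands in for whatever model call you're making and the 4-chars-per-token ratio is just a heuristic:

```python
# Hierarchical extraction over a code base that doesn't fit in one context
# window: split, extract per pair of chunks, repeat until one piece remains.
def llm_extract(question: str, text: str) -> str:
    """Ask the model to pull out only the parts of `text` relevant to `question`."""
    raise NotImplementedError  # your LLM API call goes here

def chunk(text: str, max_tokens: int = 100_000) -> list[str]:
    step = max_tokens * 4  # ~4 chars per token, a rough heuristic
    return [text[i:i + step] for i in range(0, len(text), step)]

def extract_hierarchically(question: str, codebase: str) -> str:
    pieces = chunk(codebase)
    while len(pieces) > 1:
        merged = []
        # Merge pairwise so tokens in neighboring chunks get at least one
        # chance to attend to each other at this level.
        for i in range(0, len(pieces), 2):
            merged.append(llm_extract(question, "\n".join(pieces[i:i + 2])))
        pieces = merged
    return pieces[0]
```

The catch: tokens that land in distant chunks only meet after several lossy extraction rounds, which is the "hoping for the best" part.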
If you're not? Well, get reading, monkey.
Agents, md files, etc. are bandaids to hide this fact. They work great until they don't.
I would expect cost to come down over time, using approaches pioneered in the field of manufacturing.
To be fair, I’ll bet many of those embracing concerning advice like that have never worked for the same company for a full year.
As for me, we get Cursor seats at work, and at home I have a GPU, a cheap Chinese coding plan, and a dream.
Right in the feels
Make a "systemctl start tokenspender.service" and share it with the team?
I didn't read that as you need to be spending $1k/day per engineer. That is an insane number.
EDIT: re-reading... it's ambiguous to me. But perhaps they mean per day, every day. This will only hasten the elimination of human developers, which I presume is the point.
At home on my personal setup, I haven't even had to move past the cheapest codex/claude code subscription because it fulfills my needs ¯\_(ツ)_/¯. You can also get a lot of mileage out of the higher tiers of these subscriptions before you need to start paying the APIs directly.
Takes like this are just baffling to me.
For one engineer that is ~$260k a year.
The thing with AI is that it ranges from net-negative to easily brute forcing tedious things that we never have considered wasting human time on. We can't figure out where the leverage is unless all the subject matter experts in their various organizational niches really check their assumptions and get creative about experimenting and just trying different things that may never have crossed their mind before. Obviously over time best practices will emerge and get socialized, but with the rate that AI has been improving lately, it makes a lot of sense to just give employees carte blanche to explore. Soon enough there will be more scrutiny and optimization, but that doesn't really make sense without a better understanding of what is possible.
1) Engineering investment at companies generally pays off in multiples of what is spent on engineering time. Say you pay 10 engineers $200k/year each ($2M total) and the features those 10 engineers build grow yearly revenue by $10M. That’s a 4x net ROI and clearly a good deal. (Of course, this only applies up to some ceiling; not every company has enough TAM to grow as big as Amazon.)
2) Giving engineers near-unlimited access to token usage means they can create even more features, in a way that still produces positive ROI per token. This is the part I disagree with most. It’s complicated. You cannot just ship infinite slop and make money. It glosses over massive complexity in how software is delivered and used.
3) Therefore (so the argument goes) you should not cap tokens and should encourage engineers to use as many as possible.
Like I said, I don’t agree with this argument. But the key thing here is step 1. Engineering time is an investment to grow revenue. If you really could get positive ROI per token in revenue growth, you should buy infinite tokens until you hit the ceiling of your business.
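To put rough numbers on that bet (back-of-envelope only, using figures from this thread, not real data):

```python
# Figures from upthread: $200k salary (step 1), $1k/day in tokens (the
# article's claim), ~260 working days a year (the ~$260k/yr noted above).
salary = 200_000
token_spend = 1_000 * 260          # = $260,000/yr per engineer

cost_before = salary               # $200k/yr
cost_after = salary + token_spend  # $460k/yr

# Step 1 claimed ~5x gross revenue on engineering spend ($10M on $2M).
# For "$1k/day is cheap" to hold, the *extra* $260k has to earn the same
# multiple, i.e. ~$1.3M more revenue per engineer per year:
required_extra_revenue = 5 * token_spend
print(f"${token_spend:,}/yr in tokens -> needs ~${required_extra_revenue:,}/yr extra revenue")
```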
Of course, the real world does not work like this.
But my point is more that saying $1k a day is cheap is ridiculous, even for a company that expects an ROI on that investment. There are risks involved and, as you said, diminishing returns on software output.
I find the AI bros' view of the economics of AI usage strange. It’s reasonable to me to say you think it's a good investment, but to say it’s cheap is a whole different thing.
The best you can say is “high cost but positive ROI investment.” Although I don’t think that’s true beyond a certain point either, certainly not outside special cases like small startups with a lot of funding trying to build a product quickly. You can’t just spew tokens about and expect revenue to increase.
That said, I do reserve some special scorn for companies that penny-pinch on AI tooling. Any CTO or CEO who thinks a $200/month Claude Max subscription (or equivalent) for each developer is too much money to spend really needs to rethink their whole model of software ROI and costs. You’re often paying your devs >$100k/yr and you won’t pay ~$2.4k/yr to make them more productive? I understand there are budget and planning cycle constraints blah blah, but… really?!
Their page looks to me like a lot of invented jargon and pure narrative. Every technique is just a renamed existing concept. Digital Twin Universe is mocks, Gene Transfusion is reading reference code, Semport is transpilation. The site has zero benchmarks, zero defect rates, zero cost comparisons, zero production outcomes. The only metric offered is "spend more money".
Anyone working honestly in this space knows 90% of agent projects are failing.
The main page of HN now has three to four posts daily with no substance, just Agentic AI marketing dressed as engineering insight.
With Google, Microsoft, and others spending $600 billion over the next year on AI, panicking to get a return on that capex, and now paying influencers over $600K [1] to manufacture AI enthusiasm to justify the infrastructure spend, I won't engage with any AI thought leadership that lacks a clear disclosure of financial interests and reproducible claims backed by actual data.
Show me a real production feature built entirely by agents with full traces, defect rates, and honest failure accounting. Or stop inventing vocabulary and posting vibes charts.
Repeating for emphasis, because this is the VERY obvious question anyone with a shred of curiosity would be asking not just about this submission but about what is CONSTANTLY on the frontpage these days.
There could be a very simple four-question questionnaire that could eliminate 90+% of AI coding requests before they start:
- Is this a small wrapper around just querying an existing LLM?
- Does a brief summary of this, searched with "site:github.com", already return dozens or hundreds of results?
- Is this a classic scam (pump & dump, etc.) redone using "AI"?
- Is this needless churn between already-high-level abstractions of technology (dashboards of dashboards, YAML to JSON, Python to JavaScript, automation of an automation framework)?
I will reformulate my question and ask instead: is the page still 100% correct, or does it need an update?
However, I would argue there are significant gaps:
- You do not name your consulting clients. You admit to doing ad-hoc consulting and training for unnamed companies while writing daily about AI products. Those client names are material information.
- You receive non-cash compensation with monetary value. Free API credits, weeks of early preview access, flights, hotels, dinners, and event invitations are all compensation. Do you keep those credits?
- The "I have not accepted payments from LLM vendors" could mean receiving things worth thousands of dollars. Please note I am not saying you did.
- You have a structural conflict. Favorable coverage means preview access, then exclusive content, then traffic, then sponsors, then consulting clients.
- You appeared in an OpenAI promotional video for GPT-5 and were paid for it. This is influencer marketing by any definition.
- Your quotes are used as third-party validation in press coverage of AI product launches. This is a PR function with commercial value to these companies.
The FTC's revised Endorsement Guides explicitly apply to bloggers, not just social media influencers. The FTC defines a material connection to include not only cash payments but also free products, early access to a product, event invitations, and appearing in promotional media, all of which would seem to apply here.
The FTC's own "Disclosures 101" guide also states [2]: "...Disclosures are likely to be missed if they appear only on an ABOUT ME or profile page, at the end of posts or videos, or anywhere that requires a person to click MORE."
https://www.ftc.gov/business-guidance/resources/disclosures-...
[2] - https://www.ftc.gov/system/files/documents/plain-language/10...
I would argue an ecosystem of free access, preview privileges, promotional video appearances, API credits, and undisclosed consulting does constitute a financial relationship that should be more transparently disclosed than "I have not accepted payments from LLM vendors."
I don't think it's unreasonable to say that the items on your enumerated list would be considered something beyond simply being enthusiastic about a new technology.
The moats here are around mechanism design and values (to the extent they differ): the frontier labs are doomed in this world, the commons locked up behind paywalls gets hyper-mirrored, value accrues in very different places, and it's not a nice orderly exponent from a sci-fi novel. It's nothing like what the talking heads at Davos say; Anthropic aren't in the top five groups I know in terms of being good at it; it'll get written off as fringe until one day it happens, in like a day. So why be secretive?
You get on the ladder by throwing out Python and JSON and learning Lean 4; you tie property tests to Lean theorems via FFI when you have to; you start building out everything from rfl to pretty-printers of proven AST properties.
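To make that concrete, here's a toy of the kind of thing I mean: a property you'd otherwise only fuzz, proved once over a made-up AST (the example is mine, for illustration):

```lean
-- Tiny expression AST, an optimization pass, and a proof that the pass
-- preserves semantics. This is the theorem a property test only samples.
inductive Expr where
  | lit : Nat → Expr
  | add : Expr → Expr → Expr

def eval : Expr → Nat
  | .lit n   => n
  | .add a b => eval a + eval b

-- Constant folding: collapse add-of-literals into a literal.
def fold : Expr → Expr
  | .add (.lit a) (.lit b) => .lit (a + b)
  | e                      => e

theorem fold_preserves_eval (e : Expr) : eval (fold e) = eval e := by
  cases e with
  | lit n => rfl
  | add a b => cases a <;> cases b <;> rfl
```

Wire property tests to theorems like that via FFI and the droids can move fast without the usual wreckage.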
And yeah, the droids run out ahead in little Firecracker VMs reading from an effect/coeffect attestation graph and writing back to it. The result is saved, useful results are indexed. Human review is about big-picture stuff; human coding is about airtight correctness (and fixing it when it breaks despite your "proof" that had a bug in the axioms).
Programming jobs are impacted, but not as much as people think: droids mostly do what David Graeber called bullshit jobs, and beyond that they're savants (not polymath geniuses) at a few things. In reverse engineering and infosec they'll just run you over; they're fucking going in the CIC.
This is about formal methods just as much as AI.