This is a real danger that I think a lot of people will run into as prices keep rising.

Completely outside of the productivity debate, offloading cognitive tasks to LLMs leaves you less practiced in them and less ready to do them when the LLM isn't available. When you have to delegate only certain tasks to the LLM for financial reasons, you may find yourself very frustrated.

reply
I'm really hoping locally hosted llms get to the point of competing with current-day frontier models so that we all have "unlimited" usage.
reply
This is the bet many of the big AI companies are making, and why they're heavily subsidizing API calls. With the latest crackdowns by the US government, it seems Anthropic is starting to reduce those subsidies given its edge in the game. I'm starting to consider local models more seriously beyond just testing, but the RAM/GPU market is inflated right now.
reply
Local models just don't seem that useful to me for these particular tasks yet - the most recent versions of Codex and Claude Opus are the first models I've found genuinely useful in a "real engineering" context that isn't just vibe coding.

Google's TurboQuant might help address this, but it also might just widen the gap even further.
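For context on why quantization matters for local models: the thread doesn't describe how TurboQuant works, but the generic idea behind weight quantization is easy to sketch. The example below is a minimal, hypothetical int8 post-training quantization (not TurboQuant's actual algorithm): store weights as int8 plus one float scale per tensor, cutting memory roughly 4x versus fp32, at the cost of a bounded rounding error.

```python
# Generic symmetric int8 quantization sketch. This is NOT TurboQuant;
# it only illustrates the memory/accuracy trade-off that makes large
# models feasible on consumer RAM/GPUs.
import numpy as np

def quantize(w: np.ndarray):
    """Map float weights to int8 plus a per-tensor scale."""
    scale = float(np.abs(w).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor: any scale works
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)
# int8 storage is 1/4 of fp32; round-to-nearest error is at most scale/2
print(q.nbytes, w.nbytes)
print(np.abs(w - w_hat).max() <= s / 2 + 1e-6)
```

The per-tensor scale is the crudest choice; real schemes use per-channel or per-group scales, and lower bit widths, which is where methods like TurboQuant presumably differentiate themselves.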

I'm far on the skeptic end when it comes to the generative AI side of ML tooling, though, so weigh my opinion accordingly.

reply