Open-source models have closed the gap dramatically in recent months. Anyone who can't, or doesn't want to, spend a lot of money can already develop productively with Kimi and GLM. We don't have to wait another year for that.
From experience, the same level of usage would have left me stranded against my CC 5-hour limit within an hour.
There were some difficulties with tool calls, in particular with replacing tab-indented strings. Taking no steps to mitigate that (which meant the model had to figure it out afresh every time I cleared context) only cost relatively few extra tokens, and it still came in well under 4.6, never mind 4.7. And of course, I can add instructions to prevent churning on those issues.
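For illustration only, one possible mitigation is an indentation-tolerant replace helper along these lines (the function name and Python are my own assumptions here, not the agent's actual edit tool):

```python
def replace_indent_tolerant(source: str, old: str, new: str, tabsize: int = 4) -> str:
    """Replace the first occurrence of the block `old` with `new`,
    comparing lines with tabs expanded so a tab-indented file still
    matches a space-indented search string.
    Illustrative sketch only, not any tool's real implementation."""
    src_lines = source.splitlines()
    old_lines = old.splitlines()
    # Normalize indentation for comparison only; the original lines are kept.
    src_norm = [line.expandtabs(tabsize) for line in src_lines]
    old_norm = [line.expandtabs(tabsize) for line in old_lines]
    for i in range(len(src_norm) - len(old_norm) + 1):
        if src_norm[i:i + len(old_norm)] == old_norm:
            return "\n".join(src_lines[:i] + new.splitlines() + src_lines[i + len(old_lines):])
    raise ValueError("old block not found, even after normalizing indentation")
```

With something like that, if the file uses tabs but the model emits the search block with spaces, the edit still lands instead of failing and forcing a retry.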
I have no reason to go back to Anthropic models with these results.
"No moat" indeed.
I expect tomorrow’s models will be so much more capable that we will happily pay more.
But if not, we will still likely get today's capabilities, or better, for cheap.
I don’t see a realistic scenario in which the AI genie is going back into the bottle because of affordability.
It seems like wishful thinking by people who dislike the new paradigm in software engineering.
(The timeframes above are hyperbolic.)