I routinely generate applications for personal use with OpenCode + Claude Sonnet/Opus.
Yesterday I generated an app for my son to learn multiplication tables, using a spaced repetition algorithm and score keeping. It took me like 5 minutes.
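For a sense of how little code such an app needs, here's a minimal sketch of one common spaced repetition scheme (a Leitner box system) applied to multiplication facts. This is illustrative only; the names and the box/interval scheme are my assumptions, not the actual generated app.

```python
MAX_BOX = 4  # highest box; cards here are reviewed least often

def make_deck(limit=12):
    """All multiplication facts up to limit x limit, starting in box 0."""
    return {(a, b): 0 for a in range(2, limit + 1) for b in range(2, limit + 1)}

def review(deck, card, correct):
    """Promote a card one box on a correct answer; demote to box 0 on a miss."""
    if correct:
        deck[card] = min(deck[card] + 1, MAX_BOX)
    else:
        deck[card] = 0

def due_cards(deck, session):
    """Box n comes up every 2**n sessions, so weak facts repeat sooner."""
    return [card for card, box in deck.items() if session % (2 ** box) == 0]
```

A quiz loop around this would ask each due card, call `review` with whether the answer matched `a * b`, and keep a running score, which is roughly the hundred-line range being discussed.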
Of course if you use ChatGPT it may not work, but there is no way Claude Code/OpenCode with any modern model isn't able to generate a hundred-line script on the first try.
Eh?
Ever hear the saying? The first 90% of a program is 90% of the work, and the last 10% is also 90% of the work.
AI/LLMs have improved massively in that regard. And that's not even counting the other model types, such as image/video/audio generation, which are at the point where telling their output from reality is a chore.
And one-shotting a simple script simply doesn't mean much without context. I have it dump relatively complex PowerShell scripts often enough, and it's helped me a lot with explaining scripting actions to other humans, where before I'd make assumptions about the other user's knowledge that weren't warranted.
In reality it's logarithmic, maybe with the occasional jolt. You'd think that after Moore's "law" we'd know better by now that explosive growth isn't forever, or at least that physics puts a cap on it.