Once upon a time we wrote code in assembly language. Then we moved to C and other compiled languages, and assembly programming remained a useful but niche skill. You compile your code and trust the compiler. You can examine the compiler's output, and sometimes that is necessary, but it's not something most developers know how to do.
We may be looking at something similar now: most development work is moving to the LLM abstraction level, with the key skills being writing good prompts and managing the context window, agents, memories, and so on. Some developers will be able to examine LLM-generated code and spot problems there, but most will not have that skill.
I'm not sure how to feel about it. From when ChatGPT showed up until a couple of months ago, I was firmly skeptical of LLM programming. New models arrived every few weeks, and each one felt like just a different twist on the same low-quality slop. But recently the models seem to have crossed some threshold where their capabilities really improved, and I have now used Claude - still sparingly - to implement features in much less time than I'd need myself, and to locate a bug from log output alone. I don't yet buy the "coding is solved" hype, but we're at least looking at the biggest change to programming since the adoption of high-level languages.