It seems like the billions spent so far have mostly gone to talk of LLMs replacing every office worker, rather than any action to that effect. LLMs still have major (and dangerous) limitations that make this unlikely.
Humans rewire their minds to optimize for the codebase; that is why new programmers take a while to get up to speed on it. LLMs don't do that, and until they do they need the entire thing in context.
And the reason we can't do that today is that there isn't enough data in a single codebase to train an LLM to be smart about it, so first we need to solve the problem that LLMs need billions of examples to do a good job. That isn't on the horizon, so we are probably safe for a while.
It is not perfect yet, but the tooling here is improving, and I do not see a ceiling. LSPs + memory solve this problem. I run into issues, but this is not a big one for me.
It definitely doesn't do a good job of spotting areas ripe for building abstractions, but that is our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love.
As far as I can tell, what's not to love about that?
In your example, you mention that you prompt the AI and if it outputs sub-par results you rewrite it yourself. That’s my point: over time, you learn what an LLM is good at and what it isn’t, and just don’t bother with the LLM for the stuff it’s not good at. Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with. That’s not the LLM replacing you, that’s the LLM augmenting you.
Enjoy your sensible use of LLMs! But LLMs are not the silver bullet the billions of dollars of investment desperately want us to believe they are.
Why are we uniquely capable of doing that, but an LLM isn't? In plan mode I've been seeing them ask for clarifications and gather further requirements.
Important business context can also be provided to them.
This isn’t an argument against LLMs’ capabilities. But the burden of proof is on the LLMs’ side.
But don’t take it from me - go compete with me! Can you do my job (which is 90% talking to people to flesh out their unclear business requirements, and only 10% actually writing code)? If so, go right ahead! But since the phone has yet to stop ringing, I assume LLMs are nowhere near there yet. Btw, I’m helping people who already use LLM-assisted programming and reach out to me because they’ve hit its limitations and need an actual human to sanity-check things.
Dunning–Kruger is everywhere in the AI grift. People who don't know a field try to deploy some AI bot that solves the easy 10% of the problem so it looks good on the surface, and assume that just throwing money (which mostly just buys hardware) will solve the rest.
They aren't "the smartest minds in the world". They are slick salesmen.