It's learned helplessness on a large scale.
So you set up a long-running agent team and give it the job of building a thorough, complex set of examples and documentation, with in-depth tests, that produce various kinds of applications and systems using SBCL, write books on the topic, and so on.
It might take a long time and a lot of tokens, but it would be possible to build a synthetic ecosystem of true, useful information that has been agentically determined through trial-and-error experiments. That is then suitable training data for a new LLM. This would actually advance the state of the art: not in terms of "what SBCL can do" but rather in terms of "what LLMs can directly reason about with regard to SBCL without needing to consume documentation".
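As a rough sketch of what that verification loop might look like (the names here are hypothetical, and generate_candidate() stands in for whatever agent or LLM call produces an SBCL snippet plus explanatory prose), the key idea is that only snippets that actually run under `sbcl --script` make it into the dataset, so the corpus contains experimentally verified examples rather than merely plausible-sounding text:

    import json
    import os
    import subprocess
    import tempfile

    def runs_cleanly(lisp_source: str, timeout: int = 30) -> bool:
        """Run a candidate snippet under SBCL and report whether it exits cleanly."""
        with tempfile.NamedTemporaryFile("w", suffix=".lisp", delete=False) as f:
            f.write(lisp_source)
            path = f.name
        try:
            result = subprocess.run(
                ["sbcl", "--script", path],
                capture_output=True, text=True, timeout=timeout,
            )
            return result.returncode == 0
        except subprocess.TimeoutExpired:
            return False
        finally:
            os.unlink(path)

    def harvest_examples(generate_candidate, n_attempts: int, out_path: str) -> None:
        """Keep only candidates that actually run; the survivors become training data."""
        kept = []
        for _ in range(n_attempts):
            # Hypothetical agent call returning e.g. {"prompt": ..., "code": ..., "explanation": ...}
            candidate = generate_candidate()
            if runs_cleanly(candidate["code"]):
                kept.append(candidate)
        with open(out_path, "w") as f:
            for example in kept:
                f.write(json.dumps(example) + "\n")

The trial-and-error part is the filter: generations that fail the experiment are discarded instead of being written into the corpus.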
I imagine this same approach would work for any other area of scientific advancement, as long as experimentation is in the loop. It's easier in computer science because the agent can run the experiment directly, but there's no reason it couldn't farm experiments out to lab co-op students somewhere when working in a different discipline.
What makes you think they can't incrementally improve the state of the art... and, by running continuously at scale, do it faster than we humans can?
The potentially sad outcome is that we continue to do less and less, because they will eventually build better and better robots, so even activities like building the datacenters and fabs are things they can do without us.
And eventually most of what they do is construct scenarios so that we can simulate living a normal life.
So.......
Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or by major systems going down with increasing frequency.
All software has bugs already.
I'd say this is true for programmers at, say, 20, but they spend the next four decades slowly improving their understanding and mastery of all the things you name; at least the good ones do.
The real question is whether that growth trajectory will change for the worse or the better.
To be clear, this is not an AI doomerist comment, because none of us have spent enough time with the tech yet. I've gone down multiple lanes of thought on this, and I have cause for both worry and optimism. I'm curious to see what the lives of engineers in an AI world will ultimately look like.
Until the sexbots come out the other side of the uncanny valley, that is.