You are still necessary to push the frontier forward. Though, given the way some models will catch themselves making a conceptual error and correct it in real time, we should be nervous.
They are completely, 100% useless, no matter what I do. Add another layer of abstraction like "give me a function to calculate <engineering value>" and they get even worse. I had a small amount of luck getting one to refactor some really terrible code I wrote while under the gun, but it made tons of errors I had to go back and fix. Luckily I had a pretty comprehensive test suite by that point, so finding the mistakes wasn't too hard.
(I've tried all of the "just point them at the documentation" replies I'm sure are coming. It doesn't help)
Yes, and that's a problem. If the advent of coding agents leads to people who are only in it for the money staying away from higher education - good. Those people are the reason higher education turned to shit in the first place, and maybe it will be a nice change when people go into higher ed out of curiosity rather than because they smell money.
Not necessarily going to be true by the time current first-year students graduate, given that solved problems are the most exposed to AI acceleration.
AIs are pushing many things forward, but given training-set and context-window limits, I think meaningfully adding to actually valuable apps, at least as we currently write them (the kind with many databases, caches, message queues, and services), will take a fair bit longer.
To be fair to the parent poster, many people do seem to aspire only to be LLM operators, who will be dime-a-dozen commodities accorded even less respect and pay than the average developer gets today.