This is not the case:
- Before the 90s, programming was rather a job for people who were insanely passionate about technology, and working as a programmer was not that well-regarded (so no "growing opportunities").
- After the burst of the first dotcom bubble, a lot of programmers were unemployed.
- Every older programmer can tell you how quickly the skills they have can become, and indeed have become, irrelevant.
Over the last decades, the stability and opportunities for programmers have looked more like a series of boom-bust cycles.
What do you make of AI?
Let me put it this way: I do have my opinion on this topic, but the whole topic is insanely multi-faceted, and some claims that I am rather certain about sit near the boundary of the Overton window of HN, so I won't post them here.
But the article which the whole discussion is about
> https://www.ivanturkovic.com/2026/01/22/history-software-sim...
offers, in my opinion, a rather balanced perspective on using AI for coding (which does not mean that this article is close to my own opinion).
I will just give some less controversial thoughts and advice concerning AI:
- A huge problem when discussing AI is that the whole topic is a hodgepodge of various very diverse topics.
- The (current) AI industry has invested a lot of marketing effort into redefining what AI stood for in the past (it has basically convinced the masses that "AI = what we are offering").
- I cannot say whether AI will be capable of replacing lots of people in office jobs (I have serious doubts). The media loves to disseminate this topic, but in my opinion it does not really matter: the agenda is rather to spread fear among employees to make them more obedient.
- Even if AI turns out to be capable of replacing only a few office workers (the scenario I rather believe in), that does not mean management will not use "AI"/"replaced by AI" as a very convenient excuse to get rid of lots of employees. The dismissed workers will then mostly vent their spleen on the AI companies instead of on management; in other words: AI is a very convenient scapegoat for inconvenient management decisions. And yes, I consider it possible that some event leading to mass layoffs might happen in a few years (but this is speculative).
- While I cannot say how much quality improvement is still possible for current AI models (i.e. I don't know whether a technological barrier exists), the signs are clear that, as of today, AI companies have hit some soft "cost barriers". I don't know whether these are easily solvable, but be aware that they exist.
- So, my advice is: if an AI model is of use for some project of yours (e.g. generating graphics/content for your web platform; using it as a tool for developing the next scientific breakthrough; ...), do it now. Don't assume that the models will keep doing this for you nearly for free in the future (it may be that this stays possible, but be cautious).
Experienced through old-school (pre-LLM) practice.
I don't clearly see a good endgame for this.
Some will dig into obscurities that LLMs don't or can't touch, others will orchestrate the tools, Gastown-style, into some as-yet-unknown form.
People will vibe themselves into a corner and either start learning or flame out.
And flattening is already being seen, no? Recent advances come mostly from RL'ing, which has its own limitations (and tradeoffs). Are there more tricks after that?
I mean, maybe you can just keep an eye on what people are using the tools for and then monkey patch your way to sufficiently agi. I'll believe it when we're all begging outside the data centers for bread.
[Based on the history of other science and technology advances since the Stone Age, I would place AGI at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong; then the academics get to work, then everyone gets complacent, then a new accidental discovery produces a new toy, etc.]