Oooh, let me dive in with an analogy:
Screwdriver.
Metal screws needed inventing first - they augment or replace dowels, nails, glue, "joints" (think tenon/dovetail etc), nuts and bolts and many more fixings. Early screws were simply slotted. PH (Phillips cross head) and PZ (Pozidrive) came rather later.
All of these require quite a lot of wrist effort. If you have ever driven a few hundred screws in a session then you know how much work it is.
Drill driver.
I'm not talking about one of those electric screwdriver thingies but say a De W or Maq or whatever jobbies. They have a Li-ion battery and a chuck capable of holding something like a 10mm shank, round or hex. It'll have around 15 torque settings, two or three speed settings, drill and hammer drill settings. Usually you have two - one to drill and one to drive. I have one that will seriously wrench your wrist if you allow it to. You need to know how to use your legs or whatever to block the handle from spinning when the torque gets a bit much.
...
You can use a modern drill driver to deploy anything from a small screw (PZ1, 2.5mm) up to a 20+cm PZ3 effort. It can also drill with a long auger bit or hammer drill up to around 20mm diameter and 400mm deep. All jolly exciting.
I still use an "old school" screwdriver or twenty. There are times when you need to feel the screw (without deploying an inadvertent double entendre).
I do find the new search engines very useful. I will always put up with some mild hallucinations to avoid social.microsoft and nerd.linux.bollocks and the like.
This framing is exactly how lots of people in the industry are thinking about AI right now, but I think it's wrong.
The way to adopt new science, new technology, new anything really, has always been that you validate it for small use cases, then expand usage from there. Test on mice, test in clinical trials, then go to market. There's no need to speculate about "too much" or "too little" usage. The right amount of usage is knowable - it's the amount which you've validated will actually work for your use case, in your industry, for your product and business.
The fact that AI discourse has devolved into a Pascal's Wager is saddening to see. And when people frame it this way in earnest, 100% of the time they're trying to sell me something.
My theory is that executives must be so focused on the future that they develop a (hopefully) rational FOMO. After all, missing some industry-shaking phenomenon could mean death. If that FOMO is justified then they've saved the company. If it's not, then maybe the budget suffers but the company survives. Unless, of course, they bet too hard on a fad, in which case the company may go down in flames or be eclipsed by competitors.
Ideally there is a healthy tension between future-looking bets and on-the-ground performance of new tools, techniques, etc.
They're focused on the short-term future, not the long-term future. So if everyone else adopts AI but you don't, and the stock price suffers because of that (merely because the "perception" that your company has fallen behind affects market value), then that is an issue. There's no true long-term planning at play, otherwise you wouldn't see obvious copycat behavior amongst CEOs, such as pandemic overhiring.
You're neglecting the cost of testing and validation. This is the part that's quite famous for being extremely expensive and a major barrier to developing new therapies.
When people talk about this stuff they usually mean very different techniques. And last month's way of doing it goes away in favor of a new technique.
I think the best you can do now is try lots of different new ways of working and keep an open mind.
Note, if staying on the bleeding edge is what excites you, by all means do. I'm just saying for people who don't feel that urge, there's probably no harm just waiting for stuff to standardize and slow down. Either approach is fine so long as you're pragmatic about it.
Until coding systems are truly at human-replacement level, I think I'd always prefer to hire an engineer with strong manual coding skills than one who specializes in vibe coding. It's far easier to teach AI tools to a good coder than to teach coding discipline to a vibe coder.
Put another way, the ability to use AI became an important factor in overall software engineering ability this year, and as the year goes on, the gap between the best and worst users of AI will widen faster because the models will outpace the harnesses.
If you just mean, "hey you should learn to use the latest version of Claude Code", sure.
It's both. It's using the AI too much to code, and too little to write detailed plans of what you're going to code. The planning stage is by far the easiest place to fix things if the AI goes off track (it's just editing some notes in plain English). There is still a slot-machine-like intermittent reinforcement to it ("will it get everything right in one shot?"), but it's quite benign compared with trying to audit and fix slop code.
Another similar wager I remember is: "What if climate change is a hoax, and we invested in all this clean energy infrastructure for nothing?"
The real profits go to the companies selling them chips, fiber, and power.