My project has a C++ matching engine, Node.js orchestration, Python for ML inference, and a JS frontend. No LLM suggested that architecture - it came from hitting real bottlenecks. The LLMs helped write a lot of the implementation once I knew what shape it needed to be.
Where I've found AI most dangerous is the "dark flow" the article describes. I caught myself approving a generated function that looked correct but had a subtle fallback to rate-matching instead of explicit code mapping. Two different tax codes both had an effective rate of 0, so the rate-match picked the wrong one every time. That kind of domain bug won't get caught by an LLM because it doesn't understand your data model.
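To make that failure mode concrete, here's a minimal Python sketch of the pattern (the code names and rates are hypothetical, not my actual data model): when two tax codes share an effective rate of 0, a rate-matching fallback can't tell them apart, while an explicit code mapping can.

    # Hypothetical tax codes: EXEMPT and ZERO_RATED both have an effective rate of 0,
    # but they carry different reporting obligations downstream.
    TAX_CODES = {
        "EXEMPT": 0.0,
        "ZERO_RATED": 0.0,
        "STANDARD": 0.20,
    }

    def resolve_by_rate(effective_rate: float) -> str:
        # The subtle fallback: return whichever code happens to match the rate first.
        # With two codes at 0.0, this always returns the first one inserted,
        # regardless of which code the transaction actually requires.
        for code, rate in TAX_CODES.items():
            if rate == effective_rate:
                return code
        raise ValueError(f"no tax code with rate {effective_rate}")

    def resolve_by_mapping(source_code: str) -> str:
        # The explicit mapping: the source system's code decides, not the rate.
        mapping = {"VAT_EXEMPT": "EXEMPT", "VAT_ZERO": "ZERO_RATED", "VAT_STD": "STANDARD"}
        return mapping[source_code]

    print(resolve_by_rate(0.0))            # "EXEMPT" -- wrong for a zero-rated transaction
    print(resolve_by_mapping("VAT_ZERO"))  # "ZERO_RATED" -- correct

The rate-based version even passes a casual review, because the output is a valid tax code; it's just the wrong one.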
Architecture decisions and domain knowledge are still entirely on you. The typing is faster though.
I don't think these are exclusive. Almost a year ago, I wrote a blog post about this [0]. I spent the time since then both learning better software design and learning to vibe code. I've worked through Domain-Driven Design Distilled, Domain-Driven Design, Implementing Domain-Driven Design, Design Patterns, The Art of Agile Development (2nd Edition), Clean Architecture, Smalltalk Best Practice Patterns, and Tidy First?. I'm a far better software engineer than I was in 2024. I've also vibe coded [1] a whole lot of software [2], some good and some bad [3].
You can choose to grow in both areas.
[0]: https://kerrick.blog/articles/2025/kerricks-wager/
[1]: As defined in Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge, wherein you still take responsibility for the code you deliver.
There's a good reason that most of the successful examples from tools like openspec are to-do apps and the like. As soon as a project grows to a 'relevant' size or complexity, maintaining the specs is just as hard as whatever any other methodology demands. Also, from my brief attempts: much like with human-driven coding, we actually do quite well with incomplete specs. So do agents, but they'll shrug at the implicit things far more than humans do. So you'll see more flip-flopping on things you didn't specify, and if you nail everything down hard, the specs get unwieldy - large and overly detailed.
But also, you don't have to upgrade every iteration. I think it's absolutely worthwhile to step off the hamster wheel every now and then, just work with your head down for a while, and come back after a few weeks. You notice that even though the world didn't stop spinning, you didn't get the whiplash of every rotation.
At the end of the day, it doesn’t matter if a cat is black or white so long as it catches mice.
——
I've also found that picking something and learning about it helps me with mental models for picking up other paradigms later, similar to how learning Java doesn't actually prevent you from, say, picking up Python or JavaScript.
You'll probably be forming some counter-arguments in your head.
Skip them, throw the DDD books in the bin, and do your co-workers a favour.
Right now I see the former as hugely risky: hallucinated bugs, being coaxed into dead-end architectures, security concerns, not being familiar with the code when a bug shows up in production, less sense of ownership, less hands-on learning, etc. This is true both at the personal level and at the business level.
With the latter, you may be less productive than optimal, but might the hands-on training and fundamental understanding of the codebase make up for it in the long run?
Additionally, I personally find my best ideas often happen when I'm knee-deep in some codebase, hitting some weird edge case that doesn't fit, the kind of thing that would probably never come up if I were just reviewing an already-completed PR.
If you keep some for yourself, there’s a possibility that you might not churn out as much code as quickly as someone delegating all programming to AI. But maybe shipping 45,000 lines a day instead of 50,000 isn’t that bad.
The people at the start of the curve are the ones who swear off LLMs for engineering entirely, and they are the loudest in the comments.
The people at the end of the curve are the ones who spam about vibing only, never looking at the code, and who are trying to establish a new expectation that the interaction layer for software should be LLM-exclusive. These are the loudest on posts/blogs.
The ones in the middle are the people who accept LLMs as a tool, and as with all tools they exercise restraint and caution. Because waiting 5 to 10 seconds each time for an LLM to change the color of your font, and then watching it get it wrong, is slower than just making these tiny adjustments yourself.
It's the engineers at both ends that have made me lose my will to live.
We simply don’t know. But we have a lot of examples of the latter in the dataset. And we all know greenfield apps are quite a different beast from a mature product under severe constraints and high degrees of specific expertise. OTOH, these mature projects may be more likely to have good test environments, making them more suitable.
I’ve also found AI-assisted work remarkably good for implementing algorithmically complex things.
However, one thing I definitely identify with is the trouble sleeping. I am finally able to do a plethora of things I couldn’t do before due to the limits of one man typing. But I don’t build tools I don’t need; I have too little time and too many needs.
I have the exact same experience... if you don't use it, you'll lose it
idk what ya'll are doing with AI, and i dont really care. i can finally - fiiinally - stay focused on the problem im trying to solve for more than 5 minutes.
Fortunately, I've retired, so I'm going to focus on flooding the zone with my crazy ideas made manifest in books.
Which frankly describes pretty much all real world commercial software projects I've been on, too.
Software engineering hasn't happened yet. Agents produce big balls of mud because we do, too.
Maybe they need to start handing out copies of The Mythical Man-Month again, because people seem to be oblivious to insights we already had a few decades ago.