To me, architecture starts all the way from the top - even before you write a single line of code, you do the DDD (Domain-Driven Design), create a set of rulesets (e.g. use the domain name as the table prefix) and contexts, and then define the functionality with respect to that architecture. LLMs can do all this - but only if you ask them to explicitly. So they are pretty useful to brainstorm with, but they cannot reliably design autonomously, push to production with your eyes closed, and support a 100,000-user base. It's a far cry from that.
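Rulesets like "use the domain name as the table prefix" are the kind of thing you can even enforce mechanically. A minimal sketch of such a check - the domain list and table names here are hypothetical examples, not from any real codebase:

```python
# Hypothetical bounded-context domains; in a real project this list
# would come from your DDD context map.
DOMAINS = {"billing", "inventory", "shipping"}

def check_table_name(table: str) -> bool:
    """Return True if the table name starts with a known domain prefix."""
    prefix, _, _ = table.partition("_")
    return prefix in DOMAINS

assert check_table_name("billing_invoices")      # prefixed with a domain
assert not check_table_name("invoices")          # missing domain prefix
assert not check_table_name("frontend_widgets")  # unknown domain
```

A check like this could run in CI, which is exactly the kind of guardrail an LLM won't set up unless you ask for it explicitly.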
But sure, you can upsell management on vanity metrics like lines of code and get that promotion with an LLM. But it's still not software engineering.
It's "not software engineering" but neither was what most people writing code did before LLMs.
> Without a clear architectural pathway / direction, that code is just useless. It's not tech debt. It's just a bunch of random strings
This is pretty clearly false. It's a bunch of random strings that you can compile and run to do what you want. It's more akin to a black box. A compiled closed source dependency.
Software to do what, though?!
Coding, maybe 10% of a developer's job (Brooks' "No Silver Bullet" estimates 1/6), was never the bottleneck, and even if you automated it away entirely you'd only reduce development time by 10% (assuming you're not doing human code review etc.).
I would also argue that software development as a whole (not just the coding part) was typically never the bottleneck to companies shipping product faster - and maybe not even to automating their business faster (internal IT systems) - since the rest of the company isn't moving that fast, business needs aren't changing that fast, and the external factors that might drive change aren't moving that fast either.
I think that when the dust settles we'll find that LLM-assisted coding has had far less impact than those trying to sell it to us are forecasting. There will be exceptions of course, especially in terms of what a lone developer can do, or how fast a software startup can get going, but in terms of impact to larger established companies I expect not so much.
It still has nothing to do with software engineering. All good code was written by humans. AI took it, plagiarized it, laundered it, and repackaged it in a bloated form.
Whenever I look deeply at an AI plagiarized mess, it looks like it is 90% there but in reality it is only 50%. Fixing the mess takes longer than writing it oneself.
I think you might be in serious denial.
Of course writing code isn't the only task of a software engineer, but it's an important one.
There wouldn't be so much controversy if that weren't the case.
Your linter should identify all issues - including architectural and stylistic choices - and the AI agents will immediately repair them.
It's about 1000x faster than a human coder at repairing its own mess.
If a linter could deterministically identify bad architecture, you wouldn't need an LLM, your linters could just write your code for you. The vibe coding takes are just getting more and more empty-headed...
a) That's not what a linter is built for; it's a tool with a very specific role.
b) You must've never seen an LLM expose secrets in plain text or use the most convoluted scenarios you can think of.
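For what it's worth, the "secrets in plain text" failure mode is something a very simple scanner catches. A hedged sketch - the pattern is illustrative and far from exhaustive (real scanners such as gitleaks ship much larger rule sets), and the example string is made up:

```python
import re

# Illustrative pattern only: assignments like API_KEY = "..." or password = '...'
SECRET_PATTERN = re.compile(
    r'(?i)(api[_-]?key|secret|password)\s*=\s*["\'][^"\']+["\']'
)

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the lines of `source` that look like hardcoded credentials."""
    return [
        line for line in source.splitlines()
        if SECRET_PATTERN.search(line)
    ]

snippet = 'API_KEY = "sk-not-a-real-key"\nprint("hello")\n'
assert find_hardcoded_secrets(snippet) == ['API_KEY = "sk-not-a-real-key"']
```

The point being: this is a deterministic check with a very specific role, which is exactly why it isn't a substitute for architectural judgment.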
Well, this explains why so much software nowadays is so slow, buggy, and chaotic.
Many people are missing the fact that LLMs allow ICs to start operating like managers.
You can manage 4 streams now. Within a couple years, you may be able to manage 10 streams like a typical manager does today.
IME, LLMs don't speed you up that much if 1) you're already an expert at what you're doing (inherently not scalable), 2) you're only working on one thing (doesn't make sense when you can manage multiple streams), or 3) you're doing something LLMs are particularly bad at (not many remaining coding tasks, but definitely still some).
That's a failure of the existing infrastructure to allow someone to do this.
LLM coding will work like this.
If you're letting LLMs go wild with no system in place to automatically know they're moving in the right direction and "shipping" things up to your standards, the failure is you, not the LLM.
Can that happen without you? I would assume this is the next step. I don't find it either good or bad, but I'm genuinely curious where this all goes.
The MCP has now automated away all of the drudgery of programming, from summarizing emails, to generating confluence documentation, to generating slide decks.
I wonder about the hallucination. Reading someone's writing doesn't take all that long.
Is programming supposed to suck all the time? Am I doing it wrong? I mean yeah, sure, it sucks sometimes, but overcoming that "suck" is where I feel progress and growth. If we decide to optimise that away...What the fuck am I doing here? No offence to managers, but if everybody is a manager, is anybody?
You know this is the exact same thing said during Opus 4.6, right?
That makes it hard to believe, because it's the same "last week's model was so far behind you can't even comprehend" meme that's been going on throughout the last year.
More info dumped into tickets and projects is great for understanding, for both people and LLMs. But hopefully it's not LLM-generated.
I.e. it's making good output better, but it's making mediocre output (which is most output) worse by adding volume and the appearance of quality, creating a new layer of FUD, stress, tedium, and unhappiness on top of the previously more-manageable problems that come with mediocre output.
I'm still seeing this even with the newest models, because the problem is the user, not the model - the model just empowers them to be even worse, in a new and different way.
https://somehowmanage.com/2020/10/17/code-is-a-liability-not...
Every American learns how to live with debt :)
https://www.federalreserve.gov/releases/z1/dataviz/z1/nonfin...