While this is a legitimate set of rules to follow for maintaining code sanity and a solid mental model of how a codebase may grow, it’s always challenging to stick to them in a workplace where expectations around delivery speed have changed drastically with the onset of AI. The sweet spot lies in striking a balance between staying connected to the codebase and not becoming a limiting factor for the team at the same time.
reply
That's kind of what I figured, sadly. I haven't experienced it personally yet since I got let go from my last job about 14 months ago, but it makes so much sense given how management is so willing to sacrifice quality for speed.
reply
Another frustrating thing that has emerged from this is where managers “vibe code” half-baked ideas for a couple of hours and then hand it off as if they’ve meaningfully contributed to the implementation. Suddenly you’re expected to reverse engineer incoherent prompts, inconsistent code, and random abstractions that nobody fully understands.

In their mind they’ve already done the “architectural heavy lifting” and accelerated the team. More often than not it just adds cognitive overhead where you spend more time deciphering and cleaning up garbage than actually building the thing properly from scratch.

reply
I am lucky to have never worked in a team where my manager wouldn't expect strong pushback in this scenario. Many of the corporate environments described on here seem dystopian, this one included.
reply
Vouching for this comment because my friend confided in me a week ago that her manager also does this and is like “oh yeah, here’s 80% done, you just do the rest so we can ship it” when a large part of it is slop that needs to be rewritten, due to not enough guidance and pushback during generation.
reply
That’s when you ask it to write tests to good coverage, and then have it reimplement everything with the tests still passing…
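As a sketch of the idea (hypothetical function and values; these are what's often called characterization tests, which pin down current behavior before a rewrite):

```python
# Characterization tests: capture the current behavior of a function
# (here a hypothetical discount calculator) before reimplementing it.

def apply_discount(price, rate):
    # Existing implementation we intend to replace.
    return round(price * (1 - rate), 2)

def test_apply_discount():
    # These assertions record today's behavior; any rewrite
    # must keep them green.
    assert apply_discount(100.0, 0.10) == 90.0
    assert apply_discount(19.99, 0.25) == 14.99
    assert apply_discount(50.0, 0.0) == 50.0

test_apply_discount()
```

Once those pass against the old code, you can swap in the new implementation and rerun the same tests.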
reply
Unless the tests are written against logic that is in and of itself subtly wrong, and even the structure of the code and the methods it exposes is wrong - so your unit tests would also have to be rewritten, because the units themselves are structured badly.

It’s a valid direction to look in, it just doesn’t address the root issue of throwing slop across the wall and also having unrealistic expectations due to not knowing any better.

reply
Yep. It’s very healthy to be suspicious of code. Any code. Whether generated or not. That’s where the bugs are.

If there’s one thing that’s disturbing about AI proponents, it’s how trusting they are of code. One change in the business domain and most of the code may turn from useful to actively harmful. Which you then have to rewrite. Good luck doing that well if you’re not really familiar with the code.

reply
This is fine if it’s more enjoyable for you, that’s what’s important in personal projects most of the time.

But we don’t apply the same rules to dependencies, the work of colleagues, external services, or all the layers down to the silicon.

Why is AI suddenly different?

We just have to judge this by risk and reward. What’s the downside if it’s wrong, and how likely is an error to be found in testing and review? What’s the benefit if it’s all fine? This is the same calculus as for libraries and external services.

A complex financial set of rules in a non-updatable crypto contract with no testing?

A viewer for your internal log data to visualise something?

reply
It is and has always been immensely helpful to understand what you are doing in any context.

There are some programmers who treat the job as just plumbing together what is to them completely incomprehensible black boxes, who treat the computer as a mystery machine that just does things "somehow", but these programmers will almost always be hacks that spend their entire career producing mediocre code.

There are things such a programmer can build, but they are very limited by their lack of in depth understanding, and it is only a tiny fraction of what a more competent programmer can put together.

To get beyond being a hack, you need to understand the entire stack, including the code that you didn't write, including both libraries, frameworks and the OS, and including the hardware, the networking layers, and so forth. You don't have to be an expert at these things by any means, but you do need to understand them and be comfortable treating them as transparent boxes that you may have to go in and fiddle with at some point to get where you need to go. Sometimes you need to vendor a dependency and change it. Sometimes you need to drop it entirely and replace it with something more fit for purpose you built yourself.

reply
AI is different because it's a tool, and the user of the tool is responsible for the work performed.

An outsourced developer isn't a "tool". They're a human being, and responsible for their actions. They're being paid, and they either act responsibly or they get replaced.

A vibe coder is a human using a tool. The human is responsible for code quality, and if it's not good enough, they need to keep using the tool to make it better. That means understanding the tool's output.

If an artist used Photoshop to create a billboard ad that was ugly, they don't get to blame Photoshop. They have to keep using the tool until their output is good.

reply
> An outsourced developer isn't a "tool".

I'd think that depends on the model of responsibility at play.

For example, suppose I hire a building contractor to build a house, and the electrician he subcontracts makes a mistake.

From my perspective, the prime contractor is equally responsible for that mistake regardless of whether he used a subcontractor, or did the work himself but used a broken tool.

This doesn't make the electrician any less of a "person" in the deeply important ways, but it's not a distinction that's relevant to my handling of the problem.

reply
I was trying to follow similar rules, until one day I had to solve a hard mathematical problem. Claude is a PhD-level mathematician; I am not. I do, however, know exactly the properties of the desired solution and how to test that it’s correct. So I decided to keep Claude’s solution over my basic, naive one. I mentioned that in the pull request and everyone agreed that was the right call. Would you open exceptions like that in your rules? What if AI becomes much better at coding than you, not just at advanced mathematics? Would you then stop writing code by hand completely, since that would be the less optimal option, despite losing your ability to judge the code directly at that point (though, as in my example, you can hopefully still judge tests)? I think these are the more interesting questions right now.
reply
> Claude is a phd level mathematician

Unfortunately, it is not, and many of its attempts at mathematical proofs have major flaws. You shouldn't trust its proofs unless you are already able to evaluate them--which I think is pretty much all the OP is saying.

reply
To be fair, many of the proof attempts by mathematicians also have major flaws. Most get caught before publication.
reply
Trust isn’t a binary, and I can trust things I don’t understand enough that I can use them. OP was talking about needing to understand, which is quite a bit above the level of being able to validate enough to use for a task.
reply
I definitely wouldn't put math I didn't understand into my code just because Claude says so. I am not astonished that everyone agreed; that's why shit is going to hit the fan pretty badly, pretty soon, with AI coding.

There is one exception to this: If the AI also delivers the proof of why the math is correct, in a machine-checked format, and I understand the correctness theorem (not necessarily its proof). Then I would use it without hesitation.
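As a toy illustration of what "machine-checked" means (Lean 4, with a deliberately trivial statement; a real correctness theorem would be far more involved):

```lean
-- A trivial correctness theorem: our "double" function really does
-- multiply by two. The checker either accepts the proof or rejects it;
-- nothing has to be taken on trust except the statement itself.
def double (n : Nat) : Nat := n + n

theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```

The point is that you only need to read and understand the theorem statement; the proof itself can come from the AI, because the checker verifies it mechanically.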

reply
I always found it weird, when helping people with Excel formulas, how few people even try to check maths they don't understand, let alone try to understand it.

I struggle to remember even relatively simple maths like working out "what percentage of X is Y" so if I write a formula like that I'll put in some simple values like 12 and 6 or 10,000 and 2,456 just to confirm I haven't got the values backwards or something. I've been shown sheets where someone put a formula in that they don't understand, checked it with numbers they can't easily eyeball and just assumed it was right as it's roughly in their ball park / they had no idea what the end result should be.

Then again I've also seen sheets where a 10% discount column always had a larger number than the standard price so even obviously wrong things aren't always checked.
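That sanity-check habit carries over directly to code (hypothetical formulas, using the same kind of easy-to-eyeball values):

```python
# Sanity-check a formula with values you can verify in your head,
# e.g. "what percentage of X is Y" -- easy to get backwards.

def percent_of(x, y):
    """What percentage of x is y?"""
    return y / x * 100

# 6 is obviously 50% of 12; if we'd written x / y * 100 we'd get 200.
assert percent_of(12, 6) == 50.0

# An "obviously wrong" check: a 10% discount must always be
# smaller than the standard price.
price = 80.0
discounted = price * 0.90
assert discounted < price
```

A couple of assertions like this won't prove a formula correct, but they catch exactly the backwards-values mistakes described above.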

reply
I don't disagree, but whoever never put math they don't fully understand in their code gets to throw the first stone.

I've reached solutions by trial and error too, and tried to rationalize them later, quite a few times. And it's easier to rationalize a working solution, however adversarial you claim to be in your rationalization.

I don't see using gen AI for the (not so) “brute force” exploration of the solution space as that different from trial and error and post fact rationalization.

reply
How did you test that the solution is correct? Is the set of possible inputs a low-ish finite number?

Normally with mathematical problems you have to prove the solution correct. Testing is not sufficient, unless you can test all possible inputs exhaustively.
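When the input space really is small and finite, an exhaustive check is as good as a proof. A hypothetical example (the "clever" candidate stands in for an AI-produced solution):

```python
# Verify a branchless "max of two bytes" candidate against the
# obvious definition, over every possible input pair.

def branchless_max(a, b):
    # Candidate solution (e.g. produced by an AI) for inputs in 0..255.
    diff = a - b
    # (diff > 0) is 1 or 0, so the mask is -1 (keep diff) or 0 (drop it).
    return b + (diff & -(diff > 0))

def naive_max(a, b):
    # Trivially correct reference implementation.
    return a if a > b else b

# Exhaustively check all 256 * 256 = 65,536 input pairs.
for a in range(256):
    for b in range(256):
        assert branchless_max(a, b) == naive_max(a, b), (a, b)
```

For anything with an unbounded input domain, this approach no longer works and you are back to needing an argument for correctness, not just tests.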

reply
How do you know what it spat out is correct though?

If it’s beyond our ability to review and we blindly trust it’s correct based on a limited set of tests… we’re asking for trouble.

reply
> Claude is a phd level mathematician, I am not

I’m going to guess that this is Gell-Mann amnesia more than anything, and it’s going to get a lot of organizations into a lot of weird places.

reply
> Claude is a phd level mathematician

... that can't even count.

reply
You do realize you can ask Claude about the things you don't understand?

"PhD level" just means you finished a bachelor's and a master's degree and are now doing a bit of original research as an employed research assistant.

Claude isn't "PhD level" anything; the phrase shows a complete lack of understanding. Claude has read every single textbook in existence, so it can surface knowledge locked away in book chapters that people haven't read in years (nobody really reads those dense books on niche topics from start to finish).

Since Claude has infinite patience, you can just keep asking until you get it.

reply
I’ve also heard it being called “comprehension debt,” which I like a little more because I think it’s more precise: the specific debt being accrued is exactly a lack of comprehension of the code.
reply
I think it’s both in fact.

Comprehension debt just sounds like there are things you don’t (yet) understand.

Cognition debt means your lack of understanding compounds and the cognition “space” required to clear it increases accordingly.

An increasing comprehension debt that can be paid off one bit at a time within reasonable cognition space takes linear time to clear.

Cognition debt takes exponential time to clear the more of it you have. If it reaches a point where you simply don’t have the space for the cognition overhead required to understand the problem, you probably need to start over from your specifications.

reply
I like that too. However, “cognitive debt” points to the possibility of cognitive overload, that the code can become so complex and inscrutable that it may become impossible to comprehend. “Comprehension debt” sounds a bit weaker in that respect, that it’s just a matter of catching up with one’s comprehension.
reply
Yeah I like that better too, gonna start using that
reply
“You can outsource your thinking, but not your understanding.”
reply
This is great until the "gun to your head" is your skip-level manager demanding that a feature be implemented by the end of the week, and they know you can just "generate it with AI" so that timeline is actually realistic now whereas two years ago it would have required careful planning, testing, and execution.
reply
Well, that's nice.

Your manager is unknowingly helping you create a form of job security for yourself, with all the technical debt and bugs being accumulated.

He might not understand it, and it might not be the type of work you want to do, but someone is going to have to fix those issues. And the longer they wait, the bigger the task gets.

reply
That isn't new, though. Managers often pushed unrealistic timelines and showed little care for tech debt well before vibe coding; just the timelines were different, and the magnitude will be bigger this time. But we also have LLMs to help clean it up faster, I guess.
reply
The question is, is it a job you actually still want once the poo pile reaches critical mass, you're the only one with a shovel, and the deadline is "yesterday"?
reply
That is absolutely true. Unfortunately, this ship has sailed and we are not closing Pandora's box anymore. We'll have to adapt.

But we still hold good cards in hand.

Do they want their pile of steaming slop fixed, or not? Because no amount of complaining about the deadline being "yesterday" will change the fact that time is needed to fix the accrued technical debt, whether they like it or not. And if AI dug you in that deep to start with, the solution is not to dig deeper.

I suspect some companies are going to find that out the hard (costly) way.

reply
If the manager is unreasonable, you were always going to have a problem with them eventually. Nothing you can do will fix this.

If the manager is reasonable, you can explain to them that there isn't time to check the work of the AI, and that it frequently makes obscure mistakes that need to be properly checked, which takes time.

At this point, if they still insist you just ship the AI's work, they've made a decision that is their fault. You've done what you can.

And when the shit hits the fan, we're back to whether they're reasonable or not. If they are, you explained what could happen and it did. If they force responsibility on you, they aren't reasonable and were never going to listen to you. That time bomb was always going to go off.

reply
I hate this current trend of managers deciding, what tools developers have to use. Hopefully it ends soon.
reply
Time will tell if outages and defect resolution sky rocket or if ai can deal with it
reply
Does that matter that much in practice? I bet lots of customers are okay with software that crashes 10x as much if it costs 10x less. There already is a ton of shitty software that still sells.
reply
I agree to this though it also depends on the nature of project.

Had a project idea which I coded with the help of AI, and it grew quite large, to the point that I was starting to have uncharted areas in the code. Mostly because I reviewed it too shallowly or moved too fast.

That was fine, as that project never took off, but if I were to do such a thing on my breadwinning project I would lose the joy.

reply
I already followed those rules mostly with StackOverflow and before AI.
reply
I just had a Claude episode. Instead of trying to fix the bug, it edited the data to hide the bug in the sample run. This kind of BS behavior is not rare. Absolutely, if you do not understand every bit of what's going on, you end up with a pile of BS.
reply
That’s why I love Gemini - none of this bullshit ever happens.
reply
I do not think Gemini can relieve a developer from knowing what he is doing.
reply
There's some really weird and unusual posts glazing Google in here today. Bot accounts out in force!
reply
It's not worth fighting it at work. If the idiots you work for want everything vibe coded and delivered at 5 * 2025 speed then just vibe code and try to leave the company ASAP. That's where I am right now. Of course I might end up somewhere just as ridiculous or maybe not be able to even find another job. Shitty times we live in right now.
reply
This is about how I use it. I initially use it to carve out an architecture and iterate through various options. That saves a lot of time for me having to iterate through different language features and approaches. Once I get that, I have it scaffold out, and I go in and tidy things up to my personal liking and standards. From there, I start iterating through implementations. I generally have been implementing stuff myself, but I've gotten better at scaffolding out functions/methods through code instead of text. Then I ask it to finish things off. That falls into your first category of letting it implement stuff that I already know I could do. Not sure if it's faster. But it's lower cognitive load for me, since I can start thinking about the next steps without being concerned about straightforward code.

This all works pretty great. Where it starts going off the rails is if I let it use a library I'm not >=90% comfortable with. That's a good use of these tools, but if I let it plow through feature requests, I end up accumulating debt, as you pointed out.

For my uses, I'm still finding the right balance. I'm not terribly sure it makes me faster. What I do think it helps with is longer focused sections because my cognitive load is being reduced. So I can get more done but not necessarily faster in the traditional sense. It's more that I can keep up momentum easier, which does deliver more over time.

I'm interested in multi agent systems, but I'm still not sure of the right orchestration pattern. These AI tools still can go off the rails real quick.

reply