This depends on the industry. I work on industrial machine control software, and we spend a huge amount of time on tests. We have to for some parts (human-safety critical), but other parts would just be expensive if they failed (loss of income for customers, and possibly damaged equipment).

The key to making this scalable is to make as few parts as possible critical, and to make the potential bad outcomes as benign as possible. (This lets you go to a lower rating in whatever safety standard applies to your industry.) You still need tests for the less critical parts, though: while downtime is better than injury, if you want to sell future machines to your customers you need a good track record. At least if you don't want to compete on cost.

reply
> make as few parts as possible critical, and make the potential bad outcomes as benign as possible

This is a good lesson for anyone I think. Definitely something I’m going to think more about. Thanks for sharing!

reply
One of the major things code review does is prevent that one guy on your team who is sloppy or incompetent from messing up the codebase without singling him out.

If you told someone "I don't trust you, run all code by me first" it wouldn't go well. If you tell them "everyone's code gets reviewed" they're ok with it.

reply
> people don't get promoted for preventing issues.

They do, but only after a company has been burned hard. They can also be promoted when their area is so much better that everyone notices.

Still, the best way to a promotion is to write a major bug that you can come in at the last moment and be the hero for fixing.

reply
That could work, but plenty of quiet heroes weren't promoted for fixing critical bugs.
reply
They fixed it too soon. You have to wait until the effect is visible on someone's dashboard somewhere.
reply
Goodhart's Law strikes again... "When a measure becomes a target, it ceases to be a good measure."
reply
You have to make sure it doesn't arrive at you before it is on the dashboard. Otherwise you are the reason it is blowing up the time-to-fix-a-bug metric. Unless, that is, you can make the problem so obscure that the other smart people asked to help can't figure it out either, which would otherwise make you look bad.
reply
That is in no way guaranteed. Sometimes finding too many security issues makes you unpopular.

Two years afterward, we got hit with ransomware. And obviously "I told you so" isn't a productive discussion topic at that point.

reply
That's not preventing the issue, though. The closest you can get is to have a competitor be burned hard and demonstrate that your code base has the exact same issue. But even that isn't guaranteed. "That can't happen here" is a hard mindset to disrupt unless you yourself are already in the C-suite.
reply
I think of code review as being more about ensuring understandability. When you spend hours gathering context, designing, iterating, debugging, and finally polishing a commit, your ability to judge the readability of your own change has been tainted by your intimate familiarity with it. By getting a fresh pair of eyes to read it and leave comments like "why did you do it this way" or "please refactor to use XYZ for maintainability", you end up with something that will be easier to navigate and maintain by the junior interns who will end up fixing your latent bugs 5 years later.
reply
Alternatively, have a small team where you trust everyone.
reply
> The problem, I think, is people don't get promoted for preventing issues.

Cleaning up structural issues across a couple of orgs is a senior => principal promo I've seen a couple of times.

reply