You can't measure the impact of not creating a steaming pile of complexity.
reply
One of the papers I share around a lot is "Nobody ever gets credit for fixing problems that never happened" (2002)[1]. I like it because it's not purely about software, so the examples resonate better with some exec-level people in other teams I work with.

[1] https://ieeexplore.ieee.org/document/1167285

reply
Yes, this is the real cause, and the OP's explanation is just a symptom of that.
reply
Really you can. You look at the engineers who create steaming piles, and you look at the ones who don't. Over a year or two, the difference is easy to spot. For people who care to spot it.

If there's no competent front-line technical management who can successfully make this simple comparison, then, sure, in that case the team may be fucked.

reply
It's easy to gloss over this assessment but ultimately this needs to be a key decision point for where you choose to work. No matter how well you manage complexity as an IC or a lower tier leader, if your upper tier of leaders don't value it, it won't last. Simplicity IME is not a "tail that wags the dog" concept. It's too easy to stomp out if nobody in power cares.
reply
Except it's not something you can really accurately assess before you start working somewhere.
reply
Yes, I should have added "...this way" because I meant that to address GP's claim of the metric-based numerical measurement.

In general, I agree that you can and should judge (not necessarily measure) things like simplicity and good design. The problem is that business does want the "increased this by 80%, decreased that by 27%" stuff, and simplicity does not lend itself to this approach.

reply
I think this is often true and it's the limiting factor that prevents complexity from spiraling out of control. But there's also a certain type of engineer who generates Rube Goldberg code that actually works, not robustly, but well enough. A mad genius high on intelligence and low on wisdom, let's say. This is who can spin complexity into self reward.
reply
Measure no, but only engineers care about that (and I'm not even saying that they're right, engineers care a whole lot too much about hard data). You can show alternative solutions, estimate, make assumptions, even make up numbers and boom, you have "data" to show you improved things. You don't even have to lie: you can be very open that these are assumptions and made-up numbers, that it's just a story, what's important is that people come out with confidence that thanks to you, things are better by a bit/a lot/enormously.
reply
The impact is that you get to go solve another problem. This absolutely does show up in a good performance review.
reply
You can. GitHub is about to hit zero nines of uptime[0]. But feedback like that is far too late to be useful. Maybe (principal or senior) engineers should be the ones to judge, and be trusted by management that their foresight is worth pushing the deadline?

[0]: https://mrshu.github.io/github-statuses/

reply
You can't. You can hypothesize about the counterfactual in which you shipped a "steaming pile of complexity," but you definitionally cannot measure something that does not exist.
reply
Won't that show up in ROI numbers?
reply
Those verbs (reduced, decreased, increased) all assume the situation was "bad" already. Avoiding that in the initial design is what's poorly rewarded.

Building a system that's fast on day one will not usually be rewarded as well as building a slow system and making it 80% faster.

reply
Yes, and ironically there are promotion ladders that explicitly call out "staff engineers identify problems before they become an issue". But we all know that in reality no manager is ever going to fix problems eagerly, even if they agree with someone's prediction that something is really going to become a problem.
reply
I've found simplicity rarely earns promotions because it's invisible on a P&L and executives respond to hard numbers. In one role I converted a refactor into a business case with a 12-month cost model, instrumented KPIs in Prometheus and Grafana, and ran a canary that cut MTTR by 60% and reduced on-call pages by two-thirds. Companies reward new features over quiet reliability, so slowing feature velocity for a quarter while you amortize the simplification is a hard sell. If you want the promotion, make a one-page spreadsheet tying the change to SLO improvements, on-call hours saved, and dollar savings, then own the instrumentation so the numbers are undeniable.
reply
Absolutely. And if you asked them whether they'd rather have it sooner, or keep it simpler, they'd pick "sooner" every time.
reply
I once used the analogy of the PM coming to the shop with a car that had a barely running engine and broken windows, and he's only letting me fix the windows.

His response: "I can sell a good looking car and then charge them for a better running engine"...

https://www.youtube.com/watch?v=T4Upf_B9RLQ hits a little too close to home.

reply
deleted
reply
This used to be true. Companies love efficiency. How does this stack up with modern AI? Seems those metrics would go in the opposite directions.
reply
The "time to market" folks finally have everything they could hope for, let's see all of that business value they claim is being missed due to pesky things like security, quality, and scalability checks.
reply
Thanks for the sane take. This article is engagement-porn for every engineer who ever looked at a system they didn't understand and declared they could do it much simpler. It's not because people love to promote complexity-makers, soothing as that thought might be.
reply
Never seen these metrics in real life, especially in engineering.
reply
"Code footprint is 80% more efficient / less"

(when there is a simpler design over more complex "big ball of mud abomination" in contrast)

reply
You are citing negative metrics. The reality is that companies only care about positive metrics: "increased marginal revenue by 30%".

That's regardless of the lip service they pay to cost cutting or risk reduction. It will only get worse, in the AI economy it's all about growth.

reply
Except when one of the criteria for promos is "demonstrates complexity". Then your results do matter, but you don't have the "complexity" box checked.
reply
And mostly these numbers are made up BS. But management will eat them up.
reply
> "Reduced incidents by 80%", "Decreased costs by 40%", "Increased performance by 33% while decreasing server footprint by 25%"

My experience is no one really gets promoted/rewarded for these types of things or at least not beyond an initial one-off pat on the back. All anyone cares about is feature release velocity.

If it's even possible to reduce incidents by 80%, then either your org had a very high tolerance for basically daily issues which you've now reduced to weekly, or they were already infrequent enough that 80% less takes you from 4/year to 1/year, which is imperceptible to management and users.
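
A trivial sketch of that arithmetic, assuming a flat 80% cut and approximating "daily issues" as 365/year (my own number, not one anyone measured):

```python
# Back-of-envelope check of the two scenarios above.

def remaining_incidents(per_year: float, reduction: float = 0.80) -> float:
    """Incidents per year left after cutting the rate by `reduction`."""
    return per_year * (1.0 - reduction)

print(remaining_incidents(365))  # ~73/year: daily becomes a bit over weekly
print(remaining_incidents(4))    # ~0.8/year: 4/year becomes roughly 1/year
```

Either way the before/after delta is only legible if the "before" was already painful.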

reply
> All anyone cares about is feature release velocity.

And at the same time it's impossible to convince tech illiterate people that reducing complexity likely increases velocity.

Seemingly we only get budget to add, never to remove. As for silver bullets: if Big Tech promises a [thing] you can pay for that magically resolves all your issues, management seems enchanted and throws money at it.

reply
You can reduce a single type of incident by 80%. The overall incident rate for this particular type wasn't high enough to kill your company, but it's still a big number on your promotion packet.
reply