As with any attempt to become more precise (see software estimation, e.g. The Mythical Man-Month), we've long argued that we are doing it for the side effects (like breaking problems down into smaller, incremental steps).

So when you know that you are spending €60k to directly benefit a small number of your users, and understand that this potentially increases your maintenance burden by up to 10 customer issues a quarter requiring 1 bug fix a month, you will want to make sure you are extracting at least equal value in specified gains, and a lot more in unspecified gains (e.g. the fact that this serves 2% of your customers might mean that you'll open up a market where this was a critical need, and suddenly you grow by 25% with 22% [27/125] of your users making use of it).
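The percentages in parentheses can be sanity-checked in a few lines (a sketch; normalizing to a base of 100 users is my own choice, the other figures are from the comment):

```python
# Back-of-envelope check of the numbers above, normalized to 100 users.
base_users = 100
feature_users = 0.02 * base_users      # the 2% of existing customers served
new_market_users = 25                  # the 25% growth from the newly opened market

total_users = base_users + new_market_users        # 125
feature_total = feature_users + new_market_users   # 27, if every new user needs the feature

print(f"{feature_total / total_users:.0%} of users use the feature")  # → 22%
```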

You can plan for some of this, but ultimately when measuring, a lot of it will be throwing things at the wall to see what sticks according to some half-defined version of "success".

But really you conquer a market by having a deep understanding of a particular problem space, a grand vision of how to solve it, and then actually executing on both. Usually, it needs to be a problem you feel you yourself are best placed to address!

reply
None of his math really checks out. Building a piece of software is, or at least was, orders of magnitude more expensive than maintaining it. But how much money it can make is potentially unbounded (until it gets replaced).

So investing e.g. 10 million this year to build a product that produces maybe 2 million ARR will have amortized after 5 years, if you can reduce engineering spend to zero. You can also use the same crew to build another product instead and repeat that process over and over again. That's why an engineering team is an asset.
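The amortization arithmetic can be sketched like this (the function name and the 0.5M ongoing-maintenance figure are my own illustration; the 10M/2M case is the comment's):

```python
def break_even_years(build_cost: float, arr: float, annual_maintenance: float = 0.0) -> float:
    """Years until cumulative net revenue covers the one-time build cost."""
    net = arr - annual_maintenance
    if net <= 0:
        return float("inf")  # the product never pays for itself
    return build_cost / net

print(break_even_years(10e6, 2e6))          # 5.0 -- the zero-maintenance case above
print(break_even_years(10e6, 2e6, 0.5e6))   # ≈ 6.67 -- if maintenance isn't actually zero
```

The second call shows how sensitive the break-even point is to the "reduce engineering spend to zero" assumption.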

It's also a gamble: if you invest 10 million this year and the product doesn't produce any revenue, you've lost the bet. You can decide to either bet again or lay everyone off.

It is incredibly hard or maybe even impossible to predict if a product or feature will be successful in driving revenue. So all his math is kinda pointless.

reply
> Building a piece of software is or at least was orders of magnitudes more expensive than maintaining it

This feels ludicrously backwards to me, and also contrary to what I've always seen as established wisdom - that most programming is maintenance. (Type `most programming is maintenance` into Google to find page after page of people advancing this thesis.) I suspect we have different ideas of what constitutes "maintenance".

reply
> that most programming is maintenance.

What do you mean by maintenance?

A strict definition would be "the software is shipping but customers have encountered a bug bad enough that we will fix it". Most work is not of this type.

Most work is "the software is shipping but customers really want some new feature". Let us be clear though: even though it often is counted as maintenance, this is adding more features. If you had decided up front not to ship until all these features were in place, it wouldn't change the work at all in most cases (once in a while it would, because the new feature doesn't fit cleanly into the original architecture; had you known about it in advance, you would have chosen a different architecture).

reply
> If you had decided up front to not ship until all these features were in place it wouldn't change the work at all in most cases

In my experience (of primarily web dev), this is not true, and the reasons it is not true are not limited to software architecture conflicts like you describe (although they happen too). Instead the problems I usually encounter are that:

* once you have shipped something and users are relying on it, it limits the decisions you are allowed to make about what features the system should have. You may regret implementing feature X because it precludes more valuable features Y and Z, but now that X is there, the cost of ripping it out is very high due to the backlash it will cause.

* once you have shipped an application, most of the time when you add new features you are probably slightly changing at least some UI, and so you need to think about how that's going to confuse experienced users and how to address that in a way you wouldn't have to when implementing something de novo. For an internal LOB app, that might mean creating announcements and demos and internal trainings that wouldn't be necessary for greenfield work.

* the majority of professional web dev involves systems with databases, and adding features frequently involves database migrations, and sometimes figuring out how to implement those database migrations without losing data or causing downtime is difficult and complicated.

* as web applications grow their userbase, the scale of the business often introduces new problems with software performance, with viability of analysing business-relevant data from the system, or with moderation or customer support tasks associated with the system, and these problems often demand new features to keep the broader business surrounding the software afloat that weren't needed at launch.

* software that has actually launched and become embedded in existing business processes inherently tends to have many more stakeholders in the business that care about it than pre-launch software, and those stakeholders naturally want to get involved in decision-making about their tools, and that creates meeting and communication overhead - sometimes to such a degree that stakeholder management and negotiating buy-in ends up being an order of magnitude more work than actually implementing the damn feature being argued about.
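The database-migration point above is the kind of thing that's easier to see concretely. The usual way to change a live schema without downtime is an expand/contract sequence; here's a sketch of the steps (table and column names are made up):

```python
# Expand/contract (zero-downtime) migration sketch for a live SQL-backed app.
# Each step ships and settles before the next one starts.
steps = [
    "1. Expand:     ALTER TABLE users ADD COLUMN display_name TEXT NULL  -- old code ignores it",
    "2. Dual-write: deploy app code that writes both full_name and display_name",
    "3. Backfill:   UPDATE users SET display_name = full_name WHERE display_name IS NULL",
    "4. Switch:     deploy app code that reads display_name instead of full_name",
    "5. Contract:   ALTER TABLE users DROP COLUMN full_name  -- once nothing references it",
]
for step in steps:
    print(step)
```

None of this ceremony exists when you're building greenfield with no live data, which is exactly the inflation being described.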

To the extent that the amount of work involved in implementing a new feature is inflated by these kinds of factors relative to what would have been involved in doing it de novo, I personally conceive of that as "maintenance" work; and in my experience my work on big teams at successful businesses has on average been inflated severalfold by those factors. (I also count work mandated by legal/compliance considerations that arise only after a successful launch as "maintenance". My rough conception of "software maintenance" is the delta between "the work involved in building a product de novo with the same customer-pleasing features that ours has" and "the work we actually had to do to incrementally build the product in parallel to it being used".)

Would most people agree with my broad notion of maintenance? I reckon they roughly would, but it's hard to say since people who talk about maintenance rarely attempt to define it with any precision. You give a precise but extremely narrow definition above. Wikipedia likewise gives a precise but extremely broad definition - that maintenance is "modification of software after delivery", under which definition surely over 99.999% of professional software development labour is expended on maintenance! I guess my definition puts me somewhere in the middle.

reply
I like the good ol' "80% of the work in a software project happens before you ship. The other 80% is maintaining what you shipped."
reply
The longer software is sold the more you need to maintain it. In year one most of the cost is making it. Over time other costs start to add up.
reply
As with most things, isn't the truth somewhere in the middle? True cost/value is very hard to calculate, but we could all benefit by trying a bit harder to get closer to it.

It's all too common to frame the tension as binary: bean counters vs pampered artistes. I've seen it many times and it doesn't lead anywhere useful.

reply
Here I think the truth is pretty far to one side. Most engineering teams work at a level of abstraction where revenue attribution is too vague and approximate to produce meaningful numbers. The company shipped 10 major features last quarter and ARR went up $1m across 4 new contracts using all of them; what is the dollar value of Feature #7? Well, each team is going to internally attribute the entire new revenue to themselves, and I don’t know what any other answer could possibly look like.
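The double-counting problem in that scenario is easy to make concrete (figures from the comment; the even-split variant is my own illustration):

```python
# 10 features shipped; $1m of new ARR across contracts that used all of them.
new_arr = 1_000_000
features = 10

# If every team attributes the full new ARR to its own feature:
claimed_total = features * new_arr
print(claimed_total / new_arr)   # 10.0 -- the same revenue gets counted ten times over

# The only other obvious rule, an even split, is just as arbitrary:
per_feature = new_arr / features
print(per_feature)               # 100000.0 -- says nothing about Feature #7's actual value
```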
reply
Even if you could do attribution correctly (I think you can do this partially if you are really diligent about A/B testing), that is still only one input to the equation. The other factor worth considering is scale: if a team develops a widget which has some ARR value today, that same widget has a future ARR value that scales with further product adoption, with no additional capital required to capture the marginal value. How do you quantify this? Because it is hard and recursive (knowing how valuable a feature will be in the future means knowing how many users you'll have in the future, which depends on how valuable your features are as well as 100 other factors), we just factor this out and don't attempt to quantify things in dollars and euros.
reply
You’re illustrating one of the points of TFA: a team that is equipped with the right tools to measure feature usage (or reliably correlate it to overall userbase growth, or retention) and hold that against sane guardrail metrics (product and technical) is going to outperform, over the long term, the team that relies on a wizardly individual PM or analyst making promises over the wall to engineering.
reply
Feature usage can't tell you that.

There's often a checklist of features management has, and meeting that list gets you in the door, but the features often never get used.

reply
But surely you have to have at least a hypothesis of how the software features you develop will increase revenue or decrease costs, if you want to have a sustainable company?
reply