I ran this on the repo I have open, and after I filtered out the non-code files it really only tells me which features we worked on in the last year. It says more about how we decided to split features into increments than anything about bugs or “churn”.
I’m not sure why HN attracts this need to poke holes in interesting observations to “prove” they aren’t actually interesting.
Very different from just counting commits - https://vectree.io/c/delta-compression-heuristics-and-packfi...
It shows problematic places much better. High churn, low complexity: fine. It's recognized and optimized that this gets worked on a lot (e.g. some mapping file, a DSL, business rules, etc.). Low churn, high complexity: fine too. It's a mess, but no one has to go in there. But both? That's probably where most bugs originate, where PRs block, where test coverage is poor, and where everyone knows time is needed to refactor.
In fact, quite often I found that a team's call "to rewrite the app from scratch" was really about those few high-churn, high-complexity modules, files or classes.
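For the churn half of that quadrant, a rough sketch: count how often each file shows up in `git log --name-only` over some window. This is a minimal illustration, not any particular tool's method; `count_touches` and `churn_per_file` are hypothetical names, and the parsing assumes plain `git log` output with one path per line.

```python
import subprocess
from collections import Counter

def count_touches(log_output):
    """Count how often each path appears in `git log --name-only` output."""
    return Counter(line for line in log_output.splitlines() if line.strip())

def churn_per_file(repo=".", since="1 year ago"):
    """Run git log and tally per-file touch counts (sketch, not robust to renames)."""
    out = subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout
    return count_touches(out)
```

Cross that ranking with any complexity score and the top-right quadrant usually matches the files people complain about.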
Complexity is a deep topic, but even simple checks, like how deeply nested something is or how many statements it contains, can do.
Otherwise you're right: it could just be a long linear list of appends where people are happy to contribute.
Nobody is afraid of changing it.
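A minimal sketch of the nesting-depth check mentioned above, assuming consistently indented source (4 spaces here); `max_nesting` is a made-up name, not from any library. A long linear list of appends scores 0, while deeply nested logic scores high.

```python
def max_nesting(source, indent=4):
    """Return the deepest indentation level as a crude complexity proxy."""
    depth = 0
    for line in source.splitlines():
        stripped = line.lstrip(" ")
        if not stripped:  # skip blank lines
            continue
        depth = max(depth, (len(line) - len(stripped)) // indent)
    return depth
```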