https://openai.com/index/harness-engineering/
> This translates to an average throughput of 3.5 PRs per engineer per day, and surprisingly the throughput has increased as the team has grown to now seven engineers.
We will see if this continues to scale up!
One pathological example: if you’re running a server-based product, quite often what stands between you and a new feature launch is literally a couple of thousand lines of Kubernetes YAML. Would adding someone who’s proficient in Kubernetes slow you down? Of course not.
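To make the scale concrete, here is a sketch of what just one piece of that YAML looks like. This is an illustrative, made-up example (the service name, image, and values are all hypothetical), and it covers only a single Deployment; a real launch multiplies this across Services, Ingresses, ConfigMaps, Secrets, RBAC, autoscaling, and monitoring.

```yaml
# Illustrative Kubernetes Deployment for a hypothetical service.
# A real feature launch typically also needs a Service, an Ingress,
# ConfigMaps, Secrets, RBAC rules, an HPA, alerting rules, and so on.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-feature              # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: new-feature
  template:
    metadata:
      labels:
        app: new-feature
    spec:
      containers:
        - name: new-feature
          image: registry.example.com/new-feature:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```

None of this is domain knowledge about the feature itself; it is pure stack-wrangling, which is exactly the work a Kubernetes-proficient newcomer can take on immediately.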
One may say: hey, this is just server-side, Kubernetes-based development being insane. And I’ll say: the whole modern business of software development is like this.
Yes, they know how the feature they work on relates to other features, but actually implementing that feature very often mostly involves fighting with technology, wrangling the entire stack into the shape you need.
In Brooks’s times the stack was paper-thin, almost nonexistent. In modern times it’s not, and adding someone who knows the technology, but doesn’t have the domain knowledge related to your feature still helps you. It doesn’t slow you down.
One may argue that I’m again pointing to the difference between essential and accidental complexity, and that my argument is essentially “accidental complexity takes over”. But accidental complexity actually does influence your feature too, by defining what’s possible and what’s not.
Some good thoughts (not mine) on the modern boundary between essential and accidental complexity: https://danluu.com/essential-complexity/