Of course there is a balance to this: the engineering time to implement both options is an important consideration. But given that both algorithms are relatively easy to implement, I will default to the one that is faster at large sizes even if it is slower at common sizes. I do suspect there is an implicit assumption that "fancy" algorithms take longer and are harder to implement, but in many cases both algorithms are already in the standard library and just need to be selected. If this post measured "fancy" by actual time to implement rather than by speed at common sizes, I would be more inclined to agree with it.
I wrote an article about this a while back: https://kevincox.ca/2023/05/09/less-than-quadratic/
It is important to remember that the art of software engineering (like all engineering) lives in a balance between all these different requirements, not just in OPTIMIZE BIG-O.
At 99% of shops it should be the other way around.
Most people don't need an FFT-based algorithm for multiplying large numbers; Karatsuba's algorithm is fine. But in some domains the difference does matter.
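For context, Karatsuba trades one n-digit multiplication for three multiplications of roughly n/2 digits, giving about O(n^1.585) instead of the schoolbook O(n^2). Here's a minimal Python sketch (the function name, the base case threshold, and the digit-based split are my own choices, not from any particular library):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers with Karatsuba's algorithm."""
    # Base case: single-digit operands multiply directly.
    if x < 10 or y < 10:
        return x * y
    # Split both numbers around the middle digit position.
    n = max(len(str(x)), len(str(y)))
    base = 10 ** (n // 2)
    xh, xl = divmod(x, base)
    yh, yl = divmod(y, base)
    # Three recursive multiplications instead of four:
    a = karatsuba(xh, yh)                       # high parts
    b = karatsuba(xl, yl)                       # low parts
    c = karatsuba(xh + xl, yh + yl) - a - b     # cross terms
    return a * base * base + c * base + b
```

In practice you'd never hand-roll this in Python, since CPython's own big-int multiplication already switches to Karatsuba past a size threshold; the sketch just shows how little code the "fancy" option can be.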
Personally I usually see the opposite effect: people first reach for a too-naive approach and implement some O(n^2) algorithm where it wouldn't have been any more complex to implement something O(n) or O(n log n). And n is almost always small, so it works fine, until it blows up spectacularly.
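A classic (hypothetical) instance of this: filtering one list by membership in another. Testing `x in b` against a list scans it every time, so the whole thing is O(n*m); building a set first makes it O(n + m), and the code is no longer or harder to write:

```python
def common_items_quadratic(a: list, b: list) -> list:
    # O(len(a) * len(b)): every `in` check scans all of b.
    return [x for x in a if x in b]

def common_items_linear(a: list, b: list) -> list:
    # O(len(a) + len(b)): one pass to build the set,
    # then O(1) average-case membership checks.
    b_set = set(b)
    return [x for x in a if x in b_set]
```

Both versions return the same result; the first just degrades quadratically once the inputs stop being small.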
Same. People solve problems in ways that are very obviously going to cause serious trouble in only a few short weeks or months, and it’s endlessly frustrating. If you’re building a prototype, fine, but if you’re building for production, very far from fine.
Most frustrating because often there’s next to no cost in selecting and implementing the correct architecture, domain model, data structure, or algorithm up front.