One really fun problem involved optimizing a naive O(n^2) tree algorithm down to O(n), ignoring log factors.
The way I reasoned about it was to first expand the number of objects under consideration to O(n^3); from there, a few observations brought it back down to O(n).
If you asked me what exactly the correspondence was between the final reduction and the original problem statement, I couldn't tell you. Maybe there was a more direct way to get to the solution.
But that style of thinking carries over to real tasks too. Sometimes it's easier to simplify; other times it's actually easier to complexify, as long as you trust that you're not adding "arbitrary" complexity.
Basically I precomputed a table of masks and used the XLAT instruction in a very tight loop to fly through all the product descriptions for everything IBM offered back around 1983. I could accommodate case insensitivity and single-character wildcards.
The test search was always "dos tech ref" to find the IBM DOS Technical Reference manual.
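
In modern terms, the trick looked roughly like this (a JavaScript sketch of the idea, not the original 8086 loop; the '?' wildcard byte here is just an illustration, not necessarily what I used back then):

// Translate table: fold lowercase ASCII to uppercase, leave everything else alone.
const fold = new Uint8Array(256);
for (let i = 0; i < 256; i++) fold[i] = i;
for (let c = 0x61; c <= 0x7a; c++) fold[c] = c - 0x20; // 'a'..'z' -> 'A'..'Z'

// Compare pattern against text at one position, one table lookup per byte,
// much like XLAT translating AL through a table on every iteration.
function matchAt(text, pat, start) {
  for (let j = 0; j < pat.length; j++) {
    const p = pat.charCodeAt(j);
    if (p === 0x3f) continue; // '?' matches any single character
    if (fold[text.charCodeAt(start + j)] !== fold[p]) return false;
  }
  return true;
}

function search(text, pat) {
  for (let i = 0; i + pat.length <= text.length; i++) {
    if (matchAt(text, pat, i)) return i;
  }
  return -1;
}

search("IBM DOS Technical Reference", "dos tech") // -> 4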
code: https://github.com/aleda145/product_allocation_aframe/blob/m...
I'm not proud of this code, but it yielded a 0.2% better result than the naive approach!
A = [2, 7, 1, 8, 28, 18]
// Drops each value into a sparse array at index == value, then filters out the holes
// (and, as collateral damage, any zeros and duplicates).
sort = A => A.reduce((B, x) => (B[x] = x, B), []).filter(x => x)
Nothing says efficiency like a sorting algorithm whose Big O depends on the largest element's value rather than the number of elements in the array. /s
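
For comparison, the honest version of this trade-off is counting sort, which has the same O(n + k) character (k being the largest value) but actually survives duplicates and zeros, which the one-liner above silently drops. A quick sketch:

// Counting sort: O(n + k), where k is the largest value in the array.
// Unlike the one-liner above, it keeps duplicates and zeros intact
// (negative values are still out of scope).
function countingSort(A) {
  const k = Math.max(...A);
  const counts = new Array(k + 1).fill(0);
  for (const x of A) counts[x]++;
  const out = [];
  for (let v = 0; v <= k; v++) {
    for (let c = counts[v]; c > 0; c--) out.push(v);
  }
  return out;
}

countingSort([2, 7, 1, 8, 28, 18]) // -> [1, 2, 7, 8, 18, 28]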