I completely understand why that line caused vicarious embarrassment. Looking back, I realize my brain was (and still is) operating on a completely different definition of that word, shaped by my daily constraints. I plan to write more about this in Part Two, but at the time, I wasn't even aware of this alternative understanding of the term.

In telco, when a remote node crashes at a client's site, I often have access to only a heavily restricted subset of logs, and the debugging communication loop via email can take days just to establish "what happened". Because of that, I write defensive, strictly encapsulated code, and I think in terms of domain-specific states and objects that can be explicitly tracked from an external PoV.

Similarly, during game jams, "debuggable and maintainable" means to me that the code is modular enough that I can completely rip out and rewrite a core mechanic in the final 3 hours just because the game design suddenly changed.

My habit of writing code optimized for remote logs and sudden architectural shifts actually became my biggest enemy under the algorithmic-interview (45-minute LeetCode) constraint. It makes the core algorithmic state less clear and hides algorithmic mistakes under layers of defensive "if" statements (where I would normally drop a debug log).

I am simply used to not trusting the inputs, whereas in algorithmic problems, the whole point is to exploit constraints that you need to be absolutely sure about.

So the "if" statements that usually increase "debuggability" in telco or during game jams mean the exact opposite of what "debuggability" means in algorithmic thinking.
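To make the contrast concrete, here is a contrived sketch (both function names are mine, not from any real codebase): the same binary search written in my defensive telco style versus the constraint-trusting interview style.

```python
def find_defensive(nums, target):
    # Telco habit: validate everything, log-friendly early exits.
    # Each guard is a place where I would normally drop a debug log.
    if nums is None:
        return -1
    if not isinstance(nums, list):
        return -1
    if len(nums) == 0:
        return -1
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


def find_trusting(nums, target):
    # Interview style: the problem statement guarantees a sorted,
    # non-empty list, so the algorithmic state stays front and center.
    lo, hi = 0, len(nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

The bodies are identical; the defensive version just buries the invariant (`lo`/`hi` bracketing the answer) under guards that an interviewer reads as noise, while the trusting version makes any off-by-one mistake immediately visible.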

Thanks for naming this issue so clearly - it is a very valid reality check.
