But now and then, something beautiful happens: something that used to be dreadful becomes "solved". Not in the strict mathematical sense, but some abstraction or tool eliminates an entire class of issues, and once you know it you can barely imagine living without it. That's why I keep coming back to it, I think.
As a species, I think we are in the infancy of software engineering, and perhaps of CS as well. There's still plenty of opportunity to find better abstractions, big & small.
There is an element of projection, as there is in most things people talk about; I'm speaking through my own filters and biases, after all. But it's grounded in a fair chunk of experience.
As tech progresses and those abstractions become substantially more potent, it only amplifies the ability of small groups to use them to massively shape the world to their vision.
On the more benign side of this is just corporate greed and extraordinary amplification of wealth inequality. On the other side is authoritarian governments and extremist groups.
https://en.wikipedia.org/wiki/Competitive_exclusion_principl...
The Internet itself will likely further fracture into different ecosystems. =3
In Unix, a file is a simple stream of bytes (if you wonder what else it could be, compare Multics' segments). Separate processes can be connected through simple standard I/O streams (pipes), versus everything being a DLL in Multics — and the shell itself embodies the separation of policy from mechanism (http://www.catb.org/esr/writings/taoup/html/ch01s06.html).
https://retrocomputing.stackexchange.com/questions/15685/wha...
For comparison, on iOS you need a new app for what might have been a shell pipeline (there is no hierarchical file system at the user level).
Language-specific, for JavaScript: the strict comparison operator ===, which disables type coercion, together with banning ==.
== allows "5" to equal 5.
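A quick illustration of the difference — loose equality coerces operands before comparing, strict equality compares type first:

```javascript
// Loose equality (==) coerces operands before comparing.
console.log("5" == 5);   // true: the string "5" is coerced to the number 5
console.log(0 == false); // true: false is coerced to 0
console.log("" == 0);    // true: the empty string is coerced to 0

// Strict equality (===) checks type first, so no coercion happens.
console.log("5" === 5);   // false: string vs number
console.log(0 === false); // false: number vs boolean
```

Banning == entirely (e.g. via a lint rule) removes the whole class of surprises in the first three lines.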
I'm working on a multiplayer game, for which I haven't touched the code in a while. The other day I asked myself, "what happens when I press the shoot button?"
Well, it sends a message to the server, which instantiates a bullet, and broadcasts a bullet spawn message to all clients, which then simulate the bullet locally until they get a bullet died message. (And the simulation on both ends happens indirectly via global state and the bullet update function).
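That flow can be sketched in a few lines. To be clear, everything below — the function names, the message shapes, the use of JavaScript — is my own illustrative assumption, not the actual game's code:

```javascript
// Hypothetical server/client handlers sketching the shoot flow above.
const clients = [];       // connected client connections
let nextBulletId = 0;

function broadcast(msg) {
  for (const c of clients) c.send(JSON.stringify(msg));
}

// Server side: pressing the shoot button ultimately lands here.
function onShootMessage(playerId, dir) {
  const bullet = { id: nextBulletId++, owner: playerId, dir };
  broadcast({ type: "bullet_spawn", bullet });  // tell every client
  return bullet;
}

// Client side: simulate the bullet locally until told it died.
const liveBullets = new Map();
function onServerMessage(raw) {
  const msg = JSON.parse(raw);
  if (msg.type === "bullet_spawn") liveBullets.set(msg.bullet.id, msg.bullet);
  if (msg.type === "bullet_died") liveBullets.delete(msg.bulletId);
}
```

Even this toy version shows the indirection: the cause (a button press) and the effect (a bullet disappearing) are separated by two message hops and a piece of global state.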
My actual analysis was like 3x longer than that because it focused on functions and variables, i.e. the chain of cause and effect: the Rube Goldberg machine.
I laughed when I realized that name was actually too charitable: in a Rube Goldberg machine, at least, all the interacting parts are clearly visible and tend to be arranged in a logical sequence, whereas in my codebase it was all over the place.
So that made me realize that a function is not really a sensible unit of analysis. Functions are too isolated. You want to try to understand a feature.
Also, I'm experimenting with organizing the code by feature rather than by "responsibility": the netcode for the bullet should live in the bullet module, not the netcode module.
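One way that reorganization might look — a sketch with made-up file names and functions, not taken from any real project:

```javascript
// By "responsibility" (before):     By feature (after):
//   netcode/bullet.js                 bullet/netcode.js
//   physics/bullet.js                 bullet/physics.js
//   render/bullet.js                  bullet/render.js
//
// The bullet module owns everything about bullets, so
// "what happens when I press shoot?" has one place to look.
// All names here are illustrative assumptions.
const bullet = {
  // was: netcode/bullet.js
  encodeSpawn(b) { return JSON.stringify({ type: "bullet_spawn", b }); },
  // was: physics/bullet.js
  step(b, dt) { return { ...b, x: b.x + b.vx * dt, y: b.y + b.vy * dt }; },
  // was: render/bullet.js
  draw(b) { return `bullet@(${b.x},${b.y})`; },
};

const b0 = { x: 0, y: 0, vx: 10, vy: 0 };
const b1 = bullet.step(b0, 0.5);
console.log(bullet.draw(b1)); // bullet@(5,0)
```

The trade-off is that cross-cutting concerns (say, a shared serialization format) now span feature modules instead of living in one place.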
I've built a small, opinionated tool for that [1]. It can rank files by a "Refactor Priority" score based on structural signals - size, callable burden, cyclomatic complexity, nesting - with churn and co-change from local git history layered on top.
It's more of an exploratory tool than a general solution, but it's been practically useful for quickly spotting painful files.
Part of why it was built: keeping coding agents in check. They tend to produce code that gets complex fast, don't feel the complexity building up, and eventually start making changes that break things. So the tool helps me catch files that are getting out of hand before that happens. It can also generate a refactoring prompt explaining why a given file is problematic - as a conversation starter for the agent.
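For a sense of how such a score could combine those signals — the weights and formula below are my own toy assumptions, not the tool's actual scoring:

```javascript
// A toy "refactor priority" score: structural signals (size, callable
// burden, cyclomatic complexity, nesting) scaled by churn.
// All weights and field names are illustrative assumptions.
function refactorPriority(file) {
  const structural =
    file.lines / 500 +          // size
    file.callables / 30 +       // callable burden
    file.maxComplexity / 10 +   // cyclomatic complexity
    file.maxNesting / 4;        // nesting depth
  // Churn multiplies structural pain: a complex file that also
  // changes often is the one worth refactoring first.
  return structural * (1 + Math.log1p(file.commitsLast90Days));
}

const hotspot = { lines: 1200, callables: 60, maxComplexity: 25, maxNesting: 6, commitsLast90Days: 40 };
const stable  = { lines: 1200, callables: 60, maxComplexity: 25, maxNesting: 6, commitsLast90Days: 1 };
console.log(refactorPriority(hotspot) > refactorPriority(stable)); // true
```

Two structurally identical files rank differently once git history is layered on top, which is the point: complexity you never touch is cheaper than complexity you edit weekly.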
The article gave me a few more metric ideas to try, thanks.
Complexity directly impacts security. Simple systems are:
- Maintainable: easier to change and manage.
- Reliable: less prone to logic errors.
- Testable: easier to validate and test.
Mostly I use it to write unit tests (I just dislike production code that isn't exactly how I want it). There's a testing rule for all files in the test folder that lays out how tests should be done. The agent writing the tests may miss some rules due to context bloat, but the reviewer has a fresh context window and looks only at those rules. So it does result in somewhat simpler code.
Now you're assuming a human is actually trying to understand the code. What a world we live in (sarcasm).
> The cognitive complexity of a function can only be determined by the reader, and only caring about the reader can enable the writer to improve the learning experience.
https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...
Maintainable coding practices are a skill like any other. =3