The obvious exception is a team approaching someone else's codebase -- including their predecessor's, if they can account for Conway's Law -- to re-factor it.
But the same person or persons announcing a re-factoring of their own work? I always try to walk away from those discussions, knowing full well they're just going to build a better mousetrap. For themselves.
Don't get me wrong, iterating on what your past self produced is all well and good, but it takes _more_ to escape the carousel. It takes sitting down, writing out the primary factors that drove the poor architecture, and taking a long hard look in the mirror. Not everything is subjective or equivalent, as much as many a developer would like to believe. It's very attractive to stick to "as long as we're careful and diligent, even a sub-optimal design can be implemented well". No, it won't be -- this is a poster-child exception to the rule, if there ever was one -- your _design_ is the root, and from it and it alone springs the tree that you'll have to accept or cut down; trimming only does so much.
A panacea is a cure-all. So if code refactoring is a panacea, then we should refactor code often.
That meant the files had the entire "$Revision: 1.3 $" nonsense and "file changelog" at the top too - though many newer files never bothered to include the tags that would actually get RCS to replace them. Inconsistent as hell.
And while the "family" of devices the software was for traces its origin to the mid '90s, functionally none of the code was older than ~5 years at that time.
Naturally, even with only a few tens of engineers it regularly messed up: commits stepped on each other's toes and the entire tree got corrupted with some regularity. For fun I wrote a script that read it all and imported the entire history into git - you only had to go back a few years before the whole thing was absolute nonsense.
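For the curious, a minimal sketch of that kind of conversion (my reconstruction, not the parent's script): walk the tree for ",v" files, replay each revision with the stock RCS tools (rlog, co), and commit it to a fresh git repo. It turns every file revision into its own commit and ignores branches and RCS/ subdirectories, which a real converter - or an existing tool like rcs-fast-export, if memory serves - would handle:

    #!/usr/bin/env python3
    # Simplified sketch, not the original script: replay each RCS file
    # revision with the stock tools and commit it to a fresh git repo.
    import os, re, subprocess

    def revisions(rcs_file):
        """Return (rev, date, author) tuples parsed from `rlog`, oldest first."""
        out = subprocess.run(["rlog", rcs_file], capture_output=True,
                             text=True, check=True).stdout
        revs = re.findall(r"^revision (\S+).*\ndate: ([^;]+);\s*author: ([^;]+);",
                          out, re.M)
        return reversed(revs)  # rlog prints newest first

    def import_file(rcs_file, repo):
        name = os.path.basename(rcs_file)[:-2]   # strip the ",v" suffix
        for rev, date, author in revisions(rcs_file):
            # `co -p -rREV` prints that revision of the working file to stdout
            body = subprocess.run(["co", "-p", "-r" + rev, rcs_file],
                                  capture_output=True, check=True).stdout
            with open(os.path.join(repo, name), "wb") as f:
                f.write(body)
            subprocess.run(["git", "add", name], cwd=repo, check=True)
            subprocess.run(["git", "-c", "user.name=rcs-import",
                            "-c", "user.email=rcs-import@localhost",
                            "commit", "-q", "--allow-empty",
                            "-m", f"{name} {rev} (RCS import)",
                            "--date", date.strip().replace("/", "-"),
                            "--author", f"{author} <{author}@localhost>"],
                           cwd=repo, check=True)

    if __name__ == "__main__":
        repo = os.path.abspath("converted-repo")
        os.makedirs(repo, exist_ok=True)
        subprocess.run(["git", "init", "-q", repo], check=True)
        for root, _dirs, files in os.walk("."):
            for f in files:
                if f.endswith(",v"):
                    import_file(os.path.join(root, f), repo)

The per-file commits are the big simplification: RCS has no whole-tree changesets, so grouping revisions by author, date and log message is the usual heuristic for approximating them.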
I have no idea why that was still being used then, but I assume it had been in use from the very start of that entire hardware family. Perhaps because it was fundamentally a "hardware" company - one which, until surprisingly recently, seemed to consider "source control" to mean "shared folders on remote machines" - "software" source control wasn't considered a priority.
What’s interesting about the malware in this post is that it goes one step further: instead of exploiting mismatches, it corrupts the computation itself — so every infected system agrees on the same wrong answer!
More broadly: any interpretive mismatch between components creates a failure surface. Sometimes it shows up as a bug, sometimes as an exploit primitive, sometimes as a testing blind spot. You see it everywhere — this paper, IDS vs OS, proxies vs backends, test vs prod, and now LLMs vs “guardrails.”
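A toy way to see that (mine, not from the paper or the thread): give two components different policies for reassembling the same overlapping segments and they end up acting on different payloads, so the monitor in front of an endpoint can be inspecting bytes the endpoint never uses.

    # Toy illustration of an interpretive mismatch: two plausible reassembly
    # policies applied to the same overlapping "segments" yield different data.
    def reassemble_first_wins(segments):
        """Keep the byte from the earliest segment covering each offset."""
        buf = {}
        for offset, data in segments:
            for i, b in enumerate(data):
                buf.setdefault(offset + i, b)
        return bytes(buf[k] for k in sorted(buf))

    def reassemble_last_wins(segments):
        """Let later segments overwrite earlier ones."""
        buf = {}
        for offset, data in segments:
            for i, b in enumerate(data):
                buf[offset + i] = b
        return bytes(buf[k] for k in sorted(buf))

    # An overlapping retransmission rewrites part of the earlier payload.
    segments = [(0, b"GET /safe"), (5, b"evil")]
    print(reassemble_first_wins(segments))  # b'GET /safe' - what a monitor might reconstruct
    print(reassemble_last_wins(segments))   # b'GET /evil' - what the endpoint might act on

The same shape shows up whenever two parsers disagree about the same bytes, whether it's proxy vs backend or test vs prod.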
Fun HN moment for me: as I was about to post this, I noticed a reply from @tptacek himself. His 1998 paper with Newsham (IDS vs OS mismatches) was my first exposure to this idea — and in hindsight it nudged me toward infosec, the Atlanta scene, spam filtering (PG's bayesian stuff) and eventually YC.
https://users.ece.cmu.edu/~adrian/731-sp04/readings/Ptacek-N...
The paper starts with this Einstein quote "Not everything that is counted counts and not everything that counts can be counted", which seems quite apt for the malware analyzed here :)
Where do you think the LLM is getting it from? ^_^
Either it was generated that way, or this person happens to know the correct combination of buttons to make that happen.
In 2026, at least 20-40% of social media traffic is bots (probably more, with better LLMs), so it is usually safer to just assume it was generated.
Do you mean skeptical about which government was responsible, or skeptical that it was in fact a government effort at all?
I can see how attribution could be debatable (between the two main suspects, mostly), but are / were there any good arguments against this being a government effort? I would find it highly unlikely that anyone other than a government could muster up so much domain knowledge, source pristine 0days, and be so stealthy all at the same time.
I still use RCS today. It's certainly not my preferred option, but my collaborator likes it, and it's not too annoying for me to use.
Rather, this was developed by a team of 6-8 people. Maybe two or three of them working on the implant, another engineer handling the exploits and propagation, and yet another building the LP (listening post) and communications channels. They would be supported by a scientist with deep knowledge of the process being targeted (say, developing nuclear weapons), and a mathematician who knows how to introduce subtle and undetectable errors.
Every academic institution, every school - all on the radar for recruitment and more. It's difficult to believe, but the network is real.
There are certainly people here on HN who've been solicited, most of whom will never mention it.
It's fun to imagine, though, what tight groups of highly motivated, stupidly intelligent people can do when they collectively commit to a goal - and with a hefty budget to assist.
But then I am getting too utopian.
Perhaps you meant CVS? Subversion 1.0 was released in 2004 and git appeared in 2005.
The reference to the '70s and '80s code didn't imply it was version controlled before svn/cvs, if that's what you meant - but by that time it was, and it still had the old timestamps commented in the text files.