> presumably this compromise was only found out because a lot of people did update

This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.

But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.
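As a sketch of what a per-client cooldown could look like: filter candidate versions by upload age before resolving. The metadata shape and helper name here are made up for illustration; this is not an existing pip feature.

```python
from datetime import datetime, timedelta, timezone

def versions_after_cooldown(releases, cooldown_days=7, now=None):
    """Filter out versions uploaded within the cooldown window.

    `releases` maps version string -> upload datetime (the shape you'd
    build from a registry's metadata API). Hypothetical helper, not a
    real pip feature.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=cooldown_days)
    return [v for v, uploaded in releases.items() if uploaded <= cutoff]

# Example: with a 7-day cooldown, the fresh (possibly malicious)
# 1.2.5 release is skipped and the resolver only sees 1.2.3.
now = datetime(2024, 6, 10, tzinfo=timezone.utc)
releases = {
    "1.2.3": datetime(2024, 5, 1, tzinfo=timezone.utc),
    "1.2.5": datetime(2024, 6, 9, tzinfo=timezone.utc),  # 1 day old
}
print(versions_after_cooldown(releases, 7, now))  # -> ['1.2.3']
```

Each client picking its own `cooldown_days` is exactly the per-risk-tolerance spread described above.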

reply
Better for the cooldown to be managed and guaranteed centrally by the package forge rather than ad hoc by each individual client.
reply
The cooldown is a defence against malicious actors compromising the release infrastructure.

Having the forge control it half-defeats the point; the attackers who gained permission to push a malicious release might well have also gained permission to mark it as "urgent security hotfix, install immediately, 0 cooldown".

reply
I have not heard anyone seriously discuss that a cooldown prevents compromise of the forge itself. It’s a concern, but not the pressing concern today.

And no, however a compromised package makes it onto the forge, that is not the same thing as marking it an “urgent security hotfix”, which would require manual approval from the forge maintainers, not an automated process. The only automated processes would be a blackout period, during which automated scanners try to find issues, and a cooldown period, during which the release progressively rolls out to 100% of the projects that depend on it over the course of a few days or a week.
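The progressive-rollout half could be as simple as deterministic bucketing on the forge side. A sketch, with made-up names and a made-up linear ramp schedule:

```python
import hashlib

def rollout_percent(age_days, ramp_days=7):
    """Linear ramp: 0% visibility at publish, 100% after `ramp_days`."""
    return min(100, int(100 * age_days / ramp_days))

def release_visible(project_id, release, age_days, ramp_days=7):
    """Deterministically bucket each dependent project into 0..99, so the
    same projects see the release early on every resolve. Sketch of the
    forge-side ramp described above; not a real registry feature.
    """
    digest = hashlib.sha256(f"{project_id}:{release}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent(age_days, ramp_days)
```

Hashing `project_id:release` rather than picking randomly means a given project gets a stable answer across resolves, so a partially rolled-out release doesn't flap in and out of lockfiles.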

reply
That’s tricky, sometimes you really need the new version to be available right away.
reply
There’s ways to handle that. But that’s the exception, not the rule.
reply
foobarizer>=1.2.3 vs foobarizer==1.2.5
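Right: the floating specifier walks onto whatever is newest, while a pin does not. A toy resolver with made-up version numbers illustrates the difference (real resolvers like pip's are far more involved):

```python
def parse(v):
    """Parse 'X.Y.Z' into a comparable tuple of ints."""
    return tuple(int(x) for x in v.split("."))

def resolve(spec_op, spec_version, available):
    """Toy resolver: pick the highest available version satisfying
    the specifier. Illustration only, not how pip actually works.
    """
    ops = {">=": lambda a, b: a >= b, "==": lambda a, b: a == b}
    matches = [v for v in available if ops[spec_op](parse(v), parse(spec_version))]
    return max(matches, key=parse) if matches else None

available = ["1.2.3", "1.2.4", "1.2.5"]  # pretend 1.2.5 is the fresh, compromised upload
print(resolve(">=", "1.2.3", available))  # -> 1.2.5: floats onto the new release
print(resolve("==", "1.2.3", available))  # -> 1.2.3: pinned, unaffected
```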
reply
Cooldown sounds like a good idea ONLY IF these so-called security companies can actually catch malicious dependencies during the cooldown period. Are they doing that work, or do individual researchers find the malware and these companies just make the headlines?
reply
It seems less likely that they'll find it before you're bitten by it if you intentionally race against them by choosing newest all the time, yea?
reply
Maybe we can let people that don't care about privacy try them first
reply
Does it matter? The individual researchers could look at brand-new published packages just the same.
reply
For researchers who notice new releases as soon as they are published and discover malice based on that alone, I agree, and every step of that can be automated to some level of effectiveness.

But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, the malicious actor would be wise to simply ensure no victim is aware until well after the cooldown period, implementing novel obfuscation that evades static analysis and the like.

reply
Novel obfuscation built on a genuinely new idea is hard to invent. Obfuscation that is merely new to that codebase is easy(ier) to flag as suspicious.

While bad actors would be wise to ensure low-cooldown users are unaware, I would not say they can "simply" ensure that.

Code with any obfuscation that evades static analysis should become more suspicious in general. That's a win for users.
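One cheap signal in that direction: high-entropy string literals, which packed or encoded payloads tend to produce. A sketch of the heuristic only; plenty of legitimate data (compressed assets, keys, hashes) also scores high, so it flags for review rather than convicts:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy in bits per character. Long high-entropy string
    literals are a cheap obfuscation signal (packed/encoded payloads),
    though legitimate data trips it too.
    """
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A plain identifier vs. something that looks like a packed payload.
print(shannon_entropy("install_requires"))                  # low
print(shannon_entropy("eJwVkMtuwjAQRf9l1l0UaKXyA7Tqqyp0"))  # high
```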

A longer window of time for outside researchers is a win for users -- unless the release fixes existing problems.

What we need is to let users easily move from implicitly trusting only the publisher to incorporating third parties. Any of those can be compromised, but users would be better served when a malicious release must either (1) compromise multiple independent parties or (2) compromise the publisher with an exploit undetectable during the cooldown.

Any individual user can independently do that now, but it's so incredibly time-consuming that only large organizations even attempt it.
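The multi-party idea above boils down to a k-of-n quorum check. A sketch with made-up party names, where attestation verification itself (signatures etc.) is assumed to happen elsewhere:

```python
def release_trusted(attestations, trusted_parties, quorum=2):
    """A release installs only if at least `quorum` independent trusted
    parties have vouched for it, so a single compromised party (even
    the publisher) is not enough. Party names are illustrative.
    """
    vouches = set(attestations) & set(trusted_parties)
    return len(vouches) >= quorum

trusted = {"publisher", "scanner-a", "scanner-b", "distro-rebuilder"}
print(release_trusted({"publisher"}, trusted))               # False: publisher alone
print(release_trusted({"publisher", "scanner-a"}, trusted))  # True: two independent parties
```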

reply
That assumes discovering a security bug is random and it could happen to anyone, so more shots on goal is better. But is that a good way to model it?

It seems like if you were at all likely to be giving dependencies the extra scrutiny that discovers a problem, you’d probably know it? Most of the people who upgraded didn’t help; they just got owned.

A cooldown gives anyone who does investigate more time to do their work.

reply
If I were in charge of a package manager I would be seriously looking into automated and semi-automated exploit detection, so that people didn't have to yolo new packages to find out if they're bad. The checking would itself become an attack vector, but you could mitigate that too. I'm just saying _something_ is possible.
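A toy version of such a check: walk the AST and flag dynamic-execution calls for human review. Easily evaded (which is the point of the thread), so it's a tripwire rather than a verdict, and the pattern list is made up for illustration:

```python
import ast

SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_suspicious(source):
    """Toy static pass: report dynamic-execution calls, the kind of
    signal an automated forge-side check could surface for review.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

sample = "import base64\nexec(base64.b64decode(blob))\n"
print(flag_suspicious(sample))  # -> [(2, 'exec')]
```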
reply
It's a trade-off for sure. Maybe companies could have "honeypot" environments where they update everything, deploy their code, and try to monitor for sneaky things.
reply
It's easy for malicious code to detect sandboxing.

Also, check out the VW Diesel scandal.

reply