This is presumably because libc just doesn't change very often (not that the code never changes, but that the release cadence is slow). But the average native software stack does have lots of things that change relatively often[1]. So "native" vs. not is probably not a salient factor.

[1]: https://en.wikipedia.org/wiki/XZ_Utils_backdoor

reply
I think that article proves the opposite.

> While xz is commonly present in most Linux distributions, at the time of discovery the backdoored version had not yet been widely deployed to production systems, but was present in development versions of major distributions.

I.e., if you weren't running dev distros in prod, you probably weren't exposed.

Honestly, a lot of packaging ecosystems are coming back around to "maybe we shouldn't immediately use newly released stuff" by delaying adoption of new versions. It starts to look an awful lot like apt/yum/dnf/etc.
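The cooldown idea above can be sketched as a simple release-age filter. This is a hypothetical helper, not any particular package manager's implementation; the input shape (version -> release timestamp) is just an assumption modeled loosely on registry metadata like npm's `time` field.

```python
from datetime import datetime, timedelta, timezone

def versions_past_cooldown(release_times, now, cooldown_days=14):
    """Return versions whose release is at least `cooldown_days` old.

    release_times: dict mapping version string -> release datetime (UTC).
    Freshly published versions are excluded, so a backdoored release
    has a window in which it can be caught before anyone installs it.
    """
    cutoff = now - timedelta(days=cooldown_days)
    return sorted(v for v, t in release_times.items() if t <= cutoff)

# Example with made-up versions/dates: only releases older than
# two weeks survive the filter.
now = datetime(2024, 4, 1, tzinfo=timezone.utc)
releases = {
    "5.4.6": datetime(2023, 1, 10, tzinfo=timezone.utc),
    "5.6.0": datetime(2024, 3, 24, tzinfo=timezone.utc),  # too fresh
    "5.6.1": datetime(2024, 3, 29, tzinfo=timezone.utc),  # too fresh
}
print(versions_past_cooldown(releases, now))  # -> ['5.4.6']
```

The trade-off is exactly the one distros make: you ship known-old bugs in exchange for letting the community vet new releases first.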

I would wager in the near future we’ll have another revelation that having 10,000 dependencies is a bad thing because of supply chain attacks.

reply
Per below, xz is also an example of us getting lucky.

> I would wager in the near future we’ll have another revelation that having 10,000 dependencies is a bad thing because of supply chain attacks.

Yes, but this also has nothing to do with native vs. non-native.

reply
This is the security equivalent of having a better lock than your neighbour: it won't save you in the end, but you won't be hit first. Then again, your lock could also be broken, and you don't get to tick off that audit checkbox.
reply
Your link disproves your claim. No native app depended on xz version >= latest. Most sane distros take time to up-rev. That is why the xz backdoor was, in fact, in NO stable distro.

And not changing often is a feature, yes.

reply
I don't think it does; I think the industry opinion on xz is that we got lucky in terms of early detection, and that we shouldn't depend on luck.

(I don't know what a "sane" distro is; empirically lots of distros are bleeding-edge, so we need to think about these things regardless of value judgements.)

reply
Sane: debian-stable
reply
From experience, a lot of people using a "stable" distro are just bypassing that distro's stability (read: staleness) by installing nightly things from a language ecosystem. It's not clear to me that this is a better (or worse) outcome than a less stable distro.
reply
Native code still has plenty of attack surface. If you do everything through pip/npm you might as well publish your root password, but pretending a clean C build from source makes you safe is just cosplay for people who confuse compiler output with trust. If anything, people are way too quick to trust a tarball that builds on the first try.
reply
100% with you. Anything that builds on the first try is 100% malicious. No real software builds without 5-30 tweaks to the makefile. And anything on npm/pip is malicious with some fixed probability that you have no control over, as seen in this attack.

But the data remains: there have been no supply chain attacks on libc yet. So even if one COULD happen there, this one HAS happened and that one merely COULD.

reply
None that we know of, just like we didn’t know of the attack on xz until we did.
reply
Native software? You mean software without dependencies? Because I don't see how you solve the supply chain risk as long as you use dependencies. Sure, minimizing the number of dependencies and using mostly stable dependencies also minimizes the risk, but you'll pay for it with glacial development velocity.
reply
Slower development velocity but no third-party-induced hacks surely has a market. :)
reply
Sure, but this is a pretty onerous restriction.

Do you think supply chain attacks will just get worse? I'm thinking that defensive measures will get better rapidly, especially after this hack.

reply
They will certainly get worse. LLMs make it so much easier.
reply
Agreed, as proven quite brutally over the last two weeks and especially the last three days.
reply
> Do you think supply chain attacks will just get worse? I'm thinking that defensive measures will get better rapidly (especially after this hack)

I think the attacks will get worse and more frequent: ML tools make it easy for people who were previously not competent enough to pull one off. And there is no stomach for proper defensive measures in either the Python or JavaScript community. Why am I so sure? This is not the first, second, third, or fourth time this has happened. Nothing changed.

reply
Not only do the tools enable incompetent attackers, they also enable a new class of incompetent library developers to create and publish packages, and a new class of incompetent application developers to install packages without even knowing what packages are being used in the code they aren't reading, and a new class of incompetent users who are allowing OpenClaw to run completely arbitrary code on their machines with no oversight. We are seeing only the tip of the iceberg of the security breaches that are to come.
reply
So basically the attacker and the dev who caught it were probably using the same tools, if the malware was AI-generated (hence the fork bomb bug) and the investigation was AI-assisted (hence the speed). Less "tip of the iceberg" and more just that both sides got faster.
reply
deleted
reply