This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

Which sounds great, but the way things work now tends to be the exact opposite of that, so there will be no trustable platform to run untrusted code on. If the sandbox, or the operating system the sandbox runs in, gets breaking changes and forces everyone to always be on a recent release (or worse, to track the main branch), then that is still a huge supply chain risk in itself.

reply
The secure boot "shim" is a project like this. Perhaps we need more core projects that can be simple and small enough to reach a "finished" state where they are unlikely to need future upgrades for any reason. Formal verification could help with this ... maybe.

https://wiki.debian.org/SecureBoot#Shim

reply

> This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

For the most part you can. Just pin slightly stale versions of dependencies, after checking that there are no known exploits for those versions. Avoid the latest releases whenever possible, and stay aware of security advisories and which versions they affect.

Don't just update every time the dependency project updates. Update specifically for security fixes, features you actually need, and specific performance benefits. And even then, avoid the very latest version when possible.
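
A minimal sketch of what that looks like with pip (the versions are examples and the hashes are placeholders for ones you actually vetted):

  # requirements.txt -- exact pins plus hashes; install with:
  #   pip install --require-hashes -r requirements.txt
  requests==2.31.0 \
      --hash=sha256:<hash of the vetted wheel>
  urllib3==2.0.7 \
      --hash=sha256:<hash of the vetted wheel>

With --require-hashes, an upgrade has to be a deliberate edit to this file rather than whatever the index happens to serve that day.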

reply
Sure, and that is basically what sane people do now, but it only works until something needs a security patch that was never provided for the old version. Changing one dependency is likely to cascade, so now I am open to supply chain attacks in many dependencies again (even if briefly).

To really run code without trust, I would need something more like a microkernel that is the only thing in my system I have to trust, with everything running on top of it forced to behave and isolated from everything else. Ideally a kernel so small, popular, and rarely modified that it can be well tested and trusted.

reply
Virtual machines are that: tiny surfaces into the host system (block disk device, ...). Which is why virtual machine escape vulnerabilities are quite rare.
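
As an illustration (guest.img is a placeholder), a minimal QEMU guest can be given nothing but a virtio disk and a user-mode NIC, so those two devices are essentially the entire escape surface:

  qemu-system-x86_64 -m 512M -nographic \
    -drive file=guest.img,format=raw,if=virtio \
    -netdev user,id=n0 -device virtio-net-pci,netdev=n0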
reply
I feel like in some cases we should be using virtual machines. Especially in domains where risk is non-trivial.

How do you change developer and user habits though? It's not as easy as people think.

reply
I think Bootstrappable Builds from source without any binaries, plus distributed code audits, would do a better job than locking down already-existing binaries.

https://bootstrappable.org/

https://github.com/crev-dev/

reply
> This assumes that we can get a locked down, secure, stable bedrock system and sandbox that basically never changes except for tiny security updates that can be carefully inspected by many independent parties.

Not really. You should limit the attack surface for third-party code.

A linter running in `dir1` should not access anything outside `dir1`.
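
As a rough sketch with bubblewrap (the linter command and bind list are illustrative; a real invocation typically needs a few extra binds such as /proc, /dev, and distro-specific lib symlinks):

  bwrap --ro-bind dir1 /src --ro-bind /usr /usr \
        --tmpfs /tmp --unshare-all --die-with-parent \
        pylint /src

--unshare-all drops network access too, which is most of what a linter gone rogue would want.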

reply
> Which sounds great, but the way things work now tends to be the exact opposite of that, so there will be no trustable platform to run untrusted code on.

This is the problem with software progressivism. Some things really should just be what they are: you fix bugs and security issues, and you don't constantly add features. Instead, everyone is trying to make everything have every feature, constantly fiddling around in the guts of stuff and constantly adding new bugs and security problems.

reply
The NIH syndrome becoming best practice (a commenter below already says they "vibe-coded replacements for many dependencies") would also save quite a few jobs, I suspect. Fun times.
reply
I've been doing that too. The downside is it's a lot of work for big replacements.
reply
I've been thinking the same thing. And it's somewhat parallel to what happened with meditation vs. drugs. In the old world the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be ok. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem. Mostly a self-harm problem in that case. But the dynamic is somewhat similar to what's happening now with LLMs and coding.

Software people could (mostly) trust each other's OSS contributions because we could trust the discipline it took in the first place. Not any more.

reply
> In the old world the dangerous insights required so many years of discipline that you could sort of trust that the person getting the insight would be ok. But then any idiot can get the insight by just eating some shrooms and oops, that's a problem.

I would think humans have been using psychedelics since before we figured out meditation. Likely even before we were humans.

reply
Ah yes the stoned ape hypothesis. I don't know if there is or will ever be evidence to support the hypothesis.

I also like the drunk monkey hypothesis.

reply
What in the world are “the dangerous insights”?
reply
“Society is a construct”, for starters?
reply
That's babby's first insight. Most people figure this out on their own in kindergarten.
reply
Supply-chain attacks long pre-date effective AI agentic coding, FWIW.
reply
What we need is accountability and ties to real-world identity.

If you're compromised, you're burned forever in the ledger. It's the only way a trust model can work.

The threat of being forever tainted is enough to make people more cautious, and attackers will have no way to pull off attacks unless they steal identities of powerful nodes.

Like, it shouldn't be a thing that some large open-source project has some 4th-layer nested dependency made by some anonymous developer with 10 stars on GitHub.

If instead, the dependency chain had to be tied to real verified actors, you know there's something at stake for them to be malicious. It makes attacks much less likely. There's repercussions, reputation damage, etc.

reply
> The threat of being forever tainted is enough to make people more cautious

No it's not. The blame game was very popular in the Eastern Bloc, and it resulted in a stagnant society where lots of things went wrong anyway. For instance, Chernobyl.

reply
> What we need is accountability and ties to real-world identity.

Who's gonna enforce that?

> If you're compromised, you're burned forever in the ledger.

Guess we can't use XZ Utils anymore cause Lasse Collin got pwned.

Also can't use Chalk, debug, ansi-styles, strip-ansi, supports-color, color-convert and others due to Josh Junon also ending up a victim.

Same with ua-parser-js and Faisal Salman.

Same with event-stream and Dominic Tarr.

Same with the 2018 ESLint hack.

Same with everyone affected by Shai-Hulud.

Hell, at that point some might go out of their way to get people they don't like burned.

At the same time, I think it would make more sense to stop relying on package managers that move fast and break things, and instead have OS maintainers review every package and include them in distros. Of course, that might also be absolutely insane (that's how you get an ecosystem that's anywhere from 2 months to 2 years behind upstream) and take 10x more work, but given all of these compromises, I'd probably take that, and old packages with security patches, over pulling random shit with npm or pip or whatever.

Though having some sort of ledger of bad actors (as opposed to people who just fuck up) might also be nice, if a bit impossible to create, because in the current world the suspects are potentially every person you don't know and can't verify is actually the one sending you patches (rather than someone impersonating them), plus anyone whose motivations aren't clear to you, especially the various "helpful" Jia Tans.

reply
Accountability is on the people using a billion third-party dependencies: you need to take responsibility for every line of code you use in your project.
reply
If you are really talking about dependencies, I’m not sure you’ve really thought this all the way through. Are you inspecting every line of the Python interpreter and its dependencies before running? Are you reading the compiler that built the Python interpreter?
reply
It's still smart to limit the amount of code (and coders) you have to trust. A large project like Python should be making sure its dependencies are safe before each release. In our own projects we'd probably be better off taking just the code we need from a library, verifying it (at least to the extent of looking for something as suspect as a random block of base64-encoded data), and copying it into our tree directly, rather than adding a ton of external dependencies, plus every dependency they pull in, and then just hoping that nobody anywhere in that chain gets compromised.
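
One cheap version of that base64 check (the regex and directory name are just an illustration) is to grep the vendored code for long base64-looking runs before committing it:

  # flag suspiciously long base64-ish literals in vendored code
  grep -rEn '[A-Za-z0-9+/]{120,}={0,2}' third_party/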
reply
> real-world identity

This bit sounds like dystopian governance, antithetical to most open source philosophies.

reply
Would you drive on bridges or ride in elevators "inspected" by anons? Why are our standards for digital infrastructure and software "engineering" so low?

I don't blame the anons but the people blindly pulling in anon dependencies. The anons don't owe us anything.

reply
A business or government can (should) separately package, review, and audit code without involving upstream developers or maintainers at all.
reply
This option is available already in the form of closed-source proprietary software.

If someone wants a package manager where all projects mandate verifiable ID, that's fine, but I don't see it getting many contributors. And I also don't see it stopping people from using fraudulent IDs.

reply
Do you know who inspected a bridge before you drive over it?
reply
There is no need for that bullshit. Guix can just set up an isolated container in seconds, not touching your $HOME at all and importing all the Python/npm/whatever dependencies on the spot.
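
For example (the packages and script are whatever your project needs):

  # throwaway container with only the named packages; no $HOME, no network
  guix shell --container python python-requests -- python3 script.py

Add --network only if the code genuinely needs it.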
reply