1. They are not going to include everything. This includes things like new file formats.
2. They are going to be out of date whenever a standard changes (HTML, etc.), an application changes (e.g. SQLite/PostgreSQL/etc. for SQL/ORM bindings), or an API changes (DirectX, Vulkan, etc.).
3. Things like data structures, graphics APIs, etc. will have performance characteristics that may not match your use case.
4. They can't cover all niche use cases, such as the different libraries and frameworks for creating games of different genres.
For example, Python's XML DOM implementation only implements a subset of XPath and doesn't support parsing HTML.
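To make that concrete, here's roughly what the stdlib gives you today, using ElementTree (which is where the XPath-flavoured support actually lives); a quick sketch assuming a recent Python 3, not an exhaustive list of the gaps:

```python
# Simple paths and attribute predicates work, but most of real XPath does
# not, and there's no forgiving HTML parser in the stdlib at all.
import xml.etree.ElementTree as ET

doc = ET.fromstring("<root><item id='1'>a</item><item id='2'>b</item></root>")

print(doc.findall(".//item[@id='2']"))            # supported subset: works

try:
    doc.findall(".//item[contains(@id, '2')]")    # real XPath function: rejected
except SyntaxError as exc:
    print("unsupported XPath:", exc)
```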
The fact that Python, Java, and .NET have large library ecosystems proves that even if you have a "Batteries Included" approach there will always be other things to add.
There's clearly merit to both sides, but personally I think a major underlying cause is that libraries are trusted. Obviously that doesn't match reality. We desperately need a permission system for libraries; it's far harder to sneak something in when doing so requires an "adds dangerous permission" change approval.
For C#, I think they achieved that.
You might want to elaborate on the "etc.", since HTML updates are glacial.
The PNG spec [7] has been updated several times in 1996, 1998, 1999, and 2025.
The XPath spec [8] has multiple versions: 1.0 (1999), 2.0 (2007), 3.0 (2014), and 3.1 (2017), with 4.0 in development.
The RDF spec [9] has multiple versions: 1.0 (2004), and 1.1 (2014). Plus the related specs and their associated versions.
The schema.org metadata standard [10] is under active development and is currently on version 30.
[1] https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... (New)
[2] https://web.dev/baseline/2025 -- popover API, plain text content editable, etc.
[3] https://web.dev/baseline/2024 -- exclusive accordions, declarative shadow root DOM
[4] https://web.dev/baseline/2023 -- inert attribute, lazy loading iframes
[5] https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... (Baseline 2023)
[6] https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/... (2020)
[7] https://en.wikipedia.org/wiki/PNG
[8] https://en.wikipedia.org/wiki/XPath
[9] https://en.wikipedia.org/wiki/Resource_Description_Framework
[10] https://schema.org/
Yes, they cannot include everything, but they can include enough that you do not _need_ third-party packages.
Django and Spring
So Python's clearly not "batteries included" enough to avoid this kind of risk.
I think packages of a certain size need to be held to higher standards by the repositories. Multiple users should have to approve changes. Maybe enforced scans (though with Trivy’s recent compromise, that won’t be likely any time soon).
Basically anything besides a lone developer being able to decide, on a whim, to send something out that will run on millions of machines.
It's like the difference between protecting your home from burglars and from soldiers of a foreign nation. Both are technically invaders of your home, but the scope is different, and the solutions are different.
Well, there are other things. Maven doesn’t allow you to declare “version >= x.y.z” and doesn’t run arbitrary scripts upon pulling dependencies, for one thing. The Java classpath doesn’t make it possible to have multiple versions of the same library at the same time. That helps a lot too.
NPM and the way node does dependency management just isn’t great. Never has been.
Unless you are Python, where the standard library includes multiple HTTP libraries and everyone installs the requests package anyways.
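A rough before/after of why that happens; the URL below is just a placeholder:

```python
import json
import urllib.request

# stdlib: works fine, but you wire up headers, decoding and JSON yourself
req = urllib.request.Request(
    "https://api.example.com/data",          # placeholder URL
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# third party: the same call in one line, plus connection pooling and
# nicer error handling
#   import requests
#   data = requests.get("https://api.example.com/data", timeout=10).json()
```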
Few languages have good models for evolving their standard library, so you end up with lots of bad designs sticking around forever. Libraries are much easier to evolve, giving them the advantage in terms of developer UX and performance.
I removed the locks from all the doors, now entering/exiting is 87% faster! After removing all the safety equipment, our vehicles have significantly improved in mileage, acceleration and top speed!
Initially I assumed this was sarcastic, but apparently not. UX and performance are what programmers are paid for! Making sure the UX is good is one of the most important parts of a programmer's job.
Security, meanwhile, is a moving target: a goal, something that can never be perfect, just "good enough" (if the NSA wants to hack you, they will). You make it sound like installing third-party packages is basically equivalent to a security hole, while in practice the risk is low, especially if you don't overdo it.
Wild to read extreme security views like that, while at the same time there are people here that run unconstrained AI agents with --dangerous-skip-confirm flags and see nothing wrong with it.
And yes, we agree that running unconstrained AI agents with --dangerous-skip-confirm flags and seeing nothing wrong with it is insane. Kind of like just advertising for burglars to come open your doors for you before you get home - yeah, it's lots faster to get in (and to move about the house with all your stuff gone).
Depends. If you had to add to a Makefile for your dependencies, you sure as hell aren't going to add 5k dependencies manually just to get a function that does $FOO; you'd write it yourself.
Now, with AI in the mix, there are fewer and fewer reasons to use so many dependencies.
Compare the amount of time spent defining the same data structures over and over again vs. `pip install requests` with well-defined data structures already in place.
> Few languages have good models for evolving their standard library
Can you name some examples? (Please do correct me if this is wrong; again, I don't have the experience myself.)
Nor is fetch a good client-side API; you want progress indicators on both upload and download. Fetch is a poor API all round.
No. Axios is still maintained. They have not deprecated the project in favor of fetch.
It’s not needed anymore.
I understand why this doesn't work well with legacy projects, but it's something that the language could strive towards.
Yes - the postinstall hook attack vector goes away. You can do SHA pinning since Git's content addressing means that SHA is the hash of the content. But then your "lockfile" equivalent is just... a list of commit SHAs scattered across import statements in your source? Managing that across a real dependency tree becomes a nightmare.
This is basically what Deno's import maps tried to solve, and what they ended up with looked a lot like a package registry again.
At least npm packages have checksums and a registry that can yank things.
In my experience, this works great for libraries internal to an organization (UI components, custom file formats, API type definitions, etc.). I don't see why it wouldn't also work for managing public dependencies.
Plus it's ecosystem-agnostic. Git submodules work just as well for JS as they do for Go, sample data/binary assets, or whatever other dependencies you need to manage.
The irony is that this is actually the current best practice for defending against supply chain attacks in the GitHub Actions layer: pin all action versions to a hash. There's an entire secondary set of dev tools for converting GHA version numbers to hashes.
Why wouldn't that work well with legacy projects? In fact, the projects I was a part of that I'd call legacy nowadays were built by copy-and-pasting .js libraries into a "vendor/" directory, and that's how we shipped them as well. This was in the days before Bower (which was the npm of frontend development back then); vendoring JS libs was standard practice before package managers became common in frontend development too.
Not sure why it wouldn't work; JavaScript is a very moldable language, you can make most things work one way or another :)
It's true that system repos don't include everything, but you can create your own repositories if you really need to for a few things. In practice Fedora/EPEL are basically sufficient for my needs. Right now I'm deploying something with Yocto, which is a bit more limited in selection, but it's pretty easy to add my own packages, and it at least has hashes so things don't get replaced without me noticing (to be fair, I don't know if the security practices of OpenEmbedded recipes are as strong as Fedora's...).
Just shipping crap from npm is essentially the equivalent of running your production code base against Arch AUR PKGBUILDs.
I'd contrast Python with Go, which has an amazing stdlib for the domains that Go targets. This last part is key--Go has a more focused scope than Python, and that makes it easier for its stdlib to succeed.
With Bun I use fewer dependencies from npm than I used from NuGet with .NET to build minimal APIs. For example, the pg driver.
And for good reason. There are enough platform differences that you have to write your own code on top anyway.
Package managers should do the same thing
Tell me about it. Using AI chatbots (not even agents), I got an MVP of a packaging system[1] to my liking (to create packages for a proprietary ERP system) and an endpoint-API-testing tool, neither of which requires a venv or similar to run.
------------------------------
[1] Okay, all it does now is create, sign, verify and unpack packages. There's a roadmap file for package distribution, which is a different problem.
Really? I thought 'asking you every time they want to do something' was called 'security fatigue' and generally considered to be a bad thing. Yes you can concatenate files in the current project, Claude.
Yes, the fewer deps people need the better, but it doesn't fix the core problem. Sharing and distributing code is a key tenet of being able to write modern code.
(Leaving aside thoughts on language syntax, compile times, tooling etc - just interested in people's experiences with / thoughts on healthy stdlibs)
Our React projects are the contrast. They live in total and complete isolation, both in development and in production. You're not going to work on React on a computer that will be connected to any sort of internal resources. We've also had to write a novel's worth of legal bullshit explaining how we can't realistically review every line of code from React dependencies for compliance.
Anyway, I don't think JS/TS is that bad. It has a lot of issues, but then, you could always have written your own wrapper on top of Node's fetch instead of using Axios. Which I guess is where working in the NIS2 compliance sector makes things a little bit different, because we'd always choose to write the wrapper instead of using one others made. With the few exceptions for Microsoft products that I mentioned earlier.
And there's plenty of libraries you'll have to pull to get a viable product.
Python (decent standard library) - It's pretty much everywhere. There are so many hidden gems in that standard library (difflib, argparse, shlex, subprocess, cmd); see the quick sketch after this list.
C#/F# (.NET)
C# feels so productive because of how much is available in .NET Core, and F# gets to tag along and get it all for free too. With C# you can compile executables down to bundle the runtime and strip it down so your executables are in the 15 MiB range. If you have dotnet installed, you can run F# as scripts.
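Re the Python entry above, a quick sketch of a couple of those gems in action, with no third-party packages involved (assumes git is on the PATH):

```python
# shlex for shell-safe tokenizing, subprocess to run the result,
# difflib for a unified diff.
import difflib
import shlex
import subprocess

cmd = shlex.split("git log --oneline -3")            # safe tokenizing
out = subprocess.run(cmd, capture_output=True, text=True).stdout
print(out)

diff = difflib.unified_diff(
    ["apples\n", "oranges\n"],
    ["apples\n", "bananas\n"],
    fromfile="before", tofile="after",
)
print("".join(diff))
```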
Do you worry at all about the future of F#? I've been told it's feeling more and more like a second-class citizen on .NET, but I don't have much personal experience.
This is exactly the world I'm working towards: packaging tooling with a virtual machine, i.e. Electron but with virtual machines instead, so the isolation aspect comes by default.
The irony in this case is that axios is not really needed now given that fetch is part of the JS std lib.
The problem is not third party libraries. It is updating third party libraries when the version you have still works fine for your needs.
If your tooling can pull a dependency from the internet, it can certainly check whether a more recent version of a vendored one is available.
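As a hedged sketch of what that could look like for a Python project, using PyPI's public JSON endpoint; the vendored-versions table and its pins are made up for illustration:

```python
import json
import urllib.request

def latest_on_pypi(name: str) -> str:
    # PyPI's JSON endpoint reports the latest published version
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)["info"]["version"]

vendored = {"requests": "2.28.0", "urllib3": "1.26.15"}   # hypothetical pins
for pkg, have in vendored.items():
    latest = latest_on_pypi(pkg)
    if latest != have:
        print(f"{pkg}: vendored {have}, latest on PyPI {latest}")
```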
I think you've identified the problem here: package management and package distribution are two different problems. Both tools have possibilities for exploits, but if they are separate tools then the surface area is smaller.
I'm thinking that the package distribution tool maintains a local system cache of packages, using keys/webrings/whatever to verify provenance, while the package management tool allows pinning, minver/maxver, etc.
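A minimal sketch of the "distribution" half of that split, assuming a content-addressed local cache (the cache path and function names are invented):

```python
# The management tool only ever records pinned digests; the distribution
# tool stores artifacts under their hash and re-verifies on every read.
import hashlib
from pathlib import Path

CACHE = Path.home() / ".pkgcache"

def store(artifact: bytes) -> str:
    """Put an artifact in the cache; return the digest the manager pins."""
    digest = hashlib.sha256(artifact).hexdigest()
    CACHE.mkdir(parents=True, exist_ok=True)
    (CACHE / digest).write_bytes(artifact)
    return digest

def fetch(digest: str) -> bytes:
    """Read a pinned artifact back, re-verifying it against its digest."""
    data = (CACHE / digest).read_bytes()
    if hashlib.sha256(data).hexdigest() != digest:
        raise ValueError("cached artifact does not match its pinned digest")
    return data
```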
Frankly, inventing a new language is irresponsible these days unless you build on top of an existing ecosystem, because you need to solve all these problems.
Or write your own stuff. Yes, that's right, I said it. Even HTTP. Even cryptography. Just because somebody else messed it up once doesn't mean nobody should ever do it. Professional quality software _should_ be customized. Professional developers absolutely can and should do this and get it right. When you use a third-party HTTP implementation (for example), you're invariably importing more functionality than you need anyway. If you're just querying a REST service, you don't need MIME encoding, but it's part of the HTTP library anyway because some clients do need it. That library (that imports all of its own libraries) is just unnecessary bloat, and this stuff really isn't that hard to get right.
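For what it's worth, a hedged sketch of how small the "write your own" version can be for the trivial case: a bare-bones HTTP/1.1 GET over a raw socket, with no MIME, no chunked uploads, no redirects. Anything public-facing would still need TLS (wrap the socket via ssl.SSLContext) and real response parsing:

```python
import socket

def tiny_get(host: str, path: str = "/", port: int = 80) -> bytes:
    # Just enough HTTP/1.1 to query a simple endpoint and read the reply.
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request)
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks)

# print only the response headers
print(tiny_get("example.com").split(b"\r\n\r\n", 1)[0].decode("latin-1"))
```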
This post is modded down (I think because of the "roll your own crypto vibe", which I disagree with), but this is actually spot on the money for HTTP.
The surface area for HTTP is quite large, and your little API, which never needed range-requests, basic-auth, multipart form upload, etc suddenly gets owned because of a vulnerability in one of those things you not only never used, you also never knew existed!
"Surface area" is a problem, reducing it is one way to mitigate.
Again, you run into the attack surface area here. Think about the Heartbleed vulnerability. It was a vulnerability in OpenSSL's implementation of the TLS/DTLS heartbeat extension, but it affected every single user, including the 99% who never used heartbeats.
Experienced developers can, and should, be able to handle things like side-channel attacks and the other gotchas that scare folks off of rolling their own crypto. The right solution here is better-defined, well-understood acceptance criteria and test cases, not blindly trusting something you downloaded from the internet.
1. It's really really hard to verify that you have not left a vulnerability in (for a good time, try figuring out all the different "standards" needed in x509), but, more importantly,
2. You already have options for a reduced attack surface: you don't need to use OpenSSL just for TLS, you can use WolfSSL (I'm very happy with it, actually). You don't need WolfSSL just for public/private key signing+encryption; use libsodium. You don't need libsodium just for bcrypt password hashing; there's already a single function to do that.
With crypto, you have some options to reduce your attack surface. With HTTP you have few to none; all the HTTP libs take great care to implement as much of the specification as possible.
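In Python terms, the "single function" point from item 2 looks roughly like this; the stdlib has no bcrypt, but scrypt is one call with a tiny surface (an illustrative sketch, not parameter advice):

```python
import hashlib
import hmac
import os

salt = os.urandom(16)
stored = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1)

# verifying a login attempt later, with a constant-time comparison
attempt = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1)
print(hmac.compare_digest(stored, attempt))    # True
```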
That's actually not really crypto, though - that's writing a parser (for a container that includes a lot of crypto-related data). And again... if you import a 3rd-party x.509 parser and you only need DER but not BER, you've got unnecessary bloat yet again.
Good luck