upvote
TBH this is a pretty good way of looking at it. Yeah, we're seeing an explosion of vulnerabilities being found right now, but that (hopefully) means those vulnerabilities are all being cleaned up and we're entering a more hardened era of software. Minus the software packages that are being intentionally put out as exploits, of course. Some might say that's too optimistic and naive, but I think you have a good point.
reply
I agree with the prediction but not the timing. We won't enter a more hardened era of software until after a long period of security vulnerabilities.

Rivers caught on fire for a hundred years before the EPA was formed.

reply
New code will also use these tools from the get go, hopefully vastly reducing the vulnerabilities that make it to prod to begin with.
reply
The future may be distributed quite unevenly here, as they say, with a divergence between a small amount of "responsible" code in systems which leverage AI defensively, and a larger amount of vibe-coded / prompt-engineered code in systems which don't go through the extra trouble, and in fact create additional risk by cutting corners on human review. I personally know a lot of people using AI to create software faster, but none of them have created special security harnesses a la Mozilla (https://arstechnica.com/information-technology/2026/05/mozil...).
reply
> we're entering a more hardened era of software

This is one force that operates. Another is that, in an effort to avoid depending on such a big attack surface, people are increasingly rolling their own code (with or without AI help) where they might previously have turned to an open source library.

I think the effect will generally be an increase in vulnerabilities, since the hand-rolled code hasn't had the same amount of time soaking in the real world as the equivalent OS library; there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did. But the vulnerabilities will have much narrower scope: If you successfully exploit an OS library, you can hack a large fraction of all the code that uses it, while if you successfully exploit FooCorp's hand-rolled implementation, you can only hack FooCorp. This changes the economic incentive of finding vulnerabilities to exploit -- though less now than in the past, when you couldn't just point an LLM at your target and tell it "plz hack".

reply
If I hand-roll my logging library, I'm unlikely to include automatic LDAP requests based on message text (the infamous Log4j vulnerability).
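
To make that concrete, here's a rough Python sketch (the resolver names are made up for illustration) of the difference: a hand-rolled logger treats message text as inert data, while a Log4j-style lookup feature interprets `${...}` tokens found in user-supplied text:

```python
import re

def log_plain(msg):
    # hand-rolled: the message text is inert data, nothing gets interpreted
    return msg

def log_with_lookups(msg, resolvers):
    # Log4j-style hazard: "${scheme:arg}" tokens inside the message are
    # resolved -- in Log4Shell's case, "${jndi:ldap://...}" in attacker-
    # controlled text triggered an actual network request
    def expand(m):
        scheme, arg = m.group(1), m.group(2)
        return resolvers.get(scheme, lambda a: m.group(0))(arg)
    return re.sub(r"\$\{(\w+):([^}]*)\}", expand, msg)
```

The hand-rolled version simply never grows the second function, so that whole vulnerability class is absent by construction.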
reply
I’m seeing a lot of similar things during code reviews of substantially LLM-produced codebases now. Half-baked bad ideas that probably leaked in from training sets.
reply
Typically when hand-rolling code you implement only what you require for your use case, while a library will be more general purpose. As a consequence of doing more, the library has more code and more bugs.

Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.

For supply chain security and bug count, I'll take a focused custom implementation of specific features over a library full of generalized functionality.
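
As a sketch of what I mean, a focused pad helper that covers only the edge cases you actually care about (hypothetical requirements here: non-string input, a width shorter than the string, a sane fill character) is small enough to review in full:

```python
def left_pad(value, width, fill=" "):
    # focused implementation: coerce to str, never truncate,
    # and reject nonsensical arguments up front
    s = str(value)
    if width < 0:
        raise ValueError("width must be non-negative")
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    return fill * (width - len(s)) + s if len(s) < width else s
```

Ten lines you wrote and understand, versus a dependency whose behaviour on odd inputs you have to trust or audit.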

reply
Yes, a lot hinges on how little you can get away with implementing for your use case. If you have an XML config file with 3 settings in it, you probably won't need to implement handling of external entities the way a full XML parsing library would, which will close off an entire class of attendant vulnerabilities.
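
To make that concrete (a sketch with made-up setting names): pulling three known settings out of a deliberately trivial config doesn't need a general-purpose XML parser at all. No DTD handling, no entity expansion, so the XXE class of bugs can't occur:

```python
import re

def read_settings(text):
    # hypothetical setting names, for illustration only;
    # assumes values never contain '<'
    settings = {}
    for key in ("host", "port", "timeout"):
        m = re.search(rf"<{key}>([^<]*)</{key}>", text)
        if m:
            settings[key] = m.group(1)
    return settings
```

This only works because the format is constrained on purpose; the moment the config grows real XML features, you'd want a proper parser with external entity resolution disabled.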

> Also, even seemingly trivial libraries can have bugs. The infamous leftpad library didn't handle certain edge cases properly.

This isn't really an argument in favour of having the average programmer reimplement stuff, though. For it to be, you'd have to argue that the leftpad author was unusually sloppy. That may be true in this specific case, but in general, I'm not persuaded that the average OSS author is worse than the average programmer overall. IMHO, contributing your work to an OSS ecosystem is already a mild signal of competence.

On the wider topic of reimplementation: Recently there was an article here about how the latest Ubuntu includes a bunch of coreutils binaries that have been rewritten in Rust. It turns out that, while this presumably reduced the number of memory corruption bugs (there was still one, somehow; I didn't dig into it), it introduced a bunch of new vulnerabilities, mostly caused by creating race conditions between checking a filesystem path and using the path for something.
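
The pattern in question, sketched in Python (POSIX-only; the assumption is that the Rust rewrites had the same shape): checking a path and then using it in a separate call leaves a window where the filesystem can change underneath you, while doing it in one syscall closes that window:

```python
import os

def open_racy(path):
    # TOCTOU: between islink() and open(), an attacker can swap the
    # path for a symlink pointing somewhere sensitive
    if os.path.islink(path):
        raise OSError("refusing symlink")
    return os.open(path, os.O_RDONLY)

def open_safely(path):
    # atomic: the kernel refuses a symlink in the same call that opens,
    # so there is no check/use gap to race
    return os.open(path, os.O_RDONLY | os.O_NOFOLLOW)
```

Same intent, same line count, but only one of them can be raced.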

reply
This argument goes even further. If you have only 3 settings, why does it need to be an xml file?
reply
ETA: I'm not saying it has to, I'm saying it's possible to imagine reasons that would justify this decision in some cases.

Because it might grow in future and you want to allow flexibility for that; because it might be the input to or output from some external system that requires XML; because your team might have standardised on always using XML config files; because introducing yet another custom plain-text file format just creates unnecessary cognitive load for everyone who has to use it. Those are all real-world reasons I can think of.

But really I was just looking for a concrete example where I know the complexity of the implementation has definitely caused vulnerabilities, whether or not the choice to use it to solve the problem at hand was sensible. I have zero love for XML.

reply
deleted
reply
>there's no reason to assume the average author would magically create fewer bugs than the original OS library authors initially did

Have you read this old code? It's terrible, often written in C with no care at all for security. AI is much, much better at writing code.

reply
Do you have a specific library in mind? I think it would have to be an ancient, unmaintained C library.

But I think most OSS code isn't like this -- even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel, GNU userland, PostgreSQL, Python.

reply
> even C code born long ago, if it's still in wide use, has been hardened by now. Examples: Linux kernel

There have been two LPE vulnerabilities and exploits in the Linux kernel announced today, after the one announced just last week. I don't think as much of the C code born long ago has been as carefully hardened as you think.

(Copy Fail 2 and Dirty Frag today, and Copy Fail last week)

reply
One. "Copy Fail 2" and "Dirty Frag" are the same thing.
reply
And considering the size of the kernel, I call this stupendously good.

You (anyone, not you personally) write that much code yourself and let's see how well you did in comparison.

reply
Sure, I didn't mean to say that these examples are guaranteed 100% safe -- just that I trust them to be enormously more safe than software that accomplishes the same task that was hand-written by either a human or an LLM last week.
reply
To be fair, to some extent that’s up to us. Time to get cleaning, I guess.
reply
Are you intentionally avoiding saying ‘thanks to LLMs’, or is it implicit? All these recent mega-bugs surface with lots of fuzzing and agentic bashing, right?
reply
Thank you for reminding us all that you AI bros are still the most obnoxious people there are.
reply
Having casually read into a few recent incidents, the vector has often been outside the software itself: a lot of misconfigurations, or simply attacking the human in the chain. And nation states have basically unbounded resources for everything from bribes and insiders to standing up entire companies.
reply
I think it will be an arms race in the future as well. It's easier to fix known vulnerabilities automatically, but also easier to find new ones, and we get the occasional AI fuckup instead of the occasional human fuckup.
reply
Yeah.

Right now it kinda feels to me like "Open Source" is the Russian army, relying on its sheer numbers and its huge quantity of equipment, much of which is decades old.

Meanwhile attackers and bug hunters are like the Ukrainians, using new, inexpensive, and surprisingly powerful tools that none of the Open Source community has ever seen in the past, and for which it has very little defence capability.

The attackers with cheap drones or LLMs are completely overwhelming the old school, who perhaps didn't notice how quickly the world has changed around them, or did notice but cannot do anything about it quickly enough.

reply
Well this argument was certainly inventive. What a weird impression to have about these things.

Who exactly is the innocent little Ukraine supposed to be that the big bad open source is supposed to be attacking to, what? Take their land and make the OSS leader look powerful and successful at achieving goals, to distract from their fundamental awfulness? And who is the North Korean cannon fodder purchased by OSS while we're at it?

Yeah it's just like that, practically the same situation. The authors of gnu cp and ls can't wait to get, idk, something apparently, out of the war they started when they attacked, idk, someone apparently.

reply
New software is being generated faster than it can be adequately tested. We are in the same place we’ve always been; except everything is moving much too fast.
reply
This is exactly the feeling I have. First: excessive growth of dependencies fueled by free components.

* with internet access to FOSS via sourceforge and github we got an abundance of building blocks

* with central repositories like CPAN, npm, pip, cargo and docker those building blocks became trivially easy to use

Then LLMs and agents added velocity to building apps and producing yet more components, feeding back into the dependency chain. Worse: new code with unattributed reuse of questionable patterns found in unknowable versions of existing libraries. That is, implicit dependencies on fragments of a multitude of packages.

This may all end well ultimately, but we're definitely in for a bumpy ride.

reply
This assumes that there are no new exploits being generated.

We're seeing maintainers retreat from maintaining because the amount of AI slop being pushed at them is too much. How many are just going to hand over the maintenance burden to someone else, and how many of those new maintainers are going to be evil?

The essential problem is that our entire system of developing civilisation-critical software depends on the goodwill of a limited set of people to work for free and publish their work for everyone else to use. This was never sustainable, or even sensible, but because it was easy we based everything on it.

We need to solve the underlying problem: how to sustainably develop and maintain the software we need.

A large part of this is going to have to be: companies that use software to generate profits paying part of those profits towards the development and maintenance of that software. It just can't work any other way. How we do this is an open question that I have no answers for.

reply
That is already how it works. The loner hacker in mom's basement working for free on his super-critical OSS package is largely a myth. The vast majority of OSS code is contributed by companies paying their employees to work on it.
reply
I'm thinking of projects like curl [0].

This is a cornerstone of modern software development. If it died, or if it got taken over by a malicious entity, every single company on the planet would have an immediate security problem. Yet the experience of that maintainer is bad, verging on terrible [1].

We need to do better than this.

[0] https://curl.se/docs/governance.html

[1] https://lwn.net/Articles/1034966/

reply
>As an example, he put up a slide listing the 47 car brands that use curl in their products; he followed it with a slide listing the brands that contribute to curl. The second slide, needless to say, was empty.

>He emphasized that he has released curl under a free license, so there is no legal problem with what these companies are doing. But, he suggested, these companies might want to think a bit more about the future of the software they depend on.

There is little reason for minimal-restriction licenses to exist other than to allow corporate use without compensation or contribution. I would think that by now any hope that companies would voluntarily be less exploitative than they can be would have been dashed.

If you aren't getting paid or working purely for your own benefit, use a protective license. Though, if thinly veiled license violation via LLM is allowed to stand, this won't be enough.

reply
There is a lot of opposition in the FOSS community to restrictive/protective licenses. And to be fair, this comes from a consistent and entirely logical worldview.

There's a bunch of problems with getting companies to pay for this, too - that sense of entitlement (or even contractual obligation), the ability to control the project with cash, etc.

I don't have any answers or solutions. But I don't think we can hand-wave the problem away.

reply
The problem is that they get away too easily with bugs in the products they ship to customers. If that came with some penalties, there would be an incentive to invest in security, and this would probably often flow back to upstream projects.
reply
Like a money-back guarantee?

Like you get when you buy e.g. MS products?

/s

reply
I am not talking about the open-source projects, but the downstream products such as cars that integrate curl.
reply
The sad truth about open source in 2026 is that it does not serve the society the way it is advertised or did back in the 90s.
reply
How so? We have open source operating systems running on a whole slew of systems ages apart, and interesting ideas and open collaboration coming out of the OS world.

This opposed to closed off “products” that change at the whims of the company owning it.

reply
Statistically, most of it is created to serve marketing, personal, or other agendas, and is sponsored through the corresponding means.

There’s a lot of misconception about how open source comes to be; only a very small part of it, still significant of course, was really created for the benefit of a community. There are exceptions, but dig into the organisational culture and origins and you’ll see the pattern. Also, thousands of projects are made for the satisfaction of the author himself, being highly intelligent and high on algorithmic dopamine.

reply
There is an xkcd about that, I think
reply
Faults are injected into the code at a constant rate per developer. Then there's the intentional injections.

Auto-installing random software is the problem. It was a problem when our parents did it, why would it be a good idea for developers to do it?

reply
This is related to a massive annoyance of mine: when I run a piece of software and the system is missing a required dependency, I want the software to *tell me* that dependency is missing so I can make a decision about proceeding or not. Instead it seems that far too often software authors will try and be “clever” by silently installing a bunch of dependencies, either in some directory path specific to the software, or even worse globally.

I run a distro that often causes software like this to break because their silent automatic installation typically makes assumptions about Linux systems which don’t apply to mine. However I fear for the many users of most typical distros (and other OS’ in general as it’s not just a Linux-only issue) who are subject to having all sorts of stuff foisted onto their system with little to no opportunity to easily decide what is being heaped upon them.
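
A sketch of the behaviour I'd actually like (the helper name is hypothetical): check for the dependency, tell the user, and let them decide, instead of quietly installing anything:

```python
import importlib.util
import sys

def require(module_name):
    # fail loudly: report the missing dependency and let the user decide
    # how (and whether) to install it -- never install anything silently
    if importlib.util.find_spec(module_name) is None:
        sys.exit(f"error: missing dependency '{module_name}'; "
                 f"please install it with your package manager and re-run")
```

Two extra lines per dependency, and the user keeps control of their own system.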

reply
Ruby gems and CPAN have build scripts that rebuild stuff on the user's device (and warn you if they can't find a dependency). But I believe it was one of Python's tools that started the trend of downloading binaries instead of building them. Or was it npm?
reply
curl ... | sudo bash

yolo!

reply
What we are seeing so far come out of the AI agent era is reduced, not increased, code quality. The few advances are far outweighed by all the slop that's thrown around, and that's unlikely to change.

> any useful piece of software has been fuzz tested, property tested and formally verified.

That would require effort: human effort and extra token cost. Not going to happen; people would rather move fast and break things.

reply
Isn't blaming AI for that similar to blaming C for buffer overflows?

More people are producing more code because of easier tools. Most code is bad. But that's not the tools' fault.

And in the end it is a problem of processes and culture.

reply
We are not in disagreement here. I'm not blaming AI, I'm blaming the culture around its use.
reply
We'll need those animal bones if all the industrial control systems get turned against us.

Nuclear might be airgapped but what about water, power…?

reply