Remember Heartbleed in OpenSSL? That long predated LLMs, but same story: some bozo forgot how long something should/could be, and no one else bothered to check either.
reply
Hey we are the bozos
reply
Let's all get together and self-reflect on the bozos' way.
reply
I believe that once the OpenBSD team started cleaning up some of the other gross coding-style stuff as part of their fork into LibreSSL, even fairly simplistic static analysis tools could spot the underlying bugs that caused Heartbleed.
reply
The bug that caused Heartbleed was extremely obvious: read a u16 out of a packet, then copy that many bytes of the source packet into the reply packet. If someone put that code in front of you in isolation, you would spot it instantly (if you know C). The problem (and this is hugely the case with most memory safety bugs) is that it's buried under a mountain of OpenSSL TLS protocol handling details. You have to keep resident in your brain what all the inputs to the function are, and follow them through the code.
reply
It's much, much easier to run an LLM than to use a static or dynamic analyzer correctly. At the very least, the UI has improved massively with "AI".
reply
Most people have no idea how hard it is to run static analysis on C/C++ code bases of any size. There are a lot of ways to do it wrong that eat a ton of memory/CPU time or start pruning things that are needed.

If you know what you're doing, you can split the code up into smaller chunks that you can examine in more depth in a timely fashion.

reply
And even if that's true (and it frequently is!), detractors usually miss the underlying and immense impact of "sleeping dad capability" equivalent artificial systems.

Horizontally scaling "sleeping dads" takes decades, but inference capacity for a sleeping dad equivalent model can be scaled instantly, assuming one has the hardware capacity for it. The world isn't really ready for a contraction of skill dissemination going from decades to minutes.

reply
Most likely no one ran them, given the developer culture.
reply
There’s the classic case of the Debian OpenSSL vulnerability, where technically illegal but practically secure code was turned into superficially correct but fundamentally insecure code in an attempt to fix a bug identified by a (dynamic, in this case) analyzer.
reply