It's always the inconsistencies which amaze me, from the article:

> I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet

You have "so many"? Are they uncountable for some reason? You "haven't validated" them? How long does that take?

> found a total of five Linux vulnerabilities

And how much did it cost you in compute time to find those 5?

These articles are always fantastically light on the details which would make their case for them. Instead it's always breathless prognostication. I'm deeply suspicious of this.

reply
> And how much did it cost you in compute time to find those 5?

This is the last thing I'd worry about if the bug is serious in any way. You have attackers like nation states with huge budgets to rip your software apart with AI and exploit your users.

Also there have been a number of detailed articles about AI security findings recently.

reply
I'd be interested in how it compares (in terms of time, money and false positives) with fuzzing.
reply
You are suspicious because you probably haven't worked anywhere that's AI-first. Anyone who's worked at a modern tech company will find this absolutely believable.

Like what, you expect Nicholas to test each vuln when he has more important work to do (i.e. his actual job)?

reply
What models are you using, on what type of codebases, with what tools?
reply
Apart from the obvious PR (if you were ever going to lean into the AI wave, this of all places is it) and fanboyism, which is just part of human nature, why can't both be true?

It can genuinely excel at some things while being less than helpful at others. These have been computers from the beginning, rehashed 1000x and now with an extra twist.

reply