I suppose I should have been more specific--pattern matching in text. We humans do a lot more than processing ASCII bytes (or whatever encoding you like) and looking for semantically nearby ones, if "only" because we have sensors which harvest far more varied data than a 1D character stream. Security researchers may get an icky feeling when they notice something or other in a system they're analyzing, which eventually leads to something exploitable. Or they may beat their head against a problem all day at work on a Friday, go to the bar afterwards, wake up with a terrible hangover Saturday morning, go out to brunch, and while stepping off the bus on the way to the zoo after brunch an epiphany strikes like a flash and the exploit unfurls before them unbidden like a red carpet. LLMs do precisely none of this. And then we can go into their deficiencies--incapable of metacognition, incapable of memory, incapable of reasoning (despite the marketing jargon), incapable of determining factual accuracy, incapable of estimating uncertainty, ...
I won't argue whether their "human-like" marketing is dumb, but I will argue that whatever LLMs are doing is plenty sufficient to find the vast majority of vulnerabilities. Don't tell my employer I said that, though.
That's awesome, and I'd love to see a whole bunch of data backing it up. If I were in a position to buy a product to do vuln scanning, and somebody showed me convincing evidence that this machine does the job... you've got a deal. I can't imagine why they didn't do that, if it indeed works.