1. Pick a file to seed as a starting place.
2. Ask the LLM (in an agent harness) to find a vulnerability by starting there.
3. If it claims to have found something, ask another one (in a fresh session) to verify it: write an exploit, a proof of concept, or otherwise prove it's real.
4. If both conclude there is a vuln, then with the latest models you almost certainly found something real.
Just run that loop against every file in a repo, or a subset, or have an LLM pick the files with a simple "which files look most likely to have vulns?". A rough sketch of the loop is below.
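To be concrete, the whole thing fits in a short script. This is a minimal sketch, assuming a generic `ask_agent` placeholder for whatever agent harness / LLM API you actually use; the function names and prompts here are illustrative, not any specific tool's API:

```python
# Sketch of the two-stage loop: a "finder" agent per file, then a
# "verifier" agent to confirm the claim before you believe it.
from pathlib import Path

def ask_agent(prompt: str) -> str:
    # Placeholder: wire this to your agent harness of choice
    # (Claude Code, an OpenAI agent, whatever you run locally).
    raise NotImplementedError("connect to your LLM agent here")

def audit_file(path: Path) -> str | None:
    # Stage 1: ask one agent to hunt for a vulnerability starting from this file.
    finding = ask_agent(
        f"Starting from {path}, look for an exploitable vulnerability "
        "in this repository. Reply NONE if you find nothing."
    )
    if "NONE" in finding:
        return None
    # Stage 2: ask a second agent to independently verify the claim.
    verdict = ask_agent(
        f"Another model reported this potential vulnerability:\n{finding}\n"
        "Try to write a proof-of-concept exploit or otherwise verify it. "
        "Reply CONFIRMED or NOT CONFIRMED, with reasoning."
    )
    confirmed = "CONFIRMED" in verdict and "NOT CONFIRMED" not in verdict
    return finding if confirmed else None

if __name__ == "__main__":
    repo = Path(".")
    for f in repo.rglob("*.c"):  # or whatever subset / LLM-chosen file list
        if hit := audit_file(f):
            print(f"Possible vuln in {f}:\n{hit}\n")
```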
So basically yes, it is that simple. It's just a matter of having the money to pay for the tokens.
Linus' law ("given enough eyeballs, all bugs are shallow") was wrong because there were never enough qualified eyeballs actually checking the code. LLMs provide an ample supply of eyeballs, though that's not an advantage for open source specifically, since proprietary developers can use the same LLMs.
Thanks to agents and tool calling, there are now entire business cases that can be covered end to end by AI tooling, the next step after microservices, serverless and the rest. Naturally with a much smaller team than was previously required.