It feels like a dunk to write that. But I genuinely do think there's so much motivated reasoning on both sides of this issue, and one signal of that is when people tip their hands like this.
I was going to argue that companies got to choose their own auditors, so of course there were some bad ones out there. But looking at the market, it seems like (1) the race to the bottom has gotten ridiculous, and (2) the insurance companies do not currently trust the auditors in any meaningful way. So, yeah, point to you.
Once upon a time, I went through SOC2 audits where the auditors asked lots of questions about Vault and really tried to understand how credentials got handled. Granted, that was exceptional even at the time.
But that still leaves a whole pile of other audits and regulatory frameworks I need to comply with. Probably most of these frameworks will eventually accept "The code was written by an LLM and reviewed by an actual programmer." I am less certain that you'll be able to get away with vibe coding regulated systems any time soon.
My thing here is: you want to summon some kind of deus ex machina reason why the unpredictability (say) of agent-generated software will fail in the real world, but the concrete reason you came up with falls apart pretty quickly. Which makes me think the argument is less about the world as it is and more about the world as you'd hope it would be, if that makes sense.
Would you have the same reaction to requiring an approval for a production deployment? That’s driving the development process.
---
Also jfc I need to cool it with the buzzwords, sorry I just got home from “talk like this all day” $job