> Closed source software won't receive any reports, but it will be exploited with AI.

What makes you so sure that closed-source companies won't run those same AI scanners on their own code?

It's closed to the public, it's not closed to them!

reply
As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those who do, only a fraction give enough of a shit to do anything until they're caught with their pants down.
reply
Seconded.

Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."

There are unexploited security holes in enterprise software you could drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company running Python 2.7 (no, not patched) on each and every machine their software runs on, at some of the biggest companies in this world. They just don't care about updating it, because why should they? There is no incentive. None.

reply
Yeah, it's fundamentally an issue of asymmetric economics.

Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.

But for bad actors, the cost of pointing an LLM at an exposed endpoint or reverse-engineered binary has dropped to near zero. The attacker's tooling just got exponentially cheaper and faster, while the enterprise defender's budget remained at zero.

reply
In theory though, there is now a new way for the community to support open source: running vulnerability scans in white-hat mode, reporting, and patching. That way they burn tokens for a project they love, even if they couldn't contribute code before.

There should be a way to donate your unused tokens every cycle to open source, like rounding up at the checkout!

reply
That sounds like a great idea. I'd love to be able to contribute the remainder of my monthly AI subscriptions for something like this, especially since some of them bill and refresh their quotas by calendar month.
reply
Hang on, why is it costly for in-house to run AI scanners but near zero for threat actors to do the same?

I've seen multiple proprietary places now including a routine AI scan of their code because it's so cheap and they may as well use up unused tokens at the end of the week.

I mean, it's literally zero because they already paid for CC for every developer. You can't get cheaper than that.

reply
Yup, closed source software is a huge pile of shit with good marketing teams. Always was.
reply
As I mentioned above, we actually do run these AI scanners on our code, but the problem is it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool actually finds different results from the other, and so it's impossible to determine a benchmark of what's secure and what's not.
reply
I think it makes it all the more apparent that shipping EAL4 code with as little design competence as possible was taking advantage of some strange scarcity economics. It's now even easier to make something with endless technical debt and security-vs-backwards-compatibility liability, but is anyone going to keep paying for things that aren't correct and to the point, once some market participants steer their agent usage toward verifiable quality at no extra cost?
reply
More eyes, more chances that someone will actually use the tools. Also, the tools and how you use them are not all the same.
reply
With enough copies of GPT printing out the same bulleted list, all bugs are

1. shallow

2. hollow

3. flat

...

reply
Came here to say the same. Same tools, plus privacy. In security, two different defense mechanisms are always better than one.
reply
Same tools A, B and C, but minus tools D, E and F, and with a smaller chance that any tools at all will even be used.

Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.

reply
> Same tools A, B and C, but minus tools D, E and F,

Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right?

reply
The point being that there are always going to be more eyes, and more knowledge of available tools (i.e. including "D, E and F"), and more experience using them, with open source than with a single in-house dev team.
reply
There's no more "eyes" though, it's all models, and they are all converging pretty damn fast.
reply
If true then logically it will be sufficient to run this "master model" once before any code release for the level playing field to be restored. After all, even open-source software is private until it is released.
reply
Fair enough
reply
Because they're a company. Even if the bar to entry is low enough for a normal-sized American to step over, that doesn't mean they will do it, or do it in a systematic way. We know very well that nothing about AI is naturally systematic, so why assume this will happen systematically?
reply
> Closed source software won't receive any reports

Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.

Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.

reply
Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have changed in favour of the bad actors - at least from my uninformed standpoint.

That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).

reply
Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, both by the owners and by the community.
reply
But also tools that won't be nice and report security vulnerabilities, but will exploit them instead.

There is no guarantee that open means that they will be discovered.

reply
That's absolutely our plan. We have bug bounty programs, we have internal AI scanners, we have manual penetration testing, and a number of other things that enable us to push really hard to find this stuff internally rather than relying on either the good people in the open source community or hackers to find our vulnerabilities.
reply
+1, at this point all companies need to be continuously testing their whole stack. The dumb scanners are now a thing of the past; the second your site goes live, it will get slammed by the latest AI hackers.
reply
> Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates

So, just like pre-AI, or worse?

reply
You don't even need a bug bounty program. In my experience there's an army of individuals running low-quality security tools, spamming every endpoint they can think of (webmaster@ support@ contact@ gdpr@ etc.) with silly non-vulnerabilities and asking for $100. They suck now, but they will get more sophisticated over time.
reply
deleted
reply
deleted
reply
I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least blocking the easiest method of finding zero-days - that is, being open source.

This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here, any open-source business stands to lose way more by continuing to be open-source vs. relying on the benevolence of people scanning their code for them.

reply
> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.

Actually the opposite is obvious - the comment you replied to talked about an abundance of good Samaritan reports. It's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.

> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits

That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.

> any open-source business stands to lose way more

That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?

You seem to forget that the number of vulnerabilities in a given app is finite: an open source app will reach a secure status much faster than a closed source one, in addition to gaining from a shorter time to market.

In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.

reply
The main drawback is that you will need to be able to patch quickly in the next 3-5 years. We are already seeing a few solutions get attention from various AI-driven security efforts, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is most likely turning problematic. Especially if attackers start exploiting faster and botnets start picking up vulnerabilities faster.

But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base with its changesets, our configuration that depends on it, company-internal domain knowledge, and previous upgrade failures into a folder, and tell the AI to figure out upgrade risks. Bonus points if you have decent integration tests or test setups to run all of that through.
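A minimal sketch of that "dump everything into a folder" idea, assuming a git checkout plus a directory of internal markdown notes (all names here are hypothetical, and the prompt wording is just one way to phrase it):

```python
import subprocess
from pathlib import Path


def build_prompt(changelog: str, notes: str) -> str:
    """Assemble a single upgrade-risk prompt from repo history and internal notes."""
    return (
        "You are assessing upgrade risk for this code base.\n\n"
        "## Recent changesets\n" + changelog + "\n\n"
        "## Internal upgrade notes and past failures\n" + notes + "\n\n"
        "List the riskiest upgrade paths and which integration tests to run first."
    )


def collect_context(repo_dir: str, notes_dir: str, max_commits: int = 50) -> str:
    """Gather recent `git log` output plus every note file into one prompt."""
    changelog = subprocess.run(
        ["git", "-C", repo_dir, "log", f"-{max_commits}", "--stat"],
        capture_output=True, text=True, check=True,
    ).stdout
    notes = "\n\n".join(
        p.read_text() for p in sorted(Path(notes_dir).glob("*.md"))
    )
    return build_prompt(changelog, notes)
```

The resulting prompt can then be fed to whatever agent you use; the point is just that the "folder dump" is a few lines of glue code, not a project.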

It won't be perfect, but combine that with good tiered rollouts and increased rollout velocity is entirely possible.

It's kinda funny to me: a lot of the agentic hype seems to hugely reward good practices like cooperation, documentation, unit testing, integration testing, and local test setups.

reply
Some users might be tech-savvy and have the capacity to check the codebase. If a company wants to use your platform, it can run an audit with its own staff. These are people genuinely concerned about the code, not "good samaritans".
reply
A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed, rather than illegally exploit them, since most people aren't criminals.
reply
Exactly. Who even hacks stuff? Most people will report the issue to earn XP and level up rather than actually exploit it.
reply
Isn’t that security by obscurity?
reply
I’ve recently set up nightly automated pentest for my open-source project. I’m considering starting to publish these reports as proof of security posture.

If the cost of security audit becomes marginal, it would seem reasonable to expect projects to publish results of such audits frequently.

There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.

reply
What do you use for the pentests? Any OSS libraries?
reply
This is a sandbox escape pentest so the only tooling needed is Claude Code and a simple prompt that asks it to follow a workflow: https://github.com/airutorg/airut/blob/main/workflows/sandbo...
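For anyone curious what the nightly automation around that can look like, here's a rough sketch: a runner that feeds the workflow prompt to Claude Code's non-interactive print mode (`claude -p`) and saves a dated report. The report layout and file names are my own invention, not anything from the linked repo:

```python
import datetime
import subprocess
from pathlib import Path


def report_path(reports_dir: str, day: datetime.date) -> Path:
    """One dated markdown report per nightly run."""
    return Path(reports_dir) / f"pentest-{day.isoformat()}.md"


def run_nightly_pentest(workflow_file: str, reports_dir: str) -> Path:
    """Run the pentest workflow prompt through Claude Code and save the output."""
    prompt = Path(workflow_file).read_text()
    result = subprocess.run(
        ["claude", "-p", prompt],  # -p prints the final answer and exits
        capture_output=True, text=True, check=True,
    )
    out = report_path(reports_dir, datetime.date.today())
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(result.stdout)
    return out
```

Wired into a cron job or CI schedule, the accumulated reports double as the published "proof of security posture" mentioned above.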
reply
> Closed source software won't receive any reports, but it will be exploited with AI.

This is what worries me about companies sleeping on using AI to, at a bare minimum, run code audits and evaluate their security routinely. I suspect that as models get better, we're going to see companies being hacked at a level never seen before.

Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI because nobody wants to do the due diligence of having a company run security audits on their systems.

reply
We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.
reply
"Security through obscurity" is a term popularized entirely by the long-standing consensus among security researchers, and any expert not being paid to say otherwise, that it is a bad idea that doesn't work.
reply
I agree with this too,

but with cal.com I don't think this is about security lol

Open source will always be an advantage; you just need to decide whether it aligns with your business needs.

reply
Given what the clankers can do unassisted, and what more they can do when you give them Ghidra, no software is "closed source" anymore.
reply
Guess that kind of depends on your definition of "source", I personally wouldn't really agree with you here.
reply
Absolutely agree with you if we're talking about clean-room reverse engineering; but in the context of finding vulnerabilities it's a completely different story.
reply
I mean, to an LLM, is there really any difference between the actual source and disassembled source? Informative names and comments probably help them too, but it's not clear that they're necessary.
reply
Which models have you had good luck with when working with ghidra?

I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly and don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.

reply
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM can't retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside input gets handled by the application.
reply
> Closed source software won't receive any reports, but it will be exploited with AI

How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as it does against source code.

But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to the source code.

reply
Claude is already shockingly good at reverse engineering. Try it – it's really a step change. It has infinite patience which was always the limited resource in decompiling/deobfuscating most software.
reply
It's SaaS though. You don't have access to the binary to decompile. There's only so much you can reverse-engineer through public URLs and APIs, especially if the SaaS uses any form of automatic bot-traffic detection.
reply
Thank you. This is what the parent post was trying to say; I don't know why it is down-voted. AI or not, if the API endpoints are well secured (for example, using UUIDv7 identifiers), then there is little the AI can gain from those endpoints alone.
reply
The opposite is true. Open source barely matters to attackers, especially attacks that can be automated. It mostly enables more people (or agents, or people with agents) to notice and fix your vulnerabilities. Secrecy and other asymmetries in the information landscape disproportionately benefit attackers, and the oft-repeated corporate claim that proprietary software is more secure is summarily discounted by most cybersecurity professionals, whether in industry or academic research. Security is also seldom the real motivation for making products proprietary, but it's more PR-friendly to claim that closing your source code is for security reasons than to say it's for competitive advantage or control over your customers.
reply
Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
reply
This might be the most painfully obvious advertisement I’ve ever seen on a forum.
reply
I didn't mean it as such, but I can see why it would seem so. I've edited the link out now. Thanks for the feedback.
reply