4/15/2026 at 4:17:26 PM
I have an open source project and started receiving a lot of security vulnerability reports in the last few months. Many of them are extreme corner cases, but some were legitimate. They're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.
by tananaev
4/15/2026 at 6:06:19 PM
> Closed source software won't receive any reports, but it will be exploited with AI.

What makes you so sure that closed-source companies won't run those same AI scanners on their own code?
It's closed to the public, it's not closed to them!
by lelanthran
4/15/2026 at 6:21:42 PM
As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those who do, only a fraction give enough of a shit about it to do anything until they are caught with their pants down.
by 440bx
4/15/2026 at 7:21:53 PM
Seconded. Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."
There are unexploited security holes in enterprise software you could drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company using Python 2.7 (no, not patched) on each and every machine their software runs on - at some of the biggest companies in this world. They just don't care about updating it, because why should they? There is no incentive. None.
by sdoering
4/15/2026 at 8:05:09 PM
Yeah, it's fundamentally an issue of asymmetric economics. Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.
But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attackers' tooling just got exponentially cheaper and faster, while the enterprise defenders' budget remained at zero.
by valeriozen
4/15/2026 at 8:22:20 PM
In theory, though, there is now a new way for the community to support open source: running vulnerability scans in white-hat mode, then reporting and patching. That way they burn tokens for a project they love, even if they couldn't actually contribute code before. There should be a way to donate your unused tokens every cycle to open source, like rounding up at the checkout!
by njyx
4/15/2026 at 11:41:56 PM
That sounds like a great idea. I'd love to be able to contribute the remainder of my monthly AI subscriptions to something like this, especially since some of them bill and refresh their quotas by calendar month.
by ValentineC
4/15/2026 at 9:05:18 PM
Hang on, why is it costly for in-house teams to run AI scanners but near zero for threat actors to do the same?

I've seen multiple proprietary shops now including a routine AI scan of their code, because it's so cheap and they may as well use up unused tokens at the end of the week.
I mean, it's literally zero because they already paid for CC for every developer. You can't get cheaper than that.
by lelanthran
4/17/2026 at 5:59:30 AM
If a company doesn't have a dedicated security team (or even a single person), this will never get done. Most software companies sadly don't hire a dedicated (software) security expert.
by theshrike79
4/15/2026 at 7:39:45 PM
Yup, closed source software is a huge pile of shit with good marketing teams. Always was.
by sevenzero
4/15/2026 at 6:32:30 PM
As I mentioned above, we actually do run these AI scanners on our code, but the problem is it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool finds different results from the others, so it's impossible to determine a benchmark of what's secure and what's not.
by baileypumfleet
4/16/2026 at 6:48:36 AM
> As I mentioned above, we actually do run these AI scanners on our code, but the problem is it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool actually finds different results from the other, and so it's impossible to determine a benchmark of what's secure and what's not.

Yeah, but with closed source it's cheaper for the defender than for the attacker - the defender can scan their sources and their PRs as well as the compiled output. The attacker can only scan the compiled output, and they have to perform repeated scans.
by lelanthran
4/15/2026 at 10:40:13 PM
I think it makes it all the more apparent that writing EAL4 code with as little design competence as possible was taking advantage of some strange scarcity economics. It's now even easier to make something with endless technical debt and security-versus-backwards-compatibility liability, but is anyone going to keep paying for things that aren't correct and to the point, once some market participants structure their agent usage toward verifiable quality without incurring extra cost any more?
by topopopo
4/16/2026 at 9:06:11 AM
> What makes you so sure that closed-source companies won't run those same AI scanners on their own code?

How many companies take the time to use penetration testing tools that have been available for many years to verify their software (or pay a penetration testing company to do a more thorough job than they have the experience to do internally)?
Some, certainly. Many, possibly. Most, I would wager not.
by dspillett
4/16/2026 at 2:38:37 PM
The economic motivation simply isn't there. I'm sure we could cherry-pick a few examples of companies where things like quality and security really are part of the culture, not just feel-good lip service. The reality is that companies are in business to make money, and cutting corners is the easiest way to pad the margins.
by necheffa
4/15/2026 at 6:19:32 PM
More eyes, more chances that someone will actually use the tools. Also, the tools - and how you use them - are not all the same.
by ihaveajob
4/15/2026 at 6:22:09 PM
With enough copies of GPT printing out the same bulleted list, all bugs are:
1. shallow
2. hollow
3. flat
...
by phendrenad2
4/15/2026 at 8:28:26 PM
Because they're a company. Even if the bar to entry can fit a normal-sized American, that doesn't mean they will do it, or do it in a systematic way. We know very well that nothing about AI is naturally systematic, so why would you assume this will happen systematically?
by cyanydeez
4/15/2026 at 6:15:03 PM
Came here to say the same. Same tools, plus privacy. In security, two different defense mechanisms are always better than one.
by LunicLynx
4/15/2026 at 6:47:01 PM
Same tools A, B and C, but minus tools D, E and F, and with a smaller chance that any tools at all will even be used.

Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.
by bluebarbet
4/15/2026 at 8:01:49 PM
> Same tools A, B and C, but minus tools D, E and F,Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right?
by lelanthran
4/15/2026 at 8:23:48 PM
The point being that there are always going to be more eyes, more knowledge of available tools (i.e. including "D, E and F"), and more experience using them with open source than with a single in-house dev team.
by bluebarbet
4/15/2026 at 9:02:13 PM
There are no more "eyes", though; it's all models, and they are all converging pretty damn fast.
by lelanthran
4/15/2026 at 10:56:03 PM
If true, then logically it will be sufficient to run this "master model" once before any code release for the level playing field to be restored. After all, even open-source software is private until it is released.
by bluebarbet
4/16/2026 at 6:59:15 AM
> If true then logically it will be sufficient to run this "master model" once before any code release for the level playing field to be restored.

I'm struggling to see how it is a level playing field:
1. Closed-source: defender runs llms to check the sources for vulns, runs llms on each PR, runs llm on deployment of the compiled output. Attacker runs llm only on compiled output.
2. Open-source: both attacker and defender runs llms on source, on PRs and on compiled output.
by lelanthran
4/15/2026 at 7:59:37 PM
Fair enough
by LunicLynx
4/15/2026 at 4:22:00 PM
> Closed source software won't receive any reports

Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.
Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.
by Aurornis
4/15/2026 at 4:47:07 PM
Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have changed in favour of the bad actors - at least from my uninformed standpoint.

That still exists in the OSS world too; having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).
by switchbak
4/15/2026 at 4:29:53 PM
Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, both by owners and by the community.
by tananaev
4/15/2026 at 6:17:25 PM
But there are also tools that might not be nice and report security vulnerabilities, but exploit them instead.

There is no guarantee that being open means they will be discovered.
by LunicLynx
4/15/2026 at 6:33:11 PM
That's absolutely our plan. We have bug bounty programs, internal AI scanners, manual penetration testing, and a number of other things that enable us to push really hard to find this stuff internally, rather than relying on either the good people in the open source community or hackers to find our vulnerabilities.
by baileypumfleet
4/15/2026 at 4:24:28 PM
+1. At this point all companies need to be continuously testing their whole stack. The dumb scanners are now a thing of the past; the second your site goes live it will get slammed by the latest AI hackers.
by bearsyankees
4/15/2026 at 7:49:57 PM
> Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates

So, just like pre-AI, or worse?
by 0x457
4/15/2026 at 8:20:20 PM
Worse. [0]
by shakna
4/15/2026 at 6:56:42 PM
You don't even need a bug bounty program. In my experience there's an army of individuals running low-quality security tools, spamming every endpoint they can think of (webmaster@, support@, contact@, gdpr@, etc.) with silly non-vulnerabilities and asking for $100. They suck now, but they will get more sophisticated over time.
by bmurphy1976
4/15/2026 at 5:06:50 PM
I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst blocking the easiest method of finding zero-days - that is, being open source.

This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here: any open-source business stands to lose way more by continuing to be open-source vs. relying on the benevolence of people scanning their code for them.
by rd
4/15/2026 at 5:37:38 PM
> It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.

Actually the opposite is obvious - the comment you replied to talked about an abundance of good samaritan reports. It's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.
> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits
That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.
> any open-source business stands to lose way more
That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?
You seem to forget that the number of vulnerabilities in a certain app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to also gaining from shorter time to market.
In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.
by bigbadfeline
4/15/2026 at 6:21:24 PM
The main drawback is that you will need to be able to patch quickly in the next 3-5 years. We are already seeing this in a few solutions getting attention from various AI-driven security efforts, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is most likely turning problematic. Especially if attackers start exploiting faster and botnets start picking up vulnerabilities faster.

But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base, changesets and all, our dependent configuration on the code base, company-internal domain knowledge and previous upgrade failures into a folder and tell the AI to figure out upgrade risks. Bonus points if you can feed decent integration tests or test setups into all of that, too.
It won't be perfect, but combine that with a good tiered rollout, and increasing rollout velocity is entirely possible.
It's kinda funny to me - a lot of the agentic hype seems to be hugely rewarding good practices: cooperation, documentation, unit testing, integration testing, local test setups.
by tetha
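The "dump everything into a folder for the agent" step above can be sketched in a few lines. Everything here is an assumption to adapt - the file locations, the 200-commit window - and the actual risk analysis is left to whatever agent you point at the resulting folder:

```python
import pathlib
import shutil
import subprocess

def build_upgrade_context(repo: str, out_dir: str = "upgrade-context") -> pathlib.Path:
    """Collect changesets, config, and internal notes into one folder
    that an agent can be pointed at to assess upgrade risks."""
    repo_path = pathlib.Path(repo)
    ctx = pathlib.Path(out_dir)
    ctx.mkdir(parents=True, exist_ok=True)

    # Recent changesets: the upgrade-relevant slice of git history.
    try:
        log = subprocess.run(
            ["git", "-C", repo, "log", "--stat", "-n", "200"],
            capture_output=True, text=True, check=False,
        )
        history = log.stdout
    except FileNotFoundError:  # git not installed; leave the slot empty
        history = ""
    (ctx / "changesets.txt").write_text(history)

    # Deployment config and domain knowledge, where present.
    # These paths are assumptions - adjust them to your repo layout.
    for name in ("config", "docs/runbooks", "docs/upgrade-failures.md"):
        src = repo_path / name
        if src.is_dir():
            shutil.copytree(src, ctx / src.name, dirs_exist_ok=True)
        elif src.is_file():
            shutil.copy(src, ctx / src.name)
    return ctx
```

The point of keeping it one flat folder is that any agent that can read files can consume it, without needing access to your actual infrastructure.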
4/17/2026 at 6:01:10 AM
It's a token game.

Let's say finding a security issue takes 10M tokens. If one company has to pay for all of it, they most likely won't bother. It's purely a cost/benefit thing for them.
But if you have an open source project, you might get 1000 people looking at it, and each only has to spend 10k tokens to find the same flaws.
by theshrike79
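The arithmetic above can be made concrete with a toy cost model - every number here is an illustrative assumption, not a measurement:

```python
# Toy model of the token economics described above.
# Both constants are illustrative assumptions.

TOKENS_PER_FINDING = 10_000_000   # assumed tokens to surface one security issue
PRICE_PER_M_TOKENS = 3.00         # assumed dollars per 1M tokens

def cost_per_party(num_parties: int) -> float:
    """Dollar cost each party pays if the scanning work is split evenly."""
    tokens_each = TOKENS_PER_FINDING / num_parties
    return tokens_each / 1_000_000 * PRICE_PER_M_TOKENS

closed_source = cost_per_party(1)     # one company bears the whole cost
open_source = cost_per_party(1000)    # a thousand interested users share it

print(f"closed source, per party: ${closed_source:.2f}")   # $30.00
print(f"open source, per party: ${open_source:.4f}")       # $0.0300
```

At these made-up prices the finding costs the same in total either way; what changes is that no single open source contributor has to justify the spend.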
4/15/2026 at 5:38:55 PM
A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.
by sureMan6
4/15/2026 at 5:46:40 PM
Exactly. Who even hacks stuff? Most people would rather report the issue to earn XP and level up than actually exploit it.
by eddythompson80
4/15/2026 at 5:35:53 PM
Some users might be tech-savvy and have the capacity to check the codebase. If a company wants to use your platform, it can run an audit with its own staff. These are people genuinely concerned about the code, not "good samaritans".
by NaritaAtrox
4/15/2026 at 5:32:13 PM
Isn’t that security by obscurity?by dgb23
4/15/2026 at 4:48:36 PM
I've recently set up a nightly automated pentest for my open-source project. I'm considering starting to publish these reports as proof of security posture.

If the cost of a security audit becomes marginal, it would seem reasonable to expect projects to publish the results of such audits frequently.
There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.
by hardsnow
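One low-effort shape for the "publish the reports" idea: a nightly job that runs the scanner and writes a dated report that can be committed or uploaded. The scanner command below is a stand-in, not a real tool - substitute whatever agentic scanner you actually run:

```python
import datetime
import pathlib
import subprocess
import sys

def run_nightly_audit(scanner_cmd: list[str], report_dir: str = "security-reports") -> pathlib.Path:
    """Run the scanner command, save its output as a dated report,
    and return the report's path (e.g. for committing or publishing)."""
    result = subprocess.run(scanner_cmd, capture_output=True, text=True, check=True)
    reports = pathlib.Path(report_dir)
    reports.mkdir(parents=True, exist_ok=True)
    report = reports / f"audit-{datetime.date.today().isoformat()}.txt"
    report.write_text(result.stdout)
    return report

# Stand-in command for illustration; in CI this would run on a nightly
# cron trigger, with the real scanner in place of the print statement.
if __name__ == "__main__":
    path = run_nightly_audit([sys.executable, "-c", "print('0 findings')"])
    print(path)
```

Committing the dated reports gives exactly the public audit trail described above, with the git history doubling as the "posture over time" record.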
4/15/2026 at 5:59:33 PM
What do you use for the pentests? Any OSS libraries?
by Johnny_Bonk
4/15/2026 at 6:10:50 PM
This is a sandbox escape pentest, so the only tooling needed is Claude Code and a simple prompt that asks it to follow a workflow: https://github.com/airutorg/airut/blob/main/workflows/sandbo...
by hardsnow
4/15/2026 at 6:31:01 PM
We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.
by baileypumfleet
4/15/2026 at 7:24:42 PM
"Security through obscurity" is a term popularized entirely by the long-standing consensus among security researchers and any expert not being paid to say otherwise that this is a bad idea that doesn't workby advael
4/15/2026 at 6:58:37 PM
> Closed source software won't receive any reports, but it will be exploited with AI.

This is what worries me about companies sleeping on using AI to, at a bare minimum, run code audits and evaluate their security routinely. I suspect that as models get better, we're going to see companies being hacked at a level never seen before.
Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.
by giancarlostoro
4/15/2026 at 5:01:23 PM
Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM is not able to retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside inputs get handled by the application.
by charcircuit
4/15/2026 at 4:31:55 PM
given what the clankers can do unassisted, and what more they can do when you give them Ghidra, no software is 'closed source' anymore
by baq
4/15/2026 at 4:33:03 PM
Guess that kind of depends on your definition of "source"; I personally wouldn't really agree with you here.
by embedding-shape
4/15/2026 at 4:35:55 PM
absolutely agree with you if we're talking about clean-room reverse engineering; but in the context of finding vulnerabilities it's a completely different story
by baq
4/15/2026 at 8:21:23 PM
I mean - to an LLM, is there really any difference between the actual source and disassembled source? Informative names and comments probably help them too, but it's not clear that they're necessary.
by raddan
4/15/2026 at 4:51:33 PM
Which models have you had good luck with when working with Ghidra?

I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly, and they don't seem to be able to remember the details of different calling conventions, or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg, though.
by criddell
4/15/2026 at 5:14:36 PM
I agree with this too, but with cal.com I don't think this is about security, lol.
Open source will always be an advantage; you just need to decide whether it aligns with your business needs.
by devstatic
4/15/2026 at 5:21:46 PM
> Closed source software won't receive any reports, but it will be exploited with AI

How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.
But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to source code.
by cm2187
4/15/2026 at 5:30:28 PM
Claude is already shockingly good at reverse engineering. Try it - it's really a step change. It has infinite patience, which was always the limited resource in decompiling/deobfuscating most software.
by geoffschmidt
4/15/2026 at 7:31:39 PM
It's SaaS though. You don't have access to the binary to decompile. There's only so much you can reverse-engineer through public URLs and APIs, especially if the SaaS uses any form of automatic detection of bot traffic.
by evanelias
4/15/2026 at 8:00:04 PM
Thank you. This is what the parent post was trying to say; don't know why it is down-voted. AI or not, if the API endpoints are well secured - for example, using UUIDv7 - then there is little that the AI can gain from just those endpoints.
by zenmac
4/15/2026 at 7:27:06 PM
The opposite is true. Open source barely matters to attackers, especially attacks that can be automated. It mostly enables more people (or agents, or people with agents) to notice and fix your vulnerabilities. Secrecy and other asymmetries in the information landscape disproportionately benefit attackers, and the oft-repeated corporate claim that proprietary software is more secure is summarily discounted by most cybersecurity professionals, whether in industry or academic research. This is also seldom the real motivation for making products proprietary, but it's more PR-friendly to claim that closing your source code is for security reasons than to say that it's for competitive advantage or control over your customers.
by advael
4/15/2026 at 4:47:31 PM
Yes, exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.
by kirubakaran
4/15/2026 at 5:08:27 PM
This might be the most painfully obvious advertisement I've ever seen on a forum.
by ofjcihen
4/15/2026 at 5:10:46 PM
I didn't mean it as such, but I can see why it would seem so. I've edited the link out now. Thanks for the feedback.
by kirubakaran