5/8/2026 at 8:29:27 PM
This has been a very long time coming, and the crackup we're starting to see was predicted long before anyone knew what an LLM was. The catalyst is the shift towards software transparency: both the radically increased adoption of open source and source-available software, and the radically improved capabilities of reversing and decompilation tools. It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.
This has been playing out in slow motion ever since BinDiff: you can't patch software without disclosing vulnerabilities. We've been operating in a state of denial about this, because there was some domain expertise involved in becoming a practitioner for whom patches were transparently vulnerability disclosures. But AIs have vaporized the pretense.
It is now the case that any time something gets merged into mainline Linux, several different organizations are feeding the diffs through LLM prompts aggressively evaluating whether they fix a vulnerability and generating exploit guidance. That will be the case for most major open source projects (nginx, OpenSSL, Postgres, &c) sooner rather than later.
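A minimal sketch of what such a pipeline might look like, assuming a local git checkout and a hypothetical ask_llm() helper standing in for whatever model API you use (the prompt, repo path, and flagging logic are all illustrative, not anyone's actual tooling):

    import subprocess

    PROMPT = (
        "Given this diff from the Linux kernel, does it fix a flaw that "
        "was exploitable before the patch? If so, name the bug class and "
        "the preconditions an attacker would need."
    )

    def ask_llm(prompt: str, diff: str) -> str:
        # Stand-in: wire this to whatever model endpoint you actually use.
        raise NotImplementedError

    def recent_commits(repo: str, n: int = 50):
        """Yield (hash, full diff) for the last n commits on the branch."""
        hashes = subprocess.run(
            ["git", "-C", repo, "log", f"-{n}", "--format=%H"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        for h in hashes:
            diff = subprocess.run(
                ["git", "-C", repo, "show", h],
                capture_output=True, text=True, check=True,
            ).stdout
            yield h, diff

    def triage(repo: str):
        for h, diff in recent_commits(repo):
            verdict = ask_llm(PROMPT, diff)
            if "exploitable" in verdict.lower():  # crude flag for review
                print(h, verdict[:200])

Run that continuously against mainline and you have, in effect, a patch-to-exploit lead generator; the plumbing was never the hard part.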
The norms of coordinated disclosure are not calibrated for this environment. They really haven't been for the last decade.
I'm weirdly comfortable with this, because I think coordinated disclosure norms have always been blinkered, based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise! The delay also keeps information out of the hands of system operators who have options other than applying patches.
by tptacek
5/8/2026 at 8:44:39 PM
> It has been over a decade since any ordinary off-the-shelf closed-source software was meaningfully obscured from serious adversaries.

Probably goes without saying, but the last line of defense is not deploying your software publicly at all, and instead relying on client-server architectures to do anything. Maybe this will become more common as vulnerabilities get easier to detect and exploit. Of course it's not always feasible.
It has been annoying seeing my (ProGuard-obfuscated) game client binaries decompiled and published on GitHub many times over the last 11 years. Only the undeployed server code has remained private.
Interestingly, I didn't have a problem with adversaries reverse-engineering my network protocols until I was updating them less frequently than weekly. LLM-assisted adversaries could probably keep up with that now too.
by grog454
5/8/2026 at 11:27:18 PM
> Only the undeployed server code has remained private.

How easy do you think it would be for an LLM to build a decent emulator of the server in question just by observing what you send and what you get back as a response?
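The raw material, at least, is trivial to collect. A toy logging proxy along these lines (the listen port and the game server address are hypothetical) would give you the (request, response) pairs you'd feed to a model:

    import socket
    import threading

    LOG = []  # (direction, bytes) pairs for later analysis

    def pump(src, dst, tag):
        # Forward one direction of the connection, recording every frame.
        while data := src.recv(4096):
            LOG.append((tag, data))
            dst.sendall(data)

    def handle(client, upstream_addr):
        upstream = socket.create_connection(upstream_addr)
        threading.Thread(target=pump, args=(client, upstream, "C->S"),
                         daemon=True).start()
        pump(upstream, client, "S->C")

    srv = socket.socket()
    srv.bind(("127.0.0.1", 9000))  # point the game client here
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle,
                         args=(conn, ("game.example.com", 7777)),
                         daemon=True).start()

Whether a model can then infer the server's actual logic from those pairs is the open question.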
by ReptileMan
5/9/2026 at 5:28:27 PM
Honestly, I can't really imagine how this would work at all. I could see how, given enough data, you'd be able to infer the intended logic of the server and reimplement something that's compatible (I've done this myself with Wireshark + USB devices in the past).
But how could you reason about specific vulnerabilities in remote code just from a set of requests and responses?
by AussieWog93
5/9/2026 at 12:13:03 AM
Not sure why this is downvoted. Server emulators will become faster to make; protocol analysis will become faster as well.
by globalnode
5/9/2026 at 12:40:43 AM
Because while you could get something that drives a dumb interface, moving the work and data to the server means they're not available for the emulation software to use.
by Izkata
5/9/2026 at 3:59:25 AM
If the contract is well defined, the LLM can infer what its purpose is, its implementation, possibly even your secret sauce. There is no software moat anymore.
by reactordev
5/9/2026 at 5:19:58 AM
Yes, this is what I was trying to say. It's quite common on older client-server games to do this sort of thing. Powerful AI models will just make the work to recreate/emulate servers faster.
by globalnode
5/9/2026 at 3:50:17 AM
Except that emulating what is seen is surprisingly useful to find attack vectors. As a single deeper datapoint, one can look at more than just baseline behavior and delve into timing details to further refine implementation guesses.
by imoverclocked
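To make the timing point concrete, a crude probe might look like this (the host, port, and wire protocol are invented for illustration):

    import socket
    import statistics
    import time

    def median_rtt(host, port, payload, trials=100):
        """Median round-trip time for one request/response exchange."""
        samples = []
        for _ in range(trials):
            with socket.create_connection((host, port), timeout=2) as s:
                t0 = time.perf_counter()
                s.sendall(payload)
                s.recv(4096)  # assume the reply fits in one frame
                samples.append(time.perf_counter() - t0)
        return statistics.median(samples)

    # If a valid username answers measurably slower than an invalid one,
    # the server probably does an expensive lookup or hash on only one
    # path: a detail that narrows down the implementation guess.
    print(median_rtt("game.example.com", 7777, b"LOGIN alice\n"))
    print(median_rtt("game.example.com", 7777, b"LOGIN zzzzz\n"))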
5/9/2026 at 12:35:22 AM
> BinDiff: you can't patch software without disclosing vulnerabilities

That's why Microsoft has been obfuscating its binary builds for at least the last two decades, so that even two builds from the same source would produce very different blobs.
by sedatk
5/9/2026 at 1:11:44 AM
Sounds dubious, do you have a citation? The disassembly looks very straightforward for a lot of Windows code.
by dataflow
5/9/2026 at 2:19:14 AM
They're not encoded, but the code blocks are shuffled. That's why disassembly does look straightforward, but it used to thwart BinDiff at the time.
by sedatk
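A toy illustration of why shuffling hurts positional diffing even though every block stays perfectly readable (all the data here is made up):

    import hashlib
    import random

    v1 = [b"func_a body", b"func_b body", b"func_c body"]
    v2 = list(v1)
    random.shuffle(v2)  # the build step reorders the blocks

    # Naive positional diff: compare block i against block i.
    positional = sum(a != b for a, b in zip(v1, v2))

    # Content matching: pair blocks by signature, ignoring layout.
    sigs = {hashlib.sha256(b).hexdigest() for b in v2}
    by_content = sum(hashlib.sha256(b).hexdigest() not in sigs for b in v1)

    print(f"positional diff flags {positional} blocks, "
          f"content matching flags {by_content}")

Modern BinDiff matches on call-graph and basic-block structure rather than position, which is presumably why the trick only worked "at the time".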
5/9/2026 at 5:35:41 AM
If I understand correctly, that's just randomness that comes from parallel compiling and linking. If you're saying there is a whole build step just for scrambling blobs, I will be very surprised.
by j16sdiz
5/9/2026 at 12:41:56 PM
That sounds a lot like US9116712, but I don't think it's ever been publicly said that Windows does this.
by shakna
5/9/2026 at 3:16:22 AM
What made you believe this is the case? Any examples/links/etc.?
by dataflow
5/9/2026 at 3:43:36 AM
It was part of our Windows build process when I was at Microsoft. I only assumed that they would keep doing it, but they might well have dropped the practice since.
by sedatk
5/9/2026 at 2:02:11 PM
I don't see how that can be useful when Microsoft publishes debug symbols for almost everything.
by cstdr
5/11/2026 at 5:26:32 PM
All the while, Linux is going towards reproducible builds (Debian just announced it as policy). This is of course the only sane way for FOSS and, I believe, the only sane long-term approach in any case. Security by obscurity, while not worthless, is just a thin mitigation layer. By the way, build-time randomization is ineffective in light of AI analysis; it needs to be per-binary-run, in the style of KASLR.
by prezk
5/9/2026 at 12:37:35 AM
How are they obfuscated?
by wglb
5/9/2026 at 2:19:29 AM
See my sibling comment.
by sedatk
5/9/2026 at 7:01:15 PM
Everything I can find says they are not obfuscating.
by wglb
5/8/2026 at 10:29:57 PM
I always understood the business reasons that brought about coordinated vulnerability disclosure & I've been forced to toe this line at employers, but I've always been firmly in the full disclosure camp. I am so ready for this.
by busterarm
5/9/2026 at 1:40:36 AM
I believe this premise, that the cost of identifying vulnerabilities via diffs is going down over time, raises the question "what do our processes need to look like if simply making the patch public is the disclosure?"

Current coordinated disclosure practices depend on patching and disclosure being separate, but the gap between them seems to be asymptotically approaching zero.
by riknos314
5/9/2026 at 2:45:16 AM
Right, all I'm saying is that we were asymptotically close many years ago; all that's changed is that nobody can kid themselves about it anymore.

The actual policy responses to it, I couldn't say! I've always believed, even when there was a meaningful gap between patching and disclosing, that coordinated disclosure norms were a bad default.
by tptacek
5/9/2026 at 6:22:35 AM
What process or mechanism would you prefer to use instead of coordinated disclosure?by riknos314
5/10/2026 at 1:10:39 AM
I guess people could download (but not install) encrypted patches with an announced key release date+time, so that by the moment the key is disclosed, essentially everyone is applying the patch at once.
by DoctorOetker
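As a sketch of that scheme, using the Fernet primitive from the Python cryptography package (the file names and the fetch_key() helper are placeholders):

    import time
    from cryptography.fernet import Fernet

    # --- vendor side, when the fix is ready ---
    key = Fernet.generate_key()
    with open("fix.patch", "rb") as f:
        encrypted_patch = Fernet(key).encrypt(f.read())
    # Ship encrypted_patch to everyone now; publish key only at T.

    # --- operator side ---
    RELEASE_TIME = 1767225600  # the announced unix timestamp

    def fetch_key() -> bytes:
        # Placeholder: pull the just-published key from the vendor's feed.
        raise NotImplementedError

    while time.time() < RELEASE_TIME:
        time.sleep(30)
    patch = Fernet(fetch_key()).decrypt(encrypted_patch)
    with open("fix.patch", "wb") as f:
        f.write(patch)  # ready to apply the instant the key drops

The window between key publication and exploitation shrinks to decrypt-plus-install time, though as noted below, it's still a coordinated scheme.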
5/10/2026 at 7:55:11 AM
That's still coordinated, but by the publicizing of the key.
by riknos314
5/9/2026 at 2:50:02 PM
The most common alternative is full disclosure.
by akerl_
5/9/2026 at 8:18:05 AM
> based on the unquestioned premise that delaying disclosure for the operational convenience of system administrators is a good thing. There are reasons to question that premise!

Care to mention these reasons?
With "convenience of system administrators", I'm guessing you mean that there's a patch available that sysadmins can install, ideally before the vulnerability is disclosed? What else are sysadmins supposed to do, in your opinion? Fix the vulnerability themselves? Or simply shutdown the servers?
With the various copyfails of late, it was at least possible to block the affected modules. If that were not the case, what would you have done as a sysadmin?
by deng
5/9/2026 at 2:55:39 PM
The best convenience is that by the time of disclosure, the patch was already merged, perhaps months prior, so sysadmins following a routine update schedule would have already updated to a version including the patch and thus have nothing to do. This relies on the assumption that a patch or series of patches isn't equivalent to a disclosure, so that disclosure can be delayed relative to the patch, which is basically untenable in modern times.
by Jach
5/9/2026 at 3:28:31 PM
Choose to take an availability hit rather than risk a breach.
by tptacek
5/9/2026 at 5:54:28 PM
Presumably you also have positive downstream effects in mind: when "taking the availability hit" feels like more of a live choice, operators feel the pain of running insecure designs more. Do what you describe a couple of times, and you'll naturally start thinking things like "dammit, we need to finally get away from shared kernels; this is insane", "maybe we should figure out a way to do this that doesn't involve running software that runs in God mode", or even "we should see what it takes to port our application to a platform that is more secure by design".

When you can't imagine or pretend that when a major vuln is disclosed (a) you've been secure up until the point of disclosure or (b) all you need to do now is apply a patch without thinking too much about what your blast radius just was, you might actually have stronger incentives to think about the design of the overall system so that when similar issues come up, you can avoid having to sweat those outages.
It's interesting that "defense-in-depth" gets cited and repeated all the time but the standard attitude about patching still seems to be "what do you mean?? isn't patching the only thing we can do?". How about designing systems so that you can more quickly and easily throw up other kinds of mitigations when you need to? What about designing systems with robust enough notions of graceful degradation that when something crops up for a certain feature you can "just" say "okay, let's turn only that part off for a couple days"? How about getting really, really good at CI/CD so you can more confidently add and deploy mitigations to your application code, or redeploy with a feature flag that lets you temporarily drop an unpatched-and-vulnerable dependency?
If you can manage to build a system without the assumption that just patching is always on the table, you might simply end up with better software, which would be pretty cool.
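The feature-flag version of that is almost embarrassingly small. A minimal sketch (the flag name and the vulnerable_pdf_lib dependency are made up):

    import os

    # Ops flips DISABLED_FEATURES=pdf_preview and redeploys; no upstream
    # patch required to close the exposure window.
    KILL_SWITCHES = set(os.environ.get("DISABLED_FEATURES", "").split(","))

    def render_preview(document: bytes) -> bytes:
        if "pdf_preview" in KILL_SWITCHES:
            # Graceful degradation: a static placeholder instead of
            # routing untrusted input through the unpatched parser.
            return b"Preview temporarily unavailable."
        import vulnerable_pdf_lib  # hypothetical third-party parser
        return vulnerable_pdf_lib.render(document)

The hard part isn't the code; it's having drawn the feature boundaries so that turning one off degrades gracefully instead of taking the product down.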
by isityettime
5/9/2026 at 6:43:49 PM
"Taking an availability hit" is also an "in the limit" case that mostly serves to illustrate the falsity of "disclose or patch" as a binary. Much more commonly: a fully disclosed vulnerability arms systems teams with enough information to mitigate; pull kernel modules, change permissions, that sort of thing.by tptacek
5/9/2026 at 9:39:00 PM
Maybe some corporations like the "just patch" playbook because it takes less skill to execute or articulate. It might be as much a deprofessionalization/commoditization-of-labor thing as anything else.
by isityettime
5/9/2026 at 4:39:01 PM
With "availability hit" I'm assuming you mean to simply stop operations until patches are rolled out, so possibly for days? That would at least explain what's happening at GitHub...by deng
5/9/2026 at 12:09:53 PM
Many vulnerabilities seem to be in code paths for rarely used features. They can often be disabled.
by vanviegen
5/9/2026 at 12:58:06 PM
So what materializes now is basically the tech-debt returns on the "move fast and break things" paradigm?
by cineticdaffodil
5/9/2026 at 1:07:29 AM
You're obviously one of the most knowledgeable people on this topic around here.

What would the best solution be? And where do you believe the industry is headed (which may very well be something other than the best solution)?
I can't think of anything other than improving operations, but given the state of the industry, this seems like a pipe dream.
by stingraycharles