4/17/2026 at 9:01:20 PM
> At the time of writing, the fix has not yet reached stable releases.
Why was this disclosed before the hole was patched in the stable release?
It's only been 18 days since the bug was reported to upstream, which is much shorter than typical vulnerability disclosure deadlines. The upstream commit (https://github.com/gnachman/iTerm2/commit/a9e745993c2e2cbb30...) has way less information than this blog post, so I think releasing this blog post now materially increases the chance that this will be exploited in the wild.
Update: The author was able to develop an exploit by prompting an LLM with just the upstream commit, but I still think this blog post raises the visibility of the vulnerability.
by KerrickStaley
4/18/2026 at 5:21:44 AM
There are some disclosure embargo exceptions, such as when you believe the vulnerability is being exploited in the wild, or when the fix is already public (e.g. as a git commit), which makes it possible to produce an exploit quickly. In such cases the community prefers that the vulnerability be published.
by winstonwinston
4/18/2026 at 3:38:03 PM
Disclosure: I didn't discover the vulnerability. I wrote the blog post.
>The author was able to develop an exploit by prompting an LLM with just the upstream commit
Yes, I was able to do this. I believe anyone watching iTerm2's commits would be able to do this too.
>but I still think this blog post raises the visibility of the vulnerability.
Yes, I wanted to raise the visibility of the vulnerability, and it works!
The author of iTerm2 initially didn’t consider it severe enough to warrant an immediate release, but they now seem to have reconsidered.
by cryptbe
4/18/2026 at 4:46:20 PM
> The author of iTerm2 initially didn’t consider it severe enough to warrant an immediate release, but they now seem to have reconsidered.
It's funny that we still have the same conversation about disclosure timelines. 18 days is plenty of time, the commit log is out there, etc.
The whole "responsible disclosure" thing is in response to people just publishing 0days, which itself was a response to vendors threatening researchers when vulns were directly reported.
by staticassertion
4/18/2026 at 12:34:00 AM
Once the commit is public, the cat is out of the bag. Being coy about it only helps attackers and reduces everyone's security.
by bawolff
4/18/2026 at 3:24:24 PM
Yes, I think this is an appropriate view today.
My only caveat would be that for some security fixes, the pure code delta is not always indicative of the full exploit method. But LLMs could interpolate from there, depending on context.
by 6thbit
4/18/2026 at 10:47:49 PM
It is just as much the appropriate view now as it was in the 90s.
Attackers are not idiots. Once you have the commit, it is usually pretty easy to figure out; even just having the binary diff is usually enough.
by bawolff
4/18/2026 at 11:48:21 PM
The binary diff?
by 6thbit
4/19/2026 at 6:59:54 AM
There are people who reverse engineer security vulns in closed-source products by comparing the before and after of the compiled binary.
by bawolff
4/17/2026 at 9:42:09 PM
I guess the traditional moratorium period for vulnerability publication is going to fade away as we rely on AI to find vulnerabilities.
If a publicly accessible AI model with a very cheap fee can find it, it's very natural to assume the attackers have already found it by the same method.
by ezoe
4/17/2026 at 10:27:20 PM
That's the wrong way to look at things. Just because the CIA can know your location (if they want to), would you share your live location with everyone on the internet?
An LLM is a tool, but people still need to know: what, where, how.
by saddist0
4/17/2026 at 10:54:53 PM
Not sure if that's a great example. If there's a catastrophic vulnerability in a widely used tool, I'd sure like to know about it even if the patch is taking some time!
The problem with this is that the credible information "there's a bug in widely used tool x" will soon (if not already) be enough to trigger massive token expenditure by various others who will then also discover the bug, so this will often effectively amount to disclosure.
I guess the only winning move is to also start using AI to rapidly fix the bugs and have fast release cycles... Which of course has a host of other problems.
by lxgr
4/18/2026 at 1:20:28 AM
>there's a bug in widely used tool x
"There's a security bug in OpenSSH. I don't know what it is, but I can tell you with statistical certainty that it exists.
Go on and do with this information whatever you want.
by integralid
4/18/2026 at 2:52:55 AM
I think in the context of these it’s more of “we’ve discovered a bug”, which gives you more information than “there is a bug”. The main difference in information being that the former implies not only that there is a bug, but that LLMs can find it.
by mmilunic
4/18/2026 at 10:19:53 AM
If you're a random person on the Internet, I can indeed not do much with that information.
But if you're a security research lab whose funding and number of active projects a competing lab can ballpark (based on industry comparisons, past publications, etc.), I think that can be a signal.
by lxgr
4/18/2026 at 12:05:25 AM
Wrong argument, since it's not just available to "the CIA" but to every rando under the sun. People should be notified immediately if "tracking" them is possible, and mitigation measures should become a common standard practice.
by mx7zysuj4xew
4/18/2026 at 4:03:26 PM
You and I would need to know "what where how".
There are many attackers that are just going to feed every commit of every project of interest to them into their LLMs and tell it "determine if this is patching an exploit and if so write the exploit". They don't need targeting clues. They're already watching everything coming out of
Do not make the mistake of modeling the attackers as "some guy in a basement with a laptop who decided just today to start attacking things". There are nation-state attackers. There are other attackers less funded than that who still may not particularly blink at the plan I described above. Putting out the commit was sufficient to tell them, even today, exactly what the exploit was, and the cheaper AI time gets, the less targeting info they're going to need, as they just grab everything.
I suggest modeling the attackers like a Dark Google. Think of them as well-funded, with lots of resources, and this is their day job, with dedicated teams and specialized positions and a codebase for exploits that they've been working on for years. They're not just some guy who wants to find an exploit maybe and needs huge hints about what commit might be an issue.
by jerf
4/18/2026 at 4:38:01 PM
>Do not make the mistake of modeling the attackers as "some guy in a basement with a laptop who decided just today to start attacking things". There are nation-state attackers.
The parent's point is that even if those capable attackers can exploit it anyway, that doesn't mean it should be handed on a silver platter to every script kiddie and guy in some basement with a laptop. The former are a much smaller group than the latter.
by coldtea
4/18/2026 at 4:47:14 PM
This ignores the motivation for publicly releasing the patch.
by staticassertion
4/18/2026 at 9:59:15 AM
> LLM is a tool, but people still need to know — what where how.
And the moment the commit lands upstream, they know what, where, and how.
The usual approach here is to backchannel patched versions to the distros and end users before the commit ever goes into upstream. Although obviously, this runs counter to some folks' expectations about how open source releases work.
by swiftcoder
4/18/2026 at 3:20:17 PM
No. You operate AS IF they know your location.
In other words, it becomes part of your threat model.
by 6thbit
4/18/2026 at 9:45:55 AM
> what
> we rely on AI to find it
> where
> the upstream commit
> how
> publicly accessible AI model with very cheap fee
by 0123456789ABCDE
4/18/2026 at 11:59:01 AM
So this bug just proves my thesis about shortening update windows.
You may need Claude Mythos to find a hard-to-discover bug in a 30-year-old open source codebase, but that bug will eventually be patched, and that patch will eventually hit the git repo. This lets smaller models rediscover the bug a lot more easily.
I won't be surprised if the window between a git commit and active port scans shrinks to hours or maybe even minutes in the next year or two.
This is where closed source SaaS has a crucial advantage. You don't get the changelog, and even if you did, it wouldn't be of much use to you after the fix is deployed to production.
by miki123211
4/18/2026 at 1:51:32 PM
I found a 20-year-old bug in gmime a couple of months or so ago. You don't need to be an AI to do that ...
It also puts the lie to "all bugs are shallow with sufficient eyes": gmime is pretty commonly used, but locale<->UTF conversion and back were still wrong.
by spacedcowboy
4/18/2026 at 3:34:51 PM
Because malicious actors don't believe in disclosure windows.
by maximilianburke
4/18/2026 at 8:19:44 AM
[dead]
by BFV