alt.hn

4/30/2026 at 7:14:41 PM

Unverified Evaluations in Dusk's PLONK

https://osec.io/blog/2026-04-30-unverified-evaluations-dusk-plonk/

by deut-erium

5/3/2026 at 10:09:11 AM

I notice no mention of a bug bounty. Did they not get paid for this?

All I could find of a Dusk bug bounty was this blog post from 2023[0]:

> Although we do not currently have a bug bounty program, we will certainly create an extensive one in the near future, when we are ready to transition toward the auditing, testing, and security assessment phases of our roadmap.

And the roadmap links to a URL that now 404s.

I would be extremely reluctant to use a blockchain with no bug bounty, as it means it's easy for a malicious actor to monetize a vulnerability, but there's no incentive for an honest researcher to report one, or even to look for one.

[0] https://dusk.network/news/infrastructure-vulnerability-fixed

by mtlynch

5/3/2026 at 9:53:20 AM

Many years ago I attended an academic cryptography conference and took part in a panel there. At the time, I was the lead engineer at a startup making an enterprise blockchain platform and we (I) had made the controversial choice to not use ZKPs in its design. Instead we were using confidential computing with Intel SGX enclaves. Although CC schemes like SGX do use cryptography extensively, it's all "boring" cryptography like key agreement, AES, signatures and so on. Everyone else there was much more interested in exotic cryptography like zkSNARKs, and thought using it in blockchain protocols was obvious.

So on the panel I was the devil's advocate, being challenged over that choice (it was all very polite). One of my points was that I didn't feel comfortable with ZKP systems because they are both very clever and completely non-recoverable. Any mistake at all leads to catastrophic collapse: not only can people mint money at will, but they can't be caught by construction, not even via post-hoc audits because by design there's no way to audit the system. This is quite different to classical Satoshi-style blockchains where the only thing that can cause such problems is a collapse of the core digital signature algorithm, which is very well tested and understood.

After the conference, whilst getting drinks, I was chatting to a ZKP researcher and we got a bit drunk. He told me that he was leaving academia to go work for a blockchain company, so of course I asked if he'd be adding ZKPs to their platform. He laughed and said no, he'd never use his own research for anything. He said these systems generate billions of constraints, and if even a single one is wrong the entire system fails.

Technologies like SGX are, cryptographically speaking, ugly ducklings. They rely heavily on proprietary technology and security through the obscurity of nanometer scale electronics. But the whole secure hardware world has gone through many rounds of intense combat and learned to build in extensive renewability features. Errors in the implementation are expected and designed for, with many ways to re-seal the system after a compromise and to audit that counterparties have applied the necessary updates.

Intel's implementation of all this got a bad rap after Spectre attacks were discovered, but I think that in reality they did well: unlike AMD's implementation, which experienced catastrophic collapse several times requiring new hardware, Intel were able to repeatedly patch and reseal SGX in the wild without needing hardware replacements - even against Spectre, which nobody anticipated and which goes to the core of CPU design.

Additionally, the way I used it in the design of Corda meant that a failure of SGX just led to privacy failures but not logic failures: you couldn't mint money by attacking it, just leak data. But that wasn't the only privacy feature, so it was a pure upgrade.

A few years later there was a very similar attack to this one on Zcash. They accidentally published some values from the setup procedure they shouldn't have, and it could be used to forge proofs. Someone told me later that in their view Zcash should have just shut down after that, because the social contract had been violated. Nobody could ever be sure that someone hadn't spotted the problem and minted themselves an unlimited supply of coins.

So we can't say this attack on dusk-plonk is terribly surprising. It's exactly the scenario the researcher warned me about years ago, and very close to one that happened a few years later. These algorithms are so insanely complicated that even the researchers who write the papers - in academia, under no time pressure at all - can't implement them correctly! If the original researchers can't do it, then what chance do other people have? I worked on Bitcoin for years but I'd have serious reservations about using a coin that relied on ZKPs, because I could never have confidence that the money supply was secure.

by mike_hearn

5/3/2026 at 11:08:03 AM

> Intel were able to repeatedly patch and reseal SGX in the wild without needing hardware replacements

I take it you haven't caught up on https://tee.fail/ ?

(Which has been known as a hypothetical ever since Intel quietly changed their threat model to exclude such physical-access attacks, but now we have practical PoCs)

by Retr0id

5/4/2026 at 8:57:06 AM

That wasn't actually a fail, was it?

Intel pivoted a few times to try to find market demand, because it turned out that developers couldn't/wouldn't handle the original, more secure SGX design, in which available RAM was limited but physical attacks were harder. Nobody wrote enclaves (well, except a few people like me), and instead people adopted AMD's SEV - which was far, far less secure, but it's a market for lemons. Very few people really understand this stuff. So Intel saw this and said OK, developers can't handle the enclave model and maybe don't care about physical attacks; they want remotely attestable cloud VMs instead.

Unfortunately, making memory secure against physical attacks, plentiful, and fast simultaneously is an unsolved research problem. You can have two of the three, but not easily all of them. Intel's original design provided memory that was secure against physical attacks and fast enough, but not plentiful. Post-pivot they provided plentiful RAM but sacrificed security against physical attacks.

The original SGX I wrote enclaves for was secure against physical attacks, along with many other attack classes. It's a pity the computing industry proved unwilling to adopt it.

by mike_hearn

5/3/2026 at 1:23:16 PM

I don’t think that these attacks indicate things one way or another for ZKPs vs TEEs. After all, you yourself mention catastrophic breaks for various TEE implementations.

There are two kinds of issues: a break in the proof system, and a break in the program that proof system is proving.

The first is analogous to vulnerabilities in the TEE platform, of which, as you yourself note, there have been numerous.

The second is analogous to a bug in the program running in the TEE. Again, this is a kind of bug that’s totally possible when writing plain software too.

Yes, ZKP-based privacy mechanisms have the issue that soundness bugs like these can be catastrophic. We can and should design ZKP systems that are resistant to such issues. But it’s just part of the maturity cycle of the tech. AFAIK Dusk was an early adopter of the tech, back when we were still figuring out how to implement things securely.

by Ar-Curunir

5/4/2026 at 9:00:50 AM

I think it's a fundamental difference.

TEEs have had attacks, but the good ones like Xbox One or classical SGX didn't have any catastrophic attacks. All attacks were fixable via software updates that could be rolled out quickly and easily.

ZKPs have had multiple catastrophic attacks now. By catastrophic I mean there is no way to recover. Once the problem is discovered, the entire database the ZKPs were protecting is a write-off.

I stopped following ZKP research years ago, but at the time it appeared this problem was fundamental. By design these systems leave only small mathematical objects behind that prove things about something you can't see. If the proofs can be forged it's game over: there's no data that can be used to restore trust in the system. I don't see how this can be addressed with maturity, and cryptographers have been pushing circuit-based ZKP systems for 15 years now, so how long exactly is this maturation process supposed to take?

You can use TEEs in a way that yields catastrophic attacks, but the system I designed didn't have that problem.

by mike_hearn