1/22/2026 at 2:02:13 PM
Some real cognitive dissonance in this article… “The PDF Association operates under a strict principle—any new feature must work seamlessly with existing readers” followed by introducing compression as a breaking change in the same paragraph.
All this for Brotli… On a read-many format like PDF, zstd’s decompression speed is a much better fit.
by ericpauley
1/22/2026 at 2:22:21 PM
Yup, zstd is better. Overall, use zstd for pretty much anything that can benefit from general-purpose compression. It's a beyond-excellent library, tool, and algorithm (set of). Brotli w/o a custom dictionary is a weird choice to begin with.
by xxs
1/22/2026 at 2:43:44 PM
Brotli makes a bit of sense considering this is a static asset; it compresses somewhat more than zstd. This is why brotli is pretty ubiquitous for precompressed static assets on the Web. That said, I personally prefer zstd as well; it's been a great general-use lib.
by adzm
1/22/2026 at 3:17:07 PM
You need to crank up zstd's compression level. zstd is Pareto-better than brotli: it compresses better and faster.
by dist-epoch
1/22/2026 at 3:49:38 PM
I thought the same, so I ran brotli and zstd on some PDFs I had lying around.
brotli 1.0.7 args: -q 11 -w 24
zstd v1.5.0 args: --ultra -22 --long=31
| file | Original | zstd | brotli |
| RandomBook.pdf | 15M | 4.6M | 4.5M |
| Invoice.pdf | 19.3K | 16.3K | 16.1K |
I made a table because I wanted to test more files, but almost all PDFs I downloaded/had stored locally were already compressed, and I couldn't quickly find a way to decompress them. Brotli seemed to have a very slight edge over zstd, even on the larger PDF, which I did not expect.
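The comparison above can be sketched end to end. A minimal version, assuming the brotli and zstd CLIs are installed and using generated text as a stand-in for a decompressed PDF (file names are invented):

```shell
# Minimal sketch of the comparison above. Assumes the brotli and zstd
# CLIs are on PATH; sample.txt is generated stand-in data, not a real PDF.
set -e

seq 1 200000 > sample.txt                                   # compressible stand-in input

brotli -q 11 -w 24 -f sample.txt -o sample.br               # max-quality brotli
zstd --ultra -22 --long=31 -f -q sample.txt -o sample.zst   # max-level zstd

wc -c sample.txt sample.br sample.zst                       # compare output sizes
```

Running this on genuinely decompressed PDF streams rather than generated text would make the ratio comparison meaningful.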
by atiedebee
1/22/2026 at 5:29:10 PM
EDIT: Something weird is going on here. When compressing with zstd in parallel it produces the garbage results seen here, but when compressing on a single core it produces results competitive with Brotli (37M). See: https://news.ycombinator.com/item?id=46723158
I did my own testing where Brotli also ended up better than zstd: https://news.ycombinator.com/item?id=46722044
Results by compression type across 55 PDFs:
+------+------+-----+------+--------+
| none | zstd | xz  | gzip | brotli |
+------+------+-----+------+--------+
| 47M  | 45M  | 39M | 38M  | 37M    |
+------+------+-----+------+--------+
by mort96
1/22/2026 at 11:41:11 PM
Turns out that these numbers are caused by APFS weirdness. I used 'du' to get them, which reports the size on disk; that is weirdly bloated for some reason when compressing in parallel. I should've used 'du -A', which reports the apparent size. Here's a table with the correct sizes, reported by 'du -A':
+--------+--------+--------+--------+--------+
| none   | zstd   | xz     | gzip   | brotli |
+--------+--------+--------+--------+--------+
| 47.81M | 37.92M | 37.96M | 38.80M | 37.06M |
+--------+--------+--------+--------+--------+
These numbers are much more impressive. Still, Brotli has a slight edge.
by mort96
1/23/2026 at 5:41:05 PM
Worth considering the compress/decompress overhead, which is also lower in brotli than zstd from my understanding. Also worth testing zopfli, since its decompressed output is gzip-compatible.
by tracker1
1/22/2026 at 6:16:14 PM
> I couldn't quickly find a way to decompress them
pdftk in.pdf output out.pdf uncompress
by mrspuratic
1/22/2026 at 7:57:26 PM
Does your source .pdf material have FlateDecode'd chunks or did you fully uncompress it?
by Thoreandan
1/23/2026 at 12:00:06 PM
I wasn't sure. I just went in with the (probably faulty) assumption that if it compresses to less than 90% of the original size, it had enough "non-randomness" to compare compression performance.
by atiedebee
1/23/2026 at 12:49:38 PM
Ran the tests again with some more files, this time decompressing the PDFs in advance. I picked some widely available PDFs to make the experiment reproducible.
| file | raw | zstd (%) | brotli (%) |
| gawk.pdf | 8.068.092 | 1.437.529 (17.8%) | 1.376.106 (17.1%) |
| shannon.pdf | 335.009 | 68.739 (20.5%) | 65.978 (19.6%) |
| attention.pdf | 24.742.418 | 367.367 (1.4%) | 362.578 (1.4%) |
| learnopengl.pdf | 253.041.425 | 37.756.229 (14.9%) | 35.223.532 (13.9%) |
For learnopengl.pdf I also tested the decompression performance, since it is such a large file, and got the following (less surprising) results using 'perf stat -r 5':
zstd: 0.4532 +- 0.0216 seconds time elapsed ( +- 4.77% )
brotli: 0.7641 +- 0.0242 seconds time elapsed ( +- 3.17% )
The conclusion seems consistent with what brotli's authors have said: brotli achieves slightly better compression, at the cost of decompressing at a little over half of zstd's speed.
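A rough way to check the decompression-speed difference without perf, assuming the brotli and zstd CLIs are installed (generated stand-in data, not a PDF; 'perf stat -r 5' as used above is the more rigorous tool):

```shell
# Crude decompression-timing sketch. Assumes brotli and zstd are on PATH;
# big.txt is generated stand-in data, not a real PDF.
set -e

seq 1 500000 > big.txt
brotli -q 9 -f big.txt -o big.br
zstd -19 -f -q big.txt -o big.zst

# Decompress to /dev/null and time it (repeat runs to reduce noise)
time brotli -d -c big.br > /dev/null
time zstd -d -q -c big.zst > /dev/null
```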
by atiedebee
1/22/2026 at 4:47:25 PM
What's the assumption we can potentially target as the reason for the counter-intuitive result? That data in PDF files is noisy, and zstd should perform better on noisy files?
by order-matters
1/22/2026 at 5:03:43 PM
What's counter-intuitive about this outcome?
by jeffbee
1/22/2026 at 5:39:58 PM
Maybe that was too strongly worded, but there was an expectation for zstd to outperform, so the fact that it didn't means the result was unexpected. I generally find it helpful to understand why something performs differently than expected.
by order-matters
1/22/2026 at 5:47:29 PM
Isn't zstd primarily designed to provide decent compression ratios at amazing speeds? The reason it's exciting is mainly that you can add compression to places where it didn't necessarily make sense before, because it's almost free in terms of CPU and memory consumption. I don't think it has ever had a stated goal of beating compression-ratio-focused algorithms like brotli on compression ratio.
by mort96
1/22/2026 at 6:38:44 PM
I actually thought zstd was supposed to be better than Brotli in most cases, but a bit of searching reveals you're right... Brotli, especially at its highest compression levels (10/11), often exceeds zstd at its highest levels (20-22). Both are very slow at those levels, although perfectly suitable for "compress once, decompress many" applications, of which PDF is obviously one.
by sgerenser
1/22/2026 at 3:34:11 PM
Are you sure? Admittedly I only have 1 PDF in my homedir, but no combination of flags to zstd gets it to match the size of brotli's output on that particular file. Even zstd --long --ultra -22.
by jeffbee
1/23/2026 at 6:16:22 AM
On max compression (11, vs zstd's 22) of text, brotli will be around 3-4% denser... and a lot slower. Decompression-wise, zstd is over 2x faster. The PDFs you have are already compressed with deflate (zip).
by xxs
1/22/2026 at 4:12:55 PM
I love zstd, but this isn't necessarily true.
by DetroitThrow
1/22/2026 at 4:07:13 PM
Not with small files.
by dchest
1/23/2026 at 12:58:07 AM
If that's about using predefined dictionaries, zstd can use them too. If brotli has a different advantage on small source files, you have my curiosity.
If you're talking about max compression, zstd likely loses out there, the answer seems to vary based on the tests I look at, but it seems to be better across a very wide range.
by Dylan16807
1/24/2026 at 8:30:53 PM
No, it's literally just compressing small files without training a zstd dictionary or plugging in external dictionaries (not counting the built-in one that brotli has). Especially for English text, brotli at the same speed as zstd gives better results for small data (in the kilobyte to a-few-megabytes range).
by dchest
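For reference, zstd's trained-dictionary path for small files looks roughly like this. A sketch assuming the zstd CLI is installed; the sample data and every file name below are invented for illustration:

```shell
# Sketch: train a zstd dictionary on many small, similar samples, then
# compress a new small file with and without it. Assumes the zstd CLI is
# on PATH; sample contents and file names are made up.
set -e

mkdir -p samples
for i in $(seq 1 200); do
  for j in $(seq 1 10); do
    printf 'GET /api/v1/users/%s/orders/%s HTTP/1.1\r\nHost: example.com\r\nAccept: application/json\r\n' "$i" "$j"
  done > "samples/s$i.txt"
done

# Train a small shared dictionary from the samples
zstd --train samples/*.txt -o samples.dict --maxdict=16384 -q

# Compress one small file with and without the dictionary
printf 'GET /api/v1/users/201/orders/1 HTTP/1.1\r\nHost: example.com\r\nAccept: application/json\r\n' > new.txt
zstd -19 -D samples.dict -f -q new.txt -o with_dict.zst
zstd -19 -f -q new.txt -o without_dict.zst

wc -c with_dict.zst without_dict.zst
```

The dictionary pays off exactly in the regime discussed here: inputs too small to contain their own redundancy.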
1/22/2026 at 6:11:04 PM
> Pareto
I don’t think you’re using that correctly.
by itsdesmond
1/22/2026 at 8:26:21 PM
It's correct use of Pareto, short for Pareto frontier, if the claim being made is "for every needed compression ratio, zstd is faster; and for every needed time budget, zstd compresses better". (Whether this claim is true is another matter.)
by wizzwizz4
1/22/2026 at 9:10:17 PM
Brotli is ubiquitous because Google recommends it. While Deflate definitely sucks and is old, Google ships brotli in Chrome, and since Chrome is the de facto default platform nowadays, I'd imagine it was chosen because it was the lowest-effort lift. Nevertheless, I expect this to be JBIG2 all over again: almost nobody will use this, because we've got decades of devices and software in the wild that can't, and 20% filesize savings is pointless if your destination can't read the damn thing.
by stonogo
1/22/2026 at 7:56:30 PM
Brotli compresses my files way better, but it does so way slower. Anyway, the universal statement "zstd is better" is not valid.
by deepsun
1/23/2026 at 11:02:58 AM
On max compression ("--ultra -22"), zstd is likely to be 2-4% less dense (larger) on text-like input, while brotli takes over 2x as long to compress. zstd's decompression is also much faster, usually over 2x. I have not tried using a dictionary for zstd.
by xxs
1/22/2026 at 2:38:52 PM
This bizarre move has all the hallmarks of embrace-extend-extinguish rather than technical excellence.
by greenavocado
1/22/2026 at 10:26:58 PM
Note the language: "You're not creating broken files—you're creating files that are ahead of their time." Imagine a sales meeting where someone pitched that to you. They have to be joking, right?
I have no objection to adding Brotli, but I hope they take compatibility more seriously. You may need readers to deploy it for a long time - ten years? - before you deploy it in PDF creation tools.
by mmooss
1/22/2026 at 10:35:23 PM
(sarcasm warning...) You're absolutely right! It's not just an inaccurate slogan—it's a patronizing use of artificial intelligence. What you're describing is not just true, it's precise.
by nxobject
1/23/2026 at 8:09:54 PM
I don't understand your point...
by mmooss
1/23/2026 at 11:15:40 PM
The commenter is making a joke about the style of delivery of the sentence you quoted, because the style is characteristic of AI-generated writing. [1]
[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
by eventualcomp
1/23/2026 at 6:21:14 PM
> on a read-many format like pdf zstd’s decompression speed is a much better fit.
Brotli decompression is already plenty fast. For PDFs, zstd’s advantage in decompression speed is academic.
by spider-mario
1/22/2026 at 8:07:16 PM
Well, besides speed, compression algorithms need to be compared in terms of compression too, you know. Here's a discussion by brotli's and zstd's staff:
by deepsun