alt.hn

4/10/2026 at 1:06:58 PM

Jennifer Aniston and Friends Cost Us 377GB and Broke Ext4 Hardlinks

https://blog.discourse.org/2026/04/how-jennifer-aniston-and-friends-cost-us-377gb-and-broke-ext4-hardlinks/

by speckx

4/10/2026 at 1:55:05 PM

In short: their deduplication effort ran into the per-inode hardlink limit, and they found a workaround that is portable across different filesystems.

by replooda

4/10/2026 at 2:21:42 PM

The real problem is they aren't deduplicating at the filesystem level like sane people do.

by UltraSane

4/10/2026 at 3:42:35 PM

From the article:

> [W]e shipped an optimization. Detect duplicate files by their content hash, use hardlinks instead of downloading each copy.
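The optimization quoted above can be sketched in a few lines of shell (a hedged illustration only, not Discourse's actual code): hash each file, and when two files hash the same, keep one copy and replace the other with a hardlink.

```shell
set -e
d=$(mktemp -d)
printf 'same content' > "$d/a.txt"
printf 'same content' > "$d/b.txt"

# Detect duplicates by content hash
ha=$(sha256sum "$d/a.txt" | cut -d' ' -f1)
hb=$(sha256sum "$d/b.txt" | cut -d' ' -f1)

if [ "$ha" = "$hb" ]; then
    ln -f "$d/a.txt" "$d/b.txt"   # b.txt now shares a.txt's inode
fi

nlinks=$(stat -c %h "$d/a.txt")
echo "$nlinks"   # link count is now 2
```

Each deduplicated copy consumes one entry in the inode's link count, which is where the per-inode limit from the article comes in.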

by otterley

4/10/2026 at 3:54:16 PM

I meant TRANSPARENT filesystem-level dedupe. They are doing it at the application level. Filesystem-level dedupe makes it impossible to store the same file more than once and doesn't consume hardlinks for the references. It is really awesome.

by UltraSane

4/10/2026 at 3:59:33 PM

Filesystem/file level dedupe is for suckers. =D

If the greatest filesystem in the world were a living being, it would be our God. That filesystem, of course, is ZFS.

Handles this correctly:

https://www.truenas.com/docs/references/zfsdeduplication/

by mmh0000

4/10/2026 at 4:01:37 PM

I was talking about block level dedupe.

by UltraSane

4/10/2026 at 4:08:14 PM

I thought you might be.

I just wanted to mention ZFS.

Have I mentioned how great ZFS is yet?

by mmh0000

4/10/2026 at 6:19:58 PM

ZFS is great! However, it's too complicated for most Linux server use cases (especially with just one block device attached); it's not the default root filesystem; and it's not supported on at least one major enterprise Linux distro family.

by otterley

4/10/2026 at 8:07:56 PM

Filesystem dedupe is expensive because it requires another hash calculation that can't be shared with application-level hashing, is a relatively rare filesystem feature, doesn't play nicely with backups (files get re-duplicated in the backup), and doesn't scale across boxes.

A simpler solution is application-level dedupe that doesn't require filesystem-specific features. Simple scales and wins, and it plays nicely with backups.

Hash = sha256 of the file, and the absolute filename = {{aa}}/{{bb}}/{{cc}}/{{d}}, where

aa = the two most significant hex digits of the hash

bb = the next two hex digits

cc = the next two hex digits after that

d = the remaining hex digits
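The fan-out layout described above can be sketched like this (a minimal illustration; the `'hello'` input is just a stand-in for file contents):

```shell
# Hash the content, then split the hex digest into aa/bb/cc/d path components
h=$(printf 'hello' | sha256sum | cut -d' ' -f1)
aa=${h:0:2}     # two most significant hex digits
bb=${h:2:2}     # next two
cc=${h:4:2}     # next two after that
d=${h:6}        # everything remaining
path="$aa/$bb/$cc/$d"
echo "$path"
```

The two-digit fan-out keeps any single directory to at most 256 subdirectories, which avoids huge flat directories on filesystems that handle them poorly.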

by burnt-resistor

4/10/2026 at 11:28:54 PM

For ZFS, at least, `zfs send` is the backup solution. And it performs incremental backups with the `-i` argument.
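For reference, a minimal `zfs send` incremental workflow looks roughly like this (pool and dataset names are placeholders; assumes ZFS is installed and the datasets exist):

```shell
# Placeholder pool/dataset names; snapshots must exist before sending.
zfs snapshot tank/data@monday
zfs snapshot tank/data@tuesday

# Full stream of the first snapshot, then an incremental (-i) stream
# containing only the blocks that changed between the two snapshots:
zfs send tank/data@monday | zfs receive backup/data
zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data
```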

by otterley

4/11/2026 at 1:58:01 AM

zfs send is really awesome when combined with dedup and incremental sends.

by UltraSane

4/10/2026 at 8:23:44 PM

All good backup software should be able to do deduped incremental backups at the block level. I'm used to Veeam and Commvault.

by UltraSane

4/10/2026 at 2:00:28 PM

We were on a break...of your filesystem!

by dj_rock

4/10/2026 at 2:05:08 PM

And I thought this was a reference to a Win95 problem https://www.slashgear.com/1414245/jennifer-aniston-matthew-p...

by uticus

4/10/2026 at 2:44:39 PM

Yeah, block-level dedupe has been an industry standard for decades. Tracking file hashes? Why?

And I see above that this is a self-hosted platform, and I still don't get it. I was running terabytes of ZFS with dedup=on on cheap Supermicro gear in 2012.

by mingus88

4/10/2026 at 3:49:56 PM

File hashes are great to get two systems to work together to dedupe themselves. I have a Windows backup that sends hashes to a backup server, so we don't back up crud we already have.

by zulux

4/11/2026 at 12:16:24 AM

Completely Claude-written, FWIW. I recognise the style.

by niobe

4/10/2026 at 2:50:37 PM

The Problem. The fix. The Limit.

Is it just me or is everybody else just as fed up with always the same AI tropes?

I've reached a point where I just close the tab the moment I read a headline "The problem". At least use tropes.fyi please

by trixn86

4/10/2026 at 4:25:58 PM

Doesn’t read like AI to me

by colejohnson66

4/10/2026 at 5:40:38 PM

Let that sink in.

by snickerbockers

4/10/2026 at 3:41:21 PM

Another reason to use XFS -- it doesn't have per-inode hard link limits.

(Some say ZFS as well, but it's not nearly as easy to use, and its license is still not GPL-friendly.)
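For context, the limit being discussed is the inode's link count: ext4 caps it (at 65,000 links per inode, after which `link()` fails with EMLINK), while XFS has no such fixed cap. A quick illustration of link counts, with throwaway paths:

```shell
set -e
d=$(mktemp -d)
touch "$d/original"
ln "$d/original" "$d/link1"    # hardlink: same inode, additional name
ln "$d/original" "$d/link2"
nlinks=$(stat -c %h "$d/original")
echo "$nlinks"   # 3: the original name plus two hardlinks
```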

by otterley

4/10/2026 at 8:11:45 PM

xfs on mdraid is what I use on my homelab NAS across several giant RAID arrays. While it lacks some integrity and CoW features, it's really, really stable. I had ZFS-on-Linux troubles that the maintainers shrugged off, which required transferring everything to another volume, so I won't ever use or recommend ZFS unless it's Sun/Oracle ZFS.

by burnt-resistor

4/10/2026 at 2:02:28 PM

As is always the case, short vs long term... but I think I'd put effort into migrating to a filesystem that is aware of duplication instead of trying to recreate one with links [while retaining duplicates, just fewer].

Effectiveness is debatable; this approach still has duplication. An insignificant amount, I'll admit. The filesystem handling this at the block level is probably less problematic, less prone to rework, and more efficient.

edit: Eh, ignore me. I see this is preparing for [whatever filesystem hosts chose] thanks to 'ameliaquining' below. Originally thought this was all Discourse-proper, processing data they had.

by bravetraveler

4/10/2026 at 2:28:40 PM

Discourse is self-hostable; they can't require their users to use a filesystem that supports deduplication. (Or, well, they could, but it would greatly complicate installation and maintenance and whatnot, and also there would need to be some kind of story for existing installations.)

by ameliaquining

4/10/2026 at 2:32:06 PM

Fair, I am/was confused by the hosting model and presentation. This is a nice User-preparation/consideration, I guess. I still maintain a backup filesystem unaware of duplication at the block level is a mistake.

I completely overlooked the shipping-of-tarballs. Links make sense, here. I had 'unpacked' and relatively-local data in mind. Absolutely would not go as far to suggest their scheme pick up 'zfs {send,receive}'/equivalent, lol.

by bravetraveler

4/10/2026 at 7:03:57 PM

They do also offer it as multi-tenant hosted SaaS, and the post is about their experience running backups on that. But whatever solution they use has to also work with the self-hosted version, which imposes some constraints.

by ameliaquining

4/10/2026 at 7:27:28 PM

Sweet

by bravetraveler

4/10/2026 at 2:15:56 PM

[dead]

by mikehotel

4/10/2026 at 2:20:23 PM

This makes them look rather incompetent. Storing the exact same file 246,173 times is just stupid. Dedupe at the filesystem level and make your life easier.

by UltraSane