alt.hn

12/12/2025 at 1:16:49 AM

Disks Lie: Building a WAL that actually survives

https://blog.canoozie.net/disks-lie-building-a-wal-that-actually-survives/

by jtregunna

12/12/2025 at 4:12:22 PM

I’ve seen disks do off-track writes, dropped writes due to write-channel failures, and dropped writes because the media had literally been scrubbed off the platter. You need LBA-seeded CRCs to catch these failures, along with a number of other checks. I get excited when people write about this in the industry. These are extremely interesting failure modes that I’ve been lucky enough to be exposed to, at volume, for a large fraction of my career.
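A minimal sketch of the LBA-seeded CRC idea (the function names and sector layout here are illustrative, not from any particular drive stack):

```python
import struct
import zlib

def seal(lba: int, payload: bytes) -> bytes:
    """Append a CRC32 seeded with the sector's target LBA."""
    crc = zlib.crc32(struct.pack("<Q", lba) + payload)
    return payload + struct.pack("<I", crc)

def verify(lba: int, sector: bytes) -> bool:
    """Check the sector is intact AND actually belongs at this LBA."""
    payload, stored = sector[:-4], struct.unpack("<I", sector[-4:])[0]
    return zlib.crc32(struct.pack("<Q", lba) + payload) == stored

# A CRC over the data alone would still pass after an off-track write:
# the bytes are valid, just in the wrong place. Seeding the CRC with
# the LBA makes the misdirected copy fail verification:
sector = seal(lba=100, payload=b"\xab" * 512)
assert verify(100, sector)       # right location: passes
assert not verify(101, sector)   # misdirected/off-track: fails
```

Because CRC32 detects any single-byte difference in its input, a sector read back from the wrong LBA can never pass the seeded check.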

by jmpman

12/14/2025 at 9:44:26 PM

People consistently underestimate the many ways in which storage can and will fail in the wild.

The most vexing storage failure is phantom writes. A disk read returns a "valid" page, just not the last written/fsync-ed version of that page. Reliably detecting this case is very expensive, particularly on large storage volumes, so it is rarely done for storage where performance is paramount.

by jandrewrogers

12/14/2025 at 10:16:19 PM

Not that uncommon a failure mode for some SSDs; unclean shutdown is like a dice roll for some of them: maybe you get what you wrote five seconds ago, maybe you get a snapshot from a couple of hours ago.

by formerly_proven

12/14/2025 at 10:36:48 PM

Early SSDs were particularly prone to phantom writes due to firmware bugs. Still have scars from the many creative ways in which early SSDs would routinely fail.

by jandrewrogers

12/14/2025 at 10:50:51 PM

In college I had a 90GB OCZ Vertex, or maybe it was a Vertex 2.

It would suddenly become blank. You have an OS and some data today, and tomorrow you wake up and everything claims it is empty. It would still work, though. You could still install a new OS and keep going, and it would work until next time.

What a friendly surprise on exam week.

Sold it to a friend for really cheap with a warning about what had been happening. It surprise-wiped itself for him too.

by doubled112

12/14/2025 at 9:55:29 PM

I worked with a greybeard who instilled in me that, when we were about to do some RAID maintenance, we would always run sync twice. The second is to make sure it returns immediately. And I added a third for my own anxiety.

by kami23

12/14/2025 at 10:33:44 PM

You need to sync twice because Unix is dumb: "According to the standard specification (e.g., POSIX.1-2001), sync() schedules the writes, but may return before the actual writing is done." https://man7.org/linux/man-pages/man2/sync.2.html
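The two idioms can be sketched side by side (Python on Linux; illustrative only):

```python
import os
import tempfile

# Whole-system: POSIX only requires sync() to *schedule* writeback,
# hence the traditional double invocation. (The man page linked above
# notes that Linux's sync() does in fact wait for completion.)
os.sync()
os.sync()

# Per-file: fsync() is specified to block until the file's data and
# metadata have reached the device, so one call suffices.
fd, path = tempfile.mkstemp()
os.write(fd, b"must survive a crash")
os.fsync(fd)   # does not return until the kernel reports durability
assert os.pread(fd, 20, 0) == b"must survive a crash"
os.close(fd)
os.unlink(path)
```

For a specific file you care about, the per-file fsync() is the well-specified tool; the double sync() is the portable whole-system incantation.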

by wmf

12/14/2025 at 11:05:03 PM

Then how do you know the writes are done after the second sync?

by amelius

12/14/2025 at 11:11:04 PM

AFAIK multiple syncs can't happen at the same time so the second sync implicitly waits for the first one to complete.

by wmf

12/14/2025 at 11:41:44 PM

If it was that simple, then why doesn't sync just do 2x sync internally?

by amelius

12/15/2025 at 12:59:31 AM

Why is creat() missing the e? Why does an FTP server connect back to the client?

by wmf

12/15/2025 at 12:37:02 AM

If I had to guess, it is just extra work to do it twice, and you may not need to wait for it in some use cases. The better way would be to add a flag or an alternative function that makes the sync a blocking operation in the first place.

by wakawaka28

12/14/2025 at 10:42:26 PM

> Unix is dumb

I don't know. Now async I/O is all the rage and that is the same idea.

by 1718627440

12/14/2025 at 10:56:45 PM

If they had a sync() system call and a wait_for_sync_to_finish() system call then you'd be right. But they didn't have those.

by wmf

12/14/2025 at 10:55:46 PM

The syscall is literally called "sync", though.

by marcosdumay

12/14/2025 at 10:58:31 PM

I think it is a way for the OS to shoehorn async I/O into synchronously written applications.

by 1718627440

12/14/2025 at 10:39:24 PM

  sync; sync; halt

by lowbloodsugar

12/14/2025 at 10:06:40 PM

it's not just a good idea for raid

by zabzonk

12/14/2025 at 10:24:35 PM

Oh definitely not, I do it on every system where I've needed things synced before I did something. We were just working at a place that had 2k+ physical servers with 88 drives each in RAID6, so that was our main concern back then.

I have been passing my anxieties about hard drives to junior engineers for a decade now.

by kami23

12/14/2025 at 10:54:32 PM

> Submit the write to the primary file

> Link fsync to that write (IOSQE_IO_LINK)

> The fsync's completion queue entry only arrives after the write completes

> Repeat for secondary file

Wait, so the OS can re-order the fsync() to happen before the write request it is supposed to be syncing? Is there a citation or link to some code for that? It seems too ridiculous to be real.

> O_DSYNC: Synchronous writes. Don't return from write() until the data is actually stable on the disk.

If you call fsync(), this isn't needed, correct? And if you use this, then fsync() isn't needed, right?

by n_u

12/14/2025 at 11:06:48 PM

> Wait, so the OS can re-order the fsync() to happen before the write request it is supposed to be syncing? Is there a citation or link to some code for that? It seems too ridiculous to be real.

This is an io_uring-specific thing. It doesn't guarantee any ordering between operations submitted at the same time, unless you explicitly ask it to with the `IOSQE_IO_LINK` they mentioned.

Otherwise it's as if you called write() from one thread and fsync() from another, before waiting for the write() call to return. That obviously defeats the point of using fsync() so you wouldn't do that.
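That two-threads analogy can be sketched directly (illustrative Python, not io_uring itself; the file and record are made up):

```python
import os
import tempfile
import threading

fd, path = tempfile.mkstemp()

# The unlinked case: write and fsync in flight concurrently, so the
# fsync may complete before the write it was meant to cover.
t_write = threading.Thread(target=lambda: os.pwrite(fd, b"wal record", 0))
t_fsync = threading.Thread(target=lambda: os.fsync(fd))  # may run first!
t_write.start(); t_fsync.start()
t_write.join(); t_fsync.join()
# The record may or may not have been durable when that fsync returned.

# IOSQE_IO_LINK is the moral equivalent of sequencing them instead:
os.pwrite(fd, b"wal record", 0)
os.fsync(fd)   # now its completion really means the record is durable
assert os.pread(fd, 10, 0) == b"wal record"
os.close(fd)
os.unlink(path)
```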

> If you call fsync(), [O_DSYNC] isn't needed correct? And if you use [O_DSYNC], then fsync() isn't needed right?

I believe you're right.

by scottlamb

12/14/2025 at 11:28:36 PM

I guess I'm a bit confused why the author recommends using this flag and fsync.

Related: I would think that grouping your writes and then fsyncing, rather than fsyncing every time, would be more efficient, but it looks like a previous commenter did some testing and that isn't always the case: https://news.ycombinator.com/item?id=15535814
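The grouping idea, sketched (illustrative; `commit_batch` is a made-up name):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()

def commit_batch(records):
    """Amortize one fsync over many records instead of one per record."""
    os.write(fd, b"".join(records))   # single write for the whole batch
    os.fsync(fd)                      # single durability barrier

commit_batch([b"rec1\n", b"rec2\n", b"rec3\n"])
assert os.pread(fd, 15, 0) == b"rec1\nrec2\nrec3\n"
os.close(fd)
os.unlink(path)
```

The trade-off is latency: the first record in a batch waits for the batch to fill (or a timer to fire) before it becomes durable.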

by n_u

12/15/2025 at 3:46:22 AM

I'm not sure there's any good reason. Other commenters mentioned AI tells. I wouldn't consider this article a trustworthy or primary source.

by scottlamb

12/15/2025 at 4:36:44 AM

Yeah, that seems reasonable. The article seems to mix fsync and O_DSYNC without discussing their relationship, which seems more like AI and less like a human who understands it.

It also seems that if you were using io_uring and used O_DSYNC, you wouldn't need IOSQE_IO_LINK, right?

Even if you were doing primary and secondary log file writes, they are to different files so it doesn't matter if they race.

by n_u

12/13/2025 at 10:58:33 AM

https://en.wikipedia.org/wiki/Data_Integrity_Field

This, along with RAID-1, is probably sufficient to catch the majority of errors. But realize that these are just probabilities: if the failure can happen on the first drive, it can also happen on the second. A Merkle tree is commonly used to protect against these scenarios as well.
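A toy sketch of the Merkle-tree check (illustrative): hash each replica's pages into a tree and compare roots, so corruption is detected without trusting either drive's own checksums.

```python
import hashlib

def merkle_root(pages):
    """Fold a list of pages into a single SHA-256 Merkle root."""
    level = [hashlib.sha256(p).digest() for p in pages]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

good = [b"page0", b"page1", b"page2", b"page3"]
bad  = [b"page0", b"page1", b"pXge2", b"page3"]  # silent corruption on one mirror
assert merkle_root(good) == merkle_root(list(good))  # identical replicas agree
assert merkle_root(good) != merkle_root(bad)         # corrupt replica stands out
```

Descending the tree level by level then pinpoints which page diverged, in O(log n) hash comparisons rather than a full scan.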

Notice that using something like RAID-5 can result in data corruption migrating throughout the stripe when certain write algorithms are used.

by jmpman

12/13/2025 at 11:07:51 AM

The paranoid would also follow the write with a read command, setting the SCSI FUA (forced unit access) bit, requiring the disk to read from the physical media and confirming the data is really written to that rotating rust. Trying to do something similar with SATA or NVMe drives might be more complicated, or maybe impossible. That’s the method to ensure your data is actually written to viable media and can subsequently be read.

by jmpman

12/14/2025 at 10:58:07 PM

Note that 99% of drives don't implement DIF.

by wmf

12/12/2025 at 7:22:43 AM

I thought an fsync on the containing directory of each of the logs was needed to ensure that the newly created files were durably present in the directories.

by compressedgas

12/12/2025 at 1:37:14 PM

Right, you do need to fsync when creating new files to ensure the directory entry is durable. However, WAL files are typically created once and then appended to for their lifetime, so the directory fsync is only needed at file creation time, not during normal operations.
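That creation-time sequence can be sketched as (Linux; `create_wal` is a made-up name):

```python
import os
import tempfile

def create_wal(path):
    """Create a WAL file durably: fsync the file, then its directory."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    os.fsync(fd)                                   # the (empty) file itself
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)                            # the directory entry
    finally:
        os.close(dirfd)
    return fd  # later appends only need fsync(fd), no directory fsync

fd = create_wal(os.path.join(tempfile.mkdtemp(), "wal.log"))
os.close(fd)
```

Without the directory fsync, a crash can leave a fully fsynced file that no directory entry points to.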

by jtregunna

12/12/2025 at 2:32:28 PM

> Conclusion

> A production-grade WAL isn't just code, it's a contract.

I hate that I'm now suspicious of this formulation.

by breakingcups

12/14/2025 at 9:58:19 PM

You’re not insane. This is definitely AI.

by nmilo

12/12/2025 at 3:23:38 PM

In what sense? The phrasing is just a generalization, production-grade anything needs consideration of the needs and goals of the project.

by jtregunna

12/12/2025 at 4:29:59 PM

“<x> isn’t just <y>, it’s <z>” is an AI smell.

by rogerrogerr

12/14/2025 at 11:31:08 PM

It is, but partly because it is a common form in the training data. LLM output seems to use the form more than people do, presumably either due to some bias in the training data (or the way it is tokenised) or due to other common token sequences leading into it (remember: it isn't an official acronym, but Glorified Predictive Text is an accurate description). So while it is a smell, it certainly isn't a reliable marker; there needs to be more evidence than that.

by dspillett

12/14/2025 at 11:21:55 PM

Wouldn't that just be because the construction is common in the training materials, which means it's a common construction in human writing?

by devman0

12/14/2025 at 11:26:24 PM

It must be, but any given article is unlikely to be the average of the training material, and thus has a different expected frequency of such a construction.

by 1718627440

12/14/2025 at 10:52:30 PM

This article is pretty low quality. It's an important and interesting topic and the article is mostly right but it's not clear enough to rely on.

The OS page cache is not a "problem"; it's a basic feature with well-documented properties that you need to learn if you want to persist data. The writing style seems off in general (e.g. "you're lying to yourself").

AFAIK fsync is the best practice, not O_DIRECT + O_DSYNC. The article mentions O_DSYNC in some places and fsync in others, which is confusing. You don't need both.
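The two mechanisms, sketched side by side (Linux; pick one, not both):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.log")

# Option A: buffered writes plus an explicit fsync at the commit point.
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"record\n")
os.fsync(fd)               # the one durability barrier
os.close(fd)

# Option B: O_DSYNC makes every write itself synchronous, so no fsync.
fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_DSYNC)
os.write(fd, b"record\n")  # returns only once the data is stable
os.close(fd)

with open(path, "rb") as f:
    assert f.read() == b"record\nrecord\n"
```

Option A lets you batch several writes per barrier; option B pays the device round-trip on every write, which is simpler but usually slower for a busy log.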

Personally I would prefer to use the filesystem (RAID or ditto) to handle latent sector errors (LSEs) rather than duplicating files at the app level. A case could be made for dual WALs if you don't know or control what filesystem will be used.

Due to the page cache, attempting to verify writes by reading the data back won't verify anything. Maaaybe this will work when using O_DIRECT.

by wmf

12/14/2025 at 11:06:07 PM

Deleting data from the disk is actually even harder.

by amelius

12/14/2025 at 11:47:21 PM

This looks AI-generated, including the linked code. That explains why the .zig-cache directory and the binary are checked into Git, why there's redundant commenting, and why the README has that bold, bullet-point-and-headers style that is typical of AI.

If you can't be bothered to write it, I can't be bothered to read it.

by henning

12/14/2025 at 11:54:49 PM

The front page this weekend has been full of this stuff. If there’s a hint of clickbait about the title, it’s almost a foregone conclusion you’ll see all the other LLM tics, too.

These do not make the writing better! They obscure whatever the insight is behind LinkedIn-engagement tricks and turns of phrase that obfuscate rather than clarify.

I’ll keep flagging and see if the community ends up agreeing with me, but this is making more and more of my hn experience disappointing instead of delightful.

by twoodfin