4/6/2026 at 5:47:48 AM
> This is not a bug report. [...] The goal is constructive, not a complaint.

Er, I appreciate trying to be constructive, but in what possible situation is it not a bug that a power cycle can lose the pool? And if it's not technically a "bug" because BTRFS officially specifies that it can fail like that, why is that not in big bold text at the start of any docs on it? 'Cuz that's kind of a big deal for users to know.
EDIT: From the longer write-up:
> Initial damage. A hard power cycle interrupted a commit at generation 18958 to 18959. Both DUP copies of several metadata blocks were written with inconsistent parent and child generations.
Did the author disable safety mechanisms for that to happen? I'm coming from being more familiar with ZFS, but I would have expected BTRFS to also use a CoW model where inconsistent metadata blocks couldn't survive a crash: you'd just revert to the last fully good commit. If it does that by default but there's a way to disable that protection in the name of improving performance, that would significantly change my view of this whole thing.
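The generation mismatch the write-up describes can be pictured with a toy model: each parent block records the transaction generation at which it last wrote a child pointer, and a torn commit can leave a child whose recorded generation disagrees with its parent's. This is an illustrative sketch only, not btrfs code; `check_generations` is a made-up helper:

```python
# Toy model (NOT btrfs code): a parent block committed at some
# generation points at children; a child rewritten by a later,
# interrupted commit carries a generation newer than its parent's.
def check_generations(parent_gen, child_gens):
    """Return indices of children whose generation exceeds the parent's."""
    return [i for i, g in enumerate(child_gens) if g > parent_gen]

# Parent committed at gen 18958; one child was already rewritten for
# gen 18959 when power was cut -> an inconsistent parent/child pair.
print(check_generations(18958, [18958, 18959, 18950]))  # [1]
```

In a strict CoW design such a mismatch should only ever be transient: the superblock would still point at the old, consistent gen-18958 tree until the new one was fully on disk.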
by yjftsjthsd-h
4/6/2026 at 6:29:13 AM
As far as I can see, no, the author disabled nothing of the sort in what he documented.

I suspect that the author's intent is less "I do not view this as a bug" and more "I do not think it's useful to get into angry debates over whether something is a bug". I do not know whether this is a common thing in btrfs discussions, but I have certainly seen debates to that effect elsewhere.
(My personal favorite remains "it's not a data loss bug if someone could technically theoretically write something to recover the data". Perhaps, technically, that's true, but if nobody is writing such a tool, nobody is going to care about the semantics there.)
by rincebrain
4/6/2026 at 7:04:24 AM
> I suspect that the author's intent is less "I do not view this as a bug" and more "I do not think it's useful to get into angry debates over whether something is a bug".

Agreed, and I appreciate the attempt to channel things into a productive conversation.
by yjftsjthsd-h
4/6/2026 at 8:22:03 AM
btrfs's reputation is not great in this regard.
by rcxdude
4/6/2026 at 11:48:21 AM
As far as I understand, single device and RAID1 is solid, but as soon as you want to do RAID1+0 or RAID5/6 you're entering dangerous territory with BTRFS.
by stingraycharles
4/6/2026 at 5:45:32 PM
I had a metadata corruption in metadata raid1c3 (raid1, 3 copies) over 4 disks. It happened after an unplanned power loss during a simulated disk failure replacement. Since the manual cleanup of the filesystem metadata (list all files, note the I/O errors, delete the files that errored), the btrfs kernel driver segfaults in kernel space on any scrub or device replacement attempt.

Honestly, the code of btrfs is a bit scary to read too. I have lost all trust in this filesystem.
Too bad, because btrfs has pretty compelling features.
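The manual cleanup described above (walk the tree, try to read every file, delete the ones that return I/O errors) could be sketched roughly as follows. This is an assumed reconstruction, not the commenter's actual commands; `prune_io_errors` is a hypothetical helper, and it presumes the filesystem is still mountable read-write:

```shell
#!/bin/sh
# Hypothetical sketch of the described cleanup: a file whose read
# fails is assumed to be backed by damaged metadata/data and removed.
prune_io_errors() {
  # $1: mountpoint of the damaged filesystem
  find "$1" -type f | while read -r f; do
    # Reading the whole file surfaces I/O errors from bad extents.
    if ! cat "$f" > /dev/null 2>&1; then
      echo "I/O error, deleting: $f"
      rm -f -- "$f"
    fi
  done
}
```

Of course, as the comment notes, this only cleans up the visible file tree; it did nothing for the underlying metadata inconsistency that later crashed scrub and replace.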
by bombela
4/6/2026 at 7:03:02 AM
Unless I missed it, the writeup never identifies a causal bug, only things that made recovery harder.
by Retr0id