2/21/2026 at 10:09:15 AM
> etcd is a strongly consistent, distributed key-value store, and that consistency comes at a cost: it is extraordinarily sensitive to I/O latency. etcd uses a write-ahead log and relies on fsync calls completing within tight time windows. When storage is slow, even intermittently, etcd starts missing its internal heartbeat and election deadlines. Leader elections fail. The cluster loses quorum. Pods that depend on the API server start dying.

This seems REALLY bad for reliability? I guess the idea is that it's better to have things not respond to requests than to lose data, but the outcome described in the article is pretty nasty.
It seems like the solution they arrived at was to "fix" this at the filesystem level by making fsync no longer deliver reliability, which seems like a pretty clumsy solution. I'm surprised they didn't find some way to make etcd more tolerant of slow storage. I'd be wary of turning off filesystem level reliability at the risk of later running postgres or something on the same system and experiencing data loss when what I wanted was just for kubernetes or whatever to stop falling over.
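(etcd's docs suggest benchmarking your disk's sync latency before trusting it with the write-ahead log. A rough self-contained illustration of that measurement, not etcd's own tooling, might look like:)

```python
import os
import statistics
import tempfile
import time


def measure_fsync_latency(samples=50, block=b"x" * 2048):
    """Append small blocks to a temp file and time each fsync,
    roughly mimicking etcd's write-ahead-log pattern."""
    latencies_ms = []
    with tempfile.NamedTemporaryFile() as f:
        fd = f.fileno()
        for _ in range(samples):
            os.write(fd, block)
            start = time.perf_counter()
            os.fsync(fd)  # force the block to stable storage
            latencies_ms.append((time.perf_counter() - start) * 1000)
    return latencies_ms


if __name__ == "__main__":
    lat = measure_fsync_latency()
    p99 = statistics.quantiles(lat, n=100)[98]
    # etcd's hardware guidance wants tail fsync latency in the
    # single-digit-millisecond range for production clusters
    print(f"p99 fsync latency: {p99:.2f} ms")
```

If the tail latency here regularly exceeds etcd's heartbeat interval, the failures described above become likely.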
by kg
2/21/2026 at 1:37:05 PM
> This seems REALLY bad for reliability? I guess the idea is that it's better to have things not respond to requests than to lose data, but the outcome described in the article is pretty nasty.

It is. Because if it really starts to crap out above 100ms, even a small hiccup in the network-attached storage of the VM it is running on can trigger it.
But it's not as simple as that: if you have multiple nodes and one starts to lag, kicking it out is the only way to keep the latency manageable.
A better solution would be to keep a cluster-wide disk latency average and only kick a node that is both slow and much slower than the other nodes; that would also auto-tune to slow setups, like someone running it on some spare HDDs in a homelab.
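(A sketch of that policy, assuming each node reports a recent average disk latency; the names and thresholds here are made up for illustration:)

```python
import statistics


def nodes_to_kick(latency_ms_by_node, abs_floor_ms=100.0, rel_factor=3.0):
    """Flag nodes whose disk latency is both above an absolute floor AND
    much worse than the cluster median. A uniformly slow homelab cluster
    (e.g. everything on spare HDDs) then kicks nobody, because no node
    stands out from the median."""
    median = statistics.median(latency_ms_by_node.values())
    return [
        node for node, lat in latency_ms_by_node.items()
        if lat > abs_floor_ms and lat > rel_factor * median
    ]
```

For example, `nodes_to_kick({"a": 5.0, "b": 6.0, "c": 400.0})` returns `["c"]`, while a uniformly slow `{"a": 150.0, "b": 160.0, "c": 170.0}` returns `[]`.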
by PunchyHamster
2/21/2026 at 10:44:14 AM
Yes, wouldn't their fix likely make etcd not consistent anymore, since there's no guarantee that the data was persisted on disk?
by denysvitali
2/21/2026 at 11:41:07 AM
Yes, but they wrote it's for a demo, and it's fine if they lose the last few seconds in the event of an unexpected system shutdown.

And also, in prod, etcd recommends you run with SSDs to minimize the variance of fsync/write latencies.
by weiliddat
2/21/2026 at 12:31:28 PM
Getting into an inconsistent state does not just mean “losing a few seconds”.
by ahoka
2/22/2026 at 4:00:26 AM
How would you get into an inconsistent state based on an fsync change?

Edit: I meant, what sequence of events would cause etcd to go into an inconsistent state when fsync is working this way?
by weiliddat
2/22/2026 at 8:07:19 AM
Data corruption, since fsync on the host is essentially a no-op. The VM's filesystem thinks the data is persisted on disk, but it's not, and the pod running on the VM thinks the same …
by _ananos_
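(A toy model of that failure mode, purely illustrative, not etcd code: writes land in a volatile cache, fsync is supposed to move them to stable media, and a crash wipes whatever was still in the cache.)

```python
class FakeDisk:
    """Toy model of a disk behind a filesystem: writes land in a
    volatile cache; fsync moves them to durable storage; a crash
    (power loss) wipes the cache."""

    def __init__(self, honor_fsync=True):
        self.honor_fsync = honor_fsync
        self.cache = []      # data the filesystem has accepted
        self.durable = []    # data actually on stable media

    def write(self, record):
        self.cache.append(record)

    def fsync(self):
        if self.honor_fsync:
            self.durable.extend(self.cache)
            self.cache.clear()
        # else: no-op -- the caller believes the data is durable,
        # but nothing has reached stable media

    def crash(self):
        self.cache.clear()   # power loss: the volatile cache is gone
        return self.durable


real = FakeDisk(honor_fsync=True)
real.write("raft entry 42")
real.fsync()                 # entry reaches stable media

noop = FakeDisk(honor_fsync=False)
noop.write("raft entry 42")
noop.fsync()                 # etcd now acks the write to its peers

print(real.crash())  # ['raft entry 42']
print(noop.crash())  # [] -- an acked entry is lost; replicas disagree
```

The second case is the inconsistency: etcd has told the rest of the cluster the entry is durable when it was not, so after the crash this node's log contradicts what its peers were promised.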
2/21/2026 at 10:59:42 AM
Yes, they totally missed the point of the fsync...
by justincormack
2/21/2026 at 2:51:37 PM
Well, the actual issue (IMHO) is that this meta-orchestrator (karmada) needs quorum even for a single-node cluster.

The purpose of the demo wasn't to show consistency, but to describe the policy-driven decision mechanism.
What hit us in the first place (and I think this is what we should fix) is the fact that a brand-new NUC-like machine, with a relatively new software stack for spawning VMs (incus / ZFS etc.), behaves so badly that it can produce such hiccups in disk I/O access...
by _ananos_
2/21/2026 at 6:49:20 PM
They used a desktop platform with an unknown SSD, and with ZFS. There's a chance that with at least a proper SSD they wouldn't even have gotten into trouble in the first place.
by justsomehnguy
2/21/2026 at 2:53:29 PM
Well, indeed -- we should have found the proper parameters to make etcd wait for quorum (again, I'm stressing that it's a single-node cluster -- I'm banging my head trying to understand who else needs to coordinate with the single node ...)
by _ananos_
2/21/2026 at 1:50:05 PM
That’s a design issue in etcd.
by api
2/21/2026 at 1:38:50 PM
CAP theorem goes brrr. This is CP. ZooKeeper gives you AP. Postgres (k3s/kine translation layer) gives you roughly CA, and CP-ish with synchronous streaming replication.

If you run this on single-tenant boxes that are set up carefully (ideally not multi-tenant vCPUs, low network RTT, fast CPU, low swappiness, `nice` it to high I/O priority, `performance` over `ondemand` governor, XFS), it scales really nicely and you shouldn't run into this.
So there are cases where you actually do want this. A lot of k8s setups would be better served by just hitting Postgres, sure, and don't need the big fancy toy with lots of sharp edges. It has a raison d'être, though. Also, you can just boot slow nodes and run a lot of them.
by landl0rd
2/21/2026 at 1:45:10 PM
[flagged]
by fcantournet