3/24/2026 at 12:30:40 PM
User here, who also acts as Level 2 support for storage.
The article contains some solid logic plus an assumption that I disagree with.
Solid logic: you should prefer zswap if you have a device that can be used for swap.
Solid logic: zram + other swap = bad due to LRU inversion (zram becomes a dead weight in memory).
Advice that matches my observations: zram works best when paired with a user-space OOM killer.
Bold assumption: everybody who has an SSD has a device that can be used for swap.
The assumption is simply false, and not due to the "SSD wear" argument. Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells. This is much worse than any HDD. If a DRAMless consumer SSD is all that you have, better use zram.
by patrakov
3/24/2026 at 12:41:14 PM
Thank you for reading and for your critique! What you're describing is definitely a real problem, but I'd push back slightly and suggest the outcome is usually the inverse of what you might expect.
One of the counterintuitive things here is that _having_ disk swap can actually _decrease_ disk I/O. In fact, this is so important to us on some storage tiers that it is essential to how we operate. Now, that sounds like patent nonsense, but hear me out :-)
With a zram-only setup, once zram is full, there is nowhere for anonymous pages to go. The kernel can't evict them to disk because there is no disk swap, so when it needs to free memory it has no choice but to reclaim file cache instead. If you don't allow the kernel to choose which page is colder across both anonymous and file-backed memory, and instead force it to reclaim only file caches, you will inevitably evict file pages that you actually needed resident to avoid disk activity, and those reads and writes hit the same slow DRAMless SSD you were trying to protect.
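For concreteness, a minimal sketch of the swap-backed setup I mean (the knob values here are illustrative, and the device name is a placeholder):
  # zswap as a compressed cache in front of a real swap device
  echo 1    > /sys/module/zswap/parameters/enabled
  echo zstd > /sys/module/zswap/parameters/compressor        # or lz4/lzo
  echo 20   > /sys/module/zswap/parameters/max_pool_percent  # cap the pool at 20% of RAM
  swapon /dev/sdXn                                           # placeholder backing swap partition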
In the article I mentioned that in some cases enabling zswap reduced disk writes by up to 25% compared to having no swap at all. Of course, the exact numbers will vary across workloads, but the direction holds for most workloads that accumulate cold anonymous pages over time, and we've seen it hold in constrained environments like BMCs as well as on servers, desktops, VR headsets, etc.
So, counter-intuitively, it may well be the case that for you, with an appropriately sized swap device, zswap reduces disk I/O rather than increasing it. If that's not the case, that's exactly the kind of real-world data that helps us improve things on the mm side, and we'd love to hear about it :-)
by cdown
3/24/2026 at 1:37:31 PM
1. Thanks for partially (in paragraph 4 but not paragraph 5) preempting the obvious objection. Distinguishing between disk reads and writes is very important for consumer SSDs, and you quoted exactly the right metric in paragraph 4: reduction of writes, almost regardless of the total I/O. Reads without writes are tolerable. Writes stall everything badly.
2. The comparison in paragraph 4 is between no-swap and zswap, and the results are plausible. But the relevant comparison here is a three-way one, between no-swap, zram, and zswap.
3. It's important to tune earlyoom "properly" when using zram as the only swap. Setting the "-m" argument too low causes earlyoom to miss obvious overloads that thrash the disk through the page cache and memory-mapped files. On the other hand, I could not find the right balance between unexpected OOM kills and missed brownouts, simply because with earlyoom the usage levels of RAM and zram-based swap are the only signals available for the decision. Perhaps systemd-oomd will fare better. The article does mention the need to tune the userspace OOM killer to an uncomfortable degree.
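For illustration, the kind of invocation I mean (the thresholds are illustrative, not a recommendation):
  # act when available RAM and free swap both drop below 10%
  earlyoom -m 10 -s 10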
I have already tried zswap with a swap file on a bad SSD, but, admittedly, not together with earlyoom. With an SSD that cannot sustain even 10 MB/s of synchronous writes, it browns out, while zram + earlyoom can be tuned not to brown out (at the expense of OOM kills on a subjectively perfectly well-performing system). I will try backing-store-less zswap when it's ready.
And I agree that, on an enterprise SSD like the Micron 7450 PRO, zswap is the way to go - and I doubt that Meta uses consumer SSDs.
by patrakov
3/24/2026 at 3:42:33 PM
It's very rare for disk reads to hang your UI (you would need to be running blocking operations in the UI thread).
But swap with high latency will occasionally hang the interface, and with it any means of freeing memory manually.
by nextaccountic
3/24/2026 at 1:56:40 PM
Counterargument: you can mostly disable zswap writeback, so it will only use the swap partition when hibernating [1].
[1] https://wiki.archlinux.org/title/Power_management/Suspend_an...
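On kernels that support per-cgroup control of zswap writeback (6.8+, if I remember correctly), a sketch of what that looks like, assuming cgroup v2 and a placeholder group path:
  # disable zswap writeback to the backing device for everything in <group>
  echo 0 > /sys/fs/cgroup/<group>/memory.zswap.writeback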
by bilegeek
3/25/2026 at 12:44:34 AM
> The assumption is simply false, and not due to the "SSD wear" argument. Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells. This is much worse than any HDD. If a DRAMless consumer SSD is all that you have, better use zram.
Do note that DRAMless is much less of an issue on NVMe. NVMe can use the Host Memory Buffer to keep its mapping tables in system RAM, which is still orders of magnitude faster than relying on the NAND.
DRAMless is strictly worse in every way on SATA, where you really don't want it if you can help it; on NVMe, the difference is more a marker of a lower-quality versus a higher-quality drive. Having DRAM is a good indicator of a drive being good, since a manufacturer is unlikely to pair it with a slow NAND and controller, but lacking it doesn't necessarily mean a drive will perform badly. Across generations, a newer DRAMless drive often ends up performing better than an older drive with DRAM, even in loaded scenarios.
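A quick way to check whether a given NVMe drive actually requests a Host Memory Buffer (assuming nvme-cli is installed; the device name is a placeholder):
  # nonzero HMPRE means the drive asks the host for an HMB
  nvme id-ctrl /dev/nvme0 | grep -i hmpre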
by Numerlor
3/24/2026 at 11:16:36 PM
> The assumption is simply false, and not due to the "SSD wear" argument. Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells.
Do you know to what extent this can be mitigated by overprovisioning? Like only partitioning, say, 50% of the drive and leaving the rest free for the controller as "scratch space"?
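Something along these lines, I mean (a sketch; blkdiscard destroys all data, and the device name is a placeholder):
  # TRIM the whole device first so the controller knows the unpartitioned space is free
  blkdiscard /dev/sdX
  parted -s /dev/sdX mklabel gpt mkpart primary 0% 50%   # leave the other half unpartitioned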
by pamcake
3/24/2026 at 3:25:55 PM
> Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells.
Is there an experiment you'd recommend to reliably show this behavior on such an SSD (or, ideally, to become confident a given SSD is unaffected)? Is it as simple as writing flat-out for, say, 10 minutes with O_DIRECT so you can easily measure the latency of individual writes? Do you need a certain level of concurrency, or a mixed read/write load? Repeated writes to a small region vs. writes to a large region (or maybe, given remapping, that doesn't matter)? Is this a one-liner with `fio`? Does it depend on longer-term state, such as how much of the SSD's capacity has been written and not TRIMed?
Also, what could one do in advance to know if they're about to purchase such an SSD? You mentioned one affected model. You mentioned DRAMless too, but do consumer SSD spec sheets generally say how much DRAM (if any) the devices have? Maybe there are some known unaffected consumer models? It'd be a shame to jump to enterprise prices to avoid this if that's not necessary.
I have a few consumer SSDs around that I've never really pushed; it'd be interesting to see if they have this behavior.
by scottlamb
3/24/2026 at 4:15:22 PM
> Also, what could one do in advance to know if they're about to purchase such an SSD? You mentioned one affected model.
Typically QLC is significantly worse at this than TLC, since the "real" write speed is very low. In my experience, any QLC drive is very susceptible to long pauses in write-heavy scenarios.
It does depend on the controller though. As an example, check out the sustained write benchmark graph here [1]: you can see that a number of models start this oscillating pattern after exhausting the pseudo-SLC buffer, indicating the controller is taking a time-out to rearrange things in the background. Others do it too, but more irregularly.
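If you want to reproduce that graph shape yourself, a sustained-write fio run along these lines should do it (the parameters are illustrative; size it well past the drive's pSLC cache):
  fio --name=sustained --rw=write --bs=1M --size=64G --direct=1 \
      --ioengine=libaio --iodepth=8 --filename=fio.file \
      --write_bw_log=sustained --log_avg_msec=1000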
> You mentioned DRAMless too, but do consumer SSD spec sheets generally say how much DRAM (if any) the devices have?
I rely on TechPowerUp; as an example, compare the Samsung 970 Evo [2] to the 990 Evo [3] under the "DRAM Cache" section.
[1]: https://www.tomshardware.com/pc-components/ssds/samsung-990-... (second image in IOMeter graph)
[2]: https://www.techpowerup.com/ssd-specs/samsung-970-evo-1-tb.d...
[3]: https://www.techpowerup.com/ssd-specs/samsung-990-evo-plus-1...
by magicalhippo
3/25/2026 at 12:48:32 AM
> Is there an experiment you'd recommend to reliably show this behavior on such a SSD?
  fio --name 4k-write --rw=write --bs=4k --size=1G --filename=fio.file --ioengine=libaio --sync=1 --time_based --runtime=60 --write_iops_log=ssd --log_avg_msec=1000 --randrepeat=0 --refill_buffers=1
Then examine ssd_iops.1.log.
Results from Apacer AS350 1TB: https://pastebin.com/F6pr5g29 - the first field is the timestamp in milliseconds, the second one is the number of write IOs completed since the previous line.
EDIT: I was told that the test above is invalid and that I should add --direct=1. OK, here is the new log, showing the same: https://pastebin.com/Wyw6r9TC - note that some timestamps are completely missing, indicating that the SSD performed zero IOs in that second.
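A quick way to spot those gaps, assuming the default comma-separated log format (time in ms in the first field, roughly one entry per second with --log_avg_msec=1000):
  # print any point where consecutive log entries are more than 2 s apart
  awk -F, 'NR>1 && $1-prev>2000 {print "stall around " prev " ms"} {prev=$1}' ssd_iops.1.log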
You may want to repeat the experiment a few times.
by patrakov
3/24/2026 at 3:53:43 PM
> Many consumer SSDs ... under synchronous writes, will regularly produce latency spikes of 10 seconds or more
Surely "regularly" is a significant overstatement. Most people have practically never seen this failure mode. And if it only occurs under a heavy write workload, that's not something that's supposed to happen purely as a result of swapping.
by zozbot234
3/24/2026 at 6:40:06 PM
Online-upgrading a rolling distro with a browser running is enough to see it happen regularly.
by seba_dos1
3/24/2026 at 8:53:34 PM
You're gambling on consumer SSD firmware not dumping the FTL and going AWOL when queued swap writes pile up under memory pressure. You may never see a 10-second stall in normal browsing, but add write-heavy Chrome churn or a VM and that "never happens" claim gets shaky fast.
It is workload dependent.
by hrmtst93837
3/25/2026 at 6:24:11 AM
LLM comment history.
by AlexeyBelov
3/25/2026 at 8:40:52 AM
You're gambling.by hrmtst93837
3/24/2026 at 2:15:28 PM
At this point, just throw your shitty SSD in the garbage bin^W^W USB box and buy a proper one. OOMing would always cost you more.
And if you still need to use a shitty SSD, then just increase your swap size dramatically, giving the drive breathing room and implicitly overprovisioning it.
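For instance, a sketch of creating a big swap file (the size is illustrative; fallocate works for swap files on ext4, but not on all filesystems):
  fallocate -l 16G /swapfile
  chmod 600 /swapfile
  mkswap /swapfile && swapon /swapfile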
by justsomehnguy