alt.hn

3/24/2026 at 10:39:21 AM

Debunking Zswap and Zram Myths

https://chrisdown.name/2026/03/24/zswap-vs-zram-when-to-use-what.html

by javierhonduco

3/24/2026 at 12:30:40 PM

User here; I also do Level 2 support for storage.

The article contains some solid logic plus an assumption that I disagree with.

Solid logic: you should prefer zswap if you have a device that can be used for swap.

Solid logic: zram + other swap = bad due to LRU inversion (zram becomes a dead weight in memory).

Advice that matches my observations: zram works best when paired with a user-space OOM killer.

Bold assumption: everybody who has an SSD has a device that can be used for swap.

The assumption is simply false, and not due to the "SSD wear" argument. Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells. This is much worse than any HDD. If a DRAMless consumer SSD is all that you have, better use zram.

by patrakov

3/24/2026 at 12:41:14 PM

Thank you for reading and your critique! What you're describing is definitely a real problem, but I'd challenge slightly and suggest the outcome is usually the inverse of what you might expect.

One of the counterintuitive things here is that _having_ disk swap can actually _decrease_ disk I/O. In fact this is so important to us on some storage tiers that it is essential to how we operate. Now, that sounds like patent nonsense, but hear me out :-)

With a zram-only setup, once zram is full, there is nowhere for anonymous pages to go. The kernel can't evict them to disk because there is no disk swap, so when it needs to free memory it has no choice but to reclaim file cache instead. If you don't allow the kernel to choose which page is colder across both anonymous and file-backed memory, and instead force it to only reclaim file caches, it is inevitable that you will eventually reclaim file caches that you actually needed to be resident to avoid disk activity, and those reads and writes hit the same slow DRAMless SSD you were trying to protect.
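
A quick way to see this on a live system: /proc/vmstat counts both swap traffic and "refaults", i.e. file pages that were reclaimed under pressure and then had to be read back from disk.

```shell
# workingset_refault_file: file pages evicted under pressure and re-read
# from disk; pswpin/pswpout: swap traffic. With no disk swap, only the
# refault counter can grow, and every increment is disk I/O you could
# have avoided by letting cold anonymous pages go to swap instead.
grep -E '^(workingset_refault_file|pswpin|pswpout) ' /proc/vmstat
```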

In the article I mentioned that in some cases enabling zswap reduced disk writes by up to 25% compared to having no swap at all. Of course, the exact numbers will vary across workloads, but the direction holds across most workloads that accumulate cold anonymous pages over time, and we've seen it hold on constrained environments like BMCs, servers, desktops, VR headsets, etc.

So, counterintuitively, for your case, it may well be that with an appropriately sized swap device zswap reduces disk I/O rather than increasing it. If that's not the case, that's exactly the kind of real-world data that helps us improve things on the mm side, and we'd love to hear about it :-)

by cdown

3/24/2026 at 1:37:31 PM

1. Thanks for partially (in paragraph 4 but not paragraph 5) preempting the obvious objection. Distinguishing between disk reads and writes is very important for consumer SSDs, and you quoted exactly the right metric in paragraph 4: reduction of writes, almost regardless of the total I/O. Reads without writes are tolerable. Writes stall everything badly.

2. The comparison in paragraph 4 is between no-swap and zswap, and the results are plausible. But the relevant comparison here is a three-way one, between no-swap, zram, and zswap.

3. It's important to tune earlyoom "properly" when using zram as the only swap. Setting the "-m" argument too low causes earlyoom to miss obvious overloads that thrash the disk through page cache and memory-mapped files. On the other hand, with earlyoom, I could not find the right balance between unexpected OOM kills and missed brownouts, simply because the usage levels of RAM and zram-based swap are the only signals earlyoom has available for a decision. Perhaps systemd-oomd will fare better. The article does mention the need to tune the userspace OOM killer to an uncomfortable degree.
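
For reference, the knobs in question (values are examples only, not recommendations; on Debian-style systems they live in /etc/default/earlyoom):

```shell
# -m: minimum available memory %, -s: minimum free swap % before killing;
# -r: report interval. Set -m too low and earlyoom misses page-cache /
# mmap thrash entirely, since neither RAM availability nor zram usage
# captures that kind of pressure.
EARLYOOM_ARGS="-m 8 -s 10 -r 60 --avoid '(^|/)(init|sshd)$'"
```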

I have already tried zswap with a swap file on a bad SSD, but, admittedly, not together with earlyoom. With an SSD that cannot sustain even 10 MB/s of synchronous writes, it browns out, while zram + earlyoom can be tuned not to brown out (at the expense of OOM kills on a subjectively perfectly well-performing system). I will try backing-store-less zswap when it's ready.

And I agree that, on an enterprise SSD like Micron 7450 PRO, zswap is the way to go - and I doubt that Meta uses consumer SSDs.

by patrakov

3/24/2026 at 3:42:33 PM

It's very rare for disk reads to hang your UI (you would need to be running blocking operations in the UI thread).

But swap with high latency will occasionally hang the interface, and with it hang any means of freeing memory manually.

by nextaccountic

3/25/2026 at 12:44:34 AM

> The assumption is simply false, and not due to the "SSD wear" argument. Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells. This is much worse than any HDD. If a DRAMless consumer SSD is all that you have, better use zram.

Do mind that DRAMless is much less of an issue on NVMe. NVMe can use Host Memory Buffer to use system RAM for its logic, which is still orders of magnitude faster than relying on the NAND.

DRAMless is strictly worse in every way on SATA, where you really don't want to use it if you can help it; on NVMe, the difference is more about having a bad lower-quality drive or a good higher-quality drive. Having DRAM is a good indicator of the drive being good as the manufacturer is unlikely to pair it with slow NAND and controller, but lacking it doesn't necessarily mean a drive will perform badly. When comparing drives between generations, DRAMless often ends up performing better, even in loaded scenarios, compared to an older drive with DRAM.

by Numerlor

3/24/2026 at 11:16:36 PM

> The assumption is simply false, and not due to the "SSD wear" argument. Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells.

Do you know to what extent this can be mitigated by overprovisioning? Like only partitioning say 50% of the drive and leaving the rest free for controller as "scratch space"?

by pamcake

3/24/2026 at 3:25:55 PM

> Many consumer SSDs, especially DRAMless ones (e.g., Apacer AS350 1TB, but also seen on Crucial SSDs), under synchronous writes, will regularly produce latency spikes of 10 seconds or more, due to the way they need to manage their cells.

Is there an experiment you'd recommend to reliably show this behavior on such a SSD (or ideally to become confident a given SSD is unaffected)? Is it as simple as writing flat-out for say, 10 minutes, with O_DIRECT so you can easily measure latency of individual writes? do you need a certain level of concurrency? or a mixed read/write load? etc? repeated writes to a small region vs writes to a large region (or maybe given remapping that doesn't matter)? Is this like a one-liner with `fio`? does it depend on longer-term state such as how much of the SSD's capacity has been written and not TRIMed?

Also, what could one do in advance to know if they're about to purchase such an SSD? You mentioned one affected model. You mentioned DRAMless too, but do consumer SSD spec sheets generally say how much DRAM (if any) the devices have? maybe some known unaffected consumer models? it'd be a shame to jump to enterprise prices to avoid this if that's not necessary.

I have a few consumer SSDs around that I've never really pushed; it'd be interesting to see if they have this behavior.

by scottlamb

3/24/2026 at 4:15:22 PM

> Also, what could one do in advance to know if they're about to purchase such an SSD? You mentioned one affected model.

Typically QLC is significantly worse at this than TLC, since the "real" write speed is very low. In my experience any QLC is very susceptible to long pauses in write heavy scenarios.

It does depend on the controller though. As an example, check out the sustained write benchmark graph here[1]: you can see that a number of models start this oscillating pattern after exhausting the pseudo-SLC buffer, indicating the controller is taking a time-out to rearrange things in the background. Others do it too, but more irregularly.

> You mentioned DRAMless too, but do consumer SSD spec sheets generally say how much DRAM (if any) the devices have?

I rely on TechPowerUp, as an example compare the Samsung 970 Evo[2] to 990 Evo[3] under DRAM cache section.

[1]: https://www.tomshardware.com/pc-components/ssds/samsung-990-... (second image in IOMeter graph)

[2]: https://www.techpowerup.com/ssd-specs/samsung-970-evo-1-tb.d...

[3]: https://www.techpowerup.com/ssd-specs/samsung-990-evo-plus-1...

by magicalhippo

3/25/2026 at 12:48:32 AM

> Is there an experiment you'd recommend to reliably show this behavior on such a SSD?

  fio --name 4k-write --rw=write --bs=4k --size=1G --filename=fio.file --ioengine=libaio --sync=1 --time_based --runtime=60 --write_iops_log=ssd --log_avg_msec=1000 --randrepeat=0 --refill_buffers=1
Then examine ssd_iops.1.log.

Results from Apacer AS350 1TB: https://pastebin.com/F6pr5g29 - the first field is the timestamp in milliseconds, the second one is the write IOs completed since the previous line.

EDIT: I was told that the test above is invalid and that I should add --direct=1. OK, here is the new log, showing the same: https://pastebin.com/Wyw6r9TC - note that some timestamps are completely missing, indicating that the SSD performed zero IOs in that second.
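
A quick way to spot those stalls in the log (a hypothetical one-liner; fields per fio's write_iops_log format, first field being the timestamp in milliseconds):

```shell
# Example log excerpt (format: time_ms, iops); for the real thing, point
# the awk at ssd_iops.1.log instead of the sample file.
printf '1000, 512\n2000, 498\n14000, 3\n15000, 501\n' > sample_iops.log
# Print every gap of more than 2 s between consecutive entries -- i.e.
# windows where the drive completed no IOs at all.
awk -F', ' 'NR > 1 && $1 - prev > 2000 { printf "stall: %.1fs at t=%sms\n", ($1 - prev) / 1000, prev } { prev = $1 }' sample_iops.log
# -> stall: 12.0s at t=2000ms
```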

You may want to repeat the experiment a few times.

by patrakov

3/24/2026 at 3:53:43 PM

> Many consumer SSDs ... under synchronous writes, will regularly produce latency spikes of 10 seconds or more

Surely "regularly" is a significant overstatement. Most people have practically never seen this failure mode. And if it only occurs under a heavy write workload, that's not something that's supposed to happen purely as a result of swapping.

by zozbot234

3/24/2026 at 6:40:06 PM

Online-upgrading a rolling distro with a browser running is enough to see it happen regularly.

by seba_dos1

3/24/2026 at 8:53:34 PM

You're gambling on consumer SSD firmware not dumping the FTL and going AWOL when queued swap writes pile up under memory pressure. You may never see a 10 second stall in normal browsing, but add write-heavy Chrome churn or a VM and that 'never happens' claim gets shaky fast.

It is workload dependent.

by hrmtst93837

3/25/2026 at 6:24:11 AM

LLM comment history.

by AlexeyBelov

3/25/2026 at 8:40:52 AM

You're gambling.

by hrmtst93837

3/24/2026 at 2:15:28 PM

At this point just throw your shitty SSD in the garbage bin^W^W USB box and buy a proper one. OOMing would always cost you more.

And if you still need to use a shitty SSD then just increase your swap size dramatically, giving a breathing room for the drive and implicitly doing an overprovisioning for it.
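
For the record, an oversized swap file is nearly a one-liner (sizes are arbitrary examples; needs root):

```shell
fallocate -l 32G /swapfile   # use dd instead where fallocate'd swap is unsupported
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# the mostly-empty tail of the file acts as de-facto overprovisioning
```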

by justsomehnguy

3/24/2026 at 2:15:58 PM

Would be nice if zswap could be configured to have no backing cache so it could completely replace zram. Having two slightly different systems is weird.

There's not really any difference between swap on disk being full and swap in ram being full, either way something needs to get OOM killed.

Simplifying the configuration would probably also make it easier to enable by default in most distros. It's kind of backwards that the most common Linux distros other than ChromeOS are behind Mac and Windows in this regard.

by CoolGuySteve

3/24/2026 at 2:52:06 PM

This is actually something we're actively working on! Nhat Pham is working on a patch series called "virtual swap space" (https://lwn.net/Articles/1059201/) which decouples zswap from its backing store entirely. The goal is to consolidate on a single implementation with proper MM integration rather than maintaining two systems with very different failure modes. It should be out in the next few months, hopefully.

by cdown

3/24/2026 at 7:21:15 PM

Very exciting, thanks!

by MrDrMcCoy

3/24/2026 at 3:25:30 PM

> Would be nice if zswap could be configured to have no backing cache

You can technically get this behavior today using /dev/ram0 as a swap device, but it's very awkward and almost certainly a bad idea.

by jcalvinowens

3/24/2026 at 6:44:32 PM

And you can use a zram-backed ram0 if you're still undecided :D

by seba_dos1

3/24/2026 at 2:42:39 PM

Very much agreed. I feel like distros still regularly get this wrong (as evidence, Ubuntu, PopOS and Fedora all have fairly different swap configs from each other).

by adgjlsfhk1

3/24/2026 at 2:28:20 PM

With zram, I can just use zram-generator[0] and it does everything for me; I don't even need to set anything up other than installing the systemd generator, which on some distros is installed by default. Is there anything equivalent for zswap? Otherwise, I'm not surprised most people are just using zram, even if it's sub-optimal.

[0]: https://crates.io/crates/zram-generator

by prussian

3/24/2026 at 11:07:41 PM

Zswap is enabled by default in Arch. Won't do anything without a backing disk swap, though.

by ac29

3/24/2026 at 4:53:32 PM

Kernel arguments are the primary method: https://wiki.archlinux.org/title/Zswap#Using_kernel_boot_par...

Snag: I had issues getting it to use zstd at boot. Not sure if it's a bug or some peculiarity of Debian. I ended up compiling my own kernel for other reasons and was finally able to get zstd by default, but otherwise I'd have had to add it to a startup script.
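
For anyone else trying this, the boot-parameter route looks like the following (a sketch; whether zstd actually takes effect depends on it being available in your kernel/initramfs, which was exactly my snag):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 zswap.compressor=zstd zswap.zpool=zsmalloc"
# then regenerate: update-grub (Debian) or grub-mkconfig -o /boot/grub/grub.cfg
```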

by bilegeek

3/25/2026 at 9:39:04 AM

I had the same issue with LZ4. I found a thread about it on the Linux Mint Debian Edition forum and posted my fix there: https://forums.linuxmint.com/viewtopic.php?p=2767087#p276708....

In short: add the kernel modules and update GRUB as usual, then install sysfsutils and add the following line at the end of `/etc/sysfs.conf`:

  module/zswap/parameters/compressor = lz4
  # For zstd:
  #module/zswap/parameters/compressor = zstd
Perhaps some kernel change between Linux 6.8 and 6.12 caused the old approach to no longer work.

by networked

3/24/2026 at 6:46:50 PM

It's a handy tool, but it doesn't even give you a reasonable zram size by default and doesn't touch other things like page-cluster, so "I don't even need to set anything up" applies only if you don't mind it being quite far from optimal.

by seba_dos1

3/24/2026 at 2:51:17 PM

This should've been a bash script...

by alfanick

3/24/2026 at 3:25:14 PM

  echo 1 > /sys/module/zswap/parameters/enabled

It's in TFA.

by melvyn2

3/24/2026 at 3:42:23 PM

Enabling != configuring. Are you saying this is all that's necessary, assuming a swap device already exists? That should be made clearer.

Edit: To be extra clear. When I was researching this, I ended up going with zram only because:

* It is the default for Fedora.

* zramctl gives me live statistics of used and compressed size.

* The zswap doc didn't help my confusion on how backing devices work (I guess they're any swapon'd device?)

by prussian

3/24/2026 at 6:33:15 PM

It doesn't really need any config on most distros, no.

That said, if you want it to behave at its best when OOM, it does help to tweak vm.swappiness, vm.watermark_scale_factor, vm.min_free_kbytes, vm.page-cluster and a couple of other parameters.

See e.g.

https://makedebianfunagainandlearnhowtodoothercoolstufftoo.c...

https://documentation.suse.com/sles/15-SP7/html/SLES-all/cha...
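
A sketch of what that looks like as a drop-in (values are illustrative, roughly in the spirit of the guides above; tune them for your own RAM size and workload):

```shell
# /etc/sysctl.d/99-swap-tuning.conf
vm.swappiness = 180              # prefer swapping (cheap, compressed) over dropping file cache
vm.watermark_scale_factor = 125  # wake kswapd earlier, before direct reclaim stalls
vm.page-cluster = 0              # no readahead batching; compressed pages come back one at a time
```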

I don't know of any good statistics script for zswap, I use the script below as a custom waybar module:

  #!/bin/bash
  # Waybar module: report zswap pool usage from debugfs (root required).
  stored_pages="$(cat /sys/kernel/debug/zswap/stored_pages)"
  pool_total_size="$(cat /sys/kernel/debug/zswap/pool_total_size)"
  # compressed pool size, as GiB with one decimal
  compressed_size_mib="$((pool_total_size / 1024 / 1024))"
  compressed_size_gib="$((pool_total_size / 1024 / 1024 / 1024))"
  compressed_size_mib_remainder="$((compressed_size_mib * 10 / 1024 - compressed_size_gib * 10))"
  # logical (uncompressed) size of what's stored, one page = 4096 bytes
  uncompressed_size="$((stored_pages * 4096))"
  uncompressed_size_mib="$((uncompressed_size / 1024 / 1024))"
  uncompressed_size_gib="$((uncompressed_size / 1024 / 1024 / 1024))"
  uncompressed_size_mib_remainder="$((uncompressed_size_mib * 10 / 1024 - uncompressed_size_gib * 10))"
  # compression ratio in percent; +1 avoids division by zero on an empty pool
  ratio="$((100 * uncompressed_size / (pool_total_size + 1)))"
  echo "$compressed_size_gib.$compressed_size_mib_remainder"

by stdbrouw

3/24/2026 at 8:17:26 PM

You mean the zswap part of cleancache? But that fell out of the kernel completely, no? And zram gained support for a backing device.

BTW, most zram tutorials get this wrong: you are supposed to manually mark idle pages and initiate writeback by periodically writing to /sys/block/zramX/idle and /sys/block/zramX/writeback. Otherwise zram will never write anything to the backing device. It is documented in the kernel docs, but if you expect it to work automatically you might misread them.

And you can convert a swap device into such a backing device, but then you don't swapon it (just remove it from fstab), nor is it necessary to format it with mkswap.
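
Concretely, the documented sequence is something like this (zram0 and the timing are examples; run it periodically from a timer or cron job):

```shell
echo all > /sys/block/zram0/idle        # mark everything currently stored as idle
sleep 3600                              # pages touched in the meantime lose the idle flag
echo idle > /sys/block/zram0/writeback  # write the still-idle pages to the backing device
```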

by rini17

3/24/2026 at 7:51:13 PM

Been using zram since it hit the kernel, with the same-priorities "petard" and disk-backed swap. I don't remember the details now, but zswap many years ago would not handle hibernation well (as well as it can get..), or at least no better than zram plus a distinct hibernation device at -XXX priority. But zram definitely has some caveats with that setup: it leads to the disk swap staying in use and requiring manual flushing, for example after hibernation, because zram-generator (if you're using it) isn't ready yet on resume, from what I recall. This is such a neatly written post that I'm going to try going with zswap from now on.

by 0x_rs

3/24/2026 at 6:34:51 PM

A simpler alternative to OOM daemons could be enabling MGLRU's thrashing prevention: https://www.kernel.org/doc/html/next/admin-guide/mm/multigen...

I'm using it together with zram sized to 200% RAM size on a low RAM phone with no disk swap (plus some tuning like the mentioned clustering knob) and it works pretty well if you don't mind some otherwise preventable kills, but I will happily switch to diskless zswap once it's ready.
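
For the curious, the knob is a single sysfs write (the 1000 ms value is just an example; needs a kernel with MGLRU available):

```shell
echo y > /sys/kernel/mm/lru_gen/enabled
# refuse to evict pages used within the last second; under unrelievable
# pressure this triggers the OOM killer instead of sustained thrashing
echo 1000 > /sys/kernel/mm/lru_gen/min_ttl_ms
```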

by seba_dos1

3/24/2026 at 5:16:19 PM

>They size the zram device to 100% of your physical RAM, capped at 8GB. You may be wondering how that makes any sense at all – how can one have a swap device that's potentially the entire size of one's RAM?

zram size applies to uncompressed data, real usage is dynamically growing (plus static bookkeeping). Most memory compresses well, so you probably want to have zram device size even larger than physical RAM.
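
With zram-generator that looks like the following (values are examples):

```ini
# /etc/systemd/zram-generator.conf -- zram-size caps the *uncompressed*
# data; actual RAM use is the compressed size plus bookkeeping, so sizing
# past physical RAM is reasonable when your data compresses around 3:1
[zram0]
zram-size = ram * 1.5
compression-algorithm = zstd
```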

by garaetjjte

3/24/2026 at 9:11:13 PM

From the linked comment, zswap appears to write decompressed pages to disk. I assume Chesterton's fence applies, but why not write them compressed to disk and decompress when/if they need to be loaded back?

> After the folio has been decompressed into the swap cache, the compressed version stored by zswap can be freed.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

by rhnDN1

3/24/2026 at 9:24:17 PM

THIS. This boggles my mind ever since zswap landed in the kernel.

Actually, forget zswap. Why the fuck hasn't swap's on-disk format been compressed at any point in the last two decades, when processing power has been abundant and compressors like lzo/lz4 have been in the kernel tree?

by kasabali

3/24/2026 at 3:29:28 PM

One underappreciated aspect of zswap vs zram is the compression algorithm choice and its interaction with the data being compressed.

LZ4 (default in both) is optimized for speed at the expense of ratio — typically 2-2.5x on memory pages. zstd can push that to 3-3.5x but at significantly higher CPU cost per page fault.

The interesting tradeoff: memory pages are fundamentally different from files. They contain lots of pointer-sized values, stack frames, and heap metadata — data patterns where simple LZ variants actually perform surprisingly well relative to more complex algorithms. Going beyond zstd (e.g., BWT-based or context mixing) would give diminishing returns on memory pages while destroying latency.

So the real question isn't just "zswap vs zram" but "how much CPU are you willing to spend per compressed page, given your workload's memory access patterns?" For latency-sensitive workloads, LZ4 with zswap writeback is hard to beat.

by MaxCompression

3/25/2026 at 7:27:08 AM

LLM written comments are not permitted on this site.

by messe

3/25/2026 at 8:56:56 AM

1. What makes you think it was written by an LLM?

2. Where is that rule? Could you cite it?

3. How do I know you did not use an LLM for your comment?

by BoredomIsFun

3/24/2026 at 12:10:57 PM

I used to put swap on zram when my laptop had one of those early ssds, that people would tell you not to put swap on for fear of wearing them out

Setup was tedious

by nephanth

3/24/2026 at 6:16:03 PM

There is one more feature zram offers: multiple compression algorithms. I use a simple bash script to first use fast compression and then, after 1h, recompress everything with a much stronger algorithm.

Unfortunately, you cannot chain it with any additional layer or offload to disk later on, because recompression breaks idle tracking by setting the timestamp to 0 (so it's 1970 again).

https://gist.github.com/Szpadel/9a1960e52121e798a240a9b320ec...
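
The sysfs interface behind this, on recent kernels, looks roughly like the following (zram0, zstd, and the timing are examples):

```shell
# register a stronger secondary algorithm for recompression
echo "algo=zstd priority=1" > /sys/block/zram0/recomp_algorithm
# later: mark pages idle, wait, then recompress whatever is still idle
echo all > /sys/block/zram0/idle
sleep 3600
echo "type=idle" > /sys/block/zram0/recompress
```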

by Szpadel

3/24/2026 at 1:56:14 PM

Is this advice also applicable to desktop installations?

by Mashimo

3/24/2026 at 4:27:05 PM

The better distros (I think PopOS and Fedora) have zram enabled by default for desktops. In my personal experience, every desktop Linux install should use memory compression (unless you have an absurd amount of RAM), because it helps so much, especially with anything related to browsers and/or Electron apps!

Windows and macOS have it enabled by default for many years (even if it works a little different there).

by therealmarv

3/24/2026 at 5:05:21 PM

I did an Archinstall setup this weekend, and that also suggested zram.

by captn3m0

3/24/2026 at 7:40:29 PM

Because it's an easy solution, especially for a rather new installer: setting up swap on disk means choosing partition or file; if a file, which filesystem; if a partition, with or without encryption, and so on. Zram: install one additional package and forget.

See also the "zram on Fedora" section in the article.

by Schlaefer

3/24/2026 at 6:47:39 PM

I get the impression that most desktop users enable zram or zswap to get a little bit more out of their RAM but there is never any real worry about OOM, not regularly anyway, so then (according to the principles laid out in the article) it shouldn't matter much.

On my workstation, I run statistical simulations in R which can be wasteful with memory and cause a lot of transient memory pressure, and for that scenario I do like that zswap works alongside regular swap. Especially when combined with the advice from https://makedebianfunagainandlearnhowtodoothercoolstufftoo.c... to wake up kswapd early, it really does seem to make a difference.

by stdbrouw

3/24/2026 at 6:17:30 PM

> It only really makes sense for extremely memory-constrained embedded systems

Even "mildly" memory constrained embedded systems don't use swap because their resources are tailored for their function. And they are typically not fans [1] of compression either because the compression rate is often unpredictable.

[1] Yes, they typically don't need fans because overheating and using a motor for cooling is a double waste of energy.

by astrobe_

3/24/2026 at 1:05:36 PM

So much polemic and no numbers? If it is a performance issue, show me the numbers!

by guenthert

3/24/2026 at 1:34:38 PM

There are quite a few numbers in the article, although of course I'm happy to hear any more you'd like presented.

* A counterintuitive 25% reduction in disk writes at Instagram after enabling zswap

* Eventual ~5:1 compression ratio on Django workloads with zswap + zstd

* 20-30 minute OOM stalls at Cloudflare with the OOM killer never once firing under zram

The LRU inversion argument follows plainly from the code presented and is a logical consequence of how swap priority and zram's block device architecture interact; I'm not sure numbers would add much there.

by cdown

3/24/2026 at 2:33:24 PM

> The LRU inversion argument is just plain from the code presented and a logical consequence of how swap priority and zram's block device architecture interact, I'm not sure numbers would add much there.

Yes, while it is all very plausible, run times of a given workload (on a given, documented system) known to cause memory pressure to the point of swapping, measured under vanilla Linux (default or some appropriate swappiness), zram, and zswap, would be appreciated.

https://linuxblog.io/zswap-better-than-zram/ at least qualifies that zswap performs better when using a fast NVMe device as swap device and zram remains superior for devices with slow or no swap device.

by guenthert

3/24/2026 at 7:31:00 PM

Thank you for all your wonderful work, Chris! Just curious: is it feasible to eventually support sending the page clusters to backing swap in their compressed state to further reduce I/O? It's my understanding that the clusters get decompressed before getting sent to disk, which I presume is to simplify addressing.

by MrDrMcCoy

3/24/2026 at 12:50:57 PM

Can you make a follow-up here on the best way to set up swap to support full disk encryption + hibernation?

by adgjlsfhk1

3/24/2026 at 5:49:04 PM

If you want hibernation then you just encrypt your swap partition like you encrypt your root partition.
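
A sketch of the usual setup (device names and UUIDs are placeholders; on Debian-likes the initramfs needs to unlock the swap before resume):

```shell
# /etc/crypttab -- unlock the swap partition early enough for resume
#   cryptswap  UUID=<swap-partition-uuid>  none  luks,initramfs
# /etc/fstab
#   /dev/mapper/cryptswap  none  swap  sw  0  0
# kernel cmdline (e.g. in GRUB_CMDLINE_LINUX_DEFAULT):
#   resume=/dev/mapper/cryptswap
```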

by tmtvl

3/25/2026 at 5:40:18 AM

that's fair. I mostly just want the OS installer to handle this for me. I have 32 gigs of ram, but compile llvm now and then, so 99% of the time I don't care about swap, but 1% of the time I really do

by adgjlsfhk1

3/24/2026 at 3:15:29 PM

That's a banger article; I don't even like low-level stuff and yet read the whole thing. Hopefully I'll have an opportunity to use some of it if I ever get around to switching my personal notebook back to Linux.

by tonnydourado

3/24/2026 at 12:32:10 PM

thank goodness Kubernetes got support for swap; zswap has been a great boon for one of my workloads

by jitl

3/24/2026 at 2:02:58 PM

[dead]

by devnotes77

3/24/2026 at 3:04:05 PM

The interesting meta-point here is how a kernel mechanism turned into cargo-cult tuning advice.

"Use zram, save your SSD" made sense in the era of tiny eMMC, no TRIM, and mystery flash controllers. It also fit a very human bias: disk I/O feels scary and finite, CPU cycles feel free and infinite. So zram became a kind of talisman you enable once and never think about again.

But the kernel isn't optimizing for your feelings about SSD wear, it's optimizing for global memory pressure. zswap fits into that feedback loop, zram mostly sits outside it. Once you see that, the behavior people complain about ("my system thrashes and then dies mysteriously") stops being mysterious: they effectively built a second, opaque memory pool that the MM subsystem can't reason about or reclaim from cleanly.

What's funny is that on modern desktops and servers, the alleged downside of zswap (writing to disk sometimes) is the one thing the hardware is extremely good at, while the downside of zram (locking cold garbage in RAM and confusing reclaim/oom) is exactly what you don't want when the machine is under stress. The folk wisdom never updated, but the hardware and the kernel did.

by quapster