alt.hn

2/20/2026 at 11:56:25 AM

A bug is a bug, but a patch is a policy: The case for bootable containers

https://tuananh.net/2026/02/20/patch-is-policy/

by tuananh

2/23/2026 at 6:52:29 PM

A bootable container, kernel included, is not a container. Building a whole new OS image for patches isn't a bad idea, but depending on the workload this might be a non-starter. At the very least, make updates to the OS image incremental, à la OSTree. kexec can also be a nice speedup on server hardware, but it carries its own risks, both from kexec itself and from the cold-boot path going unexercised. It's not nice to discover, all at once during a power outage, that a few percent of hosts fail to boot because nothing had tested that path for months.
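The kexec fast-reboot path can be sketched roughly like this (hypothetical helper script, not from the article; the kernel/initramfs paths and the `DRY_RUN` guard are my assumptions, and path layout varies by distro):

```shell
#!/bin/sh
# Sketch: "soft reboot" into a freshly installed kernel via kexec,
# skipping firmware/POST. DRY_RUN=1 (the default here) only prints commands.
set -eu

KVER="$(uname -r)"                 # assumption: target kernel = running version
KERNEL="/boot/vmlinuz-${KVER}"     # path layout is distro-specific
INITRD="/boot/initramfs-${KVER}.img"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Stage the target kernel in memory, reusing the current kernel cmdline,
# then jump into it through a clean systemd shutdown path.
run kexec -l "$KERNEL" --initrd="$INITRD" --reuse-cmdline
run systemctl kexec
```

With `DRY_RUN=0` this actually replaces the running kernel, which is exactly why the cold-boot caveat above matters: kexec reboots never exercise the firmware-to-bootloader path.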

IMHO, optimizing your update process and treating whole OS environments the way we treat containers is good, but there are plenty of environments, stateful services for example, where a rolling reboot can still take months to complete if done naively.

by MertsA

2/23/2026 at 9:20:03 AM

> Greg’s argument is a hard truth: “Usage is different for each user.” He cannot score a vulnerability because he doesn’t know if you’re running a cloud-native microservice or a legacy industrial controller.

What about having several use cases in mind, and giving a score for each of those?

> We must stop litigating which fixes matter and start treating every kernel bug fix as relevant (a bug is a bug). We must stop running patching as a project and bake it into the pipeline so that applying stable fixes is simply what the system does (the patch is the policy).

Ah, so it's simply "apply all the fixes automatically", i.e. "the Chainguard way" but, again, fully automated. Okay?

by Joker_vD

2/23/2026 at 10:42:22 AM

> What about having several use cases in mind, and giving a score for each of those?

i imagine it's for the same reason they don't score for one: it takes time that could be allocated elsewhere

tbh i think scoring for multiple scenarios would take more time and be less useful. kernel devs are not implementors; they may have never used docker or built a cut-down kernel for an iot device, they just build a general-purpose kernel

by twelvedogs

2/23/2026 at 2:16:56 PM

> it takes time that could be allocated elsewhere

And not scoring means that security triage teams everywhere have to spend their own time assessing the severity, and in doing so they mostly duplicate each other's work, while deduplicating that effort is nigh impossible. Is this a worthwhile trade?

Consider e.g. vehicle recalls: the manufacturer could very well (barring legal requirements and the general public's expectations) just leave it to the customers and the repair shops out there to discover and deal with the defects on their own.

> kernel devs are not implementors, they may have never used docker or built a cut down kernel for an iot device, they just build a general purpose kernel

Well, that's a pretty condescending view of the kernel maintainers. Making a successful general-purpose kernel (never mind a general-purpose kernel that also has a lot of quite specific affordances for custom scenarios) still requires an understanding of how it will be used.

by Joker_vD

2/23/2026 at 9:39:15 AM

> What about having several use cases in mind, and giving a score for each of those?

Or assign one score according to the worst case scenario.

by kleiba

2/23/2026 at 2:09:08 PM

As TFA states, it's a tad too reductive in practice.

by Joker_vD

2/23/2026 at 9:05:55 AM

As part of a security team tasked with triaging endless CVSS scores that all assume you are piping unauthenticated malicious data directly to the code in question, in whatever the worst possible way is, I approve of not publishing misleading "worst case" CVSS scores. The issues are almost never worst case, are frequently trivial, and suck up a huge amount of resources.
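To illustrate the gap (hypothetical vectors, not taken from any real CVE): the same bug scored under worst-case assumptions versus a plausible real deployment comes out wildly different under CVSS v3.1:

```text
# Worst-case assumption: remotely reachable, unauthenticated, full compromise
CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H   ->  9.8 (Critical)

# Same bug in a realistic context: local access, hard to trigger, minor leak
CVSS:3.1/AV:L/AC:H/PR:L/UI:N/S:U/C:L/I:N/A:N   ->  about 2.5 (Low)
```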

Glad to hear developers are also pushing back against the madness. I do think just patching known bugs quickly is the best way to go. An alternative might be some kind of AI-assisted triage process.

EDIT: CVSS evaluates vulnerabilities in the context of an entire system. It makes no sense to apply it to individual software components; you just don't know what the deployed system actually looks like from down there. So it's simply an inappropriate method to use in the first place.

by MattPalmer1086

2/23/2026 at 9:40:15 AM

I tried hard to get this working in reality, using Fedora Silverblue. But that thing sucks. It is slow. Really dog-slow. Devs blame rpm-ostree or Btrfs; no idea. I wish there were something like ChromeOS but open.

Hint: maybe Firefox should pivot to that (re-do Firefox OS).

by faust201

2/23/2026 at 4:50:18 PM

What's really slow? Using the system, or installing updates? I use Kalpa, an atomic openSUSE desktop variant, and it just installs updates every night and notifies me to restart, so I generally neither know nor care how quickly that runs. (Although I've also run updates manually and it seems fine.)

by yjftsjthsd-h

2/23/2026 at 10:55:27 AM

It's pretty much rpm-ostree. Nobody bothered to make those workflows performant, so if you need to apply updates separately, it's going to suck. The OSTree download can be fast if you have a fast connection to the Fedora server, but it's not mirrored and there's no mirror network support (so no geographically close downloads). To be fair, bootc has this problem too because container tooling in general can't support mirror networks currently.

by Conan_Kudo

2/23/2026 at 1:46:22 PM

when you talk about slow, what exactly is slow in your case? download speed or performance?

by tuananh

2/23/2026 at 3:34:20 PM

Framing this as precision triage versus spending hundreds of thousands on Chainguard seems like two extremes pitted against each other, when hardened images are largely free now: https://hub.docker.com/hardened-images/catalog

by pploug

2/23/2026 at 10:02:32 AM

I really don't understand the argument being made here; it genuinely feels nonsensical to me:

- it talks about kernel CVEs while discussing a user-space tool (containers).

- with respect to a kernel bug, what's the difference between updating/downgrading a kernel container image (whatever that means) and just doing the same for the kernel installed on the machine? Unlike a whole distro, which is made of many moving parts with complex (and brittle) interactions, where an update can break things in ways that cannot trivially be rolled back (which is what makes stateless containers a good idea for user space), the kernel is pretty much a monolith and you can trivially switch between versions (even on a consumer Linux desktop you can boot the previous kernel simply by selecting it in the GRUB menu…).
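That rollback path can be sketched like this (hypothetical script; the entry title and the `DRY_RUN` guard are made up, and the tool is named `grub2-reboot` on some distros):

```shell
#!/bin/sh
# Sketch: one-shot rollback to a previous kernel via GRUB, without making
# the old kernel the permanent default. DRY_RUN=1 (default) only prints.
set -eu

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Hypothetical entry title: list the real ones from your grub.cfg first
# (path and naming vary by distro).
ENTRY="Advanced options>Linux, with older kernel"

# grub-reboot affects only the *next* boot; a further reboot reverts to the
# default entry, so a bad old kernel can't stick around by accident.
run grub-reboot "$ENTRY"
run systemctl reboot
```

The one-shot semantics are the point: switching kernel versions is a cheap, reversible operation in a way that rolling back an entire mutated userland is not.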

by littlestymaar

2/23/2026 at 6:29:02 PM

Containers = Yet Another Attack Surface.

It's just obfuscation for the untalented.

by bitbytebane