3/7/2026 at 5:57:32 PM
I've seen countless attempts to replace "docker build" and the Dockerfile. They often want to give tighter control over the build, sometimes binding tightly to a package manager. But the Dockerfile has continued because of its flexibility. Starting from a known filesystem/distribution, copying some files in, and then running arbitrary commands within that filesystem mirrored so nicely what operations teams have been doing for a long time. And as ugly as that flexibility is, I think it will remain the dominant solution for quite a while longer.
by bmitch3020
3/7/2026 at 9:53:04 PM
> But the Dockerfile has continued because of its flexibility.

The flip side is that the world still hasn’t settled on a language-neutral build tool that works for all languages. Therefore we resort to running arbitrary commands to invoke language-specific package managers. In an alternate timeline where everyone uses Nix or Bazel or some such, docker build would be laughed out of the window.
by kccqzy
3/7/2026 at 10:30:45 PM
As a Nix evangelist, I have to say: Nix is really not capable of replacing language-specific package managers.

> running arbitrary commands to invoke language-specific package managers.
This is exactly what we do in Nix. You see this everywhere in nixpkgs.
What sets Nix apart from Docker is not that it works well at a finer granularity, i.e. source-file level, but that it has real hermeticity and thus reliable caching. That is, we also run arbitrary commands, but they don't get to talk to the internet and thus don't get to e.g. `apt update`.
In a Dockerfile, you can `apt update` all you want, and this makes the build layer cache a very leaky abstraction. This is merely an annoyance when working on an individual container build, but it would be a complete dealbreaker at Linux-distro scale, which is what Nix operates at.
by muvlon
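The leaky cache described above can be sketched with a minimal, hypothetical Dockerfile fragment:

```dockerfile
# The layer cache keys on the instruction text, not on what the network
# returned, so a cached copy of this layer can hold an arbitrarily stale
# package index.
RUN apt-get update

# Editing only this line later reuses the stale index above; package
# versions it references may have been rotated off the mirrors entirely.
RUN apt-get install -y curl
```

The usual workaround is to combine both into a single `RUN apt-get update && apt-get install ...`, keeping the index fetch and the install in one cache unit, but that is a convention, not something the cache model enforces.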
3/7/2026 at 11:16:09 PM
Fundamentally speaking, the key point is really just hermeticity and reliable caching. Running arbitrary commands is never the problem anyways. What makes gcc a blessed command but the compiler for my own language an "arbitrary" command anyways?

And in languages with insufficient abstraction power like C and Go, you often need to invoke a code generation tool to generate the sources; that's an extremely arbitrary command. These are just non-problems if you have hermetic builds and reliable caching.
by kccqzy
3/8/2026 at 4:18:02 AM
I mean, I guess at a theoretical level. In practice, it's just not a large problem.
by xhcuvuvyc
3/7/2026 at 11:20:56 PM
Well, arbitrary granularity is possible with Nix, but the build systems of today simply do not utilise it. I've for example written an experimental C build system for Nix which handles all compiler orchestration, and it works great: you get minimal recompilations and free distributed builds. It would be awesome if something like this was actually available for major languages (Rust?). Let me know if you're working on or have seen anything like this!
by poly2it
3/8/2026 at 3:39:22 AM
A problem with that is that Nix is slow.

On my nixos-rebuild, building a simple config file for /etc takes much longer than a common gcc invocation to compile a C file. I suspect that is due to something in Nix's Linux sandbox setup being slow, or at least I remember some issue discussions around that; I think the worst part of that got improved but it's still quite slow today.
Because of that, it's much faster to do N build steps inside 1 nix build sandbox, than the other way around.
Another issue is that some programming languages have build systems that are better than the "oneshot" compilation used by most programming languages (one compiler invocation per file producing one object file, e.g. `gcc -c x.c -o x.o`). For example, Haskell has `ghc --make`, which compiles the whole project in one compiler invocation, with very smart recompilation avoidance (per-function; comment changes don't affect compilation; etc.), avoidance of repeat steps (e.g. parsing/deserialising the inputs to a module's compilation only once and keeping them in memory), and avoidance of repeated compiler startup cost.
Combining that with per-file general-purpose hermetic build systems is difficult and currently not implemented anywhere as far as I can tell.
To get something similar with Nix, the language-specific build system would have to invoke Nix in a very fine-grained way, e.g. to get "avoidance of codegen if only a comment changed", Nix would have to be invoked at each of the parser/desugar/codegen parts of the compiler.
I guess a solution to that is to make the oneshot mode much faster by better serialisation caching.
by nh2
3/8/2026 at 12:38:42 PM
What if you set up a sandbox pool? Maybe I'm rambling, I haven't read much Nix source code, but that should allow for only a couple of milliseconds of latency on these types of builds. I have considered forking Nix to make this work, but in my testing with my experimental build system, I never experienced much latency in builds. The trick to reduce latency in development builds is to forcibly disable the network lookups which normally happen before Nix starts building a derivation:

    preferLocalBuild = true;
    allowSubstitutes = false;

Set these in each derivation. The most impactful thing you could do in a Nix fork, according to my testing, is to build derivations preemptively while fetching substitutes and caches simultaneously, instead of doing it in order.

If you are interested in seeing my experiment, it's open on your favourite forge:
by poly2it
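For context, a minimal sketch of where those attributes live, assuming an ordinary `stdenv.mkDerivation` (the name and phases are illustrative, not from the experiment above):

```nix
stdenv.mkDerivation {
  pname = "etc-config";   # hypothetical cheap derivation
  version = "0.1";
  src = ./config;
  installPhase = "mkdir -p $out && cp -r $src/. $out/";

  # Skip the substituter (binary cache) lookup entirely: for derivations
  # that are cheaper to build locally than to fetch, this removes the
  # network round-trip before the build starts.
  preferLocalBuild = true;
  allowSubstitutes = false;
}
```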
3/8/2026 at 2:34:50 AM
crane
by vatsachak
3/8/2026 at 12:30:02 PM
I use crane, but it does not have arbitrary granularity. The end goal would be something which handled all builds in Nix.
by poly2it
3/7/2026 at 10:02:32 PM
Reminds me of the “Electric cars in reverse” video where the guy envisions a world where all vehicles are electric and tries to make the argument for gas engines.
by brightball
3/7/2026 at 10:26:51 PM
Link?
by anal_reactor
3/7/2026 at 10:42:21 PM
try searching for 'Rory Sutherland: What If Petrol Cars Were Invented In 2025'
by fittom
3/8/2026 at 10:53:56 AM
The actual article was in the Evening Standard, but like all things Rory Sutherland, it’s worth watching him tell the story: https://youtu.be/OTOKws45kCo?si=jbTdx3YCGkZv3Akb

For those who want more of him, check out his classic TED talk from decades ago: “Lessons from an ad man”
https://www.ted.com/talks/rory_sutherland_life_lessons_from_...
by brightbeige
3/8/2026 at 8:54:00 AM
There is some truth to it; however, in production it is simple: there is a working deployment or there is not.

Therefore I would rephrase your remarks as an upside: let others continue scratching their heads while others deploy working code to PROD.
I am glad there is a solution like Docker - with all its flaws. Nothing is flawless; there is always just yet another sub-optimal solution outweighing the others by a large margin.
by _the_inflator
3/8/2026 at 9:28:14 PM
Popularity of a technology usually isn’t perfectly correlated with how good it is.

> let others continue scratching their heads while others deploy working code to PROD.
You make it sound like when docker build arrived on the scene, a cross-language hermetic build tool was still a research project. That’s just untrue.
by kccqzy
3/8/2026 at 3:06:34 PM
This calls for https://xkcd.com/927/.
by whurley23
3/7/2026 at 7:28:00 PM
There are some hurdles preventing that flow from achieving reproducible builds. As the bad guys get more sophisticated, it's going to become more and more important that one party can say "we trust this build hash" and a separate party can say "us too".

That's not going to work if both parties get different hashes when they build the image, and they will, as long as file modification timestamps (and other such hazards) are part of what gets hashed.
by __MatrixMan__
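The timestamp hazard is easy to demonstrate outside Docker; a minimal sketch with GNU tar (the `--mtime` clamp is the same trick SOURCE_DATE_EPOCH-aware tools apply; filenames and dates here are arbitrary):

```shell
mkdir -p demo && printf 'hello\n' > demo/hello.txt

# Same content, different mtimes: the archives (and thus their hashes) differ.
touch -d @1700000000 demo/hello.txt
tar --sort=name -cf a.tar demo
touch -d @1700000001 demo/hello.txt
tar --sort=name -cf b.tar demo

# Clamp every timestamp and owner to fixed values: bit-identical output.
tar --sort=name --mtime=@0 --owner=0 --group=0 --numeric-owner -cf c.tar demo
tar --sort=name --mtime=@0 --owner=0 --group=0 --numeric-owner -cf d.tar demo
cmp c.tar d.tar && echo "identical"
```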
3/7/2026 at 10:39:34 PM
Recent versions of BuildKit have added support for SOURCE_DATE_EPOCH. I'd been making the images reproducible before that with my own tooling, regctl image mod [1], to backdate the timestamps.

It's not just the timestamps you need to worry about. Tar needs to be consistent with the uid vs username, gzip compression depends on implementation and settings, and the JSON encoding can vary by implementation.
And all this assumes the commands being run are reproducible themselves. One issue I encountered there was how alpine tracks their package install state from apk, which is a tar file that includes timestamps. There are also timestamps in logs. Not to mention installing packages needs to pin those package versions.
All of this is hard, and the Dockerfile didn't make it easy, but it is possible. With the right tools installed, reproducing my own images has a documented process [2].
by bmitch3020
3/8/2026 at 11:02:25 AM
> I've been making the images reproducible before that with my own tooling

I've been doing the same, using https://github.com/reproducible-containers/repro-sources-lis... . It allows you to precisely pin the state of the distro package sources in your Docker image, using snapshot.ubuntu.com & friends, so that you can fearlessly do `apt-get update && apt-get install XYZ`.
by codethief
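Without the helper script, the same idea can be sketched by hand against snapshot.debian.org (the snapshot date and installed package are placeholders, and a real image would want the stock deb822 sources removed as shown):

```dockerfile
FROM debian:bookworm
# Point apt at an immutable archive snapshot so `apt-get update` returns
# the same package index on every build. check-valid-until is disabled
# because the frozen index's signature eventually expires.
RUN rm -f /etc/apt/sources.list.d/* \
 && echo "deb [check-valid-until=no] https://snapshot.debian.org/archive/debian/20240301T000000Z bookworm main" \
      > /etc/apt/sources.list \
 && apt-get update \
 && apt-get install -y --no-install-recommends curl
```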
3/8/2026 at 11:41:09 AM
I'm not sure if I was just holding it wrong, but I couldn't create images reproducibly using Docker. (I could get this working with Podman/buildah, however.)
by computerfriend
3/8/2026 at 1:16:25 AM
Does any of that matter if you’re not auditing the packages you install?

I’m more concerned about sources being poisoned than about the build processes. xz is a great example of this.
by hsbauauvhabzb
3/8/2026 at 2:06:16 AM
Both are needed, but you get more bang for your buck focusing on build security than on audited sources. If the build is solid, then it forces attackers to work in the open, where all auditors can work together towards spoiling the attack.

If you flip it around and instead have magically audited source but a shaky build, then perhaps a diligent user can protect themselves, but they do so by doing their own builds, which means they're unaware that the attack even exists. This allows the attacker to just keep trying until they compromise somebody who is less diligent.
Getting caught requires a user who analyses downloaded binaries in something like ghidra... who does that when it's much easier to just build them from sources instead? (answer: vanishingly few people). And even once the attacker is found out, they can just hide the same payload a bit differently, and the scanners will stop finding it again.
Also, "maybe the code itself is malicious" can only ever be solved the hard way, whereas we have a reasonable hope of someday providing an easy solution to the "maybe the build is malicious" problem.
by __MatrixMan__
3/7/2026 at 6:57:15 PM
The lack of Docker-registry-like solutions really does seem to be the chokepoint for many alternatives.

Personally I love using mkosi, and while it has all the composability and deployment options I'd care for, it's clear not everyone wants to build starting from only a blank set of OS templates.
by miladyincontrol
3/8/2026 at 8:24:06 AM
There are lots of alternative container image registries: Quay, Harbor, Docker's open-sourced one, the cloud providers, GitHub, etc.

Or do you mean a replacement for Docker Hub?
by nunez
3/7/2026 at 7:30:56 PM
Nix is exceptionally good at making Docker containers.
by whateveracct
3/7/2026 at 7:46:14 PM
Yes, but then you're committed to using Nix, which doesn't work so well the moment you need some software not packaged by Nix.

Want to throw a requirements.txt in there? No no, why would you even ask that? Meanwhile Docker says: yeah sure, just run pip install, why should I care?
by Spivak
3/7/2026 at 7:49:11 PM
LLMs are getting very good at packaging software using Nix.
by okso
3/7/2026 at 9:08:23 PM
Then you're committing to maintaining a package for that software.

Like all LLM boosters, you've ignored the fact that the largest time sink in many kinds of software is not initial development, but perpetual maintenance.
by mort96
3/8/2026 at 12:07:37 AM
It's not materially any different from maintaining lines in a Dockerfile.
by xyzzy_plugh
3/8/2026 at 9:10:33 AM
It is materially different compared to "maintaining" the line 'RUN apt-get -y install foobar'.
by mort96
3/8/2026 at 4:08:28 PM
Is it though? If the way that I’m going to edit those files is by typing the same natural-language command into Claude Code, and the edit operation to maintain it takes 20 seconds instead of 10, to me that seems pretty materially the same.
by SOLAR_FIELDS
3/8/2026 at 4:30:43 PM
Yes, it is.
by mort96
3/9/2026 at 9:31:28 PM
How so?
by SOLAR_FIELDS
3/7/2026 at 8:18:51 PM
This. I wouldn't have touched Nix when you needed someone who was really good at Nix to keep it working, but agents make it viable to use in a number of places.
by CuriouslyC
3/7/2026 at 10:09:24 PM
Packaging for Nix is exceptionally easy once you learn it. And once something is packaged, it's solved for all; it's not going to randomly break.

If you care about getting it to work with minimal effort right now more than about it being sustainable later, then sure.
by gnull
3/8/2026 at 7:32:58 AM
> Packaging for nix is exceptionally easy once you learn it

Most of the complaints I've seen about Nix are around documentation, so "once you learn it" might be the larger issue.
by saghm
3/8/2026 at 1:00:17 AM
I don't know if I'd say it's "easy". The Python ecosystem in particular is quite hard to get working in a hermetic way (Nix or otherwise). Multiple attempts at making Python easy to package with Nix have come and gone over the years.
by hamandcheese
3/7/2026 at 8:29:52 PM
I use software from pretty much every language with Nix. And I package it myself too when needed. Including Python, often :)
by whateveracct
3/7/2026 at 9:51:45 PM
Nix doesn't make sense if all you're going to use it for is building Docker images. It only makes sense if you're all in in the first place. Then Docker images are free.
by nothrabannosir
3/8/2026 at 5:15:40 AM
Packaging software with Nix is easier than with any other system, TBH, and it just seems to be getting easier.
by ghthor
3/7/2026 at 9:52:31 PM
Does Nix do one layer per dependency? Does it run into the >=128-layer limit?

In Spack [1] we do one layer per package; it's appealing, but I never checked whether, besides the layer limit, it's actually bad for performance when doing filesystem operations.
by stabbles
3/8/2026 at 12:58:02 AM
This post has a great overview: https://grahamc.com/blog/nix-and-layered-docker-images/

tl;dr: it will put one package per layer as much as possible, and compress everything else into the final layer. It uses the dependency graph to implement a reasonable heuristic for what stays fine-grained and what gets combined.
by hamandcheese
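For reference, the knob lives on nixpkgs' `dockerTools.buildLayeredImage`; a minimal sketch (image name and contents are illustrative):

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello-layered";
  contents = [ pkgs.hello ];
  # Upper bound on layer count; store paths beyond it are merged into
  # the final layer according to the popularity heuristic.
  maxLayers = 100;
}
```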
3/8/2026 at 5:14:02 AM
That layering algorithm is also configurable, though I couldn’t really understand how to configure it and just wrote my own post-processing to optimize layering for my internal use case. I believe I could open-source this without much work.

The layer layout is just a JSON file, so it can be post-processed without issue before passing it to the Nix Docker builders.
by ghthor
3/7/2026 at 8:06:34 PM
Especially if you use nix2container to take control over the layer construction and caching.
by mikepurvis
3/8/2026 at 12:53:29 AM
I'm not sure if this is what you mean, but in some ways it would be nice to have tighter coupling with a registry. Docker build is kind of like a multiplexer: pull from here or there and build locally, then tag and push somewhere else. Most of the time all pulls are from public registries, the push goes to a single private one, and the local image is never used at all.

It seems overly orthogonal for the typical use case, but perhaps just not enough of an annoyance for anyone to change it.
by osigurdson
3/8/2026 at 4:55:04 AM
[dead]
by alex_dev42
3/8/2026 at 9:55:45 AM
^ This account has posted nothing but AI-generated spam since it was created 6 hours ago.
by Ameo
3/7/2026 at 6:05:19 PM
> the Dockerfile has continued because of its flexibility

I wish we had standardized on something other than shell commands, though. Puppet or Terraform or something more declarative would have been such a better alternative to “everyone cargo-cults ‘RUN apt-get upgrade’ onto the top of their Dockerfiles”.
Like, the layer/stage/caching behavior is fine. I just wish the actual execution parts had been standardized using something at a higher level of abstraction than shell.
by zbentley
3/7/2026 at 6:20:01 PM
> Puppet or terraform or something more declarative would have been such a better alternative

Until you need to do something that isn't covered by its DSL, and you extend it with an external command-execution declaration... at which point people will just write bash scripts anyway and use your declarative language as a glorified exec.
by bheadmaster
3/7/2026 at 7:39:23 PM
If you have 90-95% of everyone's needs (installing packages, compiling, putting files in place) covered in your DSL, and it has strong consistency and declarativeness, it's not that big of a problem if you need an escape hatch from time to time. Terraform, Puppet, Ansible, and SaltStack show this pretty well, and the vast majority of what isn't bash scripts in them is better and more maintainable than the equivalents in pure bash would be.
by sofixa
3/7/2026 at 11:35:57 PM
The problem is, ironically, that each DSL has its own execution platform and is not designed for testability. Bash scripts may be hard to maintain, but at least you can write tests for them.

In Azure YAML I had an odd bug because I used succeeded() instead of not(failed()) as a condition. I had no way of testing the pipeline without executing it. And each DSL has its own special set of sharp edges.
At least Bash's common edges are well known.
by bheadmaster
3/7/2026 at 6:09:27 PM
Docker broke out the build layer into a separate component called BuildKit (see a recent HN discussion: https://news.ycombinator.com/item?id=47166264).

However, Dockerfiles are so popular because they run shell commands and permit 'socially' extending someone else's shell commands; tacking commands onto the end of someone else's shell script is a natural process. /bin/sh is unreasonably effective at doing anything you need to a filesystem, and if the shell exposes a feature, it has probably been used in a Dockerfile somewhere.
Every other solution, especially the declarative ones, tends to come up short when _layering_ images quickly and easily. However, I agree they're good if you control the entire declarative spec.
by avsm
3/7/2026 at 6:16:04 PM
I'd say LLB is the "standard"; the Dockerfile is just one of the human-friendly frontends, but you can always make one yourself or use an alternative. For example, Dagger uses BuildKit directly for building its containers instead of going through a Dockerfile.
by mihaelm
3/7/2026 at 8:11:57 PM
Declarative methods existed for years before Docker, and they never caught on.

They sounded nice on paper, but the work they replaced was somehow more annoying.
I moved over to Docker when it came out because it used shell.
by harrall
3/7/2026 at 10:32:17 PM
Give https://github.com/project-dalec/dalec a look. It is more declarative, with explicit abstractions for packages, caching, language-level integrations, hermetic builds, source packages, system packages, and minimal containers.

It's a BuildKit frontend, so you still use "docker build".
by cpuguy83
3/7/2026 at 7:29:39 PM
The more you try to abstract from the OS, the more problems you're going to run into.
by esseph
3/7/2026 at 9:06:39 PM
Bash is pretty darn abstracted from the OS, though. Puppet vs Bash is more about abstraction relative to the goal.

If your Dockerfile says “ensure package X is installed at version Y”, that’s a lot clearer (and also easier to make performant/cached and deterministic) than “apt-get update; apt-get install $transitive-at-specific-version; apt-get install $the-thing-you-need-at-specific-version”. I’m not thrilled at how distro-locked the shell version makes you, and how easy it is for accidental transitive changes to occur, too.
But neither of those approaches is at a particularly low abstraction level relative to the OS itself; files and system calls are more or less hidden away in both package-manager-via-bash and puppet/terraform/whatever.
by zbentley
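The “ensure package X at version Y” idea can be sketched as a Puppet-style resource (the version string is hypothetical):

```puppet
# Declarative: state the end condition; the tool computes whether any
# command needs to run, and reruns are no-ops when the state already matches.
package { 'curl':
  ensure => '7.88.1-10+deb12u5',
}
```

The imperative Dockerfile equivalent (`RUN apt-get update && apt-get install -y curl=7.88.1-10+deb12u5`) encodes the same intent as a command whose cache validity Docker cannot see into.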
3/7/2026 at 9:09:40 PM
The Dockerfile has the flexibility to do what you want though, no? Use a base image with Terraform or Puppet or OpenTofu or whatever pre-installed; then your Dockerfile can just run the right command to apply some declarative config file from the build context.

And if you want something weird that's not supported by your particular tool of choice, you have the escape hatch of running arbitrary commands in the Dockerfile.
What more do you want?
by mort96
3/8/2026 at 8:29:34 PM
The loose integration between the declarative tools and the container build system drags down performance and creates a lot of footguns re: image size and inert declarative-build-system transitive deps left lying around, I’ve found.
by zbentley
3/9/2026 at 8:05:13 AM
Why would Terraform leave transitive steps around? To my knowledge, Docker doesn't record a log of the IO syscalls performed by a RUN directive; the layer just reflects the actual changes it makes. It uses overlayfs, doesn't it? If you create a temporary file and then delete it within the same layer, there's no trace that the temporary file ever existed in overlayfs, correct?

I'd get your worry if we were talking about splitting up a Terraform config and running it across multiple RUN directives, but we're not, are we?
by mort96
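That intuition about layer diffs can be sketched directly (the file size here is arbitrary):

```dockerfile
FROM debian:bookworm

# Created and deleted within one RUN: the committed layer diff never
# contains big.tmp at all.
RUN dd if=/dev/zero of=/big.tmp bs=1M count=100 && rm /big.tmp

# Split across two RUNs: the first layer ships the 100 MB file forever;
# the second layer only adds a whiteout entry hiding it.
RUN dd if=/dev/zero of=/big.tmp bs=1M count=100
RUN rm /big.tmp
```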
3/7/2026 at 7:17:30 PM
Oof, not Terraform please. If you use for_each and friends, dependency calculations are broken, because dependency resolution happens before dynamic rules are processed.

I'd get much better results if I used something else to do the for_each and gave Terraform only static rules.
by toast0
3/7/2026 at 7:32:08 PM
You can pretty much replace "docker build" with "go build".

But as long as people want to use scripting languages (like PHP, Python, etc.) I guess Docker is the necessary evil.
by phplovesong
3/7/2026 at 9:14:30 PM
> You can pretty much replace "docker build" with "go build".

I'll tell that to my CI runner. How easy is it for Go to download the Android SDK and to run Gradle? Can I also `go sonarqube` and `go run-my-pullrequest-verifications`? Or are you also going to tell me that I can replace that with a shitty set of GitHub Actions?
I'll also tell Microsoft they should update the C# definition to mark it down as a scripting language. And to actually give up on the whole language; why would they do anything when they could tell every developer to write if err != nil instead?
Just because you have an extremely narrow view of the field doesn't mean it's the only thing that matters.
by well_ackshually
3/8/2026 at 12:20:50 PM
My point was that 90% of "dockerized" stuff is just scripting langs.
by phplovesong
3/7/2026 at 8:25:14 PM
Go is just one language, while the Dockerfile gives you access to the whole universe, with myriad tools and options from the early 1970s up to the future. I don't know how you can compare or even "replace" Docker with Go; they belong to different categories.
by garganzol
3/7/2026 at 7:40:28 PM
In some situations yes, in others no. For instance, if you want to control memory or CPU, using a container makes sense (unless you want to use cgroups directly). Also, if running Kubernetes, a container is needed.
by osigurdson
3/7/2026 at 7:59:04 PM
You have to differentiate container images and "runtime" containers. You can have the former without the latter, and vice versa. They are entirely orthogonal things.

E.g. systemd exposes a lot of resource-control as well as sandboxing options, to the point that I would argue that systemd services can be very similar to "traditional" runtime containers, without any image involved.
by matrss
3/8/2026 at 1:14:57 AM
Well, I did mention "or use cgroups" above.
by osigurdson
3/8/2026 at 1:07:26 PM
And what I've said is that there are more options. You don't have to use cgroups directly; there are other tools abstracting over them (e.g. systemd) that aren't also container runtimes.
by matrss
3/7/2026 at 7:41:16 PM
Wasn’t this the same argument for .jar files?
by aobdev
3/7/2026 at 8:40:02 PM
> You can pretty much replace "docker build" with "go build".

Interesting. How does go build my Python app?
by yunwal
3/8/2026 at 1:28:42 PM
It obviously means you don't use a scripting language; instead, use a real language with a compiler.
by phplovesong
3/9/2026 at 1:55:28 AM
Ok yeah, let me just port PyTorch over, that should be quick.
by yunwal
3/8/2026 at 10:44:24 PM
Calling "go" a "real language" is stretching the definition quite a bit.

Real languages don't let errors go silently unnoticed.
by LtWorf
3/7/2026 at 7:35:37 PM
It doesn't sound like Golang is going to dominate and replace everything else, so Docker is here to stay.
by speedgoose
3/8/2026 at 7:28:57 AM
At the risk of stating the obvious, there are quite a lot of languages besides just scripting languages and Go that get run in containers.
by saghm