alt.hn

3/27/2026 at 10:23:38 AM

Ninja is a small build system with a focus on speed

https://github.com/ninja-build/ninja

by tosh

3/30/2026 at 5:00:36 PM

Ninja is one of the best tools I have used. It is extremely simple and always works flawlessly.

Some blog posts from the creator of ninja:

https://neugierig.org/software/blog/2018/07/options.html

https://neugierig.org/software/blog/2011/04/complexity.html

Also, there was a post about why just generating ninja files from Python can be a good option. I do this in my project and it has been very productive so far. I couldn't find the post now, but it suggested using ninja_syntax.py from the ninja codebase and doing something minimal for your project.
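The idea is simple enough to sketch without the helper; here is a minimal, hypothetical generator that emits ninja syntax directly (ninja_syntax.py's `Writer` wraps essentially the same `rule`/`build` calls, with line-wrapping handled for you — file names below are made up):

```python
import io

def write_ninja(out, sources):
    # One compile rule reused for every translation unit; ninja reads
    # header dependencies back from the .d file via 'depfile'.
    out.write("rule cc\n")
    out.write("  command = cc -MD -MF $out.d -c $in -o $out\n")
    out.write("  depfile = $out.d\n\n")
    objs = []
    for src in sources:
        obj = src.rsplit(".", 1)[0] + ".o"
        out.write(f"build {obj}: cc {src}\n")
        objs.append(obj)
    out.write("\nrule link\n")
    out.write("  command = cc $in -o $out\n\n")
    out.write(f"build app: link {' '.join(objs)}\n")
    return objs

buf = io.StringIO()
objs = write_ninja(buf, ["main.c", "util.c"])
print(buf.getvalue())
```

Writing this out as `build.ninja` and running `ninja` is the whole workflow; regenerating is just rerunning the script.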

by ozgrakkurt

3/30/2026 at 7:23:02 PM

OMG, so many open source projects need to read this:

https://neugierig.org/software/blog/2018/07/options.html

(hello FreeCAD ;)

by fainpul

3/31/2026 at 4:43:32 AM

Seems like we so often get projects made by people who either need to read this, or who stubbornly ignore its "But first" paragraph.

It's good to be critical about options, but ultimately people and their needs are diverse and good tools recognize that too.

by seba_dos1

3/30/2026 at 9:24:59 PM

We used ninja as a parallel task runner in pytype - had our whole-project analyser generate a ninja file with a task graph, and then just invoked ninja to run it, taking care of dependencies and parallel execution. It worked very nicely indeed.
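For a non-compilation task graph, the generated file can be as small as this (a hypothetical sketch; the script and file names are made up, not pytype's actual setup):

```ninja
rule analyze
  command = python analyze.py $in > $out

rule merge
  command = python merge.py $in > $out

build a.result: analyze mod_a.py
build b.result: analyze mod_b.py
build report: merge a.result b.result
```

Running `ninja report` executes the two analyze steps in parallel and the merge once both finish.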

by zem

3/30/2026 at 8:12:21 AM

If someone sees this: the ninja package on PyPI [0] is currently stuck at version 1.13.0. There is an issue in 1.13.0 that prevents it from building projects on Windows. The issue was fixed in 1.13.1 almost a year ago, but the PyPI package hasn't gotten an update, see [1], and many downstream projects have to stay at 1.11. I hope it gets updated soon.

[0] https://pypi.org/project/ninja/

[1] https://github.com/scikit-build/ninja-python-distributions/i...

by woctordho

3/30/2026 at 12:40:12 PM

Why is a C++ project being distributed on PyPi at all?

by endgame

3/30/2026 at 1:18:40 PM

Probably for the same reason other binaries are distributed via npm: the lack of cross-platform, general-purpose package managers and registries.

by grim_io

3/30/2026 at 1:51:18 PM

Also for cases where a python project needs to depend on it.

by mikepurvis

3/31/2026 at 6:01:08 AM

Kinda weird to have the language toolchain wrap the build system, should be the other way around.

by Ferret7446

3/31/2026 at 4:44:55 PM

Yes, but I mean... this is Python we're talking about. There are several build systems / coordinators written in Python (scons, colcon, etc) not to mention Python packages that themselves contain compiled bits written in other languages.

I know nowadays we have formalized, cross-platform ways to build bindings (scikit-build-core, etc), but that is a relatively recent development; for a long-ass time it was pretty commonplace to have a setup.py full of shell-outs to native toolchains and build tools. It's not hard to imagine a person in that headspace feeling that being able to pull that stuff directly from PyPI would be an upgrade over trying to detect it missing and instructing the user to install it before trying again.

by mikepurvis

3/30/2026 at 3:49:14 PM

Or lack of a tool like Goreleaser in the language ecosystem that handles that

by verdverm

3/30/2026 at 5:34:57 PM

Because the development world either hasn't heard of nix or has collectively decided to not use nix.

by tadfisher

3/30/2026 at 1:47:25 PM

What a messy and, frankly, absurd situation to be left in: forking a project in order to provide a tool through PyPI, only to then stop updating it on a broken version. That's more a disservice than a service to the community... If you're going to stay stuck, better to drop the broken release and stay stuck on the previous working one.

by j1elo

3/30/2026 at 6:26:24 PM

Ninja is possibly the best example of the "Do one thing and do it well" philosophy. All it does is execute commands based on a static build graph.

Its syntax is simple enough that it's trivial to, e.g., write a shell script to generate the build items if you need dynamic dependencies.

by aidenn0

3/30/2026 at 5:51:13 PM

An under-noticed ninja feature I adore, implemented relatively recently, is the ability to configure how its build progress is printed. In my fish config, I set the `NINJA_STATUS` envvar:

    set -x NINJA_STATUS "STEP: %f/%t  
    [%p / %P] 
    [%w + %W]
    "
which prints the elapsed and projected time in a readable multi-line format.

by Conscat

3/30/2026 at 4:05:53 PM

Similar to make, it compares the mtime of dependencies against the target to determine whether dependencies changed. This is flawed and simple to fool with filesystem operations that do not change mtime (move, rename):

1) pick a source file and make a copy of it for later
2) edit the selected source file and rebuild
3) move the copy back to its original location
4) try to rebuild: nothing happens
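The trap is easy to reproduce; a small, self-contained Python demonstration of those four steps (the paths are temp files, and `os.replace` plays the role of the move):

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "foo.c")
bak = os.path.join(d, "foo.c.bak")

with open(src, "w") as f:
    f.write("old contents\n")
past = os.stat(src).st_mtime - 60
os.utime(src, (past, past))      # make the original clearly "old"

shutil.copy2(src, bak)           # 1) copy; copy2 preserves the mtime
with open(src, "w") as f:        # 2) edit; mtime jumps to "now"
    f.write("new contents\n")
edited_mtime = os.stat(src).st_mtime

os.replace(bak, src)             # 3) move the copy back (a rename)
restored_mtime = os.stat(src).st_mtime

# 4) the contents changed relative to the last build, but the mtime
# went backwards, so a target built at the "edit" step still looks
# newer than its input and no rebuild is triggered.
print(restored_mtime < edited_mtime)  # True
```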

by mizmar

3/30/2026 at 5:19:36 PM

[ninja author] I did some thinking about this problem and eventually revisited with what I think is a pretty neat solution. I wrote about it here: https://neugierig.org/software/blog/2022/03/n2.html

by evmar

3/30/2026 at 6:15:49 PM

Imagine if filesystems had exposed the file hash next to its mtime.

by actionfromafar

3/30/2026 at 9:23:35 PM

I might be missing your sarcasm, but this is a common approach for large-scale builds. Virtual filesystems are used to provide a pre-computed tree hash as an xattr. In a more typical case, you can read the git tree hash.

by oftenwrong

3/31/2026 at 7:06:55 AM

Not sure it was meant as sarcasm, really. I just think so many build (and other) problems could have been avoided if a file hash were available on every file by default.

by actionfromafar

3/31/2026 at 3:42:04 PM

That hash would be expensive to maintain, and the end result would still be racy, since the file could be modified after the hash was read.

by sagarm

3/31/2026 at 4:39:43 PM

In the current POSIX paradigm, yes, it would be expensive. But if the hash were defined as the hash of fixed blocks, it wouldn't be. How racy it is depends a lot on the semantics we would define. (In the context of a build system, it's no different from the file getting a new mtime after we read the mtime.)
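A sketch of the fixed-block idea: hash each block independently, so an in-place write only invalidates the hashes of the blocks it touched (the block size and hash choice here are arbitrary, just for illustration):

```python
import hashlib

BLOCK = 4096

def block_hashes(data: bytes):
    # One digest per fixed-size block; a write that touches one block
    # only forces that block's digest to be recomputed.
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, max(len(data), 1), BLOCK)]

def file_hash(data: bytes):
    # The file's hash is defined over the block hashes, so it can be
    # updated incrementally from whichever blocks changed.
    return hashlib.sha256(b"".join(block_hashes(data))).hexdigest()

a = bytearray(b"x" * (3 * BLOCK))
before = block_hashes(bytes(a))
a[0] = ord("y")                  # mutate one byte inside block 0
after = block_hashes(bytes(a))
changed = [i for i, (x, y) in enumerate(zip(before, after)) if x != y]
print(changed)                   # only block 0 needs rehashing
```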

by actionfromafar

3/30/2026 at 4:29:05 PM

Copy (1) and edit (2) both bump mtime, usually. It's not obvious that ninja is the problem in the workflow you describe, rather than the workflow itself (which is atypical).

by loeg

3/30/2026 at 6:21:18 PM

> 3) move the copy to it's original location

Copy and edit do, but move (aka rename) generally does not, and that is the part that is problematic.

I don't think the described sequence of operations is all that unusual. Not the most common case for sure, but hardly unlikely in the grand scheme of things.

by usefulcat

3/30/2026 at 5:09:42 PM

ninja fails to detect that a file changed since the last build - its mtime, ctime, inode and size can all change, yet the change goes undetected as long as the mtime is not newer than the target's.

by mizmar

3/30/2026 at 5:47:03 PM

Again, this is just a weird workflow, and you're assuming copy/edit don't bump mtime. That usually isn't the case. If you're doing this weird thing, you can just run `touch` when you move files over existing files like this to explicitly bump mtime.

by loeg

3/30/2026 at 6:35:07 PM

I run into this issue when building against different environments, each with a different version of a system package:

1. A library depends on a system package. To test against the different versions of the system package, the library is compiled within a container.

2. To minimize the incremental rebuild time, the `build` directory is mounted into the build container. Even when using a different version of the system package, this allows re-use of system-independent portions of the build.

3. When switching to a build container with a different version of the system package, the mtime of the system package is that of its compilation, not that of the build container's initialization. Therefore, the library is erroneously considered up-to-date.

Because the mtime is the only field checked to see if the library is up to date, I need to choose between having larger disk footprint (separate `build` directory for each build container), slower builds (touch the system package on entering the container, forcing a rebuild), or less safe incremental builds (shared `build` directory, manually touch files when necessary).

by MereInterest

3/30/2026 at 6:25:30 PM

>you're assuming copy/edit don't bump mtime

Incorrect - I only assume that moving/renaming the backup to its original location doesn't change its mtime (which it doesn't, with default flags or from an IDE or file manager). And I don't think this is a weird or obscure workflow; I do it all the time - keep two versions of a file, or make a backup before some experimental changes and restore it later.

by mizmar

3/31/2026 at 4:52:35 AM

I used to do that some 20 years ago when I was learning programming, but now it does seem like a weird workflow when git exists and handles this case well.

by seba_dos1

3/31/2026 at 11:18:44 AM

Good point. I think it would be fixable by using the Change Time instead of the Modify Time, because that changes when moving the copy over the original.

by jhasse

3/30/2026 at 6:18:17 PM

mtime rebuild logic is half-baked even by 1970s standards.

Bazel and Shake avoid this class of bug with content hashes, so a rename, restore, or tar extract does not leave the build graph in a stale state. Speed matters, but not enough to bet your repo on timestamp luck.
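The hash-based scheme can be sketched in a few lines: record content digests after a successful build and compare them on the next run (the stamp-file name and JSON format here are my own invention, not how Bazel or Shake store theirs):

```python
import hashlib
import json

def digest(path):
    # Content hash of a file; mtime never enters the picture.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def dirty(inputs, stamp):
    # An input is dirty iff its content hash differs from the one
    # recorded after the last successful build.
    try:
        with open(stamp) as f:
            old = json.load(f)
    except FileNotFoundError:
        old = {}
    return [p for p in inputs if old.get(p) != digest(p)]

def record(inputs, stamp):
    # Call after a successful build to remember what was built from.
    with open(stamp, "w") as f:
        json.dump({p: digest(p) for p in inputs}, f)
```

A rename that restores old bytes makes the digest match again, so the file reads as clean or dirty based on content alone.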

by hrmtst93837

3/30/2026 at 3:27:42 PM

Postgres uses Meson+Ninja in their builds. That seems like a pretty big endorsement.

by jbonatakis

3/30/2026 at 3:47:14 PM

Most of Gnome is built using that too.

by ddtaylor

3/30/2026 at 7:03:23 PM

Ninja is great and feels natural coming from Make. What it lacks in features it makes up for with speed, which is what ultimately matters.

Also worth mentioning is samurai [1], a pure-C implementation of Ninja that's almost as fast yet easier to bootstrap, needing only a C compiler.

[1] https://github.com/michaelforney/samurai

by throwaway2046

3/30/2026 at 5:57:24 PM

Serious question: how can a build tool be fast or slow? From my understanding all it does is delegate the build steps to other tools, so wouldn't those be the bottleneck? Is it the resolution of order of build steps that takes so much time that a different build system can make a difference?

by HiPhish

3/31/2026 at 5:14:53 PM

In my observation, ninja uses multiple CPUs more consistently than GNU make. E.g. make -j40 will run up to 40 parallel processes (of clang/gcc), but a significant fraction of the time it runs fewer than 40. With ninja, average CPU utilization was (AFAIR) higher, reducing build time. Not sure if that's specific to the project I was building (and how cmake generates makefiles) or would hold for other projects too.

by citrin_ru

3/30/2026 at 6:04:06 PM

[ninja author] My first post about Ninja goes into this: https://neugierig.org/software/chromium/notes/2011/02/ninja....

by evmar

3/30/2026 at 6:24:51 PM

I'm afraid I still don't understand. One factor is having fewer features and not looking for obsolete files; that I can understand. I guess the other thing is using better rules to figure out when a file truly needs to be rebuilt?

by HiPhish

3/31/2026 at 5:02:49 AM

To be honest, it's not clear to me why other systems are not faster. Ninja is relatively straightforward but also not too clever.

Now that I think about it, I did write more about some of the performance stuff we did here: https://aosabook.org/en/posa/ninja.html Looking back over that, I guess we did do some lower-level optimization work. I think a lot of it was just coming at it from a performance mindset.

by evmar

3/30/2026 at 4:15:34 PM

I used ninja only a few years ago when contributing to KDE software (Dolphin, Kate, KTextEditor, etc.). I had no prior experience with it and it was easy to pick up, so a rather good experience.

by p4bl0

3/30/2026 at 1:27:48 PM

My teammate had a great time reimplementing Ninja (slop-free) in Go here https://github.com/buildbuddy-io/reninja to make it even faster with Remote Build Execution.

by sluongng

3/30/2026 at 1:54:39 PM

This is cool. Going to see if I can use it at work.

by setheron

3/30/2026 at 9:58:26 AM

All the main build tools (cmake, meson/ninja and GNU configure) have different benefits. For instance, I expect "--help" to work, but really only GNU configure supports it as-is. I could list more advantages and disadvantages here, but by and large I prefer meson/ninja. To me it feels by far the fastest, and I also usually have the fewest issues (excluding python breaking its pip stack, but that's not the fault of meson as such). ninja can be used via cmake too, but most uses I see are from meson.

by shevy-java

3/30/2026 at 10:56:41 AM

> ninja can be used via cmake too but most uses I see are from meson

How do you know though when the choice of cmake-generator is entirely up to the user? E.g. you can't look at a cmake file and know what generator the user will select to build the project.

FWIW I usually prefer the Ninja generator over the Makefile generator since ninja better 'auto-parallelises' - e.g. with the Makefile generator the two 'simple' options are either to run the build single-threaded or completely grind the machine to a halt because the default setting for 'parallel build' seems to heavily overcommit hardware resources. Ninja just generally does the right thing (run parallel build, but not enough parallelism to make the computer unusable).

by flohofwoe

3/30/2026 at 12:34:44 PM

ninja supports separate job pools, with a different maximum number of parallel jobs for each. CMake's ninja generator puts compilation and linking steps in their own respective pools. The end result is, by default, `nproc` parallel jobs for compilation but only one job for linking. This helps because linking can be far more memory-intensive, or sometimes the linker itself supports parallelism. Most projects have only a handful of linking steps to run anyway.
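In ninja's own terms these are "pools"; a hand-written equivalent of what a generator emits might look like this (the pool and rule names are illustrative):

```ninja
# A pool caps concurrency for its members regardless of -j.
pool link_pool
  depth = 1

rule link
  command = c++ $in -o $out $libs
  pool = link_pool
```

If I recall correctly, on the CMake side this corresponds to the `JOB_POOLS` global property together with `CMAKE_JOB_POOL_LINK` / `CMAKE_JOB_POOL_COMPILE`.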

by plq

3/30/2026 at 3:00:25 PM

I find Meson's --help fairly useful, at least compared to the disaster that is CMake's. (Try to find out, as a user not experienced with either, how you'd make a debug build.) I agree that configure --help is more useful for surfacing project-specific options, though.

by Sesse__

3/30/2026 at 12:16:43 PM

The absolute best thing about coding agents is not having to waste time on build systems. I had Claude code port my autotools scripts to meson (which uses ninja) and it’s been a huge quality of life improvement.

by jonstewart

3/30/2026 at 2:43:02 PM

Getting builds to work was some of the most tedious, least interesting work. It is so satisfying to watch the agent try 4 different things and magically go on its way. No more makefile syntax searches or cmake hell.

by hakrgrl

3/30/2026 at 5:06:24 PM

For real. I stopped writing Makefiles because of how tedious it was, but now with AI I'm back to throwing Makefiles in everything. It's wonderful to have the same build, test, release commands in all projects instead of mix compile in one, npm build in another, etc. This is my favorite part of AI

by freedomben

3/30/2026 at 12:34:23 PM

Soon you won’t even have to waste time on the code part…

by reactordev

3/30/2026 at 12:45:39 PM

It's only a waste of your time if you don't want to do it.

by tom_

3/30/2026 at 5:00:48 PM

Ninja's religious following of treating timestamps (mtime) as the "modified" marker makes it useless with Git and large projects.

Switched branches back and forth? Enjoy your 20-minute rebuild.

* https://github.com/ninja-build/ninja/issues/1459

by Svoka

3/30/2026 at 5:05:59 PM

I believe the same author has made a ninja successor (n2) that uses hashes instead. Haven't tried it personally, but I hope to get around to it in the near future.

by dezgeg

3/30/2026 at 5:13:24 PM

I have never observed that issue, and I have been using it to build multi-MLoC repositories. Perhaps the reason is that I always use it coupled with ccache. Have you tried that?

by menaerus

3/30/2026 at 5:26:55 PM

ccache is a workaround for the mtime problem. You can either hash with ccache or hash directly in the build system, but either way there's no substitute for hashing something. Ccache hashes the inputs to the build, but there may be elements of the build outside ccache's awareness that a hash-aware build system would take care of. Partial rebuilds devolve into a cache-invalidation problem pretty quickly either way.

by throwway120385

3/30/2026 at 7:56:42 PM

I'm obviously aware that ccache solves the mtime problem, which is why I find it disingenuous to say that switching branches with ninja is "totally unusable". Hence my question.

Hash-aware build systems like Bazel, if that's what you're implying, are a nightmare to work with and come with their own set of problems, which makes them much less appealing than living with (some of) the limitations of cmake+ninja.

by menaerus

3/30/2026 at 2:30:43 PM

I can remember having to uninstall ninja temporarily because it messed with building packages. I only use it because other packages need it.

by amelius