1/13/2025 at 3:36:45 AM
Fun story: while at FAANG I was once tasked with making a parameter configurable via a flag. This wasn't super easy for various reasons, but I pointed out that it was configurable already: all you had to do was submit a PR to change the hardcoded value in the code and it would update everywhere, which wasn't much different from how we already configured runtime parameters by submitting a PR that updated the config file.
I got a good chuckle out of my coworkers, but as I've gotten more experienced I think there's a grain of truth in that. The main non-joking objection was that deployments are slow, so if we needed to, reconfiguring runtime flags was a good amount faster. But that made me think: if deployments and builds were super fast, would we still need configuration in the conventional YAML-and-friends sense? It could eliminate a lot of foot guns from misconfiguration, and practically speaking, we essentially never restarted the server to reconfigure it. From the article,
> Unless you have really good reason to, you probably should not do this for normal options - it makes your library much more rigid by requiring that the user of your library know the options at comptime.
I dunno, actually that sounds really great to me.
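To make the compile-time-config idea concrete, here's a minimal, hypothetical C++ sketch (all names invented): the "config file" is just a constexpr header, and changing a setting is a one-line PR plus a rebuild.

    // config.hpp -- hypothetical example: the "config file" is just a header.
    // Changing a value is a one-line PR followed by a rebuild/redeploy,
    // instead of editing YAML and hoping the parser and the code agree.
    #include <cstdint>
    #include <string_view>

    struct ServiceConfig {
        std::uint32_t    max_connections;
        std::uint32_t    request_timeout_ms;
        bool             verbose_logging;
        std::string_view upstream_host;
    };

    // The single source of truth, known at compile time.
    inline constexpr ServiceConfig kConfig{
        .max_connections    = 1024,
        .request_timeout_ms = 5000,
        .verbose_logging    = false,
        .upstream_host      = "api.internal.example",
    };

    // Misconfiguration becomes a compile error rather than a 3 a.m. page.
    static_assert(kConfig.max_connections > 0);
    static_assert(kConfig.request_timeout_ms <= 60'000);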
by kevmo314
1/13/2025 at 4:40:54 AM
I saw a talk at a C++ conference where the presenter floated the idea of writing a compile-time C++ GUI library for use in industrial equipment UIs.
Why parse markup and generate objects on the fly when there is exactly one UI that will ever be burned into the CNC machine's firmware? Why even bother with an object library that instantiates components at runtime when you know upfront exactly which components will be instantiated and all of the interactions that can occur among them?
At the time, I filed the idea under "C++ wizard has too much fun with metaprogramming," but he was probably on to something.
Another way to think about the idea is "let's invent a new programming language that allows us to express a single UI, and the output of the compiler will be a native program that IS an optimized implementation of that UI."
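For illustration only (this is not the presenter's actual library; every name here is invented), the basic move is to make the entire widget tree a compile-time value the compiler can fully see:

    // Hypothetical sketch: the whole UI tree is a compile-time value, so the
    // compiler sees exactly which widgets exist and how they nest -- no markup
    // parsing, no runtime object factory.
    #include <array>
    #include <cstddef>
    #include <string_view>

    struct Button { std::string_view label; int action_id; };

    template <std::size_t N>
    struct Screen {
        std::string_view      title;
        std::array<Button, N> buttons;
    };

    inline constexpr Screen<3> kJogScreen{
        .title   = "Jog Axis",
        .buttons = {{ {"X+", 1}, {"X-", 2}, {"Home", 3} }},
    };

    // The layout can be checked -- and in principle fully flattened into
    // draw calls -- before the firmware ever runs.
    static_assert(kJogScreen.buttons.size() == 3);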
by MathMonkeyMan
1/13/2025 at 12:30:01 PM
I've been thinking about this kind of approach a lot ever since I learned Rust.
A small aside about a personal theory of language design: every major new language feature, like templates or whatever, gets "tacked on" without really redoing the fundamentals of how the language works. As in: you could remove it and things would still be fine. For example, you can use C without the preprocessor; it's just a bit clunky. Then later, sometimes much later, a language comes along that really leans into the feature to the point that it can no longer be removed. It becomes fundamental.
The ultimate metaprogramming capability would be to have the compiler phases exposed to the programmer. That is, the compiler would no longer be a binary black box into which text is fed and binary pops out. Instead, the compiler and its phases would be "just" the standard library.
Rust started down this path but the designers seemed to shy away from fully committing. Zig is closer still to this idealised vision, but still isn't 100% there.
Ideally, one should be able to control every part of code generation with code, including C#-style "source generators", Zig-style comptime, custom optimisation passes or extensions, custom code-gen, etc...
In a system like this, a single GUI framework could be used to either statically or dynamically generate UI elements, with templating code being run either at comptime or runtime depending on attributes similar to passing a value by copy or by reference.
Look at it this way: We're perfectly happy writing code to generate code. We do it all the time! As long as it is HTML or JavaScript and sent over the wire...
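A small taste of the "same code, comptime or runtime" idea already exists today in limited form: a constexpr function is evaluated by the compiler when its inputs are known and by the CPU when they aren't. A hedged sketch, with made-up names and values:

    // Sketch: the same logic runs at compile time or at run time, depending
    // only on whether the inputs are compile-time constants.
    #include <cstdio>

    constexpr int grid_columns(int screen_width, int item_width) {
        return screen_width / item_width;   // identical logic either way
    }

    // Evaluated entirely at compile time: the result is baked into the binary.
    constexpr int kKioskColumns = grid_columns(1920, 240);
    static_assert(kKioskColumns == 8);

    int main(int argc, char**) {
        // Evaluated at runtime, because argc is only known then.
        std::printf("%d\n", grid_columns(1024 + argc, 240));
    }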
by jiggawatts
1/13/2025 at 9:29:52 PM
> Ideally, one should be able to control every part of code generation

The issue with this approach is that the more you can control, the less the compiler can assume. This in turn means that it can check less for you, and tools become harder to write because code analysis often heavily relies on those assumptions. Just to make an example, Zig doesn't (and with the current approach can't) have declaration-checked generics.
> In a system like this, a single GUI framework could be used to either statically or dynamically generate UI elements, with templating code being run either at comptime or runtime
I feel like this is overly optimistic. Some things will always be runtime-only, even some very basic ones like allocating heap memory. You can likely sidestep this issue and still precompute a lot at compile time, but then chances are this way of computing will be less efficient at runtime. In the end you'll likely still end up with different code for comptime and runtime just because of specific optimizations.
by SkiFire13
1/13/2025 at 10:14:32 PM
> The issue with this approach is that the more you can control, the less the compiler can assume.

That's absolutely true, but there's a workaround, albeit a complicated one. The compiler internals need the same kind of constraints or traits that abstract code such as language interfaces or template parameters can have. These can then be used to constrain the internals in a way that then would allow assumptions to be safely "plumbed through" the various layers. The (big!) challenge here is that these abstractions haven't been well-developed in the industry. Certainly nowhere near as well as the typical runtime "type theory" as seen in modern languages.
> some very basic ones like allocating heap memory.
Well... this is sort-of my point! For example, what's the fundamental difference between allocating memory in some heap[1] structure at runtime and a compiler allocating members in a struct/record/class for optimal packing?
IMHO, not much.
E.g.: Watch this talk by Andrei Alexandrescu titled "std::allocator Is to Allocation what std::vector Is to Vexation": https://www.youtube.com/watch?v=LIb3L4vKZ7U
It really opened my eyes to how one could very elegantly make a very complex and high-performance heap allocator from trivial parts composed at compile-time.
There's no reason that a nearly identical abstraction couldn't also be used to efficiently "bin pack" variables into a struct. E.g.: accounting for alignment, collecting like-sized items into contiguous sections, extra padding for "lock" objects to prevent cache issues, etc...
This is what my dream is: that a struct might just be treated as a sort-of comptime heap without deletions. Or even with deletions, allowing fun stuff like type algebra that supports division. I.e.: The SELECT or PROJECT-AWAY operators!
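A toy version of this is already expressible with constexpr today. A rough sketch, with hypothetical names and no claim that any real compiler does it this way: compute field offsets with a tiny compile-time bump pass.

    // Rough sketch (hypothetical names): treat struct layout as a tiny
    // comptime allocation problem -- hand out offsets with a constexpr
    // "bump" pass that respects alignment.
    #include <array>
    #include <cstddef>

    struct Field { std::size_t size; std::size_t align; };

    template <std::size_t N>
    constexpr std::array<std::size_t, N> layout(std::array<Field, N> fields) {
        std::array<std::size_t, N> offsets{};
        std::size_t cursor = 0;
        for (std::size_t i = 0; i < N; ++i) {   // fields assumed pre-sorted
            cursor = (cursor + fields[i].align - 1) / fields[i].align * fields[i].align;
            offsets[i] = cursor;
            cursor += fields[i].size;
        }
        return offsets;
    }

    // u64, u16, u8 -- sorted by alignment, so no padding holes appear.
    inline constexpr auto kOffsets = layout<3>({{ {8, 8}, {2, 2}, {1, 1} }});
    static_assert(kOffsets[0] == 0 && kOffsets[1] == 8 && kOffsets[2] == 10);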
There was some experimental work done in the Jai language to support this kind of thing, allowing layouts such as structure-of-arrays or arrays-of-structures to be defined in code but natively implemented by the compiler as-if it was a built-in capability.
[1] Not really a traditional heap in most language runtimes these days. Typically a combination of various different allocators specialised for small, medium, and large allocations.
PS: The biggest issue I'm aware of with ideas like mine is that tab-complete and IDE assistance becomes very difficult to implement. On the other hand, keeping the full compiler running and directly implementing the LSP can help mitigate this... to a degree. Unsolved problems definitely remain!
by jiggawatts
1/13/2025 at 10:55:59 PM
> Well... this is sort-of my point! For example, what's the fundamental difference between allocating memory in some heap[1] structure at runtime and a compiler allocating members in a struct/record/class for optimal packing?

There are many fundamental differences! For example, at compile time you can't know what the address of some data will be at runtime due to stuff like ASLR, nor can you know the actual `malloc` implementation that will be used, since that might be dynamically loaded.
Of course this does not prevent you from trying to fake heap allocations at comptime, but this will have various issues or limitations depending on how you fake them.
by SkiFire13
1/14/2025 at 8:36:56 AM
(It helps if you've watched the linked allocator video.)

Conceptually, an abstract allocator in the style proposed by Andrei can be set up to just return an offset.
This offset can then later be interpreted as "from the start of the struct" or "from the start of memory identified by the pointer to 0".
Fundamentally it's the same thing: take a contiguous (or not!) space of bytes and "carve it up" using some simple algorithm. Then, compose the simple algorithms to make complicated allocators. How you use this later is up to you.
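A deliberately tiny sketch of what I mean (illustrative only, not Andrei's actual design): an allocator that hands out nothing but offsets, so the same carve-it-up logic can run at compile time or against a real buffer at runtime.

    // Tiny sketch of the "allocators hand out offsets" idea: only the base
    // you add the offset to differs between comptime and runtime use.
    #include <cstddef>
    #include <cstdint>

    struct BumpAllocator {
        std::size_t cursor = 0;
        constexpr std::size_t allocate(std::size_t size, std::size_t align) {
            cursor = (cursor + align - 1) / align * align;  // round up to alignment
            std::size_t offset = cursor;
            cursor += size;
            return offset;                                  // just an offset
        }
    };

    // Compile time: the offset becomes a constant, usable as struct-style layout.
    constexpr std::size_t off_header = [] {
        BumpAllocator a;
        return a.allocate(16, 8);
    }();
    static_assert(off_header == 0);

    int main() {
        // Run time: the very same logic, applied to a real chunk of memory.
        alignas(8) static std::uint8_t arena[256];
        BumpAllocator a;
        void* header  = arena + a.allocate(16, 8);
        void* payload = arena + a.allocate(64, 8);
        (void)header; (void)payload;
    }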
I guarantee you that there's code just like this in the guts of any compiler that can reorder struct/record members, such as the Rust compiler. It might be spaghetti code [1], but it could instead look just like Andrei's beautiful component-based code!
I think it ought to be possible for developers to plug in something as complex as a CP-SAT solver if they want to. It might squeeze out 5% performance, which could be worth millions at the scale of a FAANG!
[1] Eww: https://doc.rust-lang.org/beta/nightly-rustc/src/rustc_middl...
by jiggawatts
1/14/2025 at 9:30:23 AM
> This offset can then later be interpreted as "from the start of the struct" or "from the start of memory identified by the pointer to 0".

The issue is: how do you create such an offset at comptime such that you can interpret it as "from the start of memory identified by the pointer to 0" at runtime? Because at runtime you'll most often want to do the latter (and surely you don't want to branch on whether it's one or the other), but creating such offsets is the part that's very tricky, if not impossible, to do properly at comptime.
> I guarantee you that there's code just like this in the guts of any compiler that can reorder struct/record members
How is that relevant though? Ok, the code might look similar, but that's not the problematic part.
by SkiFire13
1/13/2025 at 5:40:14 AM
I'm trying to figure out why this wouldn't be much more widely applicable? I.e. all mobile apps are basically a finite series of screens with limited actions as well.
by XorNot
1/14/2025 at 5:59:20 AM
You can't continuously release a mobile app thanks to app stores.
So, a lot of runtime stuff is papering over the fact that your submission to Apple takes too long to resolve.
by bsder
1/13/2025 at 5:41:40 AM
I think it's somewhat hard to get the API right. This is what React-based static-site generators are, but people love defeating them by introducing runtime dependencies and configuration.
by kevmo314
1/13/2025 at 2:42:14 PM
Sounds a lot like the static website generation that a lot of JS frameworks do.
There are a lot of pitfalls with this approach, but for a subset of problems it is very good.
by DanielHB
1/13/2025 at 5:31:07 AM
Reminded me of this post: https://medium.com/wise-engineering/where-to-put-application...
by vermon
1/13/2025 at 1:24:32 PM
You definitely have a point. We need to distinguish between in-house developed IT systems and software sold as a product. They are quite different.
For the former, whether a file is a source file or a configuration file often does not really matter. Like you say, they are managed in version control, some build magic happens, and they are deployed. Differences between environments, automated tests, etc. matter, but with that in mind, there is absolutely room for simplification in many cases.
For the latter, it is more clear: the developer develops the code, the user changes the configuration.
by mongol
1/13/2025 at 10:04:18 AM
IMO the main issue with this is that it means you'd have to patch the production version, and it might be different from the current (master/main/develop) branch, so you'd also need to backport your fix. Keeping configuration separate allows you to avoid that.
by nasretdinov
1/13/2025 at 4:31:00 PM
If you're using Kubernetes / Argo then who cares: there's no backporting, there's only moving forward. If the configurable code in question is only ever used from such deployments, then making the configuration static and compile-time makes some sense, and it saves you having to write code that handles configuration at run time. One might like to think that this sort of code could get run in other contexts too, and so prefer flexible, run-time configuration, but if you know the code will only run from deployments then you might as well not waste the effort.
by cryptonector
1/13/2025 at 11:40:36 AM
Releasing more often, keeping master/main/trunk deployable, and not having a develop branch or long-lived feature branches solves a lot of that.
Of course, that sometimes needs some kind of feature flags for bigger changes, which is a configuration option too, but at least the stable state of the code is simpler and not a nest of code + config that never really changes.
by chikere232
1/13/2025 at 6:50:39 PM
And we have CI/CD, so any flag change means a PR anyway…