5/13/2026 at 8:37:12 PM
> The header is the cost. Not the reflection. The reflection algorithm is fast – asymptotically ~0.07 ms per enumerator, essentially the same as the hand-rolled switch in the X-macro version (~0.06 ms). What makes reflection look expensive is <meta>: just including it costs ~155 ms per TU over the baseline.

Speaking of old ways: I'm not a C++ dev, but a while ago I saw someone comment that they still organize their C++ projects using tips from John Lakos' Large-Scale C++ Software Design from 1997, and that their compile times are incredibly fast. So I decided to find a digital copy on the high seas and read it out of historical curiosity. While I didn't finish it, one wild thing stood out to me: he advised using redundant external include guards around every include, e.g.
#ifndef INCLUDED_MATH
#include <math.h>
#define INCLUDED_MATH
#endif
The reason being that (in 1997) every #include required the preprocessor to open the file just to check for an include guard and read it all the way to the end to find the closing #endif, causing potentially quadratic (O(N²)) disk-read overhead (if anyone feels like verifying this, it's explained on pages 85 to 87).

Again, that was in 1997. I have no idea what mitigations for this problem exist in compilers by now, but I hope at least a few, right?
This conclusion makes me wonder whether following that advice would still have a positive impact on compile times today after all. Surely not, right? Can anyone more knowledgeable about this comment on that?
by vanderZwan
5/13/2026 at 8:46:10 PM
This cost is not significant nowadays; the bulk of the time is frontend/parsing. You can also use `#pragma once`, which works everywhere, is nicer, and technically needs less work from the compiler — but compilers have optimized for include guards for a long time now.
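For concreteness, the two styles look like this in a header (the file and names below are made up for illustration):

```cpp
// widget.h — classic internal include guard
#ifndef WIDGET_H_INCLUDED
#define WIDGET_H_INCLUDED
struct Widget { int id; };
#endif

// widget.h — same effect with the non-standard but universally supported pragma
#pragma once
struct Widget { int id; };
```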
Some random measurements I found: https://github.com/Return-To-The-Roots/s25client/issues/1073
by SuperV1234
5/13/2026 at 9:14:15 PM
Yes, I've heard that before, but comments like this one in your linked issue still make me wonder:

> at least for gcc and Visual Studio using #pragma once has a significant impact. The fact is, the compiler does not need to continue parsing the whole file when reaching a #pragma once. otherwise the compiler always needs to do it even if the include guard afterwards will avoid double processing of the content afterwards.
As written, the explanation for these optimizations suggests that both `#pragma once` and the include-guard optimization still require opening and closing the file each time an #include is encountered, even if you bail after parsing the first line. Is that overhead zero? Or are the optimizations explained poorly, and is repeatedly opening/closing the file also avoided?
Either way, do you know what causes the slowdown as a result of including <meta>?
by vanderZwan
5/14/2026 at 6:53:44 AM
The compiler doesn't need to open the same file multiple times. It can remember whether a file is guarded or not every time it sees its name. My understanding is that this is an optimization that has been available for a very long time now.

The only issue is if a file is referred to through multiple names (because of hard links, symlinks, mounts). That might cause the file to be opened again, and can actually break `#pragma once`.
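To make the caching described above concrete, here is a toy sketch of the idea (the struct and all names are invented for illustration — this is not any real compiler's data structure): the preprocessor remembers, per file name, the guard macro that wrapped the whole file, and skips reopening the file once that macro is defined.

```cpp
#include <map>
#include <set>
#include <string>

// Toy model of the "multiple-include optimization": after scanning a header
// once, remember that its whole body sat inside a single
// #ifndef GUARD ... #endif pair. On a later #include of the same file name,
// if GUARD is already defined, skip opening the file entirely.
struct IncludeCache {
    std::set<std::string> defined_macros;            // macros defined so far
    std::map<std::string, std::string> guard_macro;  // file name -> guard macro
    int opens = 0;                                   // count of real file opens

    // `guard` is the macro the file's own #ifndef tests.
    void include(const std::string& file, const std::string& guard) {
        auto it = guard_macro.find(file);
        if (it != guard_macro.end() && defined_macros.count(it->second)) {
            return;  // known fully-guarded file, guard defined: no reopen
        }
        ++opens;                       // open and scan the file
        defined_macros.insert(guard);  // body processed, guard now defined
        guard_macro[file] = guard;     // remember the file is fully guarded
    }
};
```

Note that the cache is keyed by file name, which is exactly why multiple names for the same file (symlinks, hard links) defeat it.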
by gpderetta
5/13/2026 at 10:38:48 PM
The overhead isn't zero, but with SSDs (and filesystem caches in the gigabytes these days) it's damn near insignificant in pure terms of opening files and such.

by Quekid5
5/14/2026 at 12:03:37 AM
What I found (so far on MSVC) is that `#pragma once` does only process the file once, whereas include guards still open the file each time it is included. It takes almost no time to do so, but it still appears in the traces.

I'm going to experiment with other compilers and figure out how they handle it.
by daemin