2/20/2026 at 5:33:22 AM
The article is a bit dense, but what it's announcing is effectively golang's `defer` (with extra braces) or a limited form of C++'s RAII (with much less boilerplate). Both RAII and `defer` have proven to be highly useful in real-world code. This seems like a good addition to the C language that I hope makes it into the standard.
by kjgkjhfkjf
2/20/2026 at 5:50:53 AM
Probably closer to defer in Zig than in Go, I would imagine. Defer in Go executes when the function deferred within returns; defer in Zig executes when the scope deferred within exits.
by Zambyte
2/20/2026 at 8:16:51 AM
This is the crucial difference. Scope-based is much better.

By the way, GCC and Clang have __attribute__((cleanup)) (which is the same, scope-based clean-up) and have had it for over a decade, and it is widely used in open source projects now.
by rwmj
2/20/2026 at 10:02:26 AM
I wonder what the thought process of the Go designers was when coming up with that approach. Function scope is rarely what a user needs, has major pitfalls, and is more complex to implement in the compiler (need to append to an unbounded list).
by CodesInChaos
2/20/2026 at 12:17:41 PM
> I wonder what the thought process of the Go designers was when coming up with that approach.

Sometimes we need block scoped cleanup, other times we need the function one.
You can turn the function scoped defer into a block scoped defer in a function literal.
AFAICT, you cannot turn a block scoped defer into the function one.
So I think the choice was obvious - go with the more general(izable) variant. Picking the alternative, which can do only half of the job, would IMO be a mistake.
by 0xjnml
2/20/2026 at 4:10:27 PM
> AFAICT, you cannot turn a block scoped defer into the function one.

You kinda-sorta can by creating an array/vector/slice/etc. of thunks (?) in the outer scope and then `defer`ing iterating through/invoking those.
by aw1621107
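The thunk idea above can be sketched in Go. This is a hypothetical illustration (processAll and its event log are invented for the example): cleanups are appended to a slice instead of being deferred directly, and a single defer drains them in LIFO order at function exit, which is how block-scoped cleanups could be promoted to function scope.

```go
package main

import "fmt"

// Instead of deferring each cleanup directly, collect thunks in the
// outer scope and drain them with one defer when the function exits.
func processAll(items []string) (events []string) {
	var cleanups []func()
	defer func() {
		// Run the thunks in LIFO order, mirroring defer semantics.
		for i := len(cleanups) - 1; i >= 0; i-- {
			cleanups[i]()
		}
	}()
	for _, item := range items {
		item := item // per-iteration copy (needed before Go 1.22)
		cleanups = append(cleanups, func() {
			events = append(events, "cleanup "+item)
		})
		events = append(events, "work "+item)
	}
	return
}

func main() {
	fmt.Println(processAll([]string{"a", "b"}))
	// [work a work b cleanup b cleanup a]
}
```

Note the named result: the deferred thunks run after `return`, so they can still append to `events`.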
2/20/2026 at 10:11:03 AM
I hate that you can't call defer in a loop.

I hate even more that you can call defer in a loop, and it will appear to work as long as the loop has relatively few iterations, while being silently, massively wasteful.
by mort96
2/20/2026 at 10:18:09 AM
The Go way of dealing with it is wrapping the block with your defers in a lambda. Looks weird at first, but you can get used to it.
by usrnm
2/20/2026 at 10:21:47 AM
I know. Or in some cases, you can put the loop body in a dedicated function. There are workarounds. It's just bad that the wrong way (a) is the most obvious way, and (b) is silently wrong in such a way that it appears to work during testing, often becoming a problem only when confronted with real-world data, and often surfacing only as a hard-to-debug performance or resource-usage issue.
by mort96
2/20/2026 at 1:48:48 PM
What's the use-case for block-level defer?

In a tight loop you'd want your cleanup to happen after the fact. And in, say, an IO loop, you're going to want concurrency anyway, which necessarily introduces new function scope.
by 9rx
2/20/2026 at 2:12:41 PM
> In a tight loop you'd want your cleanup to happen after the fact.

Why? Doing 10,000 iterations where each iteration allocates and operates on a resource, then later going through and freeing those 10,000 resources, is not better than doing 10,000 iterations where each iteration allocates a resource, operates on it, and frees it. You just waste more resources.
> And in, say, an IO loop, you're going to want concurrency anyway
This is not necessarily true; not everything is so performance sensitive that you want to add the significant complexity of doing it async. Often, a simple loop where each iteration opens a file, reads stuff from it, and closes it, is more than good enough.
Say you have a folder with a bunch of data files you need to work on. Maybe the work you do per file is significant and easily parallelizable; you would probably want to iterate through the files one by one and process each file with all your cores. There are even situations where the output of working on one file becomes part of the input for work on the next file.
Anyway, I will concede that all of this is sort of an edge case which doesn't come up that often. But why should the obvious way be the wrong way? Block-scoped defer is the most obvious solution since variable lifetimes are naturally block-scoped; what's the argument for why it ought to be different?
by mort96
2/20/2026 at 2:13:34 PM
Let's say you're opening files upon each loop iteration. If you're not careful you'll run out of open file descriptors before the loop finishes.
by nasretdinov
2/20/2026 at 2:24:58 PM
It doesn't just have to be files, FWIW. I once worked on a Go project which used SDL through cgo for drawing. "Widgets" were basically functions which would allocate an SDL surface, draw to it using Cairo, and return it to Go code. That SDL surface would be wrapped in a Go wrapper with a Destroy method which would call SDL_DestroySurface.

And to draw a surface to the screen, you need to create an SDL texture from it. If that's all you want to do, you can then destroy the SDL surface.
So you could imagine code like this:
    strings := []string{"Lorem", "ipsum", "dolor", "sit", "amet"}
    stringTextures := []SDLTexture{}
    for _, s := range strings {
        surface := RenderTextToSurface(s)
        defer surface.Destroy()
        stringTextures = append(stringTextures, surface.CreateTexture())
    }
Oops, you're now using way more memory than you need!
by mort96
2/20/2026 at 2:34:47 PM
Why would you allocate/destroy memory in each iteration when you can reuse it to much greater effect? Aside from bad API design, of course - but a language isn't there to paper over bad design decisions. A good language makes bad design decisions painful.
by win311fwg
2/20/2026 at 2:39:32 PM
The surfaces are all of different sizes, so the code would have to be more complex, resizing some underlying buffer on demand. You'd have to split the text rendering into an API to measure the text and an API to render the text, so that you could resize the buffer. So you'd introduce quite a lot of extra complexity.

And what would be the benefit? You save up to one malloc and free per string you want to render, but text rendering is so demanding that it completely drowns out the cost of one allocation.
by mort96
2/20/2026 at 2:57:19 PM
Why does the buffer need to be resized? Your malloc version allocates a fixed amount of memory on each iteration. You can allocate the same amount of memory ahead of time.

If you were dynamically changing the malloc allocation size on each iteration, then you would have a case for a growable buffer to do the same, but in that case you would already have all the complexity of which you speak, as required to support a dynamically-sized malloc.
by win311fwg
2/20/2026 at 3:21:19 PM
The example allocates an SDL_Surface large enough to fit the text string each iteration.

Granted, you could do a pre-pass to find the largest string and allocate enough memory for that once, then use that buffer throughout the loop.
But again, what do you gain from that complexity?
by mort96
2/20/2026 at 3:27:06 PM
> The example allocates an SDL_Surface large enough to fit the text string each iteration.

Impossible without knowing how much to allocate, which you indicate would require adding a bunch of complexity. However, I am willing to chalk that up to being a typo. Given that we are now calculating how much to allocate on each iteration, where is the meaningful complexity? I see almost no difference between:
    while (next()) {
        size_t size = measure_text(t);
        void *p = malloc(size);
        draw_text(p, t);
        free(p);
    }

and:

    void *p = NULL;
    while (next()) {
        size_t size = measure_text(t);
        p = galloc(p, size);
        draw_text(p, t);
    }
    free(p);
by win311fwg
2/20/2026 at 3:44:31 PM
>> The example allocates an SDL_Surface large enough to fit the text string each iteration.

> Impossible without knowing how much to allocate
But we do know how much to allocate? The implementation of this example's RenderTextToSurface function would use SDL functions to measure the text, then allocate an SDL_Surface large enough, then draw to that surface.
> I see almost no difference between: (code example) and (code example)
What? Those two code examples aren't even in the same language as the code I showed.
The difference would be between the example I gave earlier:
    stringTextures := []SDLTexture{}
    for _, str := range strings {
        surface := RenderTextToSurface(str)
        defer surface.Destroy()
        stringTextures = append(stringTextures, surface.CreateTexture())
    }
and:

    surface := NewSDLSurface(0, 0)
    defer surface.Destroy()
    stringTextures := []SDLTexture{}
    for _, str := range strings {
        size := MeasureText(str)
        if size.X > surface.X || size.Y > surface.Y {
            surface.Destroy()
            surface = NewSDLSurface(size.X, size.Y)
        }
        surface.Clear()
        RenderTextToSurface(surface, str)
        stringTextures = append(stringTextures, surface.CreateTextureFromRegion(0, 0, size.X, size.Y))
    }
Remember, I'm talking about the API to a Go wrapper around SDL. How the C code would've looked if you wrote it in C is pretty much irrelevant.

I have to ask again though, since you ignored me the first time: what do you gain? Text rendering is really, really slow compared to memory allocation.
by mort96
2/20/2026 at 3:51:19 PM
> Remember, I'm talking about the API to a Go wrapper around SDL.

We were talking about using malloc/free vs. a resizable buffer. Happy to progress the discussion towards a Go API, however. That, obviously, is going to look something more like this:
    renderer := SDLRenderer()
    defer renderer.Destroy()
    for _, str := range strings {
        surface := renderer.RenderTextToSurface(str)
        textures = append(textures, renderer.CreateTextureFromSurface(surface))
    }
I have no idea why you think it would look like that monstrosity you came up with.
by win311fwg
2/20/2026 at 3:56:29 PM
> No. We were talking about using malloc/free vs. a resizable buffer.

No. This is a conversation about Go. My example[1], which you responded to, was taken from a real-world project I've worked on which uses Go wrappers around SDL functions to render text. Nowhere did I mention malloc or free; you brought those up.
The code you gave this time is literally my first example (again, [1]), which allocates a new surface every time, except that you forgot to destroy the surface. Good job.
Can this conversation be over now?
by mort96
2/20/2026 at 3:58:34 PM
I invite you to read the code again. You missed a few things. Notably, it uses a shared memory buffer, as discussed, and does free it when the defer executes. It is essentially equivalent to the second C snippet above, while your original example is essentially equivalent to the first C snippet.
by win311fwg
2/20/2026 at 4:06:05 PM
Wait, so your wrapper around SDL_Renderer now also inexplicably contains a scratch buffer? I guess that explains why you put RenderTextToSurface on your SDL_Renderer wrapper, but ... that's some really weird API design. Why does the SDL_Renderer wrapper know how to use SDL_TTF or PangoCairo to draw text to a surface? Why does SDL_Renderer then own the resulting surface?

To anyone used to SDL, your proposed API is extremely surprising.
It would've made your point clearer if you'd explained this coupling between SDL_Renderer and text rendering in your original post.
But yes, I concede that if there was any reason to do so, putting a scratch surface into your SDL_Renderer that you can auto-resize and render text to would be a solution that makes for slightly nicer API design. Your SDL_Renderer now needs to be passed around as a parameter to stuff which only ought to need to concern itself with CPU rendering, and you now need to deal with mutexes if you have multiple goroutines rendering text, but those would've been alright trade-offs -- again, if there was a reason to do so. But there's not; the allocation is fast and the text rendering is slow.
by mort96
2/20/2026 at 4:12:33 PM
You're right to call out that the SDLRenderer name was a poor choice. SDL is an implementation detail that should be completely hidden from the user of the API. That it may or may not use SDL under the hood is irrelevant to the user. If the user wanted to use SDL, they would do so directly. The whole point of this kind of abstraction, of course, is to decouple from a dependency like SDL. Point taken.

Aside from my failure at the hardest problem in computer science, how would you improve the intent of the API? It is clearly improved over the original version, but we would do well to iterate towards something even better.
by win311fwg
2/20/2026 at 4:14:40 PM
I think the most obvious improvement would be: just make it a free function which returns a surface. Text rendering is slow and allocation is fast.
by mort96
2/20/2026 at 4:17:50 PM
That is a good point. If text rendering is slow, why are you not doing it in parallel? This is what 9rx called out earlier.
by win311fwg
2/20/2026 at 4:24:03 PM
Some hypothetical example numbers: if software-rendering text takes 0.1 milliseconds, and I have a handful of text strings to render, I may not care that rendering the strings takes a millisecond or two.

But that 0.1 milliseconds to render a string is an eternity compared to the time it takes to allocate some memory, which might be on the order of single-digit microseconds. Saving a microsecond from a process which takes 0.1 milliseconds isn't noticeable.
by mort96
2/20/2026 at 4:31:24 PM
You might not care today, but the next guy tasked to render many millions of strings tomorrow does care. If he has to build yet another API that ultimately does the same thing and is almost exactly the same, something has gone wrong. A good API is accommodating to users of all kinds.
by win311fwg
2/20/2026 at 3:36:47 PM
I think I've been successfully nerd-sniped.

It might be preferable to create a font atlas and just allocate printable ASCII characters as a spritesheet (a single SDL_Texture* reference and an array of rects). Rather than allocating a texture for each string, you just iterate the string and blit the characters; no new allocations necessary.
If you need something more complex, with kerning and the like, the current version of SDL_TTF can create font atlases for various backends.
by krapp
2/20/2026 at 3:52:00 PM
Completely depends on context. If you're rendering dynamically changing text, you should do as you say. If you have some completely static text, there's really nothing wrong with doing the text rendering once using PangoCairo and then re-using that texture. Doing it with PangoCairo also lets you do other fancy things, like drop shadows, more easily.
by mort96
2/20/2026 at 2:25:52 PM
Files are IO, which means a lot of waiting. For what reason wouldn't you want to open them concurrently?
by 9rx
2/20/2026 at 2:29:46 PM
Opening a file is fairly fast (at least if you're on Linux; Windows, not so much). Synchronous code is simpler than concurrent code. If processing files sequentially is fast enough, for what reason would you want to open them concurrently?
by mort96
2/20/2026 at 2:49:49 PM
For concurrent processing you'd probably do something like splitting the file names into several batches and processing those batches sequentially in each goroutine, so it's very much possible that you'd have the exact same loop in the concurrent scenario.

P.S. If you have enough files, you don't want to try to open them all at once — Go will start creating more and more threads to handle the "blocked" syscalls (open(2) in this case), and you can exhaust the 10,000-thread limit too.
by nasretdinov
2/20/2026 at 3:09:23 PM
You'd probably have to be doing something pretty unusual to not use a worker queue. Your "P.S." point is a perfect case in point as to why.

If you have a legitimate reason for doing something unusual, it is fine to have to use the tools unusually. It serves as a useful reminder that you are purposefully doing something unusual rather than simply making a bad design choice. A good language makes bad design decisions painful.
by win311fwg
2/20/2026 at 3:30:30 PM
You have now transformed the easy problem of "iterate through some files" into the much more complex problem of finding or writing a work queue library; and you're baking in the assumption that the only reasonable way to use that work queue is to make each work item exactly one file.
It is telling that you keep insisting that any solution that's not a one-file-per-work-item work queue is super strange and should be punished by the language's design, when you haven't even responded to my core argument that: sometimes sequential is fast enough.
by mort96
2/20/2026 at 3:48:22 PM
> It is telling that you keep insisting

Keep insisting? What do you mean by that?
> when you haven't even responded to my core argument that: sometimes sequential is fast enough.
That stands to reason. I wasn't responding to you. The above comment was in reply to nasretdinov.
by win311fwg
2/20/2026 at 3:59:43 PM
Your comment was in reply to nasretdinov, but its fundamental logic ignores what I've been telling you this whole time. You're pretending that the only solution to iterating through files is a work queue and that any solution that does a synchronous open/close for each iteration is fundamentally bad. I have told you why it isn't: you don't always need the performance.
by mort96
2/20/2026 at 4:33:39 PM
Using a "work queue", i.e. a channel, would still have a for loop like:

    for filename := range workQueue {
        fp, err := os.Open(filename)
        if err != nil { ... }
        defer fp.Close()
        // do work
    }

which would have the same exact problem :)
by nasretdinov
2/20/2026 at 4:40:46 PM
I don't see the problem.

    for _, filename := range files {
        queue <- func() {
            f, _ := os.Open(filename)
            defer f.Close()
        }
    }

or more realistically:

    var group errgroup.Group
    group.SetLimit(10)
    for _, filename := range files {
        group.Go(func() error {
            f, err := os.Open(filename)
            if err != nil {
                return fmt.Errorf("failed to open file %s: %w", filename, err)
            }
            defer f.Close()
            // ...
            return nil
        })
    }
    if err := group.Wait(); err != nil {
        return fmt.Errorf("failed to process files: %w", err)
    }

Perhaps you can elaborate?

I did read your code, but it is not clear where the worker queue is. It looks like it ranges over (presumably) a channel of filenames, which is not meaningfully different from ranging over a slice of filenames. That is the original, non-concurrent solution, more or less.
by win311fwg
2/20/2026 at 5:24:15 PM
I think they imagine a solution like this:

    // Spawn workers
    for range 10 {
        go func() {
            for path := range workQueue {
                fp, err := os.Open(path)
                if err != nil { ... }
                defer fp.Close()
                // do work
            }
        }()
    }
    // Iterate files and give work to workers
    for _, path := range paths {
        workQueue <- path
    }

by mort96
2/20/2026 at 6:49:25 PM
Maybe, but why would one introduce coupling between the worker queue and the work being done? That is a poor design.

Now we know why it was painful. What is interesting here is that the pain wasn't noticed as a signal that the design was off. I wonder why?
We should dive into that topic. I suspect at the heart of it lies why there is so much general dislike for Go as a language, with it being far less forgiving to poor choices than a lot of other popular languages.
by win311fwg
2/20/2026 at 7:06:00 PM
I think your issue is that you're an architecture astronaut. This is not a compliment. It's okay for things to just do the thing they're meant to do and not be super duper generic and extensible.
by mort96
2/20/2026 at 7:25:24 PM
It is perfectly okay inside of a package. Once you introduce exports, as seen in another thread, there is good reason to think more carefully about how users are going to use it. Pulling the rug out from underneath them later, when you discover your original API was ill-conceived, is not good citizenry.

But one does still have to be mindful if they want to write software productively. Using a "super duper generic and extensible" solution means that things like error propagation are already solved for you. Your code, on the other hand, is going to quickly become a mess once you start adding all that extra machinery. It didn't go unnoticed that you conveniently left that out.

Maybe that no longer matters with LLMs, when you don't even have to look at the code and producing it is effectively free, but LLMs these days also understand how defer works, so then this whole thing becomes moot.
by win311fwg
2/20/2026 at 9:35:55 AM
I would like to second this.

In Go, if you iterate over a thousand files and

    defer file.Close()

your OS will run out of file descriptors.
by bashkiddie
2/20/2026 at 1:26:12 PM
Well, unless you're on Windows :D Even on Windows XP Home Edition I could open a million file handles with no problem.

Seriously, why is the default ulimit on file descriptors on Linux a measly 1024?
by Joker_vD
2/20/2026 at 2:16:03 PM
Some system calls like select() cannot monitor file descriptors numbered 1024 or higher (https://man7.org/linux/man-pages/man2/select.2.html), so it probably (?) makes sense to default to it. Although I don't really think that in 2k26 it makes sense to have such a low limit on desktops, that is true.
by nasretdinov
2/20/2026 at 2:30:58 PM
defer was invented by Andrei Alexandrescu, who spelled it scope(exit)/scope(failure) [Zig's errdefer]/scope(success). It first appeared in D 2.0, after Andrei convinced Walter Bright to add it.
by jibal
2/20/2026 at 7:57:01 AM
Both defer and RAII have proven to be useful, but RAII has also proven to be quite harmful in cases, in the limit introducing a lot of hidden control flow.I think that defer is actually limited in ways that are good - I don't see it introducing surprising control flow in the same way.
by L-4
2/20/2026 at 1:28:39 PM
Defer is also hidden control flow. At the end of every block, you need to read backwards through the entire block to see if a defer was declared, in order to determine where control will jump to. Please stop pretending that defer isn't hidden control flow.

> RAII has also proven to be quite harmful in cases

The downsides of defer are much worse than the "downsides" of RAII. Defer is manual and error-prone, something that you have to remember to do every single time.
by kibwen
2/20/2026 at 2:32:09 PM
Defer is a restricted form of COMEFROM with automatic labels. You COMEFROM the end of the next `defer` block in the same scope, or from the end of the function (before `return`) if there is no more `defer`. The order of execution of defer blocks is backwards (bottom-to-top) rather than the typical top-to-bottom.

    puts("foo");
    defer { puts("bar"); }
    puts("baz");
    defer { puts("qux"); }
    puts("corge");
    return;

Will evaluate:

    puts("foo");
    puts("baz");
    puts("corge");
    puts("qux");
    puts("bar");
    return;
by sparkie
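The ordering described above matches Go's own defer, which can be checked with a small program (order is a made-up function that logs the sequence instead of calling puts):

```go
package main

import "fmt"

// order logs the same sequence as the C-style example above, using
// Go's defer; the result shows the bottom-to-top (LIFO) execution
// of the deferred blocks.
func order() (out []string) {
	out = append(out, "foo")
	defer func() { out = append(out, "bar") }()
	out = append(out, "baz")
	defer func() { out = append(out, "qux") }()
	out = append(out, "corge")
	return // deferred funcs now run in reverse order of registration
}

func main() {
	fmt.Println(order()) // [foo baz corge qux bar]
}
```

The named result lets the deferred closures append after the bare `return`, making the LIFO order observable in the returned slice.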
2/20/2026 at 3:13:42 PM
That is the most cursed description of how defer works that I have seen. Ever.
by vlowther
2/20/2026 at 3:32:24 PM
This is how it would look with explicit labels and comefrom:

    puts("foo");
    before_defer0:
    comefrom after_defer1;
    puts("bar");
    after_defer0:
    comefrom before_defer0;
    puts("baz");
    before_defer1:
    comefrom before_ret;
    puts("qux");
    after_defer1:
    comefrom before_defer1;
    puts("corge");
    before_ret:
    comefrom after_defer0;
    return;

`defer` is obviously not implemented in this way; it will re-order the code to flow top-to-bottom and have fewer branches, but the control flow is effectively the same thing.
In theory a compiler could implement `comefrom` by re-ordering the basic blocks like `defer` does, so that the actual runtime evaluation of code is still top-to-bottom.
by sparkie
2/20/2026 at 8:54:52 AM
But of course, what you call "surprising" and "hidden" is also RAII's strength.

It allows library authors to take responsibility for cleaning up resources in exactly one place, rather than forcing library users to insert a defer call in every single place the library is used.
by fauigerzigerk
2/20/2026 at 10:11:38 AM
RAII also composes.by gpderetta
2/20/2026 at 5:00:18 PM
> with extra braces

The extra braces appear to be optional, according to the examples in https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3734.pdf (see pages 13-14).
by omoikane
2/20/2026 at 10:02:52 AM
This certainly isn't RAII—the term is quite literal, Resource Acquisition Is Initialization, rather than calling code as the scope exits. This is the latter of course, not the former.by throwaway27448
2/20/2026 at 10:09:56 AM
People often say that "RAII" is kind of a misnomer; the real power of RAII is deterministic destruction. And I agree with this sentiment; resource acquisition is the boring part of RAII, deterministic destruction is where the utility comes from. In that sense, there's a clear analogy between RAII and defer.

But yeah, RAII can only provide deterministic destruction because resource acquisition is initialization. As long as resource acquisition is decoupled from initialization, you need to manually track whether a variable has been initialized or not, and make sure to only call a destruction function (be that by putting free() before a return or through 'defer my_type_destroy(my_var)') in the paths where you know that your variable is initialized.
So "A limited form of RAII" is probably the wrong way to think about it.
by mort96
2/20/2026 at 10:16:37 AM
> and make sure to...call a destruction function

Which removes half the value of RAII as I see it—needing to know when and how to unacquire the resource is half the battle, a burden that using RAII removes.
Of course, calling code as the scope exits is still useful. It just seems silly to call it any form of RAII.
by throwaway27448
2/20/2026 at 10:38:24 AM
In my opinion, it's the initialization part of RAII which is really powerful and still missing from most other languages. When implemented properly, RAII completely eliminates a whole class of bugs related to uninitialized or partially initialized objects: if all initialization happens during construction, then you either have a fully initialized, correct object, or you exit via an exception; no third state. Additionally, tying resources to constructors makes the correct order of freeing these resources automatic. If you consume all your dependencies during construction, then destructors just walk the dependency graph in the correct order without you even thinking about it. Agreed, writing your code like this requires some getting used to and isn't even always possible, but it's still a very powerful idea that goes beyond simple automatic destruction.
by usrnm
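The "fully initialized or no object at all" idea above has a rough analogue in Go's constructor-function convention. A hedged sketch (Conn and NewConn are invented for illustration): the constructor either returns a value with all invariants established or an error, with no partially built third state reachable by callers.

```go
package main

import (
	"errors"
	"fmt"
)

// Conn is a toy handle whose invariants are set up only by NewConn.
// In a real package, the unexported fields would keep outside code
// from constructing a half-initialized Conn literal.
type Conn struct {
	addr  string
	ready bool
}

// NewConn either returns a fully initialized Conn or an error.
func NewConn(addr string) (*Conn, error) {
	if addr == "" {
		return nil, errors.New("address must not be empty")
	}
	// All invariants are established before the value escapes.
	return &Conn{addr: addr, ready: true}, nil
}

func main() {
	c, err := NewConn("db.example:5432")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("connected to", c.addr)
}
```

Unlike RAII this gives only the initialization half; teardown still needs an explicit Close plus a defer at the call site.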
2/20/2026 at 10:50:32 AM
This sounds like a nice theoretical benefit to a theoretical RAII system (or even a practical benefit to RAII in Rust), but in C++, I encounter no end of bugs related to uninitialized or partially initialized objects. All primitive types have a no-op constructor, so objects of those types are uninitialized by default. Structs containing members of primitive types can be in partially initialized states where some members are uninitialized because of a missing '= 0'.

It's not uncommon that I encounter a bug when running some code on new hardware or a new architecture or a new compiler for the first time because the code assumed that an integer member of a class would be 0 right after initialization and that happened to be true before. ASan helps here, but it's not trivial to run in all embedded contexts (and it's completely out of the question on MCUs).
by mort96
2/20/2026 at 3:38:47 PM
I think you are both right, to some degree.

It's been some time since I have used C++, but as far as I understand it, RAII is primarily about controlling leaks, rather than strictly defined state (even if the name would imply that) once the constructor runs. The core idea is that if resource allocations are condensed in constructors, then destructors gracefully handle deallocations, and as long as you don't forget about the object (the _ptr helpers help here), the destructors get called and you don't leak resources. You may end up with a bunch of FooManager wrapper classes if acquisition can fail (throw), though. So yes, I agree with your GP comment: it's the deterministic destruction that is the power of RAII.

On the other hand, what you refer to in this comment, and what the parent hints at with "when implemented properly", is what I have heard referred to (in a non-English context) as type totality. Think AbstractFoo vs ConcreteFoo, but used not only for abstracting state and behavior in a class hierarchy, but rather to ensure that objects are total. Imagine, dunno, a database connection. You create some AbstractDBConnection (bad name), which holds some config data; then the open() method returns an OpenDBConnection object. In this case the Abstract object does not even need to call close(), and the total object can safely call close() in its destructor. Maybe not the best example. This avoids resources that are in an undefined state.
by friendzis
2/20/2026 at 10:55:44 AM
You're talking about the part of C++ that was inherited from C. Unfortunately, it was way too late to fix by the time RAII was even invented.
by usrnm
2/20/2026 at 11:01:54 AM
And the consequence is that, at least in C++, we don't see the benefit you describe of "objects can never be in an uninitialized or partially-initialized state".Anyway, I think this could be fixed, if we wanted to. C just describes the objects as being uninitialized and has a bunch of UB around uninitialized objects. Nothing in C says that an implementation can't make every uninitialized object 0. As such, it would not harm C interoperability if C++ just declared that all variable declarations initialize variables to their zero value unless the declaration initializes it to something else.
by mort96
2/20/2026 at 3:31:46 PM
It's possible to fix this in application code with a Primitive<T> or NoDefault<T> wrapper that acts like a T, except it doesn't have a default constructor. Use Primitive<int> wherever you'd use int where it matters (e.g. struct fields), and leaving it uninitialized will be a compiler error.
by oasisaimlessly
2/20/2026 at 4:01:55 PM
Yea no. I'm not gonna do that.
by mort96
2/20/2026 at 10:11:23 AM
[dead]
by ceteia
2/20/2026 at 10:23:13 AM
To be fair, RAII is so much more than just automatic cleanup. It's a shame how misunderstood this idea has become over the years.
by usrnm
2/20/2026 at 12:49:01 PM
Can you share some sources that give a more complete overview of it?

I got out my 4e Stroustrup book and checked the index; RAII only comes up when discussing resource management.
Interestingly, the verbatim introduction to RAII given is:
> ... RAII allows us to eliminate "naked new operations," that is, to avoid allocations in general code and keep them buried inside the implementation of well-behaved abstractions. Similarly "naked delete" operations should be avoided. Avoiding naked new and naked delete makes code far less error-prone and far easier to keep free of resource leaks
From the embedded standpoint, and after working with Zig a bit, I'm not convinced by that last line. Hiding heap allocations seems like it makes it harder to avoid resource leaks!
by randusername