12/28/2025 at 12:03:56 PM
Destructors are vastly superior to the finally keyword because they only require us to remember a single place to release resources (the destructor), as opposed to every finally clause. For example, a file always closes itself when it goes out of scope instead of having to be explicitly closed by the person who opened it. The syntax is also less cluttered, with less indentation, especially when multiple objects are created that would otherwise require nested try... finally blocks. Not to mention how branching and conditional initialization complicate things. You can often pair up constructors with destructors in the code so that it becomes very obvious when resource acquisition and release do not match up.
by winternewt
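For illustration, a minimal C++ sketch of the point above (the `File` wrapper and `write_report` function are invented, not from the article): the destructor closes the handle on every exit path, so no finally clause is needed at any call site.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Minimal RAII wrapper: the destructor closes the handle on every exit path.
    class File {
    public:
        File(const std::string& path, const char* mode)
            : handle_(std::fopen(path.c_str(), mode)) {
            if (!handle_) throw std::runtime_error("cannot open " + path);
        }
        ~File() { std::fclose(handle_); }   // runs on return, throw, or fall-through

        File(const File&) = delete;              // one owner per handle
        File& operator=(const File&) = delete;

        std::FILE* get() const { return handle_; }

    private:
        std::FILE* handle_;
    };

    void write_report(const std::string& path) {
        File f(path, "w");                       // resource acquisition is initialization
        if (std::fputs("report\n", f.get()) < 0)
            throw std::runtime_error("write failed");
        // no explicit close and no try...finally: ~File() runs on both paths
    }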
12/28/2025 at 12:37:55 PM
I couldn't agree more. And in the rare cases where destructors do need to be created inline, it's not hard to combine destructors with closures into library types.

To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.
[1]: https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...
by yoshuaw
12/29/2025 at 5:57:04 AM
The scope guard statement is even better!

https://dlang.org/articles/exception-safe.html
https://dlang.org/spec/statement.html#ScopeGuardStatement
Yes, D also has destructors.
by WalterBright
12/29/2025 at 12:07:54 PM
I use this library a lot for scope guards in C++: https://github.com/Neargye/scope_guard, especially for rolling back state on errors.

For example, in a function that inserts into 4 separate maps, and might fail between each insert, I'll add a scope exit after each insert with the corresponding erase.
Before returning on success, I'll dismiss all the scopes.
I suppose the tradeoff vs RAII in the mutex example is that with the guard you still need to actually call it every time you lock a mutex, so you can still forget it and end up with the unreleased mutex, whereas with RAII that is not possible.
by indiosmo
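For illustration, a minimal C++ sketch of the rollback pattern described in the comment above, using a hand-rolled dismissible guard rather than the linked library's actual API (the `Registry` type is invented, and two maps are used instead of four for brevity):

    #include <map>
    #include <string>
    #include <utility>

    // Hand-rolled dismissible scope guard (a stand-in, not the linked library's API).
    template <class F>
    class ScopeExit {
    public:
        explicit ScopeExit(F f) : f_(std::move(f)) {}
        ~ScopeExit() { if (armed_) f_(); }
        void dismiss() { armed_ = false; }
        ScopeExit(const ScopeExit&) = delete;
        ScopeExit& operator=(const ScopeExit&) = delete;

    private:
        F f_;
        bool armed_ = true;
    };

    struct Registry {
        std::map<int, std::string> names;
        std::map<int, std::string> paths;

        // Insert into both maps, or neither: each insert is paired with a guard
        // that erases it again unless we reach the end and dismiss the guards.
        void add(int id, std::string name, std::string path) {
            names.emplace(id, std::move(name));
            ScopeExit undo_name([&] { names.erase(id); });

            paths.emplace(id, std::move(path));   // may throw, e.g. std::bad_alloc
            ScopeExit undo_path([&] { paths.erase(id); });

            // ... further fallible work would go here ...

            undo_name.dismiss();   // success: keep everything
            undo_path.dismiss();
        }
    };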
12/29/2025 at 10:25:43 PM
The tradeoff with RAII is that once you have 3 things that must all succeed or all fail, that is very clumsy to do with RAII, but easy with a scope guard.
by WalterBright
12/29/2025 at 7:18:05 AM
Scope guards are neat, particularly since D has had them since 2006! (https://forum.dlang.org/thread/dtr2fg$2vqr$4@digitaldaemon.c...) But they are syntactically confusing, since they look like function invocations with some kind of aliased magic value passed in.
by Defletter
12/28/2025 at 12:17:20 PM
A writable file closing itself when it goes out of scope is usually not great, since errors can occur when closing the file, especially when using networked file systems.
by sigwinch28
12/28/2025 at 2:09:17 PM
You need to close it and check for errors as part of the happy path. But it's great that in the error path (be that using an early return or throwing an exception), you can just forget about the file and you will never leak a file descriptor.

You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
by mort96
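For illustration, a rough C++ sketch of the "write to a temp file, rename into place, unlink on error" flow described above; the class name and members are invented, and the error handling is deliberately simplified:

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Write to "<path>.tmp", rename into place on success, and let the
    // destructor unlink the temporary file on any error path.
    class AtomicFileWriter {
    public:
        explicit AtomicFileWriter(std::string path)
            : path_(std::move(path)), tmp_(path_ + ".tmp"),
              file_(std::fopen(tmp_.c_str(), "w")) {
            if (!file_) throw std::runtime_error("cannot open " + tmp_);
        }

        ~AtomicFileWriter() {
            if (file_) std::fclose(file_);                 // never committed: still open
            if (!committed_) std::remove(tmp_.c_str());    // error path: drop the temp file
        }

        void write(const std::string& data) {
            if (std::fwrite(data.data(), 1, data.size(), file_) != data.size())
                throw std::runtime_error("write failed");
        }

        // Happy path: close, check for errors, then rename into place.
        void commit() {
            int rc = std::fclose(file_);
            file_ = nullptr;
            if (rc != 0) throw std::runtime_error("close failed");
            if (std::rename(tmp_.c_str(), path_.c_str()) != 0)
                throw std::runtime_error("rename failed");
            committed_ = true;
        }

    private:
        std::string path_, tmp_;
        std::FILE* file_;
        bool committed_ = false;
    };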
12/28/2025 at 1:58:36 PM
Any fallible cleanup function is awkward, regardless of error handling mechanism.
by leni536
12/28/2025 at 3:43:51 PM
Java solved it by letting exceptions attach secondary ("suppressed") exceptions, in particular those occurring during stack unwinding (via try-with-resources).

The result is an exception tree that reflects the failures that occurred in the call tree following the first exception.
by layer8
12/29/2025 at 7:50:22 PM
I often miss this feature in other languages. It has saved me more times than I can count.
by NBJack
12/28/2025 at 1:23:35 PM
The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?
by mandarax8
12/28/2025 at 3:58:04 PM
You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.

For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?
If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
by winternewt
12/28/2025 at 3:54:19 PM
Just panic. What's the caller realistically going to do with that information?
by EPWN3D
12/28/2025 at 6:17:18 PM
> The entire point of the article is that you cannot throw from a destructor.

You need to read the article again, because your assertion is patently false. You can throw and handle exceptions in destructors. What you cannot do is leave those exceptions uncaught, because as per the standard an uncaught exception will cause the application to be terminated immediately.
by locknitpicker
12/29/2025 at 12:06:44 PM
You can throw in a destructor but not from one, as the quoted text rightly notes.
by nemetroid
12/28/2025 at 8:43:29 PM
So inside a destructor, throw has radically different behaviour, which makes it useless for communicating non-fatal errors.
by afiori
12/28/2025 at 9:04:48 PM
> So inside a destructor throw has a radically different behaviour that makes it useless for communicating non-fatal errors

It's weird how you tried to frame a core design feature of the most successful programming language in the history of mankind as "useless".
Perhaps the explanation lies in how you tried to claim that exceptions had any place in "communicating non-fatal errors", not to mention that your scenario, handling non-fatal errors when destroying a resource, is fundamentally meaningless.
Perhaps you should take a step back and think whether it makes sense to extrapolate your mental models to languages you're not familiar with.
by locknitpicker
12/28/2025 at 2:01:04 PM
That tastes like leftover casserole instead of pizza.
by DonHopkins
12/28/2025 at 1:19:46 PM
But they're addressing different problems. Sure, destructors are great, but you still want a "finally" for stuff you can't do in a destructor.
by raverbashing
12/28/2025 at 12:24:51 PM
Python has that too; it's called a context manager, and it's basically the same thing as C++ RAII.

You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.
by dist-epoch
12/28/2025 at 1:11:15 PM
It's not the same thing at all because you have to remember to use the context manager, while in C++ the user doesn't need to write any extra code to use the destructor, it just happens automatically.
by logicchains
12/29/2025 at 4:46:27 AM
To be fair, that's just an artifact of Python's chosen design. A different language could make it so that acquiring the object whose context is being managed requires one to use the context manager. For example, in Python terms, imagine if `with open("foo") as f:` was the only way to call `open`, and it gave an error if you just called it on its own.
by kibwen
12/28/2025 at 3:52:50 PM
How do you return a file in the happy path when using a context manager?

If you can't, it's not remotely "basically the same as C++ RAII".
by mort96
12/28/2025 at 12:13:32 PM
Destructors and finally clauses serve different purposes IMO. Most of the languages that have finally clauses also have destructors.

> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.
I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)
by jchw
12/28/2025 at 12:19:49 PM
> Most of the languages that have finally clauses also have destructors.

Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.
Which languages am I missing which have both try..finally and destructors?
by mort96
12/28/2025 at 1:14:24 PM
In C# the closest analogue to a C++ destructor would probably be a `using` block. You’d have to remember to write `using` in front of it, but there are static analysers for this. It gets translated to a `try`–`finally` block under the hood, which calls `Dispose` in `finally`.

    using (var foo = new Foo())
    {
    }
    // foo.Dispose() gets called here, even if there is an exception

Or, to avoid nesting:

    using var foo = new Foo(); // same, but scoped to the closest enclosing scope

There is also `await using` in case the cleanup is async (`await foo.DisposeAsync()`). I think Java has something similar called try-with-resources.
by brewmarche
12/28/2025 at 3:13:53 PM
Java's is:

    try (var foo = new Foo()) {
    }
    // foo.close() is called here.

I like the Java method for things like files because if there's an exception during the close of a file, the regular `IOException` block handles that error the same as it handles a read or write error.
by cogman10
12/28/2025 at 3:35:27 PM
What do you do if you wanna return the file (or an object containing the file) in the happy path but close it in the error path?
by mort96
12/28/2025 at 3:42:52 PM
You'd write it like this:

    void bar() {
      try (var f = foo()) {
        doMoreHappyPath(f);
      }
      catch (IOException ex) {
        handleErrors();
      }
    }

    File foo() throws IOException {
      File f = openFile();
      doHappyPath(f);
      if (badThing) {
        throw new IOException("Bad thing");
      }
      return f;
    }

That said, I think this is a bad practice (IMO). Generally speaking I think the opening and closing of a resource should happen at the same scope. Making it non-local is a recipe for an accident.

*EDIT* I've made a mistake while writing this, but I'll leave it up there because it demonstrates my point. The file is left open if a bad thing happens.
by cogman10
12/28/2025 at 3:56:14 PM
In Java, I agree with you that the opening and closing of a resource should happen at the same scope. This is a reasonable rule in Java, and not following it is a recipe for errors, because Java isn't RAII.

In C++ and Rust, that rule doesn't make sense. You can't make the mistake of forgetting to close the file.
That's why I say that Java, Python and C#'s context managers aren't remotely the same. They're useful tools for resource management in their respective languages, just like defer is a useful tool for resource management in Go. They aren't "basically RAII".
by mort96
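For illustration, a minimal C++ sketch of that point (the function name is invented): the open file can be returned to the caller on success, while any exception before the return closes it via the destructor.

    #include <fstream>
    #include <stdexcept>
    #include <string>

    std::ofstream open_log(const std::string& path) {
        std::ofstream out(path);
        if (!out)
            throw std::runtime_error("cannot open " + path);

        out << "# log started\n";            // may set the failbit on an I/O error
        if (!out)
            throw std::runtime_error("failed to write header");   // ~ofstream closes the file

        return out;                          // moved out; the caller now owns the open file
    }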
12/28/2025 at 4:33:14 PM
> You can't make the mistake of forgetting to close the file.

But you can make a few mistakes that can be hard to see. For example, if you put a mutex in an object you can accidentally hold it open for longer than you expect since you've now bound the life of the mutex to the life of the object you attached it to. Or you can hold a connection to a DB or a file open for longer than you expected by merely leaking out the file handle and not promptly closing it when you are finished with it.
Trying to keep resource open and close in the same scope is an ownership thing. Even for C++ or Rust, I'd consider it not great to leak out RAII resources from out of the scope that acquired them. When you spread that sort of ownership throughout the code it becomes hard to conceptualize what the state of a program would be at any given location.
The exception is memory.
by cogman10
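For illustration, a small C++ sketch of the lifetime pitfall described in the comment above (the types and names are invented): tying the lock to an object holds the mutex for as long as that object lives, while a scoped guard releases it at the end of the block.

    #include <mutex>

    std::mutex cache_mutex;

    // Pitfall: the lock's lifetime is now bound to the object's lifetime,
    // so the mutex stays held for as long as a CachedQuery instance exists.
    struct CachedQuery {
        std::unique_lock<std::mutex> lock{cache_mutex};
        int result = 0;
    };

    // Scoped alternative: the mutex is held only for the duration of the block.
    int scoped_lookup() {
        std::lock_guard<std::mutex> lock(cache_mutex);
        return 42;   // placeholder result
    }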
12/28/2025 at 3:32:32 PM
That approach doesn't allow you to move the file into some long-lived object or return it in the happy path though, does it?
by mort96
12/28/2025 at 3:41:04 PM
As someone coming from RAII to C#, you get used to it, I'd say. You "just" have to think differently. Lean into records and immutable objects whenever you can, and the IDisposable interface ("using") when you can't. It's not perfect, but neither is RAII. I'm on a learning path, but I'd say I'm more productive in C# than I ever was in C++.
by actionfromafar
12/28/2025 at 3:50:50 PM
I agree with this. I don't dislike non-RAII languages (even though I do prefer RAII). I was mostly asking a rhetorical question to point out that it really isn't the same at all. As you say, it's not a RAII language, and you have to think differently than when using a RAII language with proper destructors.
by mort96
12/28/2025 at 9:59:54 PM
Pondering - is there a language similar to C++ (whatever that means, it's huge, but I guess a sprinkle of don't pay for what you don't use and being compiled) which has no raw pointers and such (sacrificing C compatibility) but which is otherwise pretty similar to C++?
by actionfromafar
12/28/2025 at 11:37:37 PM
Rust is the only one I really know of. It's many things to many people, but to me as a C++ developer, it's a C++ with a better template model, better object lifetime semantics (destructive moves <3) and without all the cruft stemming from C compat and from the 40 years of evolution.

The biggest essential differences between Rust and C++ are probably the borrow checker (sometimes nice, sometimes just annoying, IMO) and the lack of class inheritance hierarchies. But both are RAII languages which compile to native code with a minimal runtime, both have a heavy emphasis on generic programming through templates, both have a "C-style syntax" with braces which makes Rust feel relatively familiar despite its ML influence.
by mort96
12/28/2025 at 7:01:34 PM
You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).

In addition, if the caller itself is a long-lived object it can remember the object and implement dispose itself by delegating. Then the user of the long-lived object can manage it.
by brewmarche
12/28/2025 at 7:10:59 PM
> You can move the burden of disposing to the caller (return the disposable object and let the caller put it in a using statement).

That doesn't help. Not if the function that wants to return the disposable object in the happy path also wants to destroy the disposable object in the error path.
by mort96
12/28/2025 at 7:17:43 PM
You have to write a disposable wrapper to return. Return it in the error case too.

    readonly record struct Result<TResult, TDisposable>(TResult? IfHappy, TDisposable? Disposable) : IDisposable
        where TDisposable : IDisposable
    {
        public void Dispose() => Disposable?.Dispose();
    }
by brewmarche
12/28/2025 at 7:21:57 PM
Usage at call site:

    using (var result = foo.GetSomethingIfLucky())
    {
        if (result.IfHappy is {} success)
        {
            // do something
        }
    }
by brewmarche
12/28/2025 at 12:27:49 PM
Technically CPython has deterministic destructors: __del__ always gets called immediately when the ref count goes to zero, but it's just an implementation detail, not a language spec thing.
by dist-epoch
12/28/2025 at 12:45:18 PM
I don't view finalizers and destructors as different concepts. The distinction only matters if you actually need cleanup behavior to be deterministic rather than just eventual, or you are dealing with something like thread locals. (Historically, C# even simply called them destructors.)
by jchw
12/28/2025 at 12:53:37 PM
There's a huge difference in programming model. You can rely on C++ or Rust destructors to free GPU memory, close sockets, free memory owned through an opaque pointer obtained through FFI, implement reference counting, etc.

I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.
by mort96
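For illustration, a C++ sketch of the point about freeing memory owned through an opaque FFI pointer; the `gpu_buffer` functions below are a made-up stand-in for a real C library, with trivial definitions so the snippet is self-contained.

    #include <cstdlib>
    #include <memory>

    // Made-up C-style API standing in for an FFI boundary.
    struct gpu_buffer { void* data; };
    gpu_buffer* gpu_buffer_create(std::size_t bytes) { return new gpu_buffer{std::malloc(bytes)}; }
    void gpu_buffer_destroy(gpu_buffer* p) { if (p) { std::free(p->data); delete p; } }

    // The C++ wrapper: the destructor (via unique_ptr's deleter) releases the
    // buffer deterministically at scope exit, not whenever a GC gets around to it.
    struct GpuBufferDeleter {
        void operator()(gpu_buffer* p) const { gpu_buffer_destroy(p); }
    };
    using GpuBufferPtr = std::unique_ptr<gpu_buffer, GpuBufferDeleter>;

    GpuBufferPtr make_buffer(std::size_t bytes) {
        return GpuBufferPtr(gpu_buffer_create(bytes));
    }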
12/28/2025 at 3:15:54 PM
For GCed languages, I think finalizers are a mistake. They only serve to make it harder to reason about the code while masking problems. They also have negative impacts on GC performance.

Java is actively removing its finalizers.
by cogman10
12/28/2025 at 1:12:46 PM
> I don't view finalizers and destructors as different concepts.

They are fundamentally different concepts.
See Destructors, Finalizers, and Synchronization by Hans Boehm - https://dl.acm.org/doi/10.1145/604131.604153
by rramadass
12/28/2025 at 5:02:26 PM
It would suffice to say I don't always agree with even some of the best in the field, and they don't always agree with each other, either. Anders Hejlsberg isn't exactly a random n00b when it comes to programming language design and still called the C# equivalent a "destructor", though it is now known as a finalizer in line with other programming languages. They are things that clean up resources at the end of the life of an object; the difference between GC'd languages and RAII languages is that in a GC'd runtime the lifespan of an object is non-deterministic. That may very well change the programming model, as it does in many other ways, but it doesn't make the two concepts "fundamentally different" by any means. They're certainly related concepts...
by jchw
12/29/2025 at 5:58:03 AM
They are related but fundamentally different. It is a vital semantic difference (influencing the programming model itself), since destructors (C++ style) are synchronous and deterministic while finalizers (Java style) are asynchronous and non-deterministic.

It is because of all these problems that the finalize method was deprecated in Java 9 and marked "deprecated for removal" (JEP 421) in Java 18. More details at https://stackoverflow.com/questions/56139760/why-is-the-fina... and https://inside.java/2022/01/12/podcast-021/
PS: JEP 421: Deprecate Finalization for Removal (https://openjdk.org/jeps/421), which also details alternative features/techniques to use.
by rramadass
12/29/2025 at 7:08:46 AM
I grasp the entirety of why people differentiate "finalizers" from "destructors", but in my opinion, the practical differences in their application are not the result of the concept itself being fundamentally different; they are a result of object lifetimes being different between GC'd and non-GC'd languages. In my opinion, the concept itself is pretty close to identical. You want to clean up resources at the end of the lifetime of an object. And yes, it's practically a mess because the object lifetime ends at a non-deterministic point in the future and usually not even necessarily on the same thread. Being a big fan of Go and having had to occasionally make use of finalizers for lack of a better option in some limited scenarios, I really genuinely do grasp this, but I dispute that it has anything to do with whether or not a language has try...finally, any more than it has anything to do with a language having any other convenient structured control flow measures, like pattern matching or else blocks on for loops.

(I do also realize that finalizer behavior in some languages is weird, for performance reasons and sometimes just legacy reasons. Go is one such language.)
But I think we've both hit a level of digression that wouldn't be helpful even if we were disagreeing about the facts (which I don't really think we are. I think this is entirely about frames of reference rather than a material dispute over the facts.) Forgetting whether finalizers are truly a form of destructor or not, the point I was trying to make really was that I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`. You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes. (Though I ultimately still prefer errors being passed around as value types, like with std::expected, rather than exception handling blocks.)
I believe the reason why we don't have languages (that I can think of) that demonstrate this exact combination is specifically because try/catch exception blocks fell out of favor at the same time that new compiled/"low-level" programming languages started picking up steam. A lot of new programming language designs that do use explicit lifetimes (Zig, Rust, etc.) simply don't have try...catch style exception blocks in the first place, if they even have anything that resembles exceptions. Even a lot of new garbage collected languages don't use try...catch exceptions, like of course Go.
Now honestly I could've made a better attempt at conveying my position earlier in this thread, but I'm gonna be honest, once I realized I struck a nerve with some people I became pretty unmotivated to bother, sometimes I'm just not in the mood to try to win over the crowd and would rather just let them bury me, at least until the thread died down a bit.
by jchw
12/29/2025 at 11:45:39 AM
> not the result of the concept itself being fundamentally different, it's a result of object lifetimes being different between GC'd and non-GC'd languages.

This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.

The correct way to think about it is as scoped entry and exit function calls i.e. a scoped guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages (a minimal sketch appears after this comment). This is purely exploiting the feature to make function calls with nothing whatever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
> I don't view RAII/scoped destructors as being equivalent or alternatives to things like `finally` blocks or `defer` statements. In C++ you basically use scope guards for everything because they are the only option, but I think C++ would still ultimately benefit from at least having `finally`.
Scope guards using the ctor/dtor mechanism are enough to implement all the policies like finally/defer etc. That was the point of the article.
> You can kind of emulate it, but not 100%: `finally` blocks are outside of the scope of the exception and can throw a new exception, unlike a destructor in an exception frame. Having more options in structured control flow can sometimes add complexity for little gain, but `finally` can genuinely be useful sometimes.
The article already points out the main issues (in both non-GC/GC languages) here, but it is actually much more nuanced. While it is advised not to throw exceptions from a dtor, C++ does give you std::uncaught_exceptions(), which one can use for those special times when you must handle/throw exceptions in a dtor. More details at - https://stackoverflow.com/questions/74607300/should-i-use-st... and https://en.cppreference.com/w/cpp/error/uncaught_exception.h...
Exception handling is always tricky to implement/use in any language since there are multiple models (i.e. Termination vs. Resumption) and a language designer is often constrained in his choice. Wikipedia has a very nice explanation - https://en.wikipedia.org/wiki/Exception_handling_(programmin... In particular, see the Eiffel contract approach mentioned in it and then the detailed rationale in Bertrand Meyer's OOSC2 book - https://bertrandmeyer.com/OOSC2/
by rramadass
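For illustration, a minimal sketch of the LogTrace idea mentioned in the comment above (the class and the example function are invented):

    #include <iostream>
    #include <stdexcept>
    #include <string>

    // The constructor logs scope entry and the destructor logs scope exit,
    // on every exit path, including exceptions.
    class LogTrace {
    public:
        explicit LogTrace(std::string scope) : scope_(std::move(scope)) {
            std::clog << ">> entering " << scope_ << '\n';
        }
        ~LogTrace() {
            std::clog << "<< leaving  " << scope_ << '\n';
        }

    private:
        std::string scope_;
    };

    int parse_config(const std::string& text) {
        LogTrace trace("parse_config");   // paired entry/exit messages, no finally needed
        if (text.empty())
            throw std::invalid_argument("empty config");   // exit message is still logged
        return static_cast<int>(text.size());
    }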
12/30/2025 at 3:55:34 AM
> This is the fundamental misunderstanding. The RAII ctor/dtor pattern is a very general mechanism not limited to just managing object (in the OO sense) lifetimes. That is why you don't need finally/defer etc. in C++. You can get all of these policies using just this one mechanism.

> The correct way to think about it is as scoped entry and exit function calls i.e. a scoped guard. For example, every C++ programmer writes a LogTrace class to log function (or other scope) entry and exit messages. This is purely exploiting the feature to make function calls with nothing whatever to do with objects (in the sense of managing state) at all. Raymond gives a good example when he points to how wil::scope_exit takes a user-defined lambda function to be run by a dummy object's dtor when it goes out of scope.
Hahaha. It is certainly not a fundamental misunderstanding.
All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.
> Scope guards using ctor/dtor mechanism is enough to implement all the policies like finally/defer etc. That was the point of the article.
You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the current function rather than scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function (a sketch of this appears after this comment). Seems like it works to me, not particularly elegant to use. But can you emulate `finally`? Again, no. FTA:
> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost. Update: Adam Rosenfield points out that Python 3.2 now saves the original exception as the context of the new exception, but it is still the new exception that is thrown.
> In C++, an exception thrown from a destructor triggers automatic program termination if the destructor is running due to an exception.
C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, and have spent a lot of my time on -fno-exceptions (among many other reasons.)
> The article already points out the main issues (in both non-GC/GC languages) here but it is actually much more nuanced. While it is advised not to throw exceptions from a dtor C++ does give you std::uncaught_exceptions() which one can use for those special times when you must handle/throw exceptions in a dtor. More details at ...
Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter. You can typically do that in `finally`.
When Java introduced `finally` (I do not know if Java was the first language to have it, though it certainly must have been early) it was intended for just resource cleanup, and indeed, I imagine most uses of finally ever were just for closing files, one of the types of resources that you would want to be scoped like that.
However, in my experience the utility of `finally` has actually increased over time. Nowadays there's all kinds of random things you might want to do regardless of whether an exception is thrown. It's usually in the weeds a bit, like adjusting internal state to maintain consistency, but other times it is just handy to throw a log statement or something like that somewhere. Rather than break out a scope guard for these things, most of the time when I see this need arise in a C++ program, instead the logic is just duplicated both at the end of the `try` and `catch` blocks. I bet if I search long enough, I could find it in the wild on GitHub search.
by jchw
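For illustration, a minimal C++ sketch of the Go-style defer emulation described in the comment above (the `Deferrer` name and the example function are invented):

    #include <cstdio>
    #include <functional>
    #include <vector>

    // One guard per function, holding a LIFO list of cleanup callbacks that all
    // run when the function exits, whether by return or by exception.
    class Deferrer {
    public:
        ~Deferrer() {
            for (auto it = actions_.rbegin(); it != actions_.rend(); ++it)
                (*it)();   // newest first, like Go's defer
        }
        void push(std::function<void()> action) { actions_.push_back(std::move(action)); }

    private:
        std::vector<std::function<void()>> actions_;
    };

    void process() {
        Deferrer defer;
        std::puts("acquire resource A");
        defer.push([] { std::puts("release resource A"); });

        std::puts("acquire resource B");
        defer.push([] { std::puts("release resource B"); });
    }   // prints "release resource B", then "release resource A"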
12/31/2025 at 5:37:41 AM
> All scope guards are built off of stack-allocated object lifetimes, specifically the scope guard itself. That is not "my opinion" or "my perspective", it is the reality. Try constructing a scope guard that isn't based off of the lifetime of an object on the stack. You can't do this, because the fact that it is tied to an object's lifespan is the point. One of the few points in C++'s favor is the fact that this relatively elegant mechanism can do so much.

You are still looking at it backwards. C++ chose to tie user-defined object lifetimes to lexical scopes (for automatic storage objects defined in that scope) via stack-based creation/deletion because it was built on C's abstract machine model. Thus the implicit function calls to ctor/dtor were necessitated, which turned out to be a far more general mechanism usable for scope-based control via function calls.
But the lifetime of a user-defined object allocated on the heap is not limited to lexical scope and hence the connection between lexical scope and object lifetime does not exist. However the ctor/dtor are now synchronous with calls to new/delete.
So you have two things, viz. lexical scope and object lifetime, and they can be connected or not. This is why I insist on disambiguating both in one's mental model.

Java chose the heap-based object lifetime model for all user-defined types, and thus there is no connection between lexical scope and object lifetimes. It is because of this that Java had to provide the finally block to give some sort of lexical scope control even though it is GC-based. The Java object model is also the reason that finalize in Java is fundamentally different from dtor in C++, which I had pointed out earlier.
> You can kind of implement Go-style defer statements. Since Go-style defer statements run at the end of the current function rather than scope, you'd probably want a scope guard that you instantiate at the beginning of a function with a LIFO queue of std::functions that you can push to throughout the function. Seems like it works to me, not particularly elegant to use.
For lexical scopes you don't need anything new in C++; you can just use RAII at different levels using various techniques. However, to make it even clearer, the upcoming C2Y standard does have proposals for syntactic sugar for defer (https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3489.pdf) and Scoped Guards (https://github.com/bemanproject/scope/blob/main/papers/scope...).

We started this discussion with your claim that dtors and finalize are essentially the same, which I have refuted comprehensively.

Now you want to discuss finally and its behaviour w.r.t. exception handling. In the absence of exceptions, RAII gives you all of the finally-like behaviour.

In the presence of exceptions:
> C++'s behavior here is actually one of the reasons why I don't like C++ exceptions very much, ... Again, you can't really 100% emulate `finally` behavior using C++ destructors, because you can't throw a new exception from a destructor. `std::uncaught_exceptions()` really has nothing to do with this at all. Choosing not to throw in the destructor is not the same as being able to throw a new exception in the destructor and have it unwind from there. C++ just can't do the latter.
This is again a misunderstanding. I had already pointed you to the Termination vs. Resumption exception handling models, with a particular emphasis on Meyer's contract-based approach to their usage. Now read Andrei Alexandrescu's classic old article Change the Way You Write Exception-Safe Code — Forever - https://erdani.org/publications/cuj-12-2000.php.html

Both C++ and Java use the Termination model, but because the object model of C++ vs. Java is so very different (C++ has two types of object lifetimes, viz. lexical scope for automatic objects and program scope for heap-based objects with no GC, while Java only has program scope for heap-based objects reclaimed by GC), their implementation is necessarily different.
C++ does provide std::nested_exception and related API (https://en.cppreference.com/w/cpp/error/nested_exception.htm...) to handle chaining/handling of exceptions in any function. However, the ctor/dtor are special functions because of the behaviour of the object model detailed above. Thus the decision was made to not allow a dtor to throw while an uncaught exception is in flight. Note that this does not mean a dtor cannot throw (though it has been made implicitly noexcept since C++11), only that the programmer needs to take care about when to throw or not. An uncaught exception means there has been a violation of contract and hence the system is in an undefined state, and there is no point in proceeding further.

This is where std::uncaught_exceptions comes in, on which the Stack Overflow article I linked to earlier quotes Herb Sutter:
A type that wants to know whether its destructor is being run to unwind this object can query uncaught_exceptions in its constructor and store the result, then query uncaught_exceptions again in its destructor; if the result is different, then this destructor is being invoked as part of stack unwinding due to a new exception that was thrown later than the object’s construction.
Now the dtor can detect the in-flight exception and do proper logging/processing, instead of throwing, before exiting cleanly.
Finally, note also that Java itself has introduced new constructs like try-with-resources which should be used instead of try-finally for resources etc.
by rramadass
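For illustration, a minimal C++17 sketch of the pattern in the Herb Sutter quote above; the `Transaction` class and its logging are invented:

    #include <exception>
    #include <iostream>
    #include <stdexcept>

    // Record std::uncaught_exceptions() at construction and compare it in the
    // destructor to tell whether this object is being destroyed by stack unwinding.
    class Transaction {
    public:
        Transaction() : exceptions_at_entry_(std::uncaught_exceptions()) {}

        ~Transaction() {
            if (std::uncaught_exceptions() > exceptions_at_entry_) {
                // Destroyed during unwinding: do not throw, just roll back and log.
                std::clog << "rolling back (exception in flight)\n";
            } else {
                // Normal destruction: no exception in flight, commit and report errors.
                std::clog << "committing\n";
            }
        }

    private:
        int exceptions_at_entry_;
    };

    void update() {
        Transaction tx;
        throw std::runtime_error("boom");   // tx's destructor sees the in-flight exception
    }

    int main() {
        try { update(); }
        catch (const std::exception& e) { std::clog << "caught: " << e.what() << '\n'; }
    }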
12/28/2025 at 1:10:51 PM
Sometimes „eventually“ is „at the end of the process“. For many resources this is not acceptable.
by adrianN