Date: Mon, 10 Apr 2023 15:01:34 +0300
On 4/10/23 14:56, Andrey Semashev wrote:
> On 4/10/23 11:42, Edward Catmur via Std-Discussion wrote:
>>
>>
>> On Sun, 9 Apr 2023 at 11:35, Andrey Semashev via Std-Discussion
>> <std-discussion_at_[hidden]
>> <mailto:std-discussion_at_[hidden]>> wrote:
>>
>> On 4/9/23 06:18, Edward Catmur via Std-Discussion wrote:
>> >
>> > On Sat, 8 Apr 2023 at 19:47, Andrey Semashev via Std-Discussion
>> > <std-discussion_at_[hidden]
>> <mailto:std-discussion_at_[hidden]>
>> > <mailto:std-discussion_at_[hidden]
>> <mailto:std-discussion_at_[hidden]>>> wrote:
>> >
>> > On 4/9/23 02:23, Edward Catmur wrote:
>> > >
>> > > Hm, there's something I'm failing to understand here. Let's say a
>> > > coroutine is constructed and its frame constructs a scope_failure
>> > > object while some number - say 3 - exceptions are in flight. The
>> > > coroutine is then suspended, and one of the exceptions is cleaned up,
>> > > leaving 2 exceptions in flight. The coroutine is then resumed, and
>> > > throws an exception out, so that again there are 3 exceptions in
>> > > flight. Wouldn't that result in the uncaught exception count
>> > > comparison incorrectly returning true?
>> >
>> > That's the point of saving the unhandled exception counter to the
>> > coroutine state. When the coroutine is resumed,
>> > co_unhandled_exceptions() would still return 3, so long as no exception
>> > is thrown from it. When the exception is thrown, it would increment the
>> > coroutine-specific counter to 4, which would allow scope_fail to detect
>> > the exception. When the exception propagates to the caller and is then
>> > rethrown, the caller's counter is incremented from 2 to 3 before the
>> > stack unwinding proceeds.
>> >
>> > Thanks, I think I understand. But how would throwing an exception
>> > increment that counter? Are we talking extra per-thread state, possibly
>> > a thread-local linked list of coroutines (bear in mind that there can
>> > be multiple active coroutine frames on a thread, if they call each
>> > other rather than awaiting) or is the unwinder detecting coroutine
>> > frames to find the corresponding counter?
>>
>> It doesn't really matter, whatever approach is suitable to compiler
>> implementers would be fine.
>>
>>
>> It does matter, because a lot hinges on whether this is implementable
>> efficiently. If the overhead is updating a thread-local linked list
>> whenever a coroutine is resumed or suspended - even if they don't use
>> this facility - I don't think that would be acceptable.
>
> I don't think that updating a pointer during a context switch would be a
> deal breaker. The co_unhandled_exceptions() mechanism only needs to know
> the current coroutine state of the current thread, it doesn't need a
> list. If a linked list is needed to be able to return from one coroutine
> to another then such a list must be already in place.
>
> Now that I think of it, co_unhandled_exceptions() might not require any
> new pointers. The coroutine already has to keep a pointer to its state
> somewhere to be able to reference its local variables. I don't see why
> co_unhandled_exceptions() couldn't use that pointer.
>
>> > Can this be implemented without breaking backwards compatibility
>> > (linking new object files against old runtime)?
>>
>> I don't think linking a new object against an older runtime (meaning the
>> runtime that is older than the headers that were used by the object) is
>> really an option. There's no problem with newer objects requiring a
>> newer runtime.
>>
>> It's not a problem per se, but it is a barrier to adoption.
>
> No, because that's the status quo. At least with some major compilers.
>
>> > For example:
>> >
>> > void f(bool b) {
>> >     unique_ptr<scope_fail> psf1;
>> >     {
>> >         unique_ptr<scope_fail> psf2;
>> >         try {
>> >             unique_ptr<scope_fail> psf = ...;
>> >             scope_success ss([&] { psf1 = std::move(psf); });
>> >             scope_fail sf([&] { psf2 = std::move(psf); });
>> >             if (b)
>> >                 throw 1;
>> >         } catch (...) {
>> >         }
>> >     }
>> >     throw 2;
>> > }
>> >
>> > If `b` is true, the dynamic lifetime scope_fail is transferred to psf2,
>> > so its destruction is immediately caused by the nonexceptional
>> > destruction of psf2, but that was only possible because of the
>> > exceptional destruction of sf, so which counts as "causing" the
>> > destruction?
>>
>> psf2 destruction is not caused by an exception, so the moved scope_fail
>> guard should not execute its action. It doesn't matter how the program
>> arrived at this state.
>>
>> And *psf2 destruction is not caused by an exception; it's caused by the
>> destructor of psf2 calling `delete` on a pointer. Why is the causation
>> transitive through the destructor of psf2 but not through the action of
>> sf transferring *psf to psf2?
>
> Why would psf2 destruction depend on anything that happened in the past?
> I think you're stretching the "causation" term way too thin. You could
> say that everything happening in the program was caused by the user hitting
> Enter when he ran the program, but that wouldn't be useful for the sake
> of the discussion re. scope guards.
>
> When I say "the destruction of the scope guard is caused by a stack
> unwinding procedure" I mean literally that the stack unwinding procedure
> is present higher up in the call stack from the scope guard destructor.
> I'm choosing this criterion because that's what makes most sense to me in
> relation to the expected behavior of the scope guards.
>
>> > The answer to that is
>> > pretty clear, as the abstract machine is either in the process of
>> > unwinding the stack, or it isn't; there is no third option, AFAIK.
>> >
>> > But you can do anything you like while stack unwinding is occurring;
>> > you can call whatever code you like from a destructor, as long as you
>> > don't leak another exception out. So stack unwinding is a necessary but
>> > not sufficient condition.
>>
>> Yes, you can run arbitrary code during stack unwinding, but that is not
>> the default. Most of the code that is run in destructors is aimed at
>> destroying objects and freeing resources. Indeed, this code is rather
>> special as it is not allowed to fail or return values. There are
>> exceptions to this, of course, and scope guards are a good example of
>> that. Scope guard actions can be considered as "arbitrary code" that is
>> not necessarily destroying anything. I'd even say, the fact that this
>> code is called in the context of a destructor is a hack, an abuse of the
>> core language feature to ensure that a piece of code is called upon
>> leaving, no matter what. Other languages provide constructs distinct
>> from destructors for this purpose (e.g. `finally` blocks in Java). In a
>> perfect world, I would prefer to have scope guard-like functionality in
>> the core language (but not in the form of try/finally from Java).
>>
>> Even if we had scope guard in the core language, its actions would be
>> executed during stack unwinding, no?
>
> Yes, but a core language feature could have better syntax and better-defined
> semantics as it wouldn't have to rely on unhandled_exceptions().
>
>> Anyway, first, let's consider only the "destroying code" that is typical
>> and intended for destructors. Whether the code is ultimately being
>> called by the runtime unwinding the stack is unambiguous and is a good
>> criterion for detecting failure for the purpose of scope_fail. That is,
>> it doesn't matter whether scope_fail is allocated as a complete object
>> on the stack, or dynamically allocated and pointed to by unique_ptr, or
>> a subobject of a class that is an element of a container, or whatever
>> else - if its destruction is directly or indirectly called by the stack
>> unwinding procedure then the scope guard can assume failure.
>>
>> Sorry, but no. It can't assume failure, since the scope could have been
>> entered when stack unwinding was already in progress.
>>
>> Let's consider the archetypal example: a scope guard that protects a
>> database transaction, committing it on success and cancelling it on
>> failure. It is entirely legitimate to open a database transaction during
>> stack unwinding - saving work in progress, for example. It would be very
>> much untoward if the scope guard were to perform the wrong action in
>> that case. This is why C++17 replaced uncaught_exception() with
>> uncaught_exceptions().
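
For concreteness, such a transaction guard written against the C++17 counter
comes down to roughly the following. This is only a sketch; the Transaction
type and its commit()/rollback() members are assumed purely for illustration.

#include <exception>

template <class Transaction>
class transaction_guard
{
public:
    explicit transaction_guard(Transaction& tx) :
        m_tx(tx),
        m_count(std::uncaught_exceptions())
    {
    }

    ~transaction_guard()
    {
        // Only a *new* exception since construction counts as failure,
        // which is what makes opening a transaction during unwinding
        // behave correctly.
        if (std::uncaught_exceptions() > m_count)
            m_tx.rollback();
        else
            m_tx.commit();
    }

    transaction_guard(const transaction_guard&) = delete;
    transaction_guard& operator=(const transaction_guard&) = delete;

private:
    Transaction& m_tx;
    int m_count;
};
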
>
> I'm explicitly talking about "destroying code" in this paragraph, and
> initiating a database transaction doesn't count as one, IMO.
>
>> Now let's consider the "arbitrary code" that is run during stack
>> unwinding. For the sake of simplicity, let's assume this code is a scope
>> guard action, but the same argument can be made regardless of the method
>> you use to get such code to run. For such arbitrary code, it doesn't
>> make sense to indicate whether it is run as a result of an exception.
>> For example:
>>
>> try
>> {
>>     scope_fail sg1{[]
>>     {
>>         scope_fail sg2{[]
>>         {
>>             std::cout << "2" << std::endl;
>>         }};
>>
>>         std::cout << "1" << std::endl;
>>     }};
>>
>>     throw 1;
>> }
>> catch (...) {}
>>
>> Here, it would be reasonable to expect "1" to be printed, but not "2",
>> because while the outermost code has failed with an exception, the scope
>> guard sg1 action hasn't, so sg2 is not supposed to trigger. In other
>> words, for a scope guard action, the fact that the scope guard's
>> destructor is called due to an exception must be concealed.
>>
>> Agreed.
>>
>> Currently, the language does not offer a way to differentiate the
>> "destroying code" and "arbitrary code", and I don't see a way for the
>> compiler to infer this information from the code. Which means the code
>> will probably need to express this information explicitly.
>>
>> Prior to coroutines, it was possible to infer this from
>> uncaught_exceptions(). If at construction of sg1 uncaught_exceptions()
>> was n, then at the destruction of sg1 it would be n+1, whereas at the
>> construction and destruction of sg2 it would be n+1 both times.
>
> Of course.
>
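
For reference, the pre-coroutine technique described above boils down to
something like this minimal sketch built on the C++17 counter (the class and
member names here are illustrative only):

#include <exception>
#include <utility>

template <class Action>
class scope_fail_sketch
{
public:
    explicit scope_fail_sketch(Action action) :
        m_action(std::move(action)),
        m_original_count(std::uncaught_exceptions())
    {
    }

    ~scope_fail_sketch()
    {
        // More exceptions in flight now than at construction means this
        // destructor is running as part of unwinding a new exception.
        if (std::uncaught_exceptions() > m_original_count)
            m_action();
    }

    scope_fail_sketch(const scope_fail_sketch&) = delete;
    scope_fail_sketch& operator=(const scope_fail_sketch&) = delete;

private:
    Action m_action;
    int m_original_count;
};
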
>> Furthermore,
>> there needs to be a way to query whether the code is being called as a
>> result of stack unwinding, and if the code indicates that it wants to
>> act as the "arbitrary code" that query needs to return false. For
>> example, a scope_fail destructor could look something like this then:
>>
>> ~scope_fail()
>> {
>>     bool& is_exception = std::is_exception_unwinding();
>>     if (is_exception)
>>     {
>>         bool was_exception = std::exchange(is_exception, false);
>>         m_action();
>>         is_exception = was_exception;
>>     }
>> }
>>
>>
>> Shouldn't you be using a scope guard to restore is_exception? :p
>
> I was simplifying this pseudocode for brevity. I could have posted
> code with a scope guard, and a noexcept specification to boot, but that
> would take more time and space in an already long post.
>
> And this is truly just pseudocode to illustrate the idea. The actual
> implementation might not have an is_exception_unwinding() function but
> something else, with a similar effect.
>
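
For completeness, the less abbreviated variant mentioned above would look
roughly like this. It is still pseudocode: is_exception_unwinding() remains
hypothetical, and scope_exit stands in for any unconditional scope guard.

~scope_fail() noexcept
{
    bool& is_exception = std::is_exception_unwinding();
    if (is_exception)
    {
        const bool was_exception = std::exchange(is_exception, false);
        // Restore the flag even if the action throws.
        scope_exit restore([&]() noexcept { is_exception = was_exception; });
        m_action();
    }
}
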
>> It's an interesting idea, but I think I can see some issues. Firstly,
>> when is is_exception_unwinding() reset to false? It can't be necessarily
>> on a catch, since there could be an outer exception unwinding in
>> process. I suppose you could consult uncaught_exceptions().
>
> If we don't consider the ability for the user's code to set it to false,
> is_exception_unwinding() would return true if co_uncaught_exceptions() >
> 0. I.e., the flag is set to true by the runtime when an exception is
> thrown and reset to false when no exceptions are pending.
>
> When we take modification into the picture, it gets more complicated.
> Essentially, when user's code sets the flag to false, it means that any
> uncaught exceptions are concealed from the following code. The first
> exception thrown by the following code will set the flag to true again,
> and catching that exception will reset it to false.
>
> Perhaps, it would make more sense to just allow modifying the uncaught
> exceptions counter. Then the implementation would look like this:
>
> ~scope_fail()
> {
>     int count = co_uncaught_exceptions();
>     if (count > m_original_count)
>     {
>         co_set_uncaught_exceptions(0);
>         scope_exit guard([&]
>         {
>             co_set_uncaught_exceptions(
>                 co_uncaught_exceptions() + count);
>         });
>         m_action();
>     }
> }
>
> Here, m_original_count would be initialized from
> co_uncaught_exceptions() on scope guard construction. We could assume it
> to be always zero in a program that consistently sets the uncaught
> exceptions count to zero in scope guard-like classes' destructors, like
> above. But to be on the safe side, capturing the value on construction
> also works.
>
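
Spelled out as a complete guard, the idea above would be roughly the
following. Again, this is pseudocode: co_uncaught_exceptions(),
co_set_uncaught_exceptions() and scope_exit are placeholders for facilities
that don't exist today.

template <class Action>
class scope_fail
{
public:
    explicit scope_fail(Action action) :
        m_action(std::move(action)),
        m_original_count(co_uncaught_exceptions()) // captured on construction
    {
    }

    ~scope_fail()
    {
        int count = co_uncaught_exceptions();
        if (count > m_original_count)
        {
            // Conceal the pending exceptions from the action, then restore
            // the counter afterwards.
            co_set_uncaught_exceptions(0);
            scope_exit guard([&]
            {
                co_set_uncaught_exceptions(co_uncaught_exceptions() + count);
            });
            m_action();
        }
    }

private:
    Action m_action;
    int m_original_count;
};
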
>> More
>> importantly, it looks like it would be difficult to get adoption, since
>> libraries won't be written to use the facility until it's present, but
>> if some use the facility and others don't you would have a problem:
>> anyone using an existing scope guard library (or just themselves calling
>> code from destructors) would find that your scope_fail thinks that there
>> is a failure condition when they would consider there not to be, because
>> they haven't cleared the is_exception flag.
>
> This would be fixed if we allowed modifying the uncaught exceptions
> counter, like above.
>
>> I don't think coupling destructors with storage types and discriminating
>> complete objects and subobjects is useful because I don't think these
>> properties should influence scope guards behavior.
>>
>> But if that were necessary to being able to implement them at all?
>
> I don't think that should be necessary.
>
>> I would say, using scope guards in contexts other than complete
>> automatic objects is not forbidden currently, and I don't see why it
>> shouldn't work.
>>
>> Because we don't know how to implement them otherwise. Using dynamic
>> information was fine before coroutines, but now there doesn't appear to
>> be a dynamic implementation technique that won't impose considerable
>> overhead or require simultaneous mass adoption.
>>
>> Whereas using static information (whether the destructor call of a
>> complete automatic object was emitted for normal scope exit or for
>> unwinding) will always give predictable answers, surely has negligible
>> overhead, and the downsides (being able to use scope_success and
>> scope_failure only as complete automatic objects) may not actually matter.
>
> At this point, if that's our only option, I would rather have a
> dedicated core language feature for scope guards.
To clarify, by this I don't mean a new way of defining destructors, but
a new syntax for introducing scope guards in user's code that doesn't
involve destructors or classes in the first place.
Received on 2023-04-10 12:01:47