Date: Sun, 9 Apr 2023 18:35:00 +0300
On 4/9/23 06:18, Edward Catmur via Std-Discussion wrote:
>
> On Sat, 8 Apr 2023 at 19:47, Andrey Semashev via Std-Discussion
> <std-discussion_at_[hidden]
> <mailto:std-discussion_at_[hidden]>> wrote:
>
> On 4/9/23 02:23, Edward Catmur wrote:
> >
> > Hm, there's something I'm failing to understand here. Let's say a
> > coroutine is constructed and its frame constructs a scope_failure
> object
> > while some number - say 3 - exceptions are in flight. The coroutine is
> > then suspended, and one of the exceptions is cleaned up, leaving 2
> > exceptions in flight. The coroutine is then resumed, and throws an
> > exception out, so that again there are 3 exceptions in flight.
> Wouldn't
> > that result in the uncaught exception count comparison incorrectly
> > returning true?
>
> That's the point of saving the unhandled exception counter to the
> coroutine state. When the coroutine is resumed,
> co_unhandled_exceptions() would still return 3, so long as no exception
> is thrown from it. When the exception is thrown, it would increment the
> coroutine-specific counter to 4, which would allow scope_fail to detect
> the exception. When the exception propagates to the caller and then
> rethrown, then the caller's counter is incremented from 2 to 3 before
> the stack unwinding proceeds.
>
> Thanks, I think I understand. But how would throwing an exception
> increment that counter? Are we talking extra per-thread state, possibly
> a thread-local linked list of coroutines (bear in mind that there can be
> multiple active coroutine frames on a thread, if they call each other
> rather than awaiting) or is the unwinder detecting coroutine frames to
> find the corresponding counter?
It doesn't really matter; whatever approach suits compiler
implementers would be fine.
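For reference, the counter-based idiom being discussed, in its pre-coroutine form, can be sketched roughly like this (a minimal sketch, not a production implementation; the `guard_fires` helper is mine, added just to demonstrate the behavior):

```cpp
#include <exception>
#include <utility>

// Record std::uncaught_exceptions() at construction; if the count is
// higher at destruction, an exception was thrown in between, i.e. the
// guard is being destroyed by stack unwinding.
template <class F>
class scope_fail {
public:
    explicit scope_fail(F f)
        : action_(std::move(f)), exceptions_(std::uncaught_exceptions()) {}

    ~scope_fail() {
        if (std::uncaught_exceptions() > exceptions_)
            action_();
    }

    scope_fail(const scope_fail&) = delete;
    scope_fail& operator=(const scope_fail&) = delete;

private:
    F action_;
    int exceptions_;
};

// Demonstration helper (hypothetical): returns true if the guard fired.
inline bool guard_fires(bool do_throw) {
    bool fired = false;
    try {
        scope_fail g([&] { fired = true; });
        if (do_throw)
            throw 1;
    } catch (...) {
    }
    return fired;
}
```

The coroutine problem discussed above arises precisely because this idiom compares a thread-wide counter, which a suspended coroutine frame can observe at a different value than when the guard was constructed.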
> Can this be implemented without breaking
> backwards compatibility (linking new object files against old runtime)?
I don't think linking a new object against an older runtime (meaning a
runtime older than the headers the object was compiled against) is
really an option. There's no problem with newer objects requiring a
newer runtime. The newer runtime has to support older objects though,
and this proposal doesn't affect that.
> > In any case, I'm not insisting on this co_unhandled_exceptions()
> > implementation, or even on the particular
> co_unhandled_exceptions()
> > interface. If there is a more efficient way to discover whether an
> > exception is in flight, including in coroutines, that's great
> and let's
> > standardize that.
> >
> > "In flight" is the tricky thing. There may be any number of exceptions
> > in flight over the life of a stack frame, and all the more so a
> > coroutine frame, with the function (or coroutine) blissfully
> unaware of
> > this. Really, the question we are asking is whether the destruction of
> > the object was *caused* by an exception being thrown. But this is
> > impossible to answer; causality has no physical correlate, so it's
> > definitely impossible to observe from within the abstract machine.
> > However, we can determine whether an object whose complete object
> is an
> > automatic variable was destroyed through exceptional stack
> unwinding or
> > through normal control flow.
>
> Storage type or membership are irrelevant. For example, destroying e.g.
> a stack-allocated unique_ptr pointing to a dynamically allocated scope
> guard should work the same as destroying the scope guard allocated
> directly on the stack. Really, the definitive factor that decides
> whether scope_fail should invoke the action is whether the destructor is
> ultimately called because of the stack unwinding.
>
> "Because of" is problematic. That implies causality, which is a huge and
> unsolved area of philosophy.
>
> For example:
>
>     void f(bool b) {
>         unique_ptr<scope_fail> psf1;
>         {
>             unique_ptr<scope_fail> psf2;
>             try {
>                 unique_ptr<scope_fail> psf = ...;
>                 scope_success ss([&] { psf1 = std::move(psf); });
>                 scope_fail sf([&] { psf2 = std::move(psf); });
>                 if (b)
>                     throw 1;
>             } catch (...) {
>             }
>         }
>         throw 2;
>     }
>
> If `b` is true, the dynamic lifetime scope_fail is transferred to psf2,
> so its destruction is immediately caused by the
> nonexceptional destruction of psf2, but that was only possible because
> of the exceptional destruction of sf, so which counts as "causing" the
> destruction?
The destruction of psf2 is not caused by an exception, so the moved
scope_fail guard should not execute its action. It doesn't matter how
the program arrived at this state.
> The answer to that is
> pretty clear, as the abstract machine is either in the process of
> unwinding the stack, or it isn't; there is no third option, AFAIK.
>
> But you can do anything you like while stack unwinding is occurring; you
> can call whatever code you like from a destructor, as long as you don't
> leak another exception out. So stack unwinding is a necessary but not
> sufficient condition.
Yes, you can run arbitrary code during stack unwinding, but that is not
the typical case. Most code run in destructors is aimed at destroying
objects and freeing resources. Indeed, this code is rather special, as
it is not allowed to fail or return values. There are exceptions to
this, of course, and scope guards are a good example: scope guard
actions can be considered "arbitrary code" that is not necessarily
destroying anything. I'd even say that calling this code in the context
of a destructor is a hack, an abuse of a core language feature to
ensure that a piece of code runs upon leaving the scope, no matter
what. Other languages provide constructs distinct from destructors for
this purpose (e.g. `finally` blocks in Java). In a perfect world, I
would prefer to have scope guard-like functionality in the core
language (though not in the form of Java's try/finally).
Anyway, first, let's consider only the "destroying code" that is
typical and intended for destructors. Whether such code is ultimately
being called by the runtime unwinding the stack is unambiguous, and it
is a good criterion for detecting failure for the purposes of
scope_fail. That is, it doesn't matter whether the scope_fail is
allocated as a complete object on the stack, dynamically allocated and
pointed to by a unique_ptr, a subobject of a class that is an element
of a container, or whatever else: if its destruction is directly or
indirectly caused by the stack unwinding procedure, then the scope
guard can assume failure.
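To illustrate the point with today's counter-based guard: whether the guard lives directly on the stack or behind a unique_ptr makes no difference, since both destructors run as part of unwinding and both observe the raised exception count (a minimal sketch; the guard type and `heap_guard_fires` helper here are mine, for demonstration only):

```cpp
#include <exception>
#include <functional>
#include <memory>

// Minimal counter-based guard mirroring the scope_fail semantics
// discussed above.
struct scope_fail {
    std::function<void()> action;
    int exceptions = std::uncaught_exceptions();
    ~scope_fail() {
        if (std::uncaught_exceptions() > exceptions)
            action();
    }
};

// The guard is heap-allocated and owned by a unique_ptr on the stack;
// destroying the unique_ptr during unwinding destroys the guard, which
// correctly observes the in-flight exception.
inline bool heap_guard_fires() {
    bool fired = false;
    try {
        std::unique_ptr<scope_fail> p(
            new scope_fail{[&] { fired = true; }});
        throw 1;
    } catch (...) {
    }
    return fired;
}
```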
Now let's consider the "arbitrary code" that is run during stack
unwinding. For the sake of simplicity, let's assume this code is a
scope guard action, but the same argument can be made regardless of how
such code comes to run. For such arbitrary code, it doesn't make sense
to indicate whether it is being run as a result of an exception.
For example:
    try
    {
        scope_fail sg1{[]
        {
            scope_fail sg2{[]
            {
                std::cout << "2" << std::endl;
            }};
            std::cout << "1" << std::endl;
        }};
        throw 1;
    }
    catch (...) {}
Here, it would be reasonable to expect "1" to be printed, but not "2":
while the outermost code has failed with an exception, the scope guard
sg1's action hasn't, so sg2 is not supposed to trigger. In other words,
for a scope guard action, the fact that the scope guard's destructor is
called due to an exception must be concealed.
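Interestingly, today's counter-based guard happens to get this particular example right: sg2 records the already-elevated exception count at construction, so it does not fire unless a further exception is raised inside the action. A self-contained version of the example (the guard type and `run_nested_example` helper are mine, collecting output into a string rather than printing):

```cpp
#include <exception>
#include <functional>
#include <string>

// Minimal counter-based guard, as typically implemented today.
struct scope_fail {
    std::function<void()> action;
    int exceptions = std::uncaught_exceptions();
    ~scope_fail() {
        if (std::uncaught_exceptions() > exceptions)
            action();
    }
};

inline std::string run_nested_example() {
    std::string out;
    try {
        scope_fail sg1{[&] {
            // Constructed while one exception is in flight, so sg2
            // stores a count of 1 and only fires if the count rises
            // further -- which it doesn't here.
            scope_fail sg2{[&] { out += "2"; }};
            out += "1";
        }};
        throw 1;
    } catch (...) {
    }
    return out;  // "1": sg1 fires, sg2 stays quiet
}
```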
Currently, the language does not offer a way to differentiate
"destroying code" from "arbitrary code", and I don't see a way for the
compiler to infer this information from the code, which means the code
will probably need to express it explicitly. Furthermore, there needs
to be a way to query whether the code is being called as a result of
stack unwinding, and if the code indicates that it wants to act as
"arbitrary code", that query needs to return false. For example, a
scope_fail destructor could then look something like this:
    ~scope_fail()
    {
        bool& is_exception = std::is_exception_unwinding();
        if (is_exception)
        {
            bool was_exception = std::exchange(is_exception, false);
            m_action();
            is_exception = was_exception;
        }
    }
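The exchange/restore scheme above can be emulated today with a thread-local flag standing in for the hypothetical std::is_exception_unwinding() (in this demo the flag is set manually, since only the runtime could set it for real; the point is how concealing it around the action hides the unwinding state from guards created inside it):

```cpp
#include <functional>
#include <string>
#include <utility>

// Stand-in for the hypothetical std::is_exception_unwinding() state.
thread_local bool g_unwinding = false;

struct scope_fail {
    std::function<void()> action;
    ~scope_fail() {
        if (g_unwinding) {
            // Conceal the unwinding state while the action runs, then
            // restore it for destructors further up the stack.
            bool saved = std::exchange(g_unwinding, false);
            action();
            g_unwinding = saved;
        }
    }
};

// Returns "1": the outer guard fires, but the inner guard created
// inside its action observes a concealed (false) flag and stays quiet.
inline std::string demo() {
    std::string out;
    {
        scope_fail outer{[&] {
            scope_fail inner{[&] { out += "2"; }};
            out += "1";
        }};
        g_unwinding = true;  // pretend the runtime started unwinding
    }
    g_unwinding = false;
    return out;
}
```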
> > Indeed, coroutines add a third option to this; a coroutine stack frame
> > automatic variable can also be destroyed by calling destroy() on its
> > coroutine_handle while suspended. The problem is that the language
> can't
> > know whether the call to destroy() was caused by unwinding or by
> normal
> > control flow. Should such destruction be considered success, failure,
> > both, or neither?
>
> This is an interesting question, but I tend to think we should keep
> things straightforward. That is, if the call to
> coroutine_handle::destroy() is done during the stack unwinding, that
> should be considered as such (i.e. all destructors called as a result of
> calling destroy() should be able to tell they are being called due to
> stack unwinding). This would be consistent with the current semantics of
> coroutine_handle being essentially a pointer to the coroutine state and
> coroutine_handle::destroy() essentially calling `operator delete` on it.
>
> It's difficult to know whether destroying a suspended coroutine should
> be considered success or failure, and I think that's orthogonal to the
> immediate cause of the destroy(). Say you have two coroutines, one
> performing a long running task and the other a watchdog (a timeout,
> etc.) on the first. If the first is destroyed while suspended that's
> probably a bad thing (the task failed) but if the second is destroyed
> while suspended that's usually a good thing (the task succeeded, so the
> watchdog is no longer needed)!
If you're calling destroy() as part of your normal code flow, which is
expected when you're destroying a coroutine that is no longer needed,
then that case is unambiguous and should not count as a failure. If you
want to treat it as a failure, then you should write the code calling
destroy() to also handle the failure.
The case when you call destroy() as a result of stack unwinding is less
clear and could be argued either way. But I'm still leaning towards
keeping this case straightforward, meaning that the scope guards in the
coroutine state should see this as a failure.
> Syntactically and semantically, there are just destructors currently.
> They don't discriminate between complete objects or subobjects. Or have
> I missed some new core language feature?
>
> It's not exposed to syntax, but it is there in ABI, so arguably in
> semantics.
The multiple destructor symbols generated by compilers are an
implementation detail, used to implement calling user-defined
deallocation functions with virtual destructors. You could say those
symbols are just wrappers that call the actual destructors defined in
the program and then, possibly, the deallocation function. The
destructors themselves don't know anything about the storage the object
is allocated in, or whether the object is a complete object or a
subobject.
I don't think coupling destructors with storage types, or
discriminating between complete objects and subobjects, is useful,
because I don't think these properties should influence scope guard
behavior.
> I should note that adding this runtime piece of information would be an
> ABI breaking change. As well as adding a new set of destructors for
> unwinding/normal destruction (which would be detrimental to code sizes
> on its own). In practice, I don't think this would be acceptable by
> compiler writers.
>
> It would only be these specific classes that would have the extra
> destructors, and they would likely be inline. Obviously, this would only
> work for scope guards complete automatic objects, which is why I'm
> interested to know whether they are expected to work as subobjects or
> dynamically.
I would say that using scope guards in contexts other than as complete
automatic objects is not forbidden currently, and I don't see why it
shouldn't work.
Received on 2023-04-09 15:35:06