Re: [std-proposals] Allowing coroutine_handle::from_promise to accept a pointer-interconvertible object

From: Aaron Jacobs <jacobsa_at_[hidden]>
Date: Tue, 3 Jan 2023 17:23:08 +1100
Hi Jason, thanks for the feedback. Replies inline below.

On Tue, Jan 3, 2023 at 2:15 PM Jason McKesson via Std-Proposals
<std-proposals_at_[hidden]> wrote:
> As it currently stands, it's pretty easy to build your own
> asynchronous continuation-based promise/future types in such a way
> that they interact effectively with similar types from other systems.
> As long as you don't have your promise interfere in the `co_await`
> process, you can await on any asynchronous mechanism that will resume
> your coroutine at some point in the future. The process you're
> awaiting on is not required to know anything about your coroutine
> machinery. And your coroutine machinery doesn't need to know anything
> about what you're awaiting on.
> This is *important*. It's what allows non-coroutine continuation
> mechanisms to be built in a way that is compatible with `co_await`.
> The networking library, future filesystem IO libraries, etc can all
> use awaitable types to hide mechanisms that aren't coroutines. This
> works because the basic act of awaiting is an interaction between a
> type-erased coroutine handle and the type being awaited on.
> That stops working once you start casting `void*`s on the assumption
> that they point to some particular promise type.

I don't think the proposal interferes with this, or if it does I don't see how.
If I'm wrong I'd appreciate it if you could sketch some actual code to make the
problem more concrete for me.

Note that I'm not talking about casting arbitrary `void*`s on the assumption
that they point to a particular promise type (see below). I'm just asking to be
able to make a coroutine handle for a known, common promise base type. Nothing
about this prevents continuing to use type erasure for interoperability,
including `std::coroutine_handle<>`. My library already interoperates with
other "outsider" threading models using `std::coroutine_handle<>` in just the
way you describe (which I agree is a good idea).

> It's also unclear what your call stack mechanisms expect to do when
> they reach root processes like networking or filesystem operations
> that aren't coroutines.

They terminate at the last coroutine frame they know how to deal with, i.e.
where they can no longer follow the chain. So the async stack traces would show
only coroutines that use my library's promise type, which is totally acceptable
(and it's not clear how you could do any better). The same goes for
cancellation: it's a concept that isn't built into the standard, so it only
makes sense within a single ecosystem, or across specific bridges that adapt
the concept between ecosystems.

> It's also unclear what you expect to do with this promise base class.
> After all, there's no legal way to access the derived class.

Neither of my applications requires access to the derived class:

* Async stack traces need only the resume function pointer located at the
  coroutine frame's address, available from std::coroutine_handle::address
  (this is ABI-specific, but so is all unwinding, and it's the same across all
  three major compilers).

* The cancellation process just calls std::coroutine_handle::destroy.

Neither of these uses the promise at all, except to iterate. The promise is
necessary only to provide the "next" link in the linked list, but it's a
natural place to put the link because the promise already needs it for resuming
when done. That's the point of the proposal: this can be put in a
pointer-interconvertible base class to avoid needing to know the templated
promise type.

> Overall, it's not clear that your motivating examples are actually good ideas.

Async stack traces are a very useful debugging aid, and a major usability
improvement: they were difficult or impossible to obtain before coroutines and
are entirely practical with them. They are present in other modern languages
that have thought hard about async usability.

The cancellation scheme I described is running in production at Google, in a
system that processes billions of requests per second. I am confident this is
not an impractical idea. :-)


Received on 2023-01-03 06:23:36