Date: Fri, 17 Apr 2026 13:52:24 +0500
The bookkeeping, for example, could be initializing a std::variant<T&...>
for every T in the T... passed to the tuple as a template argument, and
storing them in an array. The subscript would simply index and return any
of them.
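That idea can be sketched against today's std::tuple without any layout change. The name `runtime_get` and the dispatch-table shape are hypothetical, not proposed wording; the sketch assumes C++17 and uses std::reference_wrapper, since std::variant cannot hold references directly:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <functional>
#include <tuple>
#include <utility>
#include <variant>

// Hypothetical runtime subscript for std::tuple: returns a variant of
// reference wrappers, one alternative per element type.
template <class... Ts, std::size_t... Is>
auto runtime_get_impl(std::tuple<Ts...>& t, std::size_t i,
                      std::index_sequence<Is...>) {
    using Ref = std::variant<std::reference_wrapper<Ts>...>;
    // One function pointer per element; the table is built once per tuple
    // type, not per instance, so objects carry no extra bookkeeping.
    static constexpr std::array<Ref (*)(std::tuple<Ts...>&), sizeof...(Ts)>
        table{ +[](std::tuple<Ts...>& tp) -> Ref {
            return Ref{std::in_place_index<Is>, std::get<Is>(tp)};
        }... };
    return table[i](t);
}

template <class... Ts>
auto runtime_get(std::tuple<Ts...>& t, std::size_t i) {
    assert(i < sizeof...(Ts));   // out-of-range index is a precondition here
    return runtime_get_impl(t, i, std::index_sequence_for<Ts...>{});
}
```

Because the table is static and per-type, an instance of the tuple stores nothing beyond its elements; the runtime cost is one indexed indirect call.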
On Fri, 17 Apr 2026, 1:33 pm Sebastian Wittmeier via Std-Proposals, <
std-proposals_at_[hidden]> wrote:
> The difference is that "runtime indexed heterogeneous lists", as far as you
> have described them, are no different from std::tuple.
>
> What kind of additional book-keeping would be stored?
>
>
>
> With SSO (small string optimization) one can say directly: there is some
> static storage up to length x, and there is a pointer to the actual string. If
> the pointer does not point to the static storage, the string class can
> assume it points to the heap, which has to be freed at destruction.
>
> Yes, the implementers have the freedom to have a different layout of their
> string class, but then they have to stick with it. They can't have a
> different layout for each compiled program.
>
>
>
> For runtime indexed heterogeneous lists, the compiler would just store all
> the objects one after the other. As the types are known at compile time
> (from the template parameters), the compiler knows the offsets at
> compile time. And if the 3rd element is requested at run time, it just
> translates the number 3 into an offset: either by calculation (compilers
> are quite good at finding an arithmetic formula that works for several cases),
> by a table (needed once per type, not per instance), by conditions and
> branches, or by branchless "select" assembler instructions.
>
>
>
> We have no idea what kind of book-keeping could improve runtime
> access beyond what the compiler can do anyway.
>
> I asked you to provide an example object layout to show the difference.
>
> So far you only speak about abstract optimizations: that the compiler
> would have more information, or that the object would store more information
> (book-keeping).
>
> Many people on this list can normally think of several optimizations
> possible with book-keeping for all kinds of objects, but in this case all
> they can say is that it would be the same as std::tuple. No more book-keeping
> is needed or useful, as std::tuple already has the complete information.
>
>
>
> Let's try to define a new type, which is perhaps faster. You want to
> store a double, an int, and a float.
>
>
>
>
> *std::tuple:*
>
> double d;
>
> int i;
>
> float f;
>
>
>
>
> *std::extended_tuple:*
>
> void* p[3] = { &d, &i, &f };
>
> double d;
>
> int i;
>
> float f;
>
>
>
>
> Is the extended_tuple faster?
>
>
>
> The compiler can generate a static
>
>
>
> size_t offsets[3] = { 0, 8, 12 };
>
>
>
> table for std::tuple. So, normally std::tuple with the static table would
> be preferred.
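The offset-table idea can be shown concretely. The standard does not pin down std::tuple's layout (some implementations store elements in reverse declaration order), so this sketch computes the table from a probe object rather than hard-coding { 0, 8, 12 }; `element_offsets` is a hypothetical helper name, and the pointer arithmetic is an illustration of the idea, not guaranteed-portable code:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <tuple>
#include <utility>

// Build the per-type offset table from a default-constructed probe object.
// For a given implementation the offsets are fixed per tuple type, so the
// table is needed once per type, never per instance.
template <class... Ts, std::size_t... Is>
std::array<std::size_t, sizeof...(Ts)>
element_offsets_impl(std::index_sequence<Is...>) {
    std::tuple<Ts...> probe{};
    return { static_cast<std::size_t>(
        reinterpret_cast<char*>(&std::get<Is>(probe)) -
        reinterpret_cast<char*>(&probe))... };
}

template <class... Ts>
std::array<std::size_t, sizeof...(Ts)> element_offsets() {
    return element_offsets_impl<Ts...>(std::index_sequence_for<Ts...>{});
}
```

With the table in hand, a runtime index becomes base address plus `offsets[i]`, exactly the translation described above:

```cpp
std::tuple<double, int, float> t{1.0, 2, 3.0f};
auto offs = element_offsets<double, int, float>();
// Whatever this implementation's layout is, offs[1] locates the int.
int* pi = reinterpret_cast<int*>(reinterpret_cast<char*>(&t) + offs[1]);
```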
>
>
>
> I can't come up with any other book-keeping possible.
>
>
>
>
>
> -----Original Message-----
> *From:* Muneem via Std-Proposals <std-proposals_at_[hidden]>
> *Sent:* Fri 17.04.2026 09:41
> *Subject:* Re: [std-proposals] Extension to std::tuples to allow runtime
> indexing.
> *To:* Jonathan Wakely <cxx_at_[hidden]>; std-proposals_at_[hidden];
> *Cc:* Muneem <itfllow123_at_[hidden]>;
> Sorry for sending it twice (I messed up while choosing the "to" part of the
> email).
> 1. The point is that the standard shouldn't rely on users reinventing the
> wheel a thousand times, each time they want to do so.
> 2. The point is that sometimes you don't need to access code at fixed
> points; sometimes you want bookkeeping information. For example, C++
> strings employ small string optimization, which in some cases might
> be argued to cause overhead, but it serves the notion of a string well. The
> only reason such optimizations can exist is that we aren't using C-style
> strings everywhere, or else the compiler couldn't employ them. Similarly,
> the growth of a string (according to The C++ Programming Language, 4th
> edition, by Bjarne Stroustrup) is: "To allow Strings to efficiently
> ‘‘grow’’ by adding characters at the end, I implement a scheme for keeping
> extra space for such growth similar to the one used for vector (§13.6.1).
> This makes String a suitable target for various forms of input."
> Similarly, standard library facilities provide predictable exception
> guarantees as well, which is important to formalize the notion of
> something that you are using in your code, so that your code makes sense.
> Basically, all these facilities can only be provided if there is a change
> in existing tuple implementations, and that change might cause overhead in
> other tuples and break ABIs; hence a completely new feature can provide
> whatever it wants (in my case, runtime indexed heterogeneous lists).
> 3. The point on the usage of inline assembly comes from section 8.2.6 of
> The C++ Programming Language, 4th edition, by Bjarne Stroustrup, where he
> describes how plain old data (a notion since replaced by the notions of
> trivial and standard-layout types) allows memcpy to efficiently copy data
> by a "block-move machine instruction". The point is that just as the
> compiler can optimize plain-old-data copies, it could perhaps also optimize
> accessing a reference to an element if it has no compatibility with
> existing code to worry about.
> 4. Thanks anyway for answering McKesson's claim on pointers.
>
> On Fri, Apr 17, 2026 at 12:30 PM Muneem <itfllow123_at_[hidden]> wrote:
>
> 1. The point is that the standard shouldn't rely on users reinventing the
> wheel a thousand times, each time they want to do so.
> 2. The point is that sometimes you don't need to access code at fixed
> points; sometimes you want bookkeeping information. For example, C++
> strings employ small string optimization, which in some cases might
> be argued to cause overhead, but it serves the notion of a string well. The
> only reason such optimizations can exist is that we aren't using C-style
> strings everywhere, or else the compiler couldn't employ them. Similarly,
> the growth of a string (according to The C++ Programming Language, 4th
> edition, by Bjarne Stroustrup) is: "To allow Strings to efficiently
> ‘‘grow’’ by adding characters at the end, I implement a scheme for keeping
> extra space for such growth similar to the one used for vector (§13.6.1).
> This makes String a suitable target for various forms of input."
> Similarly, standard library facilities provide predictable exception
> guarantees as well, which is important to formalize the notion of
> something that you are using in your code, so that your code makes sense.
> 3. The point on the usage of inline assembly comes from section 8.2.6 of
> The C++ Programming Language, 4th edition, by Bjarne Stroustrup, where he
> describes how plain old data (a notion since replaced by the notions of
> trivial and standard-layout types) allows memcpy to efficiently copy data
> by a "block-move machine instruction". The point is that just as the
> compiler can optimize plain-old-data copies, it could perhaps also optimize
> accessing a reference to an element if it has no compatibility with
> existing code to worry about.
> 4. Thanks anyway for answering McKesson's claim on pointers.
>
> On Fri, Apr 17, 2026 at 12:13 PM Jonathan Wakely <cxx_at_[hidden]> wrote:
>
>
>
> On Fri, 17 Apr 2026, 05:43 Muneem via Std-Proposals, <
> std-proposals_at_[hidden]> wrote:
>
> Thanks for your feedback:
> > Existing tuples cannot be optimized for runtime indexing without
> breaking ABI (Application Binary Interface).
> >You responded with: Where's the evidence for it?
> The evidence is that without *STANDARD* support for runtime indexing,
> programmers will reinvent the wheel in ways that might not always be efficient.
>
>
> OK, that's true.
>
>
>
> To optimize such a wheel, the optimizer needs complete freedom, which it
> does not have because it has to maintain ABI compatibility for the
> components of the wheel.
>
>
> This makes no sense. The compiler doesn't need to be able to alter the
> tuple ABI in order to optimize code accessing at fixed byte offsets.
>
>
>
> Again, the standard library beats the user at optimizing, say, a container,
> since the container can be built with inline assembly if the implementor
> thinks that is efficient for that implementation,
>
>
> This is a straw man argument. No C++ implementation actually does that.
>
>
>
> whereas user-written code is built on multiple blocks, each of which has its
> own implied invariant that contributes to the semantic meaning of that
> block; hence it can't be optimized away by the compiler, since, according to
> basic compiler theory, the optimizer can only optimize such that the
> semantic meaning of the code doesn't change.
> > This also means common implementations of variants (tagged unions or
> "closed set unions") might not suffice. Instead, implementations should use
> techniques similar to typeid or other internal mechanisms.
> You can't have a union of references; however, as I said, you can have
> reference wrappers, which are like pointers.
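For what it's worth, a variant of reference wrappers already works today as a perfectly ordinary variant, assignable and rebindable; a minimal demonstration (the function name `demo` is just for illustration):

```cpp
#include <cassert>
#include <functional>
#include <variant>

// A "variant of references" spelled with std::reference_wrapper, which is
// pointer-like and assignable, so the variant behaves like any other.
using int_or_double_ref =
    std::variant<std::reference_wrapper<int>, std::reference_wrapper<double>>;

inline void demo() {
    int i = 41;
    double d = 2.5;

    int_or_double_ref v = std::ref(i);
    std::get<std::reference_wrapper<int>>(v).get() += 1;  // writes through to i
    assert(i == 42);

    v = std::ref(d);   // rebinding is ordinary variant assignment
    assert(v.index() == 1);
}
```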
> >The main reason for this variant specialization to exist is that it
> cannot be valueless.
> Variants of references don't exist, so allowing references to exist
> in variants requires a new set of rules. Basically, a variant of const T
> will act differently than one of T, based on the behaviour of the const type
> modifier. Similarly, variants of references should act differently based on
> how references already act. Vector of bools, however, does not take a
> completely different type; instead it takes a bool, which is its own type,
> unlike const or references applied to a particular type T.
> Many many thanks for your feedback
>
> On Fri, Apr 17, 2026 at 8:57 AM Jason McKesson via Std-Proposals <
> std-proposals_at_[hidden]> wrote:
>
> On Thu, Apr 16, 2026 at 10:26 PM Muneem via Std-Proposals
> <std-proposals_at_[hidden]> wrote:
> >
> > Abstract: This proposal provides a specialization of std::tuple that can
> be indexed at runtime. It also introduces specializations for std::variant
> and std::optional to ensure the interface of such a tuple remains efficient.
> > I wrote 3 files, the first was a complete draft that I am gonna paste as
> a txt file, the second was a formatted docx file that I wanted to float
> around the internet to get some feedback before I post it over here (sadly
> no one was interested in the poor man's document), and the third is the
> shortened version where I removed all the code and kept it conceptual.
> Sorry for pasting three files at once, but I just wanted to see which one
> is acceptable and which one isn't. Personally, I like the first two.
>
> Your proposal is full of key statements that you say confidently, as
> if they were self-evidently true, but you offer no evidence for them
> (anecdotal or otherwise). And these aren't minor nit-picks; they're
> *foundational* to the choices your proposal makes.
>
> For example:
>
> > Existing tuples cannot be optimized for runtime indexing without
> breaking ABI (Application Binary Interface).
>
> Where's the evidence for it?
>
> A tuple is just a struct that uses compile-time numeric indices
> instead of compile-time names. If you use a compile-time name or
> compile-time index, this is converted into an address within the
> "struct". For every struct member, there is a byte offset from the
> start of the struct to that member.
>
> The thing that determines what the byte offset is for a given member
> is the implementation of the tuple/struct. This is baked into the ABI
> of the object. Which means that the *only part* of this process that
> is in any way ABI dependent is the specific mapping from an index to a
> byte offset.
>
> Your hypothetical runtime-optimized tuple would also have a way to
> convert an index into a byte offset. Every member would have a
> separate byte offset. So... why would one mapping of index:byte-offset
> be inherently slower at runtime than a different one? Why does ABI
> matter *at all* for converting a runtime index to a byte offset?
>
> The veracity of your statement is the core reason why your proposal
> uses a `tuple` specialization. That is, if this statement is untrue,
> then there's no reason to not just allow any `tuple` to be able to use
> runtime indexing.
>
> So if you're going to make that a part of your proposal, it needs to
> actually justify this statement.
>
> Similarly:
>
> > This also means common implementations of variants (tagged unions or
> "closed set unions") might not suffice. Instead, implementations should use
> techniques similar to typeid or other internal mechanisms.
>
> Um... why not?
>
> A reference is just a pointer. A variant of references is really just
> a variant of pointers. And pointers are all the same size. So a
> variant holding a union of pointers is functionally no different from
> a variant holding a `void*` that it casts to the appropriate type as
> needed.
>
> The only optimization I could see is if an implementation is able to
> hide the index somewhere inside the `void*` itself, using bits in the
> address that are otherwise unused. But I'm not sure that's faster than
> just adding a byte before the `void*`.
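That "void* plus an index" layout can be made concrete in a few lines. `ref_variant` is a hypothetical name, not a proposed facility; it is a sketch of the point that all object pointers have the same size, so no tagged union of distinct pointer types is needed:

```cpp
#include <cassert>
#include <cstddef>
#include <type_traits>

// A "variant of references" as nothing more than a void* plus an index,
// with the cast back to the stored type driven by that index.
template <class... Ts>
class ref_variant {
    void*       ptr_;
    std::size_t idx_;

    // Position of T within Ts..., or sizeof...(Ts) if absent.
    template <class T>
    static constexpr std::size_t index_of() {
        constexpr bool matches[] = { std::is_same_v<T, Ts>... };
        for (std::size_t i = 0; i < sizeof...(Ts); ++i)
            if (matches[i]) return i;
        return sizeof...(Ts);
    }

public:
    template <class T>
    ref_variant(T& obj) : ptr_(&obj), idx_(index_of<T>()) {
        static_assert(index_of<T>() < sizeof...(Ts),
                      "T must be one of the alternatives");
    }

    std::size_t index() const { return idx_; }

    template <class T>
    T& get() const {
        assert(idx_ == index_of<T>());   // wrong-type access is a bug here
        return *static_cast<T*>(ptr_);
    }
};
```

Note that it can never be valueless: construction always binds to an object, and "assignment" is just rebinding the pointer and index.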
>
> Again, if you have some evidence for this statement, put it into the paper.
>
> On a different topic:
>
> > The main reason for this variant specialization to exist is that it
> cannot be valueless.
>
> Specializations of a template should not radically alter the
> fundamental behavior of the core template. `vector<bool>` is a bad
> idea because its interface and behavior do not match what `vector`
> is supposed to mean. The same goes for this `variant`; it should work
> just like any other `variant`. You should be able to assign to it,
> change the bound type, etc. And if it can't do those things, there
> needs to be a *very* good reason as to why it doesn't.
>
> And feeling that valueless by exception is icky is not a good reason.
>
> If you don't want runtime indexing of a `tuple` to return a `variant`
> that actually does what a `variant` does, then it should be a new type
> with a new name.
> --
> Std-Proposals mailing list
> Std-Proposals_at_[hidden]
> https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
Received on 2026-04-17 08:52:39
