Date: Tue, 1 Feb 2022 16:18:32 +0100
Hi Daveed,
On Tue, Feb 1, 2022 at 3:38 PM Daveed Vandevoorde <daveed_at_[hidden]> wrote:
>
> For splicing, yes.
>
> See also P1717, which layers injection on top of the facilities of P1240
> (well, an earlier version of it): That doesn’t have the constraint.
>
>
> >
> > I agree that performance of the reflection API is important, and I agree
> that the fundamental primitive reflection operations like name_of, etc.
> should be value-based consteval functions (or maybe builtins). However I
> think that we are sacrificing usability, readability and the possibility to
> write a library of complex reflection algorithms (see q.3 above) just for
> some increase in performance, if the only representation of metaobjects is
> std::meta::info.
>
> It’s not just a matter of performance (as in one is Nx faster than the
> other); It’s a matter of scalability: The more ambitious the meta
> programming project, the larger the discrepancy.
> >
> > I'd also suggest doing compile-time measurements and comparisons with
> other approaches on *real-life* use-cases (I don't think that simple
> programs will do 10k+ reflection operations) and in larger programs the
> cost of reflection will typically be only a small fraction of the
> compilation time.
>
>
> Oh, I very much disagree with that. And we have historical precedent.
>
> When we first started documenting TMP techniques, we said the exact same
> thing (I was one of them): “This is a bit cumbersome, but it’s just for
> small utilities that will be encapsulated and whose cost is entirely
> fine”. And here we are now, with developers complaining about the compile
> times of their heavily meta-programmed code, and others reporting about
> compilers running out of memory.
>
> I predict the same will happen with reflective meta programming: Once we
> have it, libraries will apply it at increasingly larger scales. 10k
> reflection operations is already entirely realistic today… I expect we’ll
> see _many millions_ per translation unit.
>
> An additional effect, here, I suspect, is that we will increasingly see
> “core language feature” proposals morph into “reflective-metaprogrammed
> library feature” proposals (because the core language syntax space is
> getting crowded). See, for example, the proposals for metaprogrammed
> annotations/attributes (P1887), Herb’s metaclasses proposal (P0707), etc.
>
> Metaclasses alone is likely to put reflection on the critical path of
> future modern C++ compilation performance.
>
These are all valid points. However, I've recently implemented (on top of
the TS implementation) and presented several common reflection use-cases:
- Enum / string conversion
- Serialization and deserialization
- Parsing of command-line arguments into a config structure
- Parsing of a JSON file into a config structure
- RPC stubs and skeletons (without the networking part)
- A wrapper for a REST API: URL handling, dispatching functions (again
without the networking part)
- Automated registration with a scripting engine (ChaiScript)
- Fetching and converting data from a SQL (sqlite3) database
- Generating UML diagrams from code
- Generating SQL queries from the names in an "interface" class
- Implementation of the factory pattern (creating objects from data in JSON
or from input in a GUI)
Out of these 11 use-cases, 10 did splicing of types, constructors,
functions, enumerators, etc.
Since splicing requires (something like) NTTPs and an instantiation
context (basically a template function in disguise), the performance gains
from the pure-consteval API almost disappear compared to a template
function.
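To make the splicing point concrete, here is a rough, non-compilable
sketch of the enum/string use-case in P1240-style syntax (the ^ reflection
operator, [: :] splicers, and a P1306-style expansion statement); the names
members_of and name_of are illustrative assumptions, not settled API. Even
with a value-based std::meta::info, the function still has to be a template
in order to splice the enumerators back into code:

```cpp
// Sketch only; proposed syntax, not compilable with today's compilers.
template <typename E>
  requires std::is_enum_v<E>
constexpr std::string_view enum_to_string(E value) {
    // Iterate reflections of E's enumerators at compile time.
    template for (constexpr std::meta::info e : std::meta::members_of(^E)) {
        if (value == [:e:])               // splice the enumerator back in
            return std::meta::name_of(e); // its spelling, as a string
    }
    return "<unknown>";
}
```

Every splice above depends on the enclosing template's instantiation
context, which is exactly the machinery a TMP-based implementation would
use.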
IMO, any tricks that can be used to make the new API compile faster than
TMP can also be applied to make TMP compilation faster, and that would be
a useful goal in itself, since TMP is not going anywhere.
The only difference between foo<meta::info>() and foo(wrapped<meta::info>)
is the "unwrapping" of the wrapped<meta::info>. Do we have any measurements
showing that this causes a significant increase in compile times?
--Matus
Received on 2022-02-01 15:18:44