Compiling a consteval function will "optimize" `2+3` to `5`, but that seems to be about the extent of it, at least in the major compilers.  E.g. as you and Wyatt et al point out in P1240, in no major compiler is subobject access optimized during constant evaluation.  (Some detailed testing based on the P1240 example is here: https://lists.llvm.org/pipermail/cfe-dev/2021-March/067809.html; see Richard’s response in the next message as well, which explains away GCC’s apparent huge advantage and notes improvements coming in clang.)
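
(A much simplified sketch of the distinction, not the actual P1240 example: the literal arithmetic folds trivially, while the subobject accesses below are still walked by the constant evaluator at every call.)

```cpp
struct Point { int x; int y; };

consteval int folded() {
    return 2 + 3;  // trivially folded to 5
}

consteval int sum_members(Point p) {
    // Subobject accesses: today's constant evaluators resolve p.x and p.y
    // step by step at each call, rather than having "precompiled" this
    // function when it was first compiled.
    return p.x + p.y;
}

static_assert(folded() == 5);
static_assert(sum_members({2, 3}) == 5);
```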

I am not aware of any further optimizations presently performed when compiling consteval functions to a module, in clang at least, though I suppose this could be done.  Attributes, too, could perhaps be used to instruct the compiler to spend extra time somehow "optimizing" a given consteval function, but such optimizations would have to be invented from scratch.

Why is this important?  It is not yet clear to me that constant evaluation, done via interpretation or however each compiler currently does it, will be able to efficiently perform the kind of heavy-duty metaprogramming users will increasingly ask of it, relative to alternatives that can perform metaprogramming via binary code (like plugins and, IIUC, Circle).  There is work being done on constant evaluation speed, in clang at least, but will it be enough?  If it is indeed viable, i.e. close enough in speed to the binary alternatives, then backslaps all around: that would be a huge achievement, and this discussion is largely moot.

I am optimistic that we can do much better with dedicated virtual machines and precompiled, checked, and optimized metaprograms in modules. I have two requirements for the language:

1. Compile-time evaluation does not execute undefined behavior.
2. Compile-time evaluation only invokes functions defined in the current translation unit or in an imported module.

The first imposes runtime overhead, but I think good analysis, optimization, and a capable VM will give really nice compile times. I like to believe we can optimize away most reference checks on subobject accesses. The alternative is... not great. Imagine how many bug reports Microsoft will get when somebody's untested metaprogram dereferences a nullptr and causes an undiagnosable ICE.
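
(A minimal sketch of that failure mode, with invented names: a checked evaluator turns the bad call below into an ordinary compile-time diagnostic, whereas an unchecked VM or a natively compiled metaprogram would take the compiler down with it.)

```cpp
struct Node { int value; const Node* next; };

consteval int second_value(const Node* head) {
    // head->next may be null; only a checked evaluator can report that
    // instead of crashing the compiler.
    return head->next->value;
}

constexpr Node tail{2, nullptr};
constexpr Node good{1, &tail};
constexpr Node bad{1, nullptr};

constexpr int ok = second_value(&good);     // ok == 2
// constexpr int boom = second_value(&bad); // error: dereference of a null pointer
```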

The second prevents libraries from opening sockets on behalf of your compiler.
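
(Concretely, the second rule preserves today's behavior even in a world of precompiled metaprograms: the evaluator only runs code whose definition it can see, in the current TU or an imported module. A small sketch, with a hypothetical binary-only function:)

```cpp
// Hypothetical binary-only "metaprogram": declared here, but its
// definition lives only in a shared library, not in this TU or in any
// imported module.
int phone_home();

consteval int helper() {
    return phone_home();
}

// constexpr int x = helper();  // error: phone_home() is not usable in a
//                              // constant expression; the evaluator never
//                              // executes code it cannot see.
```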


Until then, it seems to me C++ has left room for these "lower level" (i.e. faster and more capable, but less safe) means of metaprogramming.  The slow development of the standard features, while understandable, has magnified this room and led to some solid exploration of it.
Imagine, for example, if clang were to introduce a stable API/ABI for its AST.  If I understand Circle correctly, it would then be straightforward for the Circle folks to add support for this API, allowing the user to call clang functions (e.g. `void myMetaFunc(clang::Decl *)`) during constant evaluation by passing reflections to them (`myMetaFunc(^foo);`).  Basically plugins, but now callable on individual declarations via the language, rather than on an entire translation unit via command line options.
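
(Purely a sketch of the idea; the only real pieces below are clang::Decl and its getDeclKindName() accessor, and the calling convention described in the comment is imagined, not something any compiler supports today.)

```cpp
#include <clang/AST/Decl.h>
#include <cstdio>

// A metafunction compiled once into an ordinary shared library.
void myMetaFunc(clang::Decl *D) {
    std::printf("reflected a %s declaration\n", D->getDeclKindName());
}

// In the imagined extension, writing
//     myMetaFunc(^foo);
// in a consteval context would have the compiler call directly into the
// binary above, passing its own AST node for `foo`, with no interpretation
// of the metafunction's body required.
```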

This would offer the user full reflection information (including expressions etc.), accessible through a mature API, arbitrary AST transformation capabilities, and lightning-fast calls to precompiled binary libraries wherever desired, all atop the other functionality Circle currently supplies.  This would be a compelling metaprogramming-centric C++ offshoot.  And indeed such a language would arguably adhere more closely to the "zen" of C++, in that it would give users as much rope as they wish to make their (compile-time) code ever more powerful and efficient, leaving safety as a matter of expertise, encapsulation, etc.

Sounds great unless I'm using GCC, MSVC, or ICC. Also, how does that work for cross-compiling? 


To be sure, the currently proposed approach is definitely cleaner and safer, particularly for injection, and so long as it is in the same ballpark in terms of efficiency and capability, it wins.  But there needs to be a commitment to keeping it in that ballpark, lest the "room" remain and C++ offshoots become more viable.

I think this view is backward. There are already lots of alternative solutions for metaprogramming problems that are perfectly viable, including, I suppose, Circle. C++ is competing with all of those other approaches, not the other way around.

Andrew