Bloomberg's actual data shows application library dependency depths in the mid-30s.

On Fri, Apr 15, 2022, 07:30 Boris Kolpackov via Ext <ext@lists.isocpp.org> wrote:
Ben Boeckel <ben.boeckel@kitware.com> writes:

> On Thu, Apr 14, 2022 at 15:09:35 +0200, Boris Kolpackov wrote:
>
> > With the mapper approach you don't need to pre-scan everything but
> > you do need to map module names to module interface files. Having
> > a separate extension for interface files helps narrow down the
> > candidate pool.
>
> While true, I have doubts about the scalability of holding umpteen
> compiler instances around while executing more to find the bottleneck
> TU.

Likewise, I have scalability doubts about the "pre-scan the world"
approach.

Nested compiler invocations are a potential issue, that's true.
However, I believe the following will mitigate it:

1. The dependency depth of real-world software appears to be quite
   low (I believe Rene looked into this and concluded that header
   dependency depth rarely exceeds 10, IIRC).

2. The memory consumption (the primary concern) of both the compiler
   that got suspended waiting for the BMI and the compiler that will
   be executed to produce the BMI will be low. IME, the bulk of the
   memory consumption happens during the code generation phase. The
   suspended compiler hasn't reached that phase yet, while the BMI-
   producing compiler can presumably produce the BMI without requiring
   much memory, at which point the suspended compiler can proceed.

   It's possible that the BMI-producing compiler will then proceed
   (in parallel with the unblocked compiler) to code generation and
   consume a large amount of memory. But with the module mapper
   protocol (at least as implemented in GCC), the build system can
   effectively suspend it (by delaying the reply to the "BMI is ready"
   request) until later (e.g., until the unblocked compiler exits).

This is, of course, all conjecture, and only experience will tell
for sure.
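To make the throttling idea in point 2 concrete, here is a toy sketch of the build-system side of such a dialogue. The MODULE-COMPILED message name loosely mirrors the "BMI is ready" notification in GCC's mapper protocol, but the scheduler class, its API, and the policy of capping how many compilers may be in code generation at once are purely illustrative assumptions, not any existing implementation:

```python
from collections import deque

class MapperScheduler:
    """Hypothetical build-system side of a module-mapper dialogue.

    When a compiler reports MODULE-COMPILED, its BMI is already on disk
    and importers can proceed; delaying our reply merely keeps *this*
    compiler suspended before its (memory-hungry) code generation phase.
    """

    def __init__(self, max_active):
        self.max_active = max_active  # compilers allowed in codegen at once
        self.active = 0
        self.pending = deque()        # compilers whose replies we delayed
        self.log = []

    def handle(self, compiler_id, request):
        if request.startswith("MODULE-COMPILED"):
            if self.active < self.max_active:
                self.active += 1
                self.log.append(f"OK -> {compiler_id}")    # reply now
            else:
                self.pending.append(compiler_id)           # reply later
                self.log.append(f"delay {compiler_id}")

    def compiler_exited(self, compiler_id):
        # A compiler finished code generation; release one delayed reply.
        self.active -= 1
        if self.pending:
            nxt = self.pending.popleft()
            self.active += 1
            self.log.append(f"OK -> {nxt}")
```

For example, with max_active=1, a second MODULE-COMPILED arriving while one compiler is still generating code is held back, and its reply is sent only once the first compiler exits. The real protocol is request/response over a pipe or socket; the point here is only that the build system, not the compiler, decides when the reply goes out.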


> Unsatisfiable requests also end up taking way more resources
> without smarter caching mechanisms to store that information across
> attempts. Which is fine if you're implementing your own build executor I
> suppose. Getting Make or Ninja to understand it without complicated
> stamp juggling doesn't sound fun though.

Sure, it will be difficult to implement efficiently in legacy build
tools like Make and Ninja. But the solution space is not limited to
legacy build systems.
_______________________________________________
Ext mailing list
Ext@lists.isocpp.org
Subscription: https://lists.isocpp.org/mailman/listinfo.cgi/ext
Link to this post: http://lists.isocpp.org/ext/2022/04/19014.php