> Neither of these perform full semantic analysis to help get the build
system going (they do preprocessing + a few other steps, but never
semantic analysis), so I'm curious where I would find this information?
Ninja parses the output of /showIncludes and the depfiles produced by -MMD, so it is relying on preprocessor flags, but that's not really the point: the first full compilation runs before this information is available, and a byproduct of that compilation is the emission of the "affects-the-output" dependency information. Ninja doesn't have a separate "get the dependencies" stage that it runs before compilation. Even if the compiler grew an --emit-dep-info option that performed full semantic analysis to extract deep dependency information, this model would still fully support it.
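As a concrete sketch of that model, here is a minimal, hypothetical build.ninja fragment (file and rule names invented for illustration) showing both discovery mechanisms:

```ninja
# gcc/clang style: the compiler writes $out.d as a byproduct of compilation,
# and Ninja reads it back to learn the header dependencies for next time
rule cxx
  command = c++ -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  deps = gcc

# msvc style: Ninja scrapes "Note: including file:" lines emitted by
# /showIncludes from the compiler's own output
rule cxx_msvc
  command = cl /showIncludes /c $in /Fo$out
  deps = msvc

build foo.o: cxx foo.cc
```

The first build of foo.o runs with no header information at all; the dependencies discovered as a byproduct are stored in Ninja's deps log and used to decide staleness on subsequent builds.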
> It can't handle auto-generated headers.
This is fully supported with order-only dependencies. It isn't even a "pre-build" step; it purely affects the ordering when edges are enqueued for execution as part of the regular build.
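For reference, a hedged sketch of what this looks like in a Ninja file (the generator command and file names are invented for illustration):

```ninja
rule gen
  command = python gen_header.py > $out

rule cxx
  command = c++ -MMD -MF $out.d -c $in -o $out
  depfile = $out.d
  deps = gcc

build config.h: gen

# The "|| config.h" order-only dependency only guarantees config.h exists
# before foo.cc is first compiled; after that first build, the discovered
# depfile dependency takes over, so foo.o rebuilds when config.h actually
# changes, but a regenerated-yet-unused config.h forces nothing.
build foo.o: cxx foo.cc || config.h
```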
> It doesn't fit well with implementing support for distributed compilation or ignorable change detection.
It works perfectly fine. For distributed builds, you hand the TU to the distribution tool and it spits back the dependency information along with the result. For ignorable changes, nothing about a just-in-time dependency-info model requires that timestamps be the determiner of out-of-date-ness.
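To illustrate that last point, here is a minimal sketch (invented for this reply, not any particular build system's implementation) of deciding out-of-date-ness by content hash rather than timestamp, so that ignorable changes, such as touching a file without editing it, never trigger a rebuild:

```python
import hashlib

def digest(path):
    """Content hash of a file; changes only when the bytes change."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_dirty(inputs, recorded):
    """An output is dirty iff some input's content differs from what was
    recorded at the last successful build -- timestamps never enter into it."""
    return any(digest(p) != recorded.get(p) for p in inputs)

# Usage sketch: record {path: digest(path)} for each discovered dependency
# after a successful build, then consult is_dirty() before the next one.
```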
> It won't work for C++ modules, unless you are prepared to go with the "compiler calls back into the build system" approach
> I would be interested to learn how this will work
I've been formulating what this will look like for a while. I have several ideas in mind, but going into them will send the thread pretty far off track (if it hasn't gone too far already).