Date: Fri, 26 May 2023 12:36:09 +0200
On Fri, May 26, 2023 at 12:13 AM Daniel Ruoso via SG15 <
sg15_at_[hidden]> wrote:
> Em qui., 25 de mai. de 2023 às 14:54, Tom Honermann <tom_at_[hidden]>
> escreveu:
>
>> I'm not sure where the idea of a narrowing set of inputs comes from.
>>
> Concretely, the list of importable headers and their arguments would be in
> a file that is an input to the dependency scanning. That means the
> dependency file is downstream from the file with the list of importable
> headers and their arguments.
>
> Therefore if that list changes, the file has to be updated, which results
> in downstream targets being invalidated.
>
> You seem to be saying that I have another way to do that. I called it
> "narrowing" because that was how I understood you were suggesting to
> achieve it (i.e.: later builds have only the header units used by that unit
> in that list), but it seems that's not what you meant. So, let's ignore the
> "narrowing" bit.
>
> Would you mind explaining concretely a possible implementation for a build
> system with the following requirements:
>
> 1. Perform the dependency scanning correctly, which depends on the list
> of relevant header units and their arguments being an input to the
> dependency scan.
>
Many of your messages seem to assume that there is a single dependency
scan; here the assumption shows in the singular noun phrase "the dependency
scan", and other messages have made it more explicit. That isn't the
only model. You can also have a per-TU dependency scan. IMO it is the best
approach, and I suspect it will result in the fastest builds for both clean
and incremental builds. However, it is perhaps a bit trickier to make
efficient. In my mind, the best way to do it is to have a persistent
(local!) server that caches the important information about each file
(including non-importable headers), and each per-TU scan task is really
just a command that calls into the server to evaluate the scan rooted at
that TU. I started on a POC of this approach, but haven't spent much time
on it for the past few years because other things in my life have taken
priority. But nothing that I have seen implies that it won't be a viable
strategy.
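To make the per-TU model concrete, here is a minimal ninja-flavoured sketch
(the "scanner" tool, hu.lst, and the file names are purely illustrative; the
caching server described above would sit behind the scanner command rather
than appearing in the build graph):

    # One scan edge per TU, emitting a dyndep file for just that TU.
    rule scan
      command = scanner --tu $in --header-unit-list hu.lst -o $out
    rule cxx
      command = clang++ -c $in -o $out
    build foo.o.dd: scan foo.cpp | hu.lst
    build foo.o: cxx foo.cpp || foo.o.dd
      dyndep = foo.o.dd

The point is only that each TU gets its own scan task; how the results flow
back into the graph (dyndep, @flags files, etc.) is a separate choice.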
> 2. Doesn't require the build system to dynamically generate new nodes on
> the build graph. The full set of nodes needs to be prepared beforehand, but
> edges can be added after the fact. The dependency scanning runs as part of
> the build and dynamically adds those edges (e.g.: CMake does this with
> ninja's dyndep).
>
> 3. Doesn't require the build system to compare the output of a command to
> see if it changed before deciding if a downstream target needs to be redone.
>
I don't think most build systems require *both* 2 and 3 at the same time.
For example, ninja requires 2 but not 3 (sort of*), and make requires 3 but
not 2. I don't see anything wrong with requiring different approaches for
different build systems, adapted to the strengths and weaknesses of the
underlying build system.
There are also some other very subtle differences between make and ninja
that a solution for building C++20 modules can take advantage of. For
example, while ninja eagerly evaluates the mtimes of all files up front and
only re-evaluates the mtimes of outputs if restat=1 is specified, make
evaluates mtimes lazily and checks the mtimes of downstream inputs *after*
upstream tasks have run. This can be exploited with things like .TOUCH files
to allow upstream tasks to optionally signal whether downstream tasks need
to rerun. I will admit that I am much more of an expert on (ab)using ninja
than make, so I am a bit fuzzy on the details of how to do this, but the
CMake policy CMP0058
<https://cmake.org/cmake/help/latest/policy/CMP0058.html#policy:CMP0058>
(now linked from the documentation on BYPRODUCTS for add_custom_command)
describes some production usages of this technique.
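For make, a minimal sketch of that kind of optional signaling is the classic
stamp-file idiom (tool and file names are illustrative, recipe lines are
tab-indented, and this shows the general shape rather than the exact usage
CMP0058 documents):

    # The scan always runs when headers.lst changes and always touches the
    # stamp, but header-units.args only gets a new mtime when its content
    # really changed.  Because make checks foo.o against header-units.args
    # only *after* the stamp rule has run, an unchanged list does not
    # rebuild foo.o.
    header-units.args: header-units.stamp ;
    header-units.stamp: headers.lst
            ./regen-header-unit-args headers.lst > header-units.args.tmp
            cmp -s header-units.args.tmp header-units.args \
                    || mv header-units.args.tmp header-units.args
            touch header-units.stamp

    foo.o: foo.cpp header-units.args
            $(CXX) @header-units.args -c foo.cpp -o foo.o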
* Technically, ninja actually *does* support dynamic edges, but the
mechanism is certainly non-intuitive. Note that ninja re-evaluates the build
tree every time the `build.ninja` file is modified, and it supports
regenerating the build.ninja file multiple times in a single run until the
regeneration produces one that ninja decides is already clean (i.e. it finds
a fixed point). Then and only then will it proceed to build anything that
isn't a transitive input edge of the build.ninja file. Prior to the official
support for dyndep, I made a prototype demonstrating that a build system
could in fact correctly support modules using stock ninja. It worked by
giving the scanner an input edge into build.ninja and emitting a new
build.ninja with new nodes as needed to either scan more files or build new
targets. This was described as a horrible hack, but it did work (and I
assume it still does even with current ninja).
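For what it's worth, the shape of that hack in a minimal sketch (the
"regen-build-ninja" tool and the file names are illustrative):

    # build.ninja is itself a build output: ninja rebuilds and re-reads it
    # until regeneration reaches a fixed point, and only then builds the
    # rest of the graph.  Each regeneration can add scan or compile edges
    # discovered by the previous round.
    rule regen
      command = regen-build-ninja --scan-state scan.state -o build.ninja
      generator = 1
    build build.ninja: regen scan.state sources.lst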
>
> 4. Has the full set of inputs and outputs known to the build system
> before executing the command, and the execution may happen in an
> environment where only those explicit inputs and outputs are available
> (e.g.: Remote Execution API).
>
IMO, ideally compilers will eventually support the equivalent of an
-frewrite-includes/-fdirectives-only mode that works correctly with modules
and importable headers. This would be useful for remote execution (in the
icecream style of distribution) and also for reproducing bug reports and
many other things. This would potentially require some
(outside-the-standard) indication of the flags used when building a region
of code within a single serialized stream. This would also require that the
preprocessor be able to push/pop a clean state, but that doesn't seem
significantly harder than the widely supported #pragma push/pop.
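As a point of reference, this is a sketch of how icecream-style distribution
works today without modules (these flags exist in current clang, the file
names are illustrative, and the point of the paragraph is that no equivalent
exists yet for importable headers):

    # Runs locally: inline every #include so the result is self-contained.
    clang++ -E -frewrite-includes foo.cpp -o foo.rewritten.cpp
    # Can run on a remote machine that only has this one file.
    clang++ -c foo.rewritten.cpp -o foo.o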
>
> 5. Works correctly on a clean build, and doesn't invalidate the entire
> build if we change the list of header units or the arguments to existing
> header units.
>
One "fun" technique I've used in my prototype build systems is to have
scanners (or later tasks that consume their output) generate @flags files
that appear on the compiler lines, which allows dynamically changing the
command lines even for build systems like ninja that seem like they don't
support that natively. You can then choose whether the flags files are
declared as inputs or not for the task, depending on whether you want their
changes to cause a rebuild or not. For something like the list of
importable headers, you have a choice: you can either generate a per-TU
list of importable headers with only the header that the TU actually uses
(determined by scanning) which is declared as in input to that TU, or you
can use a global list of all importable headers that *isnt* an input to any
TU, and have the build system use a different technique to detect when each
TU needs to be rebuilt based on changes to the imports that it cares about
(eg, .TOUCH files). Each technique may make sense for different build
system/compiler combinations.
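A minimal ninja-flavoured sketch of the per-TU variant (again, the "mkflags"
tool and the file names are purely illustrative):

    # mkflags emits only the header-unit arguments this TU actually
    # imports, based on the scan results.
    rule mkflags
      command = mkflags --tu $in --all-header-units hu.lst -o $out
    rule cxx
      command = clang++ @${out}.flags -c $in -o $out
    build foo.o.flags: mkflags foo.cpp | hu.lst
    # foo.o.flags is a declared input, so only TUs whose own flags file
    # changed get rebuilt; the global hu.lst is not an input to foo.o.
    build foo.o: cxx foo.cpp | foo.o.flags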
>
> I honestly can't find a solution that fits all those requirements. Please
> elaborate how you see that working.
>
> Daniel
>
> P.S.: I had originally written a response to the other points, but I think
> what I wrote above is the crux of the issue, and therefore I decided to
> focus only on that instead of getting into the nuances of all the other
> topics that you brought up.
>
>> _______________________________________________
> SG15 mailing list
> SG15_at_[hidden]
> https://lists.isocpp.org/mailman/listinfo.cgi/sg15
>
Received on 2023-05-26 10:36:25