Date: Fri, 26 May 2023 09:31:55 -0400
On Fri, May 26, 2023 at 06:36, Mathias Stearn via SG15 <
sg15_at_[hidden]> wrote:
> Many of your messages seem to assume that there is a single dependency
> scan. In this case by the use of the singular "the dependency scan" noun
> phrase, but other messages have made it more explicit. That isn't the
> only model. You can also have a per-TU dependency scan.
>
I'm not sure what you mean there.
> In my mind, the best way to do it is to have a persistent (local!) server
> that caches the important information about each file (including
> non-importable headers), and each per-TU scan task is really just a command
> that calls into the server to evaluate the scan rooted at that TU.
>
I'm also not sure what you mean there.
>> 2. Doesn't require the build system to dynamically generate new nodes on
>> the build graph. The full set of nodes needs to be prepared beforehand, but
>> edges can be added after the fact. The dependency scanning runs as part of
>> the build and dynamically adds those edges (e.g.: CMake does this with
>> ninja's dyndep)
>> 3. Doesn't require the build system to compare the output of a command
>> to see if it changed before deciding if a downstream target needs to be
>> redone.
>>
> I don't think most build systems require *both* 2 and 3 at the same time.
> For example ninja requires 2, and not 3 (sort of*), and make requires 3 and
> not 2. I don't see anything wrong with requiring different approaches for
> different build systems that are adapted to the strengths and weaknesses of
> the underlying build system.
>
Would you mind explaining how an implementation with ninja would work? It
would need to perform the correct dependency scan (which depends on the
list of importable headers and their arguments) both for the clean build
(when we don't yet know which importable headers a TU might use) and for
incremental builds afterwards, without a change in the list of importable
headers and their arguments invalidating all translation units.
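For concreteness, here is my (possibly incomplete) understanding of the
dyndep arrangement, roughly the shape of what CMake generates for ninja.
The scan tool, its flags, and the file names below are illustrative
placeholders, not a claim about what any particular toolchain emits:

```ninja
# Hypothetical sketch: one scan edge per TU emits a dyndep file; the
# compile edge declares "dyndep =" so module edges are added mid-build.

rule scan
  # Placeholder command; real scanners (e.g. clang-scan-deps) and their
  # flags differ. It must write ninja dyndep syntax to $out.
  command = scan-tool $in -o $out
  description = SCAN $in

rule cxx
  command = c++ -c $in -o $out
  description = CXX $in

build foo.o.dd: scan foo.cpp

# The dyndep file must itself be an input of the edge that uses it,
# here as an order-only dependency, so the scan runs first.
build foo.o: cxx foo.cpp || foo.o.dd
  dyndep = foo.o.dd
```

As I understand it, this covers the clean build (the scan edge always
runs before the compile edge), but it is exactly the incremental case,
where the set of importable headers changes underneath the static
per-TU scan edges, that I don't see how to handle without rescanning or
invalidating everything.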
To be clear, this is an honest question, not rhetorical or confrontational.
If someone can show me how to make this work, I'm ready to change my mind*;
I am just not seeing how it can.
daniel
* I am even open to ditching make if ninja accepts the patch for
integrating with the make job server upstream. We currently can't use ninja
in production because the outside orchestration uses the make job server to
control parallelism across different projects.
Received on 2023-05-26 13:32:02