Date: Fri, 8 Feb 2019 16:55:53 -0500
On 08/02/2019 16.11, Corentin wrote:
> On Fri, 8 Feb 2019 at 20:07 Matthew Woehlke wrote:
>> On 08/02/2019 12.59, Corentin wrote:
>>> Asking people to manually maintain a module mapping for every project
>>> doesn't seem to be a reasonable stance given:
>>>
>>> - The information would have to be encoded in the build system too
>>> anyway because build systems will have different requirements than
>>> these files
>>
>> ...Are you sure? Example, please.
>
> Files conditionally included according to some macros, platforms, the
> presence of a library, etc.
> And generated files and so forth.
Hmm... can a module map list files that may not (ever) exist? I'm still
not entirely convinced that the build system and module map will need
different information. It seems more likely that if the compiler can be
told about modules that don't actually exist or are never used, then we
ought to be able to do likewise for the build system.
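To make that concrete, here is a minimal sketch of what such a shared map could look like. Nothing about this format is standardized — the entry names, the "when" conditions, and the filenames are all invented for illustration — but it shows how a single manifest could list files that do not exist on every platform, with both the build system and other tooling applying the same filter:

```python
import sys

# A hypothetical module manifest: each entry names a module, the file
# that provides it, and an optional condition. The format is invented
# purely to illustrate the idea discussed above.
manifest = [
    {"module": "app.core",  "file": "core.cppm"},
    {"module": "app.win32", "file": "win32.cppm", "when": "win32"},
    {"module": "app.posix", "file": "posix.cppm", "when": "posix"},
]

platform = "win32" if sys.platform.startswith("win") else "posix"

# The build system and any external tool can evaluate the same
# conditions, so the information need not be duplicated in two
# incompatible forms.
active = {e["module"]: e["file"]
          for e in manifest
          if e.get("when") in (None, platform)}
print(active)  # app.core plus exactly one platform-specific module
```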
Note that I'm not talking about stuff that's *already* in build systems.
That's not going to change. I'm asking what build systems will need to
know that is *both* something they don't already need to know, and cannot be
determined from the module map and/or some automated process.
> So either the information has to be duplicated (to the extent that it can),
> or the manifest file is as complicated as any build script, neither of
> which is desirable
I guess generating the module map falls into your second case? (Keeping
in mind that this won't be the common case...)
> Hopefully, when you attempt to build anything, the set of files is finite
> and known.
The latter isn't guaranteed, though I don't think build systems usually
support that case very well. The former is (if nothing else, the
resources available to humanity are finite ;-)), but is often of
sufficient magnitude as to introduce a non-trivial performance penalty.
Put differently: I can't even *start* actually compiling TUs until I
know the module dependency graph.
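As an illustration of that ordering constraint, here is a rough sketch (not any real build system's implementation) of the pre-compile scan: extract `export module` / `import` declarations from each source, build a file-level dependency graph, and topologically sort it to get a valid compile order. The sources here are hypothetical in-memory strings; a real scanner must also cope with preprocessing and with generated files that do not exist yet, which is exactly the hard part discussed here.

```python
import re
from graphlib import TopologicalSorter

# Hypothetical sources: filename -> contents. A real build would read
# these from disk -- and generated sources would not exist yet.
sources = {
    "a.cpp": "export module a;\n",
    "b.cpp": "export module b;\nimport a;\n",
    "main.cpp": "import a;\nimport b;\nint main() {}\n",
}

provides = {}  # module name -> file that exports it
imports = {}   # file -> set of module names it imports

for fname, text in sources.items():
    m = re.search(r"^\s*export\s+module\s+([\w.:]+)\s*;", text, re.M)
    if m:
        provides[m.group(1)] = fname
    imports[fname] = set(re.findall(r"^\s*import\s+([\w.:]+)\s*;", text, re.M))

# A TU depends on the files that export the modules it imports.
graph = {f: {provides[mod] for mod in deps if mod in provides}
         for f, deps in imports.items()}

order = list(TopologicalSorter(graph).static_order())
print(order)  # a valid compile order: a.cpp before b.cpp before main.cpp
```

The point of the sketch is the control flow: the full scan over *every* source must finish before the first compile can be scheduled.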
> Which isn't to say that it is easy to collect but the information is where
> it needs to be.
I'm not sure I agree with that. From a build system perspective, when I
go to write the dependencies for a TU, I "need" to know what those
dependencies are. Right now this is fairly easy to determine. Scattering
the information required to figure this out across my entire file system
doesn't really feel like "the information is where it needs to be".
>>> Any project beyond a simple Hello World has some of its state in a build
>>> system, and the only accurate way to build or parse a TU is to ask the
>>> build system for the relevant info.
>>
>> Sure, but with the current model there is NO case where this can be even
>> partially done without partially building the project first.
>
> You only need to scan the project, not to build it.
...until you have generated sources. Then you're up a creek.
Eh, okay, maybe you're up a creek anyway, since if the module's source
is generated, you can't (fully) parse anything that uses it either way.
But you also can't even know where the module will come from, or if
there is any hope that it will be found, until doing a build.
On a related note, I'm not sure if it matters, but a module map could
plausibly inform tooling what build artifacts need to be built for them
to function.
> Let's consider the cost of putting even more strain on developers.
I guess I'm less convinced than you are that having to write a
comparable amount of text in two files instead of one constitutes
"putting even more strain on developers".
Sure, that cost is non-zero, but I'm not convinced it is non-*trivial*,
whereas the cost to tooling developers to make their stuff work under
the current proposal does not feel trivial.
OTOH, I suppose I'm biased, seeing as I work with some of said tooling
developers. (That is, the cost *to Kitware*, specifically, seems likely
to be higher under the current proposal than with a module map approach.)
-- Matthew
Received on 2019-02-08 22:56:00