Date: Mon, 2 May 2022 09:53:35 -0400
On Mon, May 2, 2022 at 00:24, Tom Honermann <tom_at_[hidden]> wrote:
> Given an initial configuration of predefined macros (and enabled attributes, enabled features
> that impact semantics, present header files, etc...)
"enabled attributes" and "enabled features" is what I meant by a
completeness problem. You would need to know the complete feature set
from the compiler, and make sure any one of them that would
potentially affect the bmi are set in a particular way.
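To make the completeness problem concrete, here is a minimal sketch
(mine, not from the paper) of the kind of configuration probe a build
system would have to compile and run to fingerprint a toolchain:

    // config_probe.cpp -- hypothetical probe TU; its output would form
    // part of the key used to decide whether a distributed bmi matches
    // the local configuration. The completeness problem: this list must
    // cover *every* macro, attribute, and feature that can change parse
    // semantics, and that set is compiler-specific and open-ended.
    #include <cstdio>
    int main() {
        std::printf("__cplusplus=%ld\n", (long)__cplusplus);
    #ifdef __cpp_char8_t
        std::printf("__cpp_char8_t=%ld\n", (long)__cpp_char8_t);
    #endif
    #ifdef __cpp_impl_coroutine
        std::printf("__cpp_impl_coroutine=%ld\n", (long)__cpp_impl_coroutine);
    #endif
    #if __has_cpp_attribute(no_unique_address)
        std::printf("attr:no_unique_address=yes\n");
    #endif
        // ...and so on, for a list that is never guaranteed complete.
    }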
> there can be only one parse result for a given source file.
Yeah, but that's not the only source of bmi format changes.
> I acknowledge the desire to avoid invoking the compiler to make
> such a determination, so consider this the low QoI approach. But then again,
> invoking just the preprocessor may perform sufficiently well.
I don't think that is true. There are various other things that may
affect the usability of a bmi beyond the preprocessed contents of the
module interface unit.
> I previously stated that this approach could eliminate the need to consider
> compiler options by themselves, but that was not correct. A good example
> is the MSVC /Zc:char8_t[-] option. Though that option is associated with
> a feature test macro, there is no guarantee that code that is sensitive to use
> of the option will be guarded by the macro. In the following example, p will
> conditionally have type const char* or const char8_t* depending on use
> of that option. Thus, that option would need to be reflected as part of the
> salient properties of the initial configuration.
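For context, the elided example is roughly the following; note that
nothing in it tests the __cpp_char8_t feature test macro, yet the type
of p (and therefore the bmi) changes with the option:

    // hypothetical reconstruction; with MSVC /std:c++20 the literal has
    // type const char8_t[5], so p is const char8_t*; under /Zc:char8_t-
    // it is const char[5], so p is const char*.
    const auto* p = u8"text";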
Exactly, that's what I meant by the completeness problem: you would
need to enumerate every option that *could* behave like this.
> I think that would have a negative impact on the ability to adopt modules. In particular,
> it seems to me that it would greatly reduce the ability to consume a BMI distributed
> from a compiler implementor.
The trade-off here is that in an environment with a coherent set of
configurations (say, the system compiler on a Debian system), the
toolchain configuration discovered by the build system (e.g. CMake) is
very likely to match what was distributed, and at that point the build
system doesn't have to recursively parse all the modules consumed by
the project before it can start using them.
The alternative is that we make it more likely to find a reusable bmi,
but now we need to invoke the compiler for every module that is
consumed in order to determine whether it is usable.
The goal of this paper is to offer an optimization mechanism such that
when you have fully coherent distributions, you can skip an additional
scanning step.
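To make that trade-off concrete, here is a minimal sketch of the
decision the build system would make (all names are hypothetical, not
from the paper):

    #include <string>

    struct ToolchainConfig { std::string fingerprint; };
    struct Bmi { std::string built_with; };  // fingerprint recorded at build time

    // Stand-in for the expensive step: invoking the compiler to ask
    // whether this bmi is actually consumable.
    bool compiler_accepts(const Bmi&, const ToolchainConfig&) {
        return false;  // placeholder for a real compiler invocation
    }

    bool can_reuse(const Bmi& bmi, const ToolchainConfig& local) {
        // Fast path: on a coherent distribution the fingerprints match,
        // so no per-module compiler invocation or extra scan is needed.
        if (bmi.built_with == local.fingerprint)
            return true;
        // Slow path: fall back to asking the compiler, module by module.
        return compiler_accepts(bmi, local);
    }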
daniel