Date: Wed, 24 May 2023 10:23:49 -0400
On Tue, May 23, 2023 at 17:36, Jens Maurer
<jens.maurer_at_[hidden]> wrote:
> In any case, I would really appreciate a very specific,
> concrete example (exemplary source code) of the troubles
> you seem to be talking about. If three two-line files
> suffice to show the principles, that's good enough.
> I have the gut feeling there is an assumption hidden somewhere
> in your arguments that might not be universally shared.
```foo.h
#ifdef FOO_ARG_1
import <bar.h>;
#elif defined(FOO_ARG_2)
import <baz.h>;
#endif
```
```qux.h
#define FOO_ARG_1
#include <foo.h>
```
```qux.cpp
#include <qux.h>
```
Now, let's imagine that we declare `foo.h` to be an importable header.
Part of that process includes defining the local preprocessor
arguments to be used when translating `foo.h`. Let's say the user sets
`-DFOO_ARG_2`.
That means there are two competing interpretations of this code. If
`foo.h` is not importable, `qux.cpp` depends on `bar.h`. If `foo.h` is
importable, it depends on `baz.h`. The current experience in Clang
Header Modules and in MSVC is built around the expectation that the
user will not declare `foo.h` as importable; doing so is considered a
user error.
To implement that correctly, the list of header units and the local
preprocessor arguments for those headers need to be inputs to the
dependency scanning process, which now has to emulate the behavior of
the import by starting a fresh preprocessor state informed by those
local preprocessor arguments, and merge the results in the end. No
implementation supports this today; all of them simply assume that
importable headers will be interpreted in an equivalent way and that
the user will make correct choices. The failure mode is hopefully a
compilation error, but otherwise it is a silent ODR violation.
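To make the two interpretations concrete, here is a toy sketch (not
any real scanner; `scan_foo` just models the conditional in `foo.h` as
a function of the macro state) showing how the dependency flips:

```python
# Toy model of scanning foo.h: the header it pulls in depends entirely
# on which macros are visible when foo.h is processed.
def scan_foo(macros):
    """Return the header foo.h would import given the macro state."""
    if "FOO_ARG_1" in macros:
        return "bar.h"
    elif "FOO_ARG_2" in macros:
        return "baz.h"
    return None

# Case 1: foo.h is NOT importable. qux.h textually includes it, so the
# includer's state (#define FOO_ARG_1) is visible inside foo.h.
textual_state = {"FOO_ARG_1": ""}
assert scan_foo(textual_state) == "bar.h"

# Case 2: foo.h IS importable, translated with its own local arguments
# (-DFOO_ARG_2). The scanner must restart from a fresh state holding
# only those arguments; the includer's FOO_ARG_1 is NOT visible.
header_unit_state = {"FOO_ARG_2": "1"}
assert scan_foo(header_unit_state) == "baz.h"
```

A correct scanner has to run both modes: the includer's state for
textual inclusion, and a fresh per-header-unit state for imports, then
merge the resulting dependency sets.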
Some build systems are capable of not invalidating downstream results
when an intermediate target is rebuilt but produces the same content
(this is the optimization Tom is talking about); however, this would
exclude a large category of build systems. For those systems, a change
in the inputs to the dependency scanning process will invalidate all
downstream targets. And that is not an acceptable cost.
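A minimal sketch of that optimization (often called "early cutoff";
the names here are illustrative, not from any real build tool): the
build system hashes a rebuilt intermediate output and only invalidates
dependents when the content actually changed.

```python
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Target:
    def __init__(self, name):
        self.name = name
        self.last_hash = None  # hash of the last output we produced

def rebuild(target: Target, new_output: bytes) -> bool:
    """Rebuild `target`; return True if its dependents must rebuild."""
    new_hash = content_hash(new_output)
    changed = new_hash != target.last_hash
    target.last_hash = new_hash
    return changed  # early cutoff: identical output stops the cascade

scan = Target("scan-results")
scan.last_hash = content_hash(b"qux.cpp: bar.h")

# The scan re-runs (its inputs changed), but produced identical output:
must_invalidate = rebuild(scan, b"qux.cpp: bar.h")
assert must_invalidate is False  # downstream targets are spared
```

A timestamp-based system (plain make, for example) has no such
comparison, so any change to the scanner's inputs cascades to every
downstream target.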
daniel
Received on 2023-05-24 14:24:02