Date: Tue, 6 Jun 2023 13:14:45 +0200
On Thu, Jun 1, 2023 at 6:23 PM Ben Boeckel <ben.boeckel_at_[hidden]> wrote:
> On Tue, May 30, 2023 at 22:05:22 +0200, Mathias Stearn via SG15 wrote:
> > So I did a bit of digging and I now think it is possible to get
> > restat-like behavior out of make. The "trick" is to use a separate
> > recursive make invocation for scanning vs building. So for example,
> > in your user-facing Makefile, you will have entries like this:
> >
> > all:
> > 	$(MAKE) -f Makefile2 scan_all
> > 	$(MAKE) -f Makefile2 build_all
> > .PHONY: all
> >
> > The scan_all task will do all of the scanning (possibly by deferring
> > to sub-tasks if you aren't using a megascan approach) and then exit
> > that instance of make. The build_all task will run with a new
> > instance of make, so it won't see that the scanning tasks were
> > considered dirty and will only look at the mtimes of the outputs. If
> > the scan_all task (and its dependencies) only touch files whose
> > outputs have changed, then the downstream build tasks will not need
> > to rerun.
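The "only touch outputs whose contents changed" discipline quoted above can be sketched as a write-to-temp-then-compare step in the scan recipe. This is only an illustration of the idea, not code from the thread: the scanner is a stand-in `printf`, and the file names (`a.ddi`, `stamp`) are hypothetical.

```shell
set -e

# Regenerate the scan output into a temp file, and only replace the real
# output when the contents differ. Downstream rules keyed on the output's
# mtime then stay up to date after a no-op rescan.
scan_if_changed() {
    src=$1; out=$2
    # Stand-in for a real dependency scan (e.g. a compiler's scanning mode).
    printf 'deps-of %s\n' "$src" > "$out.tmp"
    if cmp -s "$out.tmp" "$out" 2>/dev/null; then
        rm "$out.tmp"         # unchanged: keep the old output and its mtime
    else
        mv "$out.tmp" "$out"  # changed: replace the output, bumping its mtime
    fi
}

echo 'int main() {}' > a.cpp
scan_if_changed a.cpp a.ddi   # first scan: a.ddi is created
touch stamp                   # mark a point in time after the first scan
sleep 1
scan_if_changed a.cpp a.ddi   # rescan: same contents, so a.ddi is untouched
if [ ! a.ddi -nt stamp ]; then
    # The second make instance (build_all), comparing only mtimes,
    # will see nothing new to rebuild.
    echo "a.ddi mtime preserved"
fi
```

This is the same compare-before-replace pattern that ninja's `restat = 1` automates: the rule may run, but if the output is byte-identical, its mtime does not advance and the edge's dependents stay clean.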
>
> This does not support generated sources from tools that you're building
> in this project (protobuf, VTK's wrapper code, etc.). You need to
> schedule a scan of *these* sources after building other code (assuming
> they use `import`). FWIW, this is why this scan/build split is done
> per-target in CMake (there's another level of recursive make to stitch
> the per-directory user interface in front of all of that).
Sure, but if I understand you correctly, you aren't saying that the
*general technique* won't work with generated sources built by compiled
tools, but that you will need an extra layer (or layers) of deferred
mtime checking to handle it, so you can't use the optimization of doing
exactly two mtime checks. That sounds about right. I wasn't trying to
account for that in this description, since the build can be more
efficient if you don't need to compile your tools within the same build
tree.
On a philosophical level, I even think it is better to compile your
tools (including code generators, compilers, build systems, etc.) in a
separate build tree, since you often want them to be built differently
from the actual product. For example, you probably want them built in
release mode even when building your product in debug mode, to avoid
slowing down your build. And you certainly want them to target the
machine running the build rather than the actual target when
cross-compiling. I think building tools is better handled by some sort
of separate build orchestration. Although perhaps that is easier for me
to say, since we typically use interpreted languages rather than C++
for code generation. I could see that being more problematic if you
need to share significant code between your product and your code
generator.
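The separate-orchestration idea above could look something like the
following command fragment, driving two CMake trees from a small
script. All of the paths, target names, cache variables, and the
toolchain file here are hypothetical, not from this thread; it is only
a sketch of the shape.

```shell
# Tools tree: always Release, always for the machine running the build.
cmake -S tools -B build-tools -DCMAKE_BUILD_TYPE=Release
cmake --build build-tools

# Product tree: Debug and cross-compiled, consuming the prebuilt tools.
cmake -S . -B build-product \
      -DCMAKE_BUILD_TYPE=Debug \
      -DCMAKE_TOOLCHAIN_FILE=target-toolchain.cmake \
      -DMY_CODEGEN_EXECUTABLE="$PWD/build-tools/codegen"
cmake --build build-product
```

Because the tools tree never changes configuration with the product
tree, its outputs' mtimes only move when the tools themselves change,
which is exactly the property the scan/build split relies on.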
@Daniel Ruoso <daniel_at_[hidden]> are you now satisfied that it is possible
to implement this with make, or do you see other issues?
Received on 2023-06-06 11:14:59