Date: Wed, 2 Jan 2019 18:34:15 +0000
FASTBuild takes the preprocessing route as well. I’m not sure that we should limit ourselves to that approach, though. I will note that for my particular code base, on my machine, I end up significantly bottlenecked by preprocessing time. I have a mere 8 cores, and I tend to be able to keep only another dozen or so remote cores occupied. Sandboxing seems like it has the potential to be friendlier with respect to Amdahl's-law effects.
From: tooling-bounces_at_[hidden] <tooling-bounces_at_[hidden]> On Behalf Of Mathias Stearn
Sent: Wednesday, January 2, 2019 12:13 PM
To: WG21 Tooling Study Group SG15 <tooling_at_[hidden]>
Subject: Re: [Tooling] [ Modules and Tools ] Tracking Random Dependency Information
On Wed, Jan 2, 2019 at 8:41 AM Manuel Klimek <klimek_at_[hidden]<mailto:klimek_at_[hidden]>> wrote:
On Wed, Dec 19, 2018 at 10:30 AM Colby Pike <vectorofbool_at_[hidden]<mailto:vectorofbool_at_[hidden]>> wrote:
> It doesn't fit well with implementing support for distributed compilation or ignorable change detection.
It works perfectly fine. For distributed builds, you hand the TU to the distributed tool and it will spit back the dependency information. For ignorable changes, nothing about a just-in-time dependency-info model requires that timestamps be used as the determiner of "out-of-date"-ness.
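For illustration, here is a minimal sketch (not from the thread) of a timestamp-free out-of-date check: the build records content hashes of a target's inputs, and a later "touch" that changes only timestamps does not trigger a rebuild. File names are hypothetical.

```shell
# Hypothetical inputs recorded by a previous dependency scan.
deps="foo.cpp foo.h"

# foo.dep.sha256 stores the input hashes from the last successful build.
# sha256sum --check compares current file contents against those hashes,
# so timestamp-only changes are ignorable by construction.
if sha256sum --check foo.dep.sha256 --status 2>/dev/null; then
  echo "up to date"
else
  echo "rebuilding"
  # ...run the real compile here, then record the new input hashes...
  sha256sum $deps > foo.dep.sha256
fi
```

The same idea generalizes to hashing the command line and environment alongside the inputs, which is roughly what content-addressed build systems do.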
If you want to sandbox execution so that an action is only executed with the exact set of inputs it needs (for example, in order to send the minimum number of inputs to a remote execution node), this seems like a chicken-and-egg problem.
If you use the ICECC model of passing a preprocessed[1] file to the remote host, which compiles in a chroot, then it is already fully sandboxed: it only has access to the single stream of input text. I think this is probably the best way to handle distribution with std::embed as well, at least for small files: embed them directly in the single stream using some length-delimited binary format.
[1] Technically, it is only *partially* preprocessed using -fdirectives-only (gcc) or -frewrite-includes (clang). This delays macro expansion so that the remote node can generate the correct warnings and debug info.
Received on 2019-01-02 20:03:59