Date: Sat, 9 Feb 2019 12:31:52 -0500
On Sat, Feb 9, 2019 at 11:26 AM Jon Chesterfield <jonathanchesterfield_at_[hidden]> wrote:
>
> I have a suspicion that my codebase is sufficiently coupled that modules
> will bring the maximum build concurrency down to within the core count of a
> single server, at which point icecc won't be necessary.
>
This is what I had originally thought. However, last week I wrote a
mini-script[1] to analyze how much of an issue this would be, and I was
surprised to see that it basically won't be a problem. A few people in the
SG15 Slack channel posted results for their codebases, and they all seemed
fairly similar. I aggregated them here:
https://gist.github.com/RedBeard0531/23330df6cf9e320e5ff80febae2b522f (if
anyone else wants to add theirs, send me an email). Single-file results
count each header once, the first time it is encountered. Multi-file
results aggregate across translation units, so each header is counted once
per cpp that transitively includes it. These results indicate to me that
the BMI-generation DAG won't be the bottleneck that limits parallelism: at
all times, except perhaps the first second or two of a clean build, there
should be more than enough parallelism to saturate whichever other
bottleneck (local CPU or network) limits us to 200-400 parallel jobs with
headers today.
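To make that concrete, here is one crude way to read the histograms (my
own framing, not something the script computes; assume its output was
saved to histogram.txt): if BMIs were generated in depth-ordered waves,
deepest headers first, the count at each depth is roughly the number of
jobs runnable in parallel during that wave.

  # Field 1 = header count, field 5 = depth; the deepest wave would run first.
  sort -rn -k5 histogram.txt |
    awk '{print "depth", $5, ":", $1, "BMI jobs runnable in parallel"}'

In the results posted so far, those wave widths dwarf 200-400 jobs almost
immediately.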
My concern with distributed builds is that we haven't figured out the
mechanics of how they will work efficiently with modules yet, or proven
that they can. With headers it is (almost) trivial: ship the result of
preprocessing with -E and either -frewrite-includes or -fdirectives-only
to a remote host and get back a .o. It is unclear what the equivalent for
modules will be.
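To spell out that header flow, here is a minimal sketch of what tools like
distcc and icecc automate ("buildhost" is a made-up host name):

  g++ $CXXFLAGS -E foo.cpp -o foo.ii    # preprocess locally; foo.ii is self-contained C++
  scp foo.ii buildhost:/tmp/            # the remote host needs none of our headers
  ssh buildhost "g++ $CXXFLAGS -c /tmp/foo.ii -o /tmp/foo.o"
  scp buildhost:/tmp/foo.o foo.o        # fetch the object file back

With modules the preprocessed source is presumably no longer
self-contained: the remote host would also need the BMIs for everything
imported (or enough information to rebuild them), and nobody has shown a
cheap way to get them there.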
[1] There are probably some linuxisms and zshisms here, but it should be
easy to remove them:

for file in path/to/source/**/*.{cpp,cc,cxx}; do
  echo $file > /proc/$$/fd/1   # progress to the tty, bypassing the pipe (linuxism)
  g++ -all-your-flags -especially -Ipaths -DMACROS -and -std=c++VERSION \
      "$file" -fdirectives-only -o /dev/null -H -E   # -H traces includes on stderr
done |& \grep -E '^\.+ ' |   # keep only the trace lines; leading dots = include depth
  sed -e 's/ .*//' |         # drop the header path, keep just the dots
  sort -S 1G | uniq -c |     # count headers at each depth
  awk '{print $1, "headers at depth", length($2)}'
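Each line of output reads "<count> headers at depth <N>", where the depth
is the number of leading dots in gcc's -H trace, i.e. how deep in the
include stack the header was opened.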
Received on 2019-02-09 18:32:06