Date: Wed, 16 Aug 2023 23:20:37 +0200
On 16/08/2023 12.15, Hassan Sajjad wrote:
> For deeply nested dependency graphs, that seems to mean that compilation
> state of many translation units is kept in memory while the respective
> compile waits for a dependency to be built. That feels like uncontrolled
> resource consumption and doesn't seem to work on small-ish machines
> (compared to the project to be built).
>
>
> Yes. Memory consumption of such an approach will be higher than with the traditional approach, but we can alleviate it in two or three ways. If we start from the bottom and call newCompile() for a source file, it might depend on a header file, which might depend on another header file, and so on up to the standard headers; so many compiler states would need to be kept alive. If we instead start from the top, a translation unit there can only depend on standard headers, hence fewer compiler states need to be preserved. However, as we move down the graph, more and more ifc files accumulate.
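The ordering idea in the quoted paragraph can be sketched as follows (illustrative only: `waveOrder` and the toy graph are hypothetical names, and the sketch assumes the full import graph is already known, so that a unit is compiled only once every ifc it needs already exists and no compiler state ever has to be suspended mid-compile):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Group units into "waves": a unit joins the current wave only when all
// of its dependencies have already been built in earlier waves. Units in
// one wave could then be compiled in parallel with no suspended states.
std::vector<std::vector<std::string>>
waveOrder(const std::map<std::string, std::set<std::string>>& deps) {
    std::set<std::string> built;                 // ifcs already produced
    std::vector<std::vector<std::string>> waves;
    while (built.size() < deps.size()) {
        std::vector<std::string> wave;
        for (const auto& [unit, needs] : deps) {
            if (built.count(unit)) continue;     // already compiled
            bool ready = true;
            for (const auto& d : needs)
                if (!built.count(d)) { ready = false; break; }
            if (ready) wave.push_back(unit);
        }
        if (wave.empty()) break;                 // dependency cycle: give up
        for (const auto& u : wave) built.insert(u);
        waves.push_back(wave);
    }
    return waves;
}
```

The sketch makes the trade-off visible: computing the waves at all presupposes knowledge of the whole graph up front.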
How do we know where the "top" or "bottom" is, without pre-scanning the source files?
Jens
> As we compile, we can free an ifc file's resources once no remaining compilation unit can depend on that ifc. Linking can be moved to the end, and before it starts we can release all the ifcs. I checked the aggregated .obj file size in the llvm cmake-build-debug directory; it is around 5 GB. I think the maximum total size of the ifc files that need to be kept around at any one time will be about the same.
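The freeing rule in the quoted paragraph (release an ifc once no not-yet-compiled unit can still import it) amounts to reference counting. A minimal sketch, with all names (`IfcCache`, `addDependent`, `unitCompiled`) purely illustrative:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Each in-memory ifc keeps a count of the units that import it but have
// not been compiled yet; when the count reaches zero, no future compile
// can read the ifc, so its memory can be released.
struct IfcCache {
    std::map<std::string, int> pendingUsers;  // ifc -> remaining dependents
    std::vector<std::string> freed;           // ifcs released so far

    // Register one not-yet-compiled unit that imports `ifc`.
    void addDependent(const std::string& ifc) { ++pendingUsers[ifc]; }

    // Called when one unit importing `ifc` has finished compiling.
    void unitCompiled(const std::string& ifc) {
        if (--pendingUsers.at(ifc) == 0)
            freed.push_back(ifc);             // no possible future reader
    }
};
```

Populating the counts correctly again requires knowing, in advance, how many units import each ifc.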
>
> Because of this, I believe that on a machine with 32 GB of RAM and 12 threads, two llvm configurations could be compiled simultaneously without any problem. Around 12 GB of ifc files plus compiler states would live in memory, leaving roughly 18 GB for the 12 compiler threads. Before linking, all of this would be written back to disk. If this takes too much memory, the user always has the option of falling back to the conventional approach.
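As a quick sanity check on the arithmetic above (illustrative only: `perThreadGiB` is a made-up name, and the figures are the paragraph's own estimates, not measurements; the ~2 GB gap between 32 and 12 + 18 would presumably be left to the OS and the build system itself):

```cpp
// Budget left for each concurrently running compiler thread, given the
// memory remaining after the ifc files and suspended compiler states.
double perThreadGiB(double availableGiB, int threads) {
    return availableGiB / threads;  // e.g. 18 GiB across 12 threads
}
```

At the quoted numbers this leaves 1.5 GiB per compiler thread, which is the figure the feasibility claim implicitly rests on.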
>
> Best,
> Hassan Sajjad
>
> On Wed, Aug 16, 2023 at 11:01 AM Jens Maurer <jens.maurer_at_[hidden]> wrote:
>
>
>
> On 16/08/2023 01.54, Hassan Sajjad via SG15 wrote:
> > Please share your thoughts on this.
> >
> > Best,
> > Hassan Sajjad
> >
> > On Mon, Aug 14, 2023, 04:51 Hassan Sajjad <hassan.sajjad069_at_[hidden]> wrote:
> >
> > Hi.
> >
> > Thank you so much for explaining the solution. It is happy news. With this consensus adopted, my build system achieves full standard compliance without any modifications required. It is, however, less optimal than the solution 5 that I proposed. Depending on various factors, I conjecture that my solution will be up to *40%* faster than the current consensus; the compilation speed-up will be most visible in clean builds using C++20 modules / header-units. I explain the reasoning here: https://github.com/HassanSajjad-302/solution5
>
> I've skimmed this. It seems you want to run the compiler as a library,
> intercept any loads of header and/or module files, and then kick off
> the compilation of those as a "subroutine".
>
> For deeply nested dependency graphs, that seems to mean that compilation
> state of many translation units is kept in memory while the respective
> compile waits for a dependency to be built. That feels like uncontrolled
> resource consumption and doesn't seem to work on small-ish machines
> (compared to the project to be built).
>
> Jens
>
Received on 2023-08-16 21:20:43