
Re: [Tooling] Modules

From: Titus Winters <titus_at_[hidden]>
Date: Fri, 1 Feb 2019 15:07:02 -0500
I *highly* endorse the approach of having a tool extract and maintain the
build information.

On Fri, Feb 1, 2019 at 3:04 PM Peter Bindels <dascandy_at_[hidden]> wrote:

> > Titus Winters <titus_at_[hidden]> writes:
> >
> >> We've been doing explicit statements of the dependency chain for our
> >> codebase for almost 20 years, and I've literally never heard a new hire
> >> (or anyone else) say it is a "huge" burden.
>
> Bjarne Stroustrup writes:
> > Seriously, having manual dependency specification is inherently
> > error-prone (independent double specification always is), as well as
> > extra work. The fact that it is manageable for someone somewhere doesn't
> > change that. I suspect it's a skills, productivity, and scaling issue.
>
> On Fri, 1 Feb 2019 at 18:22, Bill Hoffman <bill.hoffman_at_[hidden]>
> wrote:
>
>> Except for toy projects, you need to tell the compiler what files will
>> go into which libraries and executables.
>
>
> At work we're using an automated tool to create these things, and it has
> proven to be both much more maintainable and more accurate at tracking
> dependencies and constituent files than any developer has been so far -
> both in removing dependencies when the last include of something is
> removed, and in adding new ones when a new include is added. This is on a
> large project: 400+ developers working on 1000+ components. In this
> project we're autogenerating about 79% of all CMake files, with the
> remaining 21% being mostly platform-dependent things, other-language
> integration, and odd bits of generated sources requiring uncommon build
> steps.
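>
> (To give a flavour of the approach, here is a minimal, hypothetical sketch
> - not our actual tool - that maps quoted #includes to the component that
> owns each header and prints the resulting CMake dependency edges. Treating
> the top-level directory as the component is an assumption for the example.)
>
> // Hypothetical sketch: derive component dependencies from #include lines.
> #include <filesystem>
> #include <fstream>
> #include <iostream>
> #include <map>
> #include <regex>
> #include <set>
> #include <string>
>
> namespace fs = std::filesystem;
>
> int main(int argc, char** argv) {
>     const fs::path root = argc > 1 ? argv[1] : ".";
>     // Assumption: the top-level directory of a file is its component.
>     auto component_of = [&](const fs::path& p) {
>         return fs::relative(p, root).begin()->string();
>     };
>
>     // Pass 1: record which component owns which header (keyed by file
>     // name for brevity; a real tool would key on the include path).
>     std::map<std::string, std::string> header_owner;
>     for (const auto& e : fs::recursive_directory_iterator(root)) {
>         if (e.path().extension() == ".h" || e.path().extension() == ".hpp")
>             header_owner[e.path().filename().string()] = component_of(e.path());
>     }
>
>     // Pass 2: scan quoted includes and record cross-component edges.
>     const std::regex include_re(R"re(^\s*#\s*include\s*"([^"]+)")re");
>     std::map<std::string, std::set<std::string>> deps;
>     for (const auto& e : fs::recursive_directory_iterator(root)) {
>         const auto ext = e.path().extension();
>         if (ext != ".cpp" && ext != ".h" && ext != ".hpp") continue;
>         std::ifstream in(e.path());
>         std::string line;
>         while (std::getline(in, line)) {
>             std::smatch m;
>             if (!std::regex_search(line, m, include_re)) continue;
>             auto it = header_owner.find(fs::path(m[1].str()).filename().string());
>             if (it != header_owner.end() && it->second != component_of(e.path()))
>                 deps[component_of(e.path())].insert(it->second);
>         }
>     }
>
>     // Emit one target_link_libraries() line per component.
>     for (const auto& [component, used] : deps) {
>         std::cout << "target_link_libraries(" << component;
>         for (const auto& d : used) std::cout << " " << d;
>         std::cout << ")\n";
>     }
> }
>
> The real tool naturally handles much more than this, but the point is that
> the mapping from includes to dependency edges is mechanical.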
>
> The accuracy of our autogenerated files is 100%, and we have an open
> challenge to the whole company to tell us whenever the tool gets something
> wrong. We've had 30-ish people try it; one found an actual bug, which we
> subsequently fixed, and the other 29 were wrong about the dependencies
> they had just added or removed.
>
> I'm also using it on a different project, where 100% of the build files
> are autogenerated. This works fine - in fact, the only time the build
> breaks on a dependency issue is if you don't run it.
>
> As an extension to this, I've created Evoke, which does the same basic
> derivation but then does the whole build-system part too. I've not yet
> used it widely - in part because I try not to convince coworkers to
> switch to a new tool every few months - but on the targets I've tried it
> on, it works. Full stop.
>
> On Fri, 1 Feb 2019 at 18:22, Bill Hoffman <bill.hoffman_at_[hidden]>
> wrote:
>
>> You could point a compiler at a
>> file with main in it and have it figure out everything that is used by
>> that main and build a single executable. However, breaking code down
>> into libraries and deciding whether the libraries are shared, static, or
>> dynamically loaded is something the developer is going to need to
>> control.
>
>
> I doubt that. Shared libraries as a general mechanism are a choice that
> needs a whole lot more thought than nearly all developers put into it;
> static by default is the only sane option.
>
>
>> On Fri, 1 Feb 2019 at 18:22, Bill Hoffman <bill.hoffman_at_[hidden]>
>> wrote:
>> If you use an IDE it is done by drag and drop with a graphical
>> interface. If you use CMake it is done by listing the sources you want
>> for each library or executable in the CMake file. Basically you need to
>> partition the set of source files into a set of products from the
>> compiler. Any build tool or IDE is going to have to do this.
>
>
> Disagreed. I showed why not at CppCon 2018.
>
>
>> On Fri, 1 Feb 2019 at 18:22, Bill Hoffman <bill.hoffman_at_[hidden]>
>> wrote:
>> I think it
>> would be a huge step backwards to ask users to also specify the include
>> depends and the module depends.
>
>
> That is true though; in particular for existing build systems - shell
> scripts, makefiles, and CMake files - we need to make sure those builds
> remain unbroken. Suboptimal is fine, but they should *work*.
>
> On Fri, 1 Feb 2019 at 18:22, Bill Hoffman <bill.hoffman_at_[hidden]>
>> wrote:
>> In CMake we have had Fortran working for
>> years now. You list all the Fortran files you want in a product, and
>> CMake parses the Fortran to figure out the build order defined by the
>> producers and consumers of modules in the set of Fortran files it was
>> given. In practice, unless they used a tool like CMake, Fortran users
>> who had to figure out the correct order of module builds ended up
>> running make over and over until all the modules were produced and the
>> code compiled.
>>
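>> (In essence that is a topological sort over module producer/consumer
>> edges. The following is a hypothetical illustration of the idea, not
>> CMake's actual implementation; the file and module names are made up.)
>>
>> // Hypothetical sketch: order sources so that each module's producer is
>> // compiled before its consumers (what "module X" / "use X" imply).
>> #include <functional>
>> #include <iostream>
>> #include <map>
>> #include <set>
>> #include <stdexcept>
>> #include <string>
>> #include <vector>
>>
>> struct Source {
>>     std::string name;
>>     std::set<std::string> provides; // modules declared with "module X"
>>     std::set<std::string> uses;     // modules referenced with "use X"
>> };
>>
>> std::vector<std::string> build_order(const std::vector<Source>& sources) {
>>     std::map<std::string, const Source*> producer;
>>     for (const auto& s : sources)
>>         for (const auto& m : s.provides) producer[m] = &s;
>>
>>     std::vector<std::string> order;
>>     std::set<std::string> done, visiting;
>>     std::function<void(const Source&)> visit = [&](const Source& s) {
>>         if (done.count(s.name)) return;
>>         if (!visiting.insert(s.name).second)
>>             throw std::runtime_error("module dependency cycle");
>>         for (const auto& m : s.uses) {          // build producers first
>>             auto it = producer.find(m);
>>             if (it != producer.end()) visit(*it->second);
>>         }
>>         visiting.erase(s.name);
>>         done.insert(s.name);
>>         order.push_back(s.name);
>>     };
>>     for (const auto& s : sources) visit(s);
>>     return order;
>> }
>>
>> int main() {
>>     // b.f90 uses module A, which a.f90 provides, so a.f90 must build first.
>>     std::vector<Source> files = {{"b.f90", {"B"}, {"A"}},
>>                                  {"a.f90", {"A"}, {}},
>>                                  {"main.f90", {}, {"A", "B"}}};
>>     for (const auto& n : build_order(files)) std::cout << n << "\n";
>> }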
>
> Do you wish the same upon C++, where parsing the code requires a full
> preprocessor and in many cases may not even reveal that a file is never
> built, is built four times with different options, or exports a given
> module only on some platforms? On Sundays?
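>
> For instance, a hypothetical interface unit along these lines provides a
> different module depending on a configuration macro; without the real
> compile flags - and therefore the full preprocessor - a scanner cannot
> know which one it will get (the names are made up for illustration):
>
> // vendor_io.cppm - which module this file exports depends entirely on
> // the preprocessor state established by the build configuration.
> module;
> #include "config.h"              // may or may not define VENDOR_IO_SUPPORTED
> #if defined(VENDOR_IO_SUPPORTED)
> export module vendor.io;         // provided only on platforms with the SDK
> export int open_device(int id);
> #else
> export module vendor.io.stub;    // otherwise a stub module is provided
> #endif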
>
>
>
> _______________________________________________
> Tooling mailing list
> Tooling_at_[hidden]
> http://www.open-std.org/mailman/listinfo/tooling
>

Received on 2019-02-01 21:07:18