Re: [SG7] sg7 varid proposal

From: Dominic Jones <dominic.jones_at_[hidden]>
Date: Tue, 22 Dec 2020 12:45:54 +0000
> 1) What do you use to solve such problems today?

The approach detailed in the C++ London meetup talk and the Euro AD 2019
slides was abandoned for a number of reasons. The first was the required
workaround of explicitly wrapping every primitive type in a Unique
template wrapper. This alone was enough to prevent the approach from
being used in production ('production' in my case being the Siemens
Star-CCM+ <https://blogs.sw.siemens.com/simcenter/star-ccm-2020-3/>
simulation software). The second was the compilation time for large
expressions (although with further work I think this could have been
improved to some degree). The third was that efficient treatment of
expressions with multiple 'roots' (i.e. what would ordinarily be
multiple results from a given function) required special handling.
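
For what it's worth, here is a minimal sketch of what that wrapping
amounted to (my own names and simplifications, not the actual talk or
Star-CCM+ code): every terminal carries an ID in its type, so that
expression templates can recognise, at compile time, when two leaves
refer to the same variable.

    #include <cstddef>

    // Hypothetical wrapper: the ID makes each variable a distinct type.
    template <typename T, std::size_t ID>
    struct Unique {
        T value;
        static constexpr std::size_t id = ID;
    };

    // A trivial product node; its type records the operand types (and
    // hence their IDs).
    template <typename L, typename R>
    struct Mul { L lhs; R rhs; };

    template <typename L, typename R>
    Mul<L, R> operator*(L l, R r) { return {l, r}; }

    int main() {
        Unique<float, 0> a{2.0f};  // every terminal must be wrapped and
        Unique<float, 1> b{3.0f};  // numbered by hand, the workaround in question
        auto ab = a * b;           // Mul<Unique<float,0>, Unique<float,1>>
        (void)ab;
    }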

So, for these reasons I developed another methodology which avoids the
need for unique types, but at the price of sub-par performance (in fact,
inadequate performance: the approach is not presently used in
performance-critical code). I presented some aspects of this methodology
at the Euro AD 2020 workshop [slides
<http://www.autodiff.org/Docs/euroad/23rd%20EuroAd%20Workshop%20-%20Dominic%20Jones%20-%20Recursive%20compile%20time%20adjoint%20in%20C++.pdf>].
This approach handles the back-propagation by making use of the
destructors of the l-values in a block of statements. Whilst this seems
like an ideal approach, it suffers from two problems: the first is that
the l-value type cannot simply be the auto-deduced expression type of
the right-hand side because, if it were, r-value temporaries would also
trigger unwanted destructor computation. The second is that computation
within destructors seems to be inlined seldom or not at all.
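
To illustrate the destructor idea (again a much-simplified sketch of my
own, not the code from the slides): each named intermediate accumulates
into the adjoint of its operand when it is destroyed, and because
destructors of block-scope l-values run in reverse declaration order,
the reverse sweep falls out of ordinary scope exit. An unnamed temporary
of the same type would run the same destructor at the end of its
full-expression, which is the unwanted computation mentioned above.

    #include <iostream>

    // Much-simplified sketch: a node back-propagates in its destructor.
    struct Node {
        float value;     // primal value
        float* dinput;   // adjoint slot of the single operand
        float partial;   // local derivative w.r.t. that operand
        float dself = 0.0f;

        ~Node() { *dinput += partial * dself; }  // runs on scope exit
    };

    int main() {
        float a = 2.0f;
        float da = 0.0f;
        {
            Node t{a * a, &da, 2.0f * a};            // t = a*a, dt/da = 2a
            Node r{3.0f * t.value, &t.dself, 3.0f};  // r = 3t,  dr/dt = 3
            r.dself = 1.0f;                          // seed the output adjoint
        }   // ~r then ~t run here, in reverse order: t.dself += 3, da += 12
        std::cout << da << '\n';                     // 12 = d(3a^2)/da at a = 2
    }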

The bottom line is that I need to obtain the fastest possible performance
from an automatic differentiation expression transformation. This
transformation needs to be very close to what would be achieved if the
differentiated function were implemented manually.
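
To make 'implemented manually' concrete, this is roughly the
hand-written adjoint the transformation has to compete with, for the
(c0 + a*b) / (c1 + a*b) expression quoted further down (a sketch of my
own; the function and variable names are merely illustrative):

    // Hypothetical hand-differentiated version of
    //   result = (c0 + a*b) / (c1 + a*b)
    // The forward pass evaluates each shared term once; the reverse
    // pass accumulates both contributions to 'ab' before reaching a, b.
    float result_adjoint(float a, float b, float c0, float c1,
                         float& da, float& db)
    {
        float ab  = a * b;
        float num = c0 + ab;
        float den = c1 + ab;
        float result = num / den;

        float dresult = 1.0f;                       // seed
        float dnum = dresult / den;                 // d(result)/d(num)
        float dden = -dresult * num / (den * den);  // d(result)/d(den)
        float dab  = dnum + dden;                   // both uses of 'ab'
        da = dab * b;
        db = dab * a;
        return result;
    }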

> 2) Would the introduction of this language facility perhaps, vaguely
> guessing at a possible answer to (1), allow you to avoid using a
> compiler-specific solution or a custom parser?

Yes. It is enough for us (those working on it at Siemens) to have a tool
that covers 80% of use-cases, and the sort of ideas I have proposed in
workshops and have implemented at Siemens gives us this coverage. To seek
an extra 5% coverage, say, would involve tracing auto-deduced types through
'if' block expressions. Detailing language changes for supporting that sort
of thing is a giant leap. Andrew Sutton's paper appears to be such a leap,
and may be sufficient, but I have not got round to studying it yet. Really,
we just need a tool to get by, not one that solves every problem. But what
we (Siemens) cannot compromise on is performance.

I need to think a little about your further remark before I get back to you
on it.

-Dominic

On Mon, 21 Dec 2020 at 23:43, Ville Voutilainen <ville.voutilainen_at_[hidden]>
wrote:

> On Tue, 22 Dec 2020 at 01:03, Dominic Jones <dominic.jones_at_[hidden]>
> wrote:
> >
> > Thank you for your comments.
> >
> > Regarding the first: my use case is compile-time automatic
> differentiation (AD). I gave a talk on the approach at a London C++ meetup,
> and at a Euro AD workshop [slides].
> >
> > The gist of it is that I want to be able to evaluate an expression from
> its root to its terminals. This is the opposite of what one normally wishes
> to do, but it just is the nature of the so-called 'adjoint' method of
> automatic differentiation. Doing this naively, one ends up performing
> duplicate evaluations when a term is used repeatedly.
> >
> > Consider the code corresponding to Figure 1 (I now realise I should have
> included the code snippet in the paper!): the term 'ab' occurs twice in the
> expression for 'result'. In typical expression evaluation, 'ab' is
> evaluated once then its result used in many places. However, in
> back-propagation evaluations (like those found in adjoint automatic
> differentiation), evaluation begins at 'result' and propagates to the
> terminals 'a' and 'b'. The problem is that the back-propagation will visit
> the sub-expression tree for 'ab' twice since it occurs twice upstream.
> >
> > float c0;
> > float c1;
> > auto ab = a * b;
> > auto result = (c0 + ab) / (c1 + ab);
> >
> > Regarding the second point: I came across Andrew Sutton's paper,
> P2237R0, after submitting my own. Whilst skimming it I saw something that
> may be doing what I am proposing, namely 'meta::location_of(e)', p15 ff.
> Are the reflection mechanisms you are referring to those mentioned by
> Andrew in his paper or are you referring to another language proposal? If
> it is another, I'd be grateful if I could get a link to the paper.
> >
> > I agree that if there are not sufficiently broad use case examples for
> my proposal then it does not warrant further consideration; amassing a
> broad range of use cases is something I still need to work on. (Klaus
> Iglberger, I believe, has a use case for his math library 'Blaze' which I
> need to ask him for.) However, presently, I don't know how the same end
> could be achieved without a top level API function. Early on, I thought it
> could be by proposing a language extension whereby 'address of' in a
> constexpr context would return a constexpr variable ID, much like what
> varid would do. However, even with this, a second language change would
> need to be made, one whereby a constexpr function could return a constexpr
> result despite taking non-constexpr arguments (i.e. resolving the
> compilation error presented in Listing 3).
> >
> > In response to your final comment: unfortunately, only having equality
> wouldn't do; it would be minimally useful, facilitating expression
> transformations for only trivial expressions, such as 'c0*c1' or 'c0*c0',
> but nothing more complicated than that.
>
> Interesting, very interesting. I think this is certainly worth further
> study, considering that we are in a Study Group.
> It's also a very interesting use in general, something that pushes the
> boundaries of what C++ can do, which is
> a sizeable part of what we're trying to do here anyway. :)
>
> So yeah, the reflection mechanisms I referred to are indeed what's
> catalogued by P2237. I have two further questions:
>
> 1) What do you use to solve such problems today?
> 2) Would the introduction of this language facility perhaps, vaguely
> guessing at a possible answer to (1), allow you to
> avoid using a compiler-specific solution or a custom parser?
>
> A further remark: we are yet fairly far from being able to introspect
> and analyze function bodies, sequences of statements
> and expressions, that is. Based on a brief look at the talk this might
> fall into that bucket that we hope to get at later
> on, but perhaps there's something about the answer to the question (1)
> above that makes this more straightforward
> than full expression/statement-reflection?
>

Received on 2020-12-22 06:46:12