Hey guys,

It has been a while since I attended one of these calls, and I’m amazed at the incredible work that’s been done. Quick question: what’s the preferred means of discussion/feedback? We used to have a Slack channel, but it seems abandoned now. I was curious to know more about the function differentiation mechanisms in Python, among other things.

David Lindelöf, Ph.D.
+41 (0)79 415 66 41
Follow me on Twitter:

On 14 January 2021 at 22:33:45, Michael Wong via SG19 (sg19@lists.isocpp.org) wrote:

On Wed, Jan 13, 2021 at 3:21 PM Michael Wong <fraggamuffin@gmail.com> wrote:

SG19 Machine Learning 2 hours. This session will focus on Differential Calculus and reinforcement learning but with updates from all the others optionally.

Link to Automatic differentiation proposal:



Michael Wong is inviting you to a scheduled Zoom meeting.

Topic: SG19 monthly Dec 2020-Feb 2021
Time: Jan 14, 2021 02:00 PM Eastern Time (US and Canada)
    Every month on the Second Thu, until Feb 11, 2021, 3 occurrence(s)
    Dec 10, 2020 02:00 PM ET 1900 UTC
    Jan 14, 2021 02:00 PM ET 1900 UTC
    Feb 11, 2021 02:00 PM ET 1900 UTC
    Please download and import the following iCalendar (.ics) files to your
calendar system.

Join from PC, Mac, Linux, iOS or Android:
    Password: 035530

Or iPhone one-tap:
    US: +13017158592,,93084591725# or +13126266799,,93084591725#
Or Telephone:
    Dial(for higher quality, dial a number based on your current location):
        US: +1 301 715 8592 or +1 312 626 6799 or +1 346 248 7799 or +1
408 638 0968 or +1 646 876 9923 or +1 669 900 6833 or +1 253 215 8782
 or 877 853 5247 (Toll Free)
    Meeting ID: 930 8459 1725
    Password: 035530
    International numbers available: https://iso.zoom.us/u/agewu4X97

Or Skype for Business (Lync):


1. Opening and introductions

The ISO Code of conduct:
The IEC Code of Conduct:

ISO patent policy.

The WG21 Practices and Procedures and Code of Conduct:

1.1 Roll call of participants

 Richard Dosselman, Phil Ratzloff, Andrew Lumsdaine, Ayenem, David Lindelöf, Cyril Khazan, Eugenio Bargiacchi, Jens Maurer, Joe Sachs, Kevin Deweese, Larry Lewis, Marco Foco, Ozan Irsoy, Scott McMillan, Vassil Vassilev, Will Wray, Michael Wong, William Moses

1.2 Adopt agenda


1.3 Approve minutes from previous meeting, and approve publishing
 previously approved minutes to ISOCPP.org

1.4 Action items from previous meetings

2. Main issues (125 min)

2.1 General logistics

Meeting plan, focus on one paper per meeting but does not preclude other
paper updates:

    Dec 10, 2020 02:00 PM ET 1900 UTC  Stats  DONE
    Jan 14, 2021 02:00 PM ET 1900 UTC  Reinforcement Learning and Diff
    Feb 11, 2021 02:00 PM ET 1900 UTC  Graph

ISO meeting status

future C++ Std meetings

2.2 Paper reviews

2.2.1: ML topics, Differential Calculus:


Enzyme uses the IR and not the AST.
Differentiation is first class in other languages.
Can't train the network if one part of the loss function isn't AD-compatible;
cannot do backpropagation (BP).
There is a vast set of C++ codebases in ML.

AD (automatic, or algorithmic, differentiation) uses the chain rule;
it will generate the derivative function,
and can handle non-closed-form expressions.
Implementations exist:
1. operator overloading: create a new differentiable double type; this is
AOT compilation and will not work if pow is not there already
2. source transformation, like Tapenade
Two styles:
1. forward mode (one input and multiple outputs)
2. reverse mode (multiple inputs and one output; gradients)
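The second style can be illustrated with a minimal tape-based sketch; this is an illustration of reverse mode only, not the design of Enzyme, Clad, or any other implementation named in these minutes:

```cpp
#include <cassert>
#include <vector>

// Minimal reverse-mode AD sketch: record each operation on a "tape" during
// the forward pass, then propagate adjoints backwards with the chain rule.
struct Tape {
    struct Node { int lhs, rhs; double dlhs, drhs; };  // parents + local partials
    std::vector<Node> nodes;
    std::vector<double> vals;

    int leaf(double v) {
        nodes.push_back({-1, -1, 0.0, 0.0});
        vals.push_back(v);
        return (int)vals.size() - 1;
    }
    int add(int a, int b) {
        nodes.push_back({a, b, 1.0, 1.0});          // d(a+b)/da = d(a+b)/db = 1
        vals.push_back(vals[a] + vals[b]);
        return (int)vals.size() - 1;
    }
    int mul(int a, int b) {
        nodes.push_back({a, b, vals[b], vals[a]});  // d(ab)/da = b, d(ab)/db = a
        vals.push_back(vals[a] * vals[b]);
        return (int)vals.size() - 1;
    }
    // One backward sweep yields the gradient w.r.t. *all* inputs at once,
    // which is why reverse mode suits many-inputs/one-output loss functions.
    std::vector<double> grad(int out) {
        std::vector<double> adj(vals.size(), 0.0);
        adj[out] = 1.0;
        for (int i = out; i >= 0; --i) {
            if (nodes[i].lhs >= 0) adj[nodes[i].lhs] += adj[i] * nodes[i].dlhs;
            if (nodes[i].rhs >= 0) adj[nodes[i].rhs] += adj[i] * nodes[i].drhs;
        }
        return adj;
    }
};
```

For f(x, y) = x*y + x at (3, 4), one backward sweep gives both partials df/dx = y + 1 = 5 and df/dy = x = 3.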

Dual numbers:
a + εb is a dual number.
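As a minimal sketch of the operator-overloading forward mode built on dual numbers a + εb (where ε² = 0; the names here are illustrative, not taken from any proposal):

```cpp
#include <cassert>
#include <cmath>

// Dual number a + eps*b with eps^2 = 0: the b component carries the
// derivative through every arithmetic operation (forward-mode AD).
struct Dual {
    double a;  // value
    double b;  // derivative
};

Dual operator+(Dual x, Dual y) { return {x.a + y.a, x.b + y.b}; }
Dual operator*(Dual x, Dual y) { return {x.a * y.a, x.a * y.b + x.b * y.a}; }
Dual sin(Dual x) { return {std::sin(x.a), std::cos(x.a) * x.b}; }

// Any function written generically in terms of Dual is differentiated
// "for free"; a function taking plain double is not -- the limitation of
// library-only AD on existing code discussed in these minutes.
Dual f(Dual x) { return x * x + sin(x); }
```

Seeding x = {2, 1} evaluates f(x) = x² + sin x and f′(x) = 2x + cos x in one pass.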

Needs to be native.
Operator-overloading approaches (Adept) and source rewriters (OpenAD) rewrite the original program to use the specific subset that is differentiable; JAX does this.

AD with language/compiler support is efficient.
Several proofs of concept: Enzyme and Clad.
Enzyme can work with different front ends;
for all functions, it will build the backward pass.
Now we can optimize this; it looks like hand-coded derivatives.
Combining AD with optimization.

Running AD after optimization:
loop-invariant code motion, assuming no aliasing between out and in,
using restrict.

If you do AD then code motion, it is O(N²);
if you do code motion then AD, it is O(N).
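The aliasing point can be sketched with a small hypothetical example (not taken from ADBench): norm(in) is loop-invariant, but the compiler may only hoist it out of the loop if it knows out does not alias in, which restrict promises.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Loop-invariant: the norm of `in` does not change across iterations.
double norm(const double* v, std::size_t n) {
    double s = 0.0;
    for (std::size_t i = 0; i < n; ++i) s += v[i] * v[i];
    return std::sqrt(s);
}

// With __restrict, writes through `out` cannot change `in`, so the
// compiler may hoist norm(in, n) out of the loop: O(n) total work
// instead of O(n^2). Without that guarantee, it must assume each store
// could invalidate the norm and recompute it per iteration.
void normalize(double* __restrict out, const double* __restrict in,
               std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] / norm(in, n);
}
```

`__restrict` is the common compiler spelling (GCC/Clang/MSVC) of C's restrict qualifier, which ISO C++ does not have natively.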
ADBench from Microsoft tested Enzyme
and shows this case.

C++ proposal:
supporting AD at a low level enables AD at a high level,
e.g. if you import the Eigen library.

AL: compose different libraries; compiler-based AD
will add a burden on compiler implementers.
Start with a minimal set that the compiler can differentiate,
then custom derivatives and generic functions.

Implementation complexity: reverse mode in Clad is ~2000 lines.

Performance numbers with Clad? Coming.

Less intrusive approaches, like libraries, have been evaluated and have lower performance.

Could be niche, which makes it a tough sell.
Use reflection to inspect ADTs?
Reflection does not work on statements and expressions.
P2040 (reflexpr) touched on expressions, but SG7 seems to be opposed to exposing the AST.

A library solution would serve no purpose to the community on existing code;
people are moving from C++ to TF, and from TF to PyTorch.

What about the portable format for modules by Gaby Dos Reis? Some libraries overload the explicit conversion operator.

Have we exhausted the library approach, as those libraries are old?
But are there modern expression-template techniques?
Yes, we explored libraries, even the latest ones, like Aoki's, which uses a shadow language.

The eve library sets a higher bar and favors a small library.

If we don't standardize it, someone will put it in LLVM.

Do not see modules supporting this. Expression templates need ADL and leave out the whole set of already-written C++ code.
Also an issue with scalability, like a chain of differentiation, and the space is large,
and it will miss optimization opportunities;
might be mitigated by common subexpression elimination (CSE).

Is there a small set of features that could be provided by the compiler that could make an efficient library implementation? I think none; no library could come close to a compiler implementation.
Maybe CSE of templates.
Yes, I spent time trying to do it as a library with a small set of features from the compiler, but could not find any that work.

Continue in this direction; publish the paper.

Reinforcement Learning: Larry Lewis, Jorge Silva

Reinforcement Learning proposal:

RL is dependent on LA, ML, NN -> optimizers, AD, data loaders, LA, tensors.
May be better to focus on tensors.
mdspan and the owning mdarray, but they are not enough for a good tensor library: no common tensor operations, just indexing and strides.
NN needs matrix multiplication.
NN needs AD to avoid hand-coding backpropagation, especially for convolutional networks when doing gradient descent.
Theano has AD.
LA syntax and LA BLAS.
LA syntax adds a definition of arithmetic operators to create matrix and vector types; currently in LEWG, rebased to C++20 using requires clauses; great for mathematics, which reduces the interface.
TensorFlow/Theano tensors look more like Python.
May have to work with the US NB physics tensors.
Heterogeneous TensorFlow is currently built on top of SYCL, CUDA.
Review xtensor, PyTorch, and TF tensor ops.
Optimizers, weights: Andrew Lumsdaine.
P1416 by the xtensor people.

Data tables: Nvidia is building data frame capability; xdataframe?

Split and reduce for C++26.
Common API?
Concept of algorithm:
reducing the number of types and the number of functions, leveraging what is in ranges.
Looking at the BGL, the concepts are just a handful.
The STL is just concepts and algorithms.

Stats paper

Stats review Richard Dosselman et al

P1708R3: Math proposal for Machine Learning: 3rd review

PXXXX: combinatorics: 1st Review

> open-std.org/jtc1/sc22/wg21/docs/papers/2020/p1708r2
> above is the stats paper that was reviewed in Prague
> Review Jolanta Polish feedback.

Python has factorials; perm/comb is missing from our math library.
Had identified these in the P1415 overview document.
Applications beyond SG19.
Need a large numerical type for factorials;
Python has a built-in wide integer type.

Unbounded template T for a numeric type: we have failed to specify this even for complex<T>, whose T is specified only for float, double, and long double.
What operations are declared on that type? Can't just say plug in any T.

These could also be in C, without the templates; may also consider how to do it in C.
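The overflow concern above can be made concrete with a quick sketch (illustrative only, not a proposed interface): a 64-bit factorial overflows already at 21!, while comb written multiplicatively stays exact much longer because it divides at every step.

```cpp
#include <cassert>
#include <cstdint>

// 64-bit factorial: exact only up to 20!, which motivates the
// wide-integer discussion above.
std::uint64_t factorial(unsigned n) {
    std::uint64_t r = 1;
    for (unsigned i = 2; i <= n; ++i) r *= i;
    return r;
}

// comb(n, k) computed multiplicatively. At step i the running value is
// C(n-k+i-1, i-1), so r * (n-k+i) is always divisible by i and the
// intermediates stay close to the (exact) final result instead of
// passing through n! / k! territory.
std::uint64_t comb(std::uint64_t n, std::uint64_t k) {
    if (k > n) return 0;
    if (k > n - k) k = n - k;  // use symmetry C(n,k) = C(n,n-k)
    std::uint64_t r = 1;
    for (std::uint64_t i = 1; i <= k; ++i)
        r = r * (n - k + i) / i;
    return r;
}
```

E.g. comb(52, 5) = 2,598,960 fits comfortably, even though 52! is far beyond any built-in integer type.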

2.2.3 any other proposal for reviews?

2.3 Other Papers and proposals

P1416R1: SG19 - Linear Algebra for Data Science and Machine Learning

P1415: Machine Learning Layered list

2.2.2 SG14 Linear Algebra progress:
Different layers of proposal

2.5 Future F2F meetings:

2.6 future C++ Standard meetings:


3. Any other business

New reflector


Old Reflector

Code and proposal Staging area

4. Review

4.1 Review and approve resolutions and issues [e.g., changes to SG's
working draft]

4.2 Review action items (5 min)

5. Closing process

5.1 Establish next agenda


5.2 Future meeting

    Feb 11, 2021 02:00 PM ET 1900 UTC  Graph paper

SG19 mailing list