As specified, library implementors are unconstrained in how they implement the non-member-function (i.e., "freestanding") and object-based APIs. This leads to the possibility that different libraries (MSVC, libc++, libstdc++, libcu++, etc.) will produce different results for the same input, or even raise exceptional conditions differently.

This feels wrong. I believe the user community would benefit from a guarantee that the results from different library implementations are either exactly the same, or come with (very) well-characterized domains and error bounds.

The obvious way to do this is to define the results entirely in terms of the behavior of a specific algorithm. This dramatically constrains implementor choices, but the constraint can be limited if one defines the non-member APIs explicitly in terms of the object-based API and std::stats::accum.

As an example, see:

This little bit of quick-and-dirty code demonstrates how a library might forward `mean` to an all-powerful `stats::accum` interface (I believe this is already proposed), provide a default accumulator object type, and support customization. It additionally provides three basic accumulator types, each making a different trade-off among performance, accuracy, and overflow safety, and calls the non-member function `mean()` in four different ways.

I believe that there is precedent for such a design in the standard library. At a minimum I know that:

* `<cmath>` obviously constrains implementations (possibly detrimentally)
* `<chrono>` provides a number of different clock types with different characteristics
* `<random>` provides explicit named algorithm implementations

This does not address complications with respect to commutativity and associativity, particularly in the context of the proposed parallelization APIs. However, given a standardized specific implementation, one could potentially forward that concern to "the behavior of the following algorithm as presented."

Luke D'Alessandro