Date: Tue, 27 Aug 2024 11:39:52 +0000
The following answers are based on my experience as a user and are not informed by the actual decisions that led to the status quo, so take them with a grain of salt.
> Do I see correctly that the standard says nothing about the precision of mathematical functions provided by <cmath>?
That looks correct.
> Why not?
Well, I think it all comes down to implementability.
To implement things like sin, sqrt, etc., you need a method that takes a number represented in binary and transforms it using Boolean logic and a finite amount of resources (the things a computer can actually do).
There have been a lot of people working on this and there are several numerical methods that you can find in the literature in order to do this.
Many of those are iterative in nature, with each iteration getting closer to, but not exactly reaching, the true value. Note that you may never be able to reach the true value at all, because it might not be representable in floating point in the first place.
At some point you have to make a decision: "do I keep iterating (potentially forever)?" or "do I stop somewhere and decide that the value I have is good enough?"
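To make that decision concrete, here is a minimal sketch using Newton's method for sqrt. This is not how any real library implements it, and my_sqrt, tolerance, and max_iterations are all invented for illustration; the tolerance parameter is exactly the "good enough" knob:

    #include <cmath>
    #include <cstdio>

    // Hypothetical sketch of an iterative method: Newton's iteration for
    // sqrt(x). Real library implementations are far more sophisticated.
    double my_sqrt(double x, double tolerance, int max_iterations) {
        double guess = x > 1.0 ? x / 2.0 : 1.0;       // crude starting point
        for (int i = 0; i < max_iterations; ++i) {
            double next = 0.5 * (guess + x / guess);  // one Newton step
            if (std::fabs(next - guess) < tolerance)  // "good enough": stop
                return next;
            guess = next;
        }
        return guess;  // iteration budget exhausted: accept what we have
    }

    int main() {
        // Loosening the tolerance buys speed at the cost of accuracy.
        std::printf("mine: %.17g\n", my_sqrt(2.0, 1e-15, 50));
        std::printf("std : %.17g\n", std::sqrt(2.0));
    }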
It’s a tradeoff between accuracy and performance.
Accurate is going to be slow; fast means giving up some accuracy.
Not all applications require extremely high levels of accuracy (again, exact results are not representable in floating point anyway, and as you chain those results through more and more operations it is a losing battle: the value keeps losing precision), so good enough is often good enough. People had to make chips and write software that could be used effectively, and they historically settled on a sweet spot.
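As a toy demonstration of that precision drain (nothing here beyond plain double arithmetic): 0.1 has no exact binary representation, so chaining millions of additions of it visibly drifts from the mathematically exact sum.

    #include <cstdio>

    int main() {
        // 0.1 is not exactly representable in binary floating point, so
        // each addition contributes a tiny rounding error; ten million of
        // them add up to a visible drift from the exact answer 1000000.
        double sum = 0.0;
        for (int i = 0; i < 10000000; ++i)
            sum += 0.1;
        std::printf("sum   = %.17g\n", sum);
        std::printf("exact = %.17g\n", 1000000.0);
        std::printf("drift = %.17g\n", sum - 1000000.0);
    }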
Different vendors (of both CPUs and compilers) to this day provide different levels of accuracy for the exact same source code.
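One way to observe this yourself (a hypothetical probe, not a conformance test): print the bit-exact result of std::sin for an argument where range reduction is hard, then build and run the very same source on different platforms, standard libraries, or compiler flags (e.g. GCC/Clang with and without -ffast-math). The hex digits may differ in the last places.

    #include <cmath>
    #include <cstdio>

    int main() {
        // Large argument: the quality of the range reduction matters here,
        // and it varies across math library implementations.
        double x = 1e6;
        std::printf("sin(%g)   = %a\n", x, std::sin(x));
        // Rough reference computed in long double, on platforms where
        // long double is actually wider than double.
        long double ref = std::sin(static_cast<long double>(x));
        std::printf("reference = %a\n", static_cast<double>(ref));
    }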
And since CPUs were first made, newer methods have been found that allow for better accuracy at lower cost, so to a certain extent the situation keeps evolving, but the underlying dynamics remain.
That the standard "intentionally" says nothing about mathematical precision is just an acknowledgement of this legacy: stuff ain't precise, what you can get is all over the place depending on platform and library implementation, and there's no consistency here.
> Could this change in the future?
Not likely, unless newer, more efficient methods that can guarantee both accuracy and speed come along, with adoption so widespread that anything not using them would simply be dropped from support or from receiving future standard updates.
I don’t see this happening.
> As a user, I was better off depending on a dedicated third-party library than relying on a poor implementation in the C++ standard lib.
If you need super-accuracy, yes! But most people don't need that, and for them what the standard provides is good enough.
My 2c
________________________________
From: Std-Discussion <std-discussion-bounces_at_[hidden]> on behalf of Joachim Wuttke via Std-Discussion <std-discussion_at_[hidden]>
Sent: Tuesday, August 27, 2024 12:36:11 PM
To: std-discussion_at_[hidden] <std-discussion_at_[hidden]>
Cc: Joachim Wuttke <j.wuttke_at_[hidden]uelich.de>
Subject: [std-discussion] [SG6] precision of functions in cmath
Do I see correctly that the standard says
nothing about the precision of mathematical
functions provided by <cmath>?
Why not?
Could this change in the future?
What is the point of adding ever more functions
to <cmath> (like Bessel functions in c++17) if
this comes without any guarantee of accuracy?
As a user, I was better off depending on
a dedicated third-party library than relying on
a poor implementation in the C++ standard lib.
---
Dr. Joachim Wuttke
group leader Scientific Computing
Forschungszentrum Jülich GmbH
Jülich Centre for Neutron Science at MLZ
+49 89 158860 715
https://computing.mlz-garching.de
https://jugit.fz-juelich.de/mlz