Date: Mon, 21 Mar 2022 07:04:52 -0700 (PDT)
On Mon, 21 Mar 2022 10:35:25 +0000 Jonathan Wakely via Liaison wrote:
>
>There is some overlap with parts of
>http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2022/p2551r0.pdf
>regarding what "has_denorm" means for the new C++ traits. But the answer to
>"what does the C macro mean?" and "what does the C++ trait mean?" should
>not be the same, as they're actually asking different questions, and so the
>proposed changes in this paper do not impact C++.
I have looked at p2551r0 and p1841r2 and I do not find "has_denorm" in either.
The only version of C++ that I have is WG21 N4860. In it, I do find "has_denorm"
and it appears to be equivalent to C's *_HAS_SUBNORM.
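For illustration, something along these lines (a sketch; it assumes a hosted C++17
implementation whose <cfloat> reflects a C11-or-later <float.h>) shows the trait and
the macro side by side:

    #include <cfloat>
    #include <iostream>
    #include <limits>

    int main() {
        // C++ trait: denorm_absent (0), denorm_present (1), denorm_indeterminate (-1)
        std::cout << "has_denorm = "
                  << static_cast<int>(std::numeric_limits<double>::has_denorm) << '\n';
    #ifdef DBL_HAS_SUBNORM
        // C macro: 0 absent, 1 present, -1 indeterminable
        std::cout << "DBL_HAS_SUBNORM = " << DBL_HAS_SUBNORM << '\n';
    #endif
    }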
It appears that neither standard can give those a useful value if the treatment
of subnormals can be changed at runtime (as can be done on ARM chips).
They also do not cover the two cases where:
operands are flushed to zero, but results are not flushed.
results are flushed, but operands are not flushed.
C implementations should define the macros as -1 (indeterminable).
C++ implementations should define has_denorm as denorm_indeterminate.
While that is the correct value, it is not useful.
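The only way a program can learn the actual behavior is to probe it at runtime.
A sketch of such a probe (it assumes IEC 60559 binary64 double, C++17 hex-float
literals, that double and uint64_t share a byte order, and that volatile keeps
the compiler from doing the arithmetic at translation time), separating the two
cases above:

    #include <cfloat>
    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    int main() {
        volatile double min_normal = DBL_MIN;   // smallest positive normal
        volatile double half = 0.5;
        volatile double big  = 0x1p100;         // 2^100

        // Build the smallest positive subnormal from its bit pattern so the
        // value exists regardless of the current flush-to-zero mode.
        std::uint64_t bits = 1;
        double sub_storage;
        std::memcpy(&sub_storage, &bits, sizeof sub_storage);
        volatile double sub = sub_storage;

        // Result flushing: both operands are normal, the exact product is not.
        bool results_flushed  = (min_normal * half == 0.0);
        // Operand flushing: the operand is subnormal, but it is scaled by 2^100
        // so the exact product is normal and result flushing cannot mask the test.
        bool operands_flushed = (sub * big == 0.0);

        std::printf("results flushed:  %d\noperands flushed: %d\n",
                    (int)results_flushed, (int)operands_flushed);
    }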
Aside: References to 559 should be to 60559.
There should be "norm_max" for the maximum normalized number.
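For a radix-2 format that numeric_limits describes accurately, the value being asked
for can already be computed from the existing members; a sketch ("norm_max" is not a
standard name):

    #include <cmath>
    #include <cstdio>
    #include <limits>

    template <class T>
    T norm_max() {
        using L = std::numeric_limits<T>;
        // (2 - 2^(1-digits)) * 2^(max_exponent - 1): largest normalized value
        return std::ldexp(T(2) - std::ldexp(T(1), 1 - L::digits), L::max_exponent - 1);
    }

    int main() {
        std::printf("%a\n%a\n", norm_max<double>(), std::numeric_limits<double>::max());
    }

For IEC 60559 binary64 this coincides with numeric_limits<double>::max(); for a
long double implemented as a pair of doubles the largest finite value and the largest
normalized value need not coincide, which is why a dedicated member would help.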
The definition of "epsilon" in C++ differs from that in C. C added
"normalized" to the definition. It matters for the case where
long double is implemented as a pair of doubles.
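A sketch showing why (meaningful mainly on a platform where long double is a pair of
doubles, such as the classic PowerPC ABI):

    #include <cfloat>
    #include <cmath>
    #include <cstdio>

    int main() {
        long double eps = LDBL_EPSILON;                       // normalized definition
        long double gap = std::nextafter(1.0L, 2.0L) - 1.0L;  // actual gap above 1.0L
        // On an IEC 60559 format these agree; on double-double the gap is far
        // smaller than LDBL_EPSILON, which is why the wording matters.
        std::printf("LDBL_EPSILON  = %Lg\nnextafter gap = %Lg\n", eps, gap);
    }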
Does "round_error" cover subnormal numbers flushed to zero?
---
Fred J. Tydeman                    Tydeman Consulting
tydeman_at_[hidden]                Testing, numerics, programming
+1 (702) 608-6093                  Vice-chair of PL22.11 (ANSI "C")
Sample C99+FPCE tests: http://www.tybor.com
Savers sleep well, investors eat well, spenders work forever.