Date: Fri, 17 Sep 2021 21:06:31 +0000
Jorg Brown wrote:
>> Already a huge problem is support for "long double": is it 128-bit or 80-bit or 64-bit?
Yes, the varying formats for long double are a real problem for writing portable code. I don’t know how to fix that. Rather than trying to fix that problem, P1467 provides a way for code to use well-known floating-point formats that won’t vary from platform to platform, so that applications that are serious about floating-point don’t have to rely so heavily on long double.
To figure out at preprocessor time what format long double is, use the macros in <cfloat> or <float.h>.
>> My dream:
>> #ifdef __cpp_float16
>> void Handle(std::float16_t f);
>> #endif
>> #ifdef __cpp_float32
>> void Handle(std::float32_t f);
>> #endif
>> #ifdef __cpp_float64
>> void Handle(std::float64_t f);
>> #endif
Done. P1467 proposes feature test macros for each of the std::floatN_t names, so code can know whether or not they are supported. If the type is supported, then all arithmetic operations and <math.h> functions are available, as well as conversions to and from standard floating-point types (implicit if lossless, explicit if potentially lossy).
These types are optional, so your code will have to handle the case where none of the std::floatN_t types are available. The std::floatN_t types are guaranteed to be different types than float, double, and long double, so you can overload on both sets of floating-point types if that works best for you.
>> This is of course common practice in C++, but how does C do this [type-generic macros] without overloads?
_Generic. A C11 feature.
>> Notably, the most glaring need for this is printf.
Neither C nor C++ is proposing to add printf/scanf support for the new floating-point types. That’s a really hard problem, whose benefit is probably not worth the effort to solve. Both C and C++ provide other ways of doing I/O of the new floating-point types, though unfortunately not the same ways.
From: Ext <ext-bounces_at_[hidden]> On Behalf Of Jorg Brown via Ext
Sent: Friday, September 17, 2021 11:03 AM
To: Evolution Working Group mailing list <ext_at_[hidden]>
Cc: Jorg Brown <jorg.brown_at_[hidden]>; SG6 numerics <sci_at_[hidden]>; Matthias Kretz <m.kretz_at_[hidden]>; WG14/WG21 liaison mailing list <liaison_at_[hidden]>; Joseph Myers <joseph_at_[hidden]>
Subject: Re: [isocpp-ext] [wg14/wg21 liaison] Report from the recent C/C++ liaison meeting (SG22); includes new floating point types(!)
I humbly suggest that before any meeting, there should be a list of important use cases that must be considered. For me, the one that comes to mind is:
1) I know of a few common FP scenarios:
MSVC: float is 32-bit, double is 64-bit, long double is also 64-bit. (same for ARM64)
x64 gcc/clang: float is 32-bit, double is 64-bit, long double is 80-bit.
ARM64 gcc/clang: float is 32-bit, double is 64-bit, long double is 128-bit.
Embedded with small FPU: all are 32-bit.
My specific concern with the above is how overloads are handled. A function such as std::abs should be able to handle whatever standard C types (_Float16, _Float32, _Float64, etc.) are built into the compiler. But since the standard library ships with the compiler, it also knows what macros it should use to detect support. As a library writer, I have a harder job: I have to handle all of these scenarios in a tooling-independent way.
Already a huge problem is support for "long double": is it 128-bit or 80-bit or 64-bit? I can programmatically detect which of them a given type is, but I can't conditionally #include 128-bit support code unless there's a macro for that purpose, because "#if sizeof(double) == sizeof(long double)" doesn't compile.
Something that would make me incredibly happy is if C++23 said that I could overload on std::float16, std::float32, std::float64, std::float80, and std::float128, and that the basic functions (+-*/, abs, frexp, ldexp) all work on those types, regardless of what the CPU natively supports. Alternatively, if there were macros I could check that would tell me if each of these had support - but it would still have to be guaranteed that "float", "double", and "long double" could be passed to my overload, without compile error.
In Tony Table form:
Currently:
void Handle(float f) {
if (std::numeric_limits<float>::digits == 24) return Handle_float32(f);
if (std::numeric_limits<float>::digits == 53) return Handle_float64(f);
}
void Handle(double f) {
if (std::numeric_limits<double>::digits == 24) return Handle_float32(f);
if (std::numeric_limits<double>::digits == 53) return Handle_float64(f);
}
void Handle(long double f) {
if (std::numeric_limits<long double>::digits == 24) return Handle_float32(f);
if (std::numeric_limits<long double>::digits == 53) return Handle_float64(f);
if (std::numeric_limits<long double>::digits == 64) return Handle_float80(f);
if (std::numeric_limits<long double>::digits == 106) return Handle_float128(f); // double-double
if (std::numeric_limits<long double>::digits == 113) return Handle_float128(f); // IEEE binary128
}
My dream:
#ifdef __cpp_float16
void Handle(std::float16_t f);
#endif
#ifdef __cpp_float32
void Handle(std::float32_t f);
#endif
#ifdef __cpp_float64
void Handle(std::float64_t f);
#endif
#ifdef __cpp_float80
void Handle(std::float80_t f);
#endif
#ifdef __cpp_float128
void Handle(std::float128_t f);
#endif
= - = - = - =
Looking at the actual details of http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1312.pdf, I find one thing new and intriguing: "Type-generic macros". Specifically, it says that if you include <tgmath.h>, you can write "pow(2, 3.0)" and get the double version of pow, and you can write "pow(2, 3.0DD)" and get the _Decimal64 version. This is of course common practice in C++, but how does C do this without overloads?
Notably, the most glaring need for this is printf. If they are going to introduce a pow that does the right thing, how about a type-generic "tgprintf", that would automatically convert "%d" and "%u" to "%lld" and "%zu" if the types of the parameters being passed were "long long" and "size_t"? (I'm hoping that the compiler would do this transformation, rather than some complicated runtime scheme)
-- Jorg
Received on 2021-09-17 16:06:36