I think you should articulate a clearer definition of "the default floating point type". Does it mean the fastest floating-point type supported by the architecture, regardless of precision? Or is the implementation allowed to make that trade-off at its discretion, e.g., double provides X% more precision than float but float is Y% faster, so comparing X and Y determines which type is the default?

Of course, the standard doesn't give such clear guidance for how wide `int` should be either, but the width of `int` on each platform is a matter of historical practice. For your proposed floating point type, no such historical practice exists yet, so I think most users wouldn't want to use it, given that they don't know what they're going to get.

A more flexible solution would be the floating point equivalent of the int_fastX_t types, i.e., the fastest floating point type that provides at least X bits of precision. Here is a sketch of a proposal, to be added to <limits>:

template <int precision, int exp_bits = 0>
using float_fast_t = /* see below */;

float_fast_t<precision, exp_bits> is an alias for the fastest floating point type supported by the implementation such that:
- the type provides at least precision bits of significand precision, and
- the type provides at least exp_bits bits of exponent range (exp_bits == 0 meaning no requirement on the exponent).
If no such type exists, then any reference to the specialization float_fast_t<precision, exp_bits> is ill-formed.
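
For concreteness, here is a rough sketch of how a library could approximate this today. It assumes, purely for illustration, that the narrowest qualifying standard type is also the fastest; a real implementation would be free to choose a wider type when that is faster on the target. The helper names (min_max_exponent, satisfies_requirements_v) are made up for this sketch.

#include <limits>
#include <type_traits>

// max_exponent of an IEEE-style format with exp_bits exponent bits,
// i.e. 2^(exp_bits - 1); exp_bits == 0 means "no requirement".
constexpr long min_max_exponent(int exp_bits) {
    return exp_bits == 0 ? 1 : (1L << (exp_bits - 1));
}

// Does T offer at least `precision` significand bits and `exp_bits` exponent bits?
template <class T, int precision, int exp_bits>
constexpr bool satisfies_requirements_v =
    std::numeric_limits<T>::digits >= precision &&
    std::numeric_limits<T>::max_exponent >= min_max_exponent(exp_bits);

// Pick the narrowest standard type meeting the requirements, as a stand-in
// for "fastest"; void stands in for "ill-formed" in this sketch.
template <int precision, int exp_bits = 0>
using float_fast_t =
    std::conditional_t<satisfies_requirements_v<float, precision, exp_bits>, float,
    std::conditional_t<satisfies_requirements_v<double, precision, exp_bits>, double,
    std::conditional_t<satisfies_requirements_v<long double, precision, exp_bits>, long double,
    void>>>;

// On a typical IEEE-754 implementation:
static_assert(std::is_same_v<float_fast_t<24, 8>, float>);
static_assert(std::is_same_v<float_fast_t<53, 11>, double>);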

On Thu, Jan 14, 2021 at 9:23 PM Vishal Oza via Std-Proposals <std-proposals@lists.isocpp.org> wrote:
I was thinking of adding a default floating point type, analogous to what int is for the integer types, rather than assuming that double is the default. This might be better on older hardware where using a double might carry a performance penalty. The keyword would be either flt or floating_point. I prefer flt for less typing, but I can understand that it could break existing code. Can anyone argue why this is a bad idea?

Vishal Oza  
--
Std-Proposals mailing list
Std-Proposals@lists.isocpp.org
https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals


--
Brian Bi