I think you should articulate a clearer definition of "the default floating-point type". Does it mean the fastest floating-point type supported by the architecture, regardless of precision? Or is the implementation allowed to make a trade-off at its discretion, i.e., double provides X% more precision than float but float is Y% faster, so comparing X and Y determines which type is the default?
Of course, the standard doesn't give such clear guidance on how wide `int` should be either, but the width of `int` on each platform is settled by historical practice. With your proposed floating-point type, no such practice exists yet, so I think most users wouldn't want to use it, given that they don't know what they're going to get.
A more flexible solution would be the floating-point equivalent of the `int_fastX_t` types, i.e., the fastest floating-point type that provides at least X bits of precision. Here is a sketch proposal, to be added to <limits>:
template <int precision, int exp_bits = 0>
using float_fast_t = /* see below */;
float_fast_t<precision, exp_bits> is an alias for the fastest floating point type supported by the implementation such that:
- the smallest representable value after 1 is at most 1 + 2^(-precision), and
- if exp_bits is greater than 0, then there exists a representable value that is at least 2^(2^(exp_bits - 1)).
If no such type exists, then any reference to the specialization float_fast_t<precision, exp_bits> is ill-formed.
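For concreteness, here is a rough sketch (C++20) of how an implementation might back such an alias purely with std::numeric_limits. The helper names (detail::satisfies, detail::pick) and the float -> double -> long double candidate order are my own invention, and the mapping of the two conditions above onto digits and max_exponent is only approximate:

```cpp
#include <limits>
#include <type_traits>

namespace detail {

// Does T meet the requested precision / range, per numeric_limits?
template <class T>
constexpr bool satisfies(int precision, int exp_bits) {
    using L = std::numeric_limits<T>;
    // "smallest representable value after 1 is at most 1 + 2^-precision",
    // i.e. epsilon() <= 2^-precision, i.e. digits - 1 >= precision.
    if (L::digits - 1 < precision) return false;
    // "some representable value is at least 2^(2^(exp_bits - 1))",
    // approximated via max_exponent (max() is just below 2^max_exponent).
    if (exp_bits > 0 && L::max_exponent < (1 << (exp_bits - 1))) return false;
    return true;
}

// Walk the candidate list and take the first type that qualifies.
// No definition for an empty list, so an unsatisfiable request is ill-formed.
template <int Precision, int ExpBits, class... Ts>
struct pick;

template <int Precision, int ExpBits, class T, class... Ts>
struct pick<Precision, ExpBits, T, Ts...>
    : std::conditional_t<satisfies<T>(Precision, ExpBits),
                         std::type_identity<T>,
                         pick<Precision, ExpBits, Ts...>> {};

}  // namespace detail

// The candidate order float -> double -> long double stands in for
// "fastest first"; a real implementation would know its own ordering.
template <int precision, int exp_bits = 0>
using float_fast_t =
    typename detail::pick<precision, exp_bits, float, double, long double>::type;

// On a typical IEEE-754 platform:
static_assert(std::is_same_v<float_fast_t<20>, float>);
static_assert(std::is_same_v<float_fast_t<40>, double>);
```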