Date: Thu, 30 Mar 2023 10:58:08 +0200
On 29/03/2023 17:18, Arthur O'Dwyer via Std-Proposals wrote:
> A new `uintmax_extended_t` (or whatever) can communicate properly from
> the get-go: "Hey! This type will change in the future! Don't build it
> into your APIs!"
> But then, if you aren't using this type in APIs, then where /*are*/ you
> using it, and why does it need to exist in the standard library at all?
>
That, I think, is the key point - /why/ would you want a "maximum size
integer type" at all?
I hope that intmax_t gets deprecated, and I hope never to see any
replacement in the C or C++ standards, regardless of how clearly it is
defined. The size of the integer you want in your code is determined by
what you need to store in it - not by the biggest size some
implementation happens to support.
In my eyes, the appropriate starting point for C++ integer types would
be templates:
Int_t<N>
Int_fast_t<N>
Int_least_t<N>
matching the standard int32_t, int_fast32_t, int_least32_t types (and
all similar types). They should support N as powers of 2 from 8
upwards, but implementations could support other sizes if they want (the
"fast" and "least" types will always support other sizes, rounding up as
needed).
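As a rough sketch only (using the names above; nothing here is
standard, and a real proposal would want proper bounds checking),
such templates can already be built on top of <cstdint>:

#include <cstdint>
#include <type_traits>

// Exact-width type: defined only for the widths the implementation
// actually provides, so Int_t<24> fails to compile unless the
// implementation adds a specialisation for it.
template <int N> struct Int;   // primary template left undefined
template <> struct Int<8>  { using type = std::int8_t;  };
template <> struct Int<16> { using type = std::int16_t; };
template <> struct Int<32> { using type = std::int32_t; };
template <> struct Int<64> { using type = std::int64_t; };
template <int N> using Int_t = typename Int<N>::type;

// "Least" type: rounds N up to the next supported width.
// Int_fast_t<N> would look the same, built on the int_fastN_t types.
template <int N>
using Int_least_t =
    std::conditional_t<(N <= 8),  std::int_least8_t,
    std::conditional_t<(N <= 16), std::int_least16_t,
    std::conditional_t<(N <= 32), std::int_least32_t,
                                  std::int_least64_t>>>;

static_assert(std::is_same_v<Int_t<32>, std::int32_t>);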
(I'd also like variants with different overflow behaviours, but that's
an orthogonal issue.)
That would give a simple and consistent system that lets you have types
of the size you want, and use them safely in APIs. The same type would
have the same size whether you are using an 8-bit AVR or a 128-bit
RISC-V (if anyone actually makes one).
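For instance, a function declared against these templates has exactly
the same parameter and return widths on every implementation ("scale"
is just an illustrative name, not anything proposed):

// This signature means the same thing on an AVR and on x86-64:
// a 32-bit value in, a 32-bit value out.
Int_t<32> scale(Int_t<32> value, Int_least_t<16> factor);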
Things are more difficult in C, since there are no templates (_Generic
selections are not extensible or recursive, and cannot be parametrised
by integers). But again, the maximum-size integer types were a mistake
IMHO.