Date: Thu, 27 Nov 2025 09:53:31 +0100
I think the discussions so far have been quite informative about what
people need or want, and I have certainly found them helpful to solidify
my own thoughts and opinions on integer types.
And based on these, my initial reaction to your new suggestion here is,
with all due respect, that it is a terrible idea. But if by discussing
it we can reach more of a consensus about /why/ it is a terrible idea -
or to convince the sceptics that it is actually a good idea - then the
thread will be a positive result regardless.
I can only really talk about what /I/ want from integer types. I
believe my opinions are shared by many or most people, but that is just
a belief without rigorous backing.
When I am picking an integer type when programming, I am concerned with
four things:
1. Is it big enough to hold the numbers I need?
2. Is it clear in my code?
3. Is it efficient for the operations I need on it?
4. What is its overflow or error behaviour?
Number 1 is by far the most important. If the type is too small for its
use, the code is wrong - and code correctness trumps every other aspect
of code.
Number 2 is important for writing quality code that can be read and
understood in the future. With "uint32_t", I know exactly what I am
getting, and so does everyone else. With "long", it depends on the
platform and target. It gives certain guarantees, but you often do not
know what the author actually intended.
Number 3 - efficiency - is important in some code, but often less
important than many people think. Still, if efficiency is irrelevant to
your work then perhaps C++ is not the language for you - integers in
languages like Python are always big enough without needing to make choices.
Number 4 - overflow and/or error behaviour - is important if you are
pushing things to the max and risk overflows (which you presumably will
weed out during testing and debugging). Overflow behaviour can also
affect efficiency - UB on overflow can make common constructs noticeably
more efficient on some platforms.
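As a concrete illustration of that efficiency point (a sketch of my own, not
something from this thread): with a signed index the compiler may assume the
increment never wraps and vectorise freely, while a 32-bit unsigned index on a
64-bit target must honour wraparound modulo 2^32, which can block the same
transformation:

```cpp
#include <cstdint>

// Signed index: overflow is UB, so the compiler may assume i + 1 > i.
double sum_signed(const double* a, std::int64_t n) {
    double s = 0.0;
    for (std::int64_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Unsigned 32-bit index: the index arithmetic must wrap at 2^32,
// which the compiler has to preserve on a 64-bit target.
double sum_wrapping(const double* a, std::uint32_t n) {
    double s = 0.0;
    for (std::uint32_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}
```

Both functions compute the same sum; the difference only shows up in the
generated code on some platforms.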
The one thing I do not care about at all is how the compiler and target
implement the type. Thus "_Single" and "_Mult" are completely useless
concepts. If I pick a type that can hold at least 64 bits, I don't care
if that takes one register, two registers, 8 registers, regular GPRs,
SIMD registers, stack slots, or whatever. I care that it gives the
correct answers, and I hope the implementation handles it efficiently.
In theory, the only (current) integer types that I think are appropriate
to use are intN_t, int_fastN_t and int_leastN_t (and their unsigned
counterparts). Local variables and function parameters should be "fast",
data stored in memory should be "least" (smaller means more cached data
and therefore faster), and external interfaces of all sorts should use
exact-size types. In practice, the "fast" and "least" types are usually
not worth the effort, and the fixed-size types are used instead.
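A sketch of that guideline in practice (SensorSample and average are invented
names purely for illustration):

```cpp
#include <cstdint>

// "least" types for data stored in memory, so arrays stay small
// and cache-friendly.
struct SensorSample {
    std::int_least16_t value;
    std::uint_least8_t channel;
};

// "fast" types for parameters and locals; exact-width at the
// external interface (the return type here).
std::int32_t average(const SensorSample* samples, std::int_fast32_t count) {
    std::int_fast32_t sum = 0;
    for (std::int_fast32_t i = 0; i < count; ++i)
        sum += samples[i].value;
    return static_cast<std::int32_t>(count > 0 ? sum / count : 0);
}
```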
It is no surprise to me that in almost all "modern" languages, integer
types are either a single arbitrary precision integer type without
limits, or named by explicit size.
Going forward with what I would like to see in C++, _BitInt (or a
template-based equivalent, or both - compatibility with C is very
useful) covers pretty much every fixed size. I think there is also
scope for an int128_t and uint128_t type as a convenience for
compilers that will take more time to have good _BitInt implementations.
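Such a type effectively exists today as the GCC/Clang extension __int128 on
64-bit targets; a standard uint128_t would largely bless existing practice.
A sketch assuming that extension, expressing the full 64x64 -> 128 multiply
the hardware supports but plain uint64_t cannot:

```cpp
#include <cstdint>

// Returns the high 64 bits of the full 128-bit product a * b.
// unsigned __int128 is a GCC/Clang extension, not standard C++.
std::uint64_t mulhi64(std::uint64_t a, std::uint64_t b) {
    unsigned __int128 wide = static_cast<unsigned __int128>(a) * b;
    return static_cast<std::uint64_t>(wide >> 64);
}
```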
I'd also like to see a standardised multi-precision integer type and
cryptography-oriented support, but that's a big project and a big
addition to the standard library rather than a language feature.
I'd like integer literals and the preprocessor to use arbitrary
precision integers.
Perhaps more controversially, I'd also like to see unsigned
size-specific types with UB overflow behaviour. And maybe there is a
way to standardise the kind of "sanitiser" tools some compilers provide
to help programmers detect overflows - perhaps as a future enhancement
to the contract system.
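There is existing practice the sanitiser point could build on: GCC and Clang
already provide checked-arithmetic builtins. A sketch of runtime overflow
detection using them (checked_add is an invented wrapper name):

```cpp
#include <cstdint>
#include <optional>

// Returns the sum, or nullopt if the addition would overflow int32_t.
// __builtin_add_overflow is a GCC/Clang builtin, not standard C++.
std::optional<std::int32_t> checked_add(std::int32_t a, std::int32_t b) {
    std::int32_t result;
    if (__builtin_add_overflow(a, b, &result))
        return std::nullopt;   // overflow detected at runtime
    return result;
}
```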
It might also be nice to have some templates to get types for integer
sizes without having to figure out bit-sizes manually:

    using big_num = std::integer_for_range<
            0, std::pow(1'000'000, 4),
            std::integer_tags::overflow_is_ub,
            std::integer_tags::fast
        >::type;
But I am not making any proposals as yet :-)
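A trait along those lines can already be approximated with existing
facilities. This hypothetical uint_for_max (an invented name, nothing like it
is in the standard library) picks the smallest unsigned fixed-width type
covering [0, Max] - though a range as large as pow(1'000'000, 4) would need a
128-bit or _BitInt fallback that is omitted here:

```cpp
#include <cstdint>
#include <type_traits>

// Smallest unsigned fixed-width type whose range covers [0, Max].
template <unsigned long long Max>
using uint_for_max =
    std::conditional_t<Max <= UINT8_MAX,  std::uint8_t,
    std::conditional_t<Max <= UINT16_MAX, std::uint16_t,
    std::conditional_t<Max <= UINT32_MAX, std::uint32_t,
                                          std::uint64_t>>>;

static_assert(std::is_same_v<uint_for_max<200>, std::uint8_t>);
static_assert(std::is_same_v<uint_for_max<70'000>, std::uint32_t>);
```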
The one thing I can't see any benefits of is yet more vague and
unspecified integer types or qualifiers.
David
On 27/11/2025 00:14, Lev Minkovsky via Std-Proposals wrote:
> Hello all,
>
> Thank you for your active participation in the
> discussion about extended precision types.
>
> There is a better way to accomplish what I am looking
> for. Virtually every hardware architecture provides some support for
> double precision math. For example, 32-bit platforms can effectively and
> directly do 64-bit operations, and 64-bit platforms – 128-bit
> operations. A good deal of that support is already available in existing
> C++ implementations.
>
> We now have the following set of integral types:
>
> 1. size-constrained fundamental types (e.g., long long is at least 64 bit).
> 2. fixed-size library types, aliasing to fundamental types (e.g., int16_t)
> 3. the potential future wide gamut of bit-precise types (e.g.,
> _BitInt(48)).
>
> In addition to those, we can introduce the following
> word-length-dependent types:
>
> _Single, unsigned _Single
> _Mult2, unsigned _Mult2.
>
> The _Single types will be aliases of some C++26 fundamental types. For
> example, on x64, _Single can be aliased to long long. The _Mult2 types
> on 32-bit platforms can be aliased to (unsigned) long long, and on 64-bit
> platforms introduce standard 128-bit integer types.
>
> If we want to, we can also have the respective literals. For example,
>
> auto x = 1sl;
> auto y = 1m2;
>
>
> Here, x will be a single precision integer, and y – double precision
> integer, both equal to 1.
>
> All this should be easy to implement with native-level performance, and
> therefore we should expect quick and wide adoption.
>
> Please let me know if you think this is proposal worthy.
>
> Best regards –
>
> Lev Minkovsky
>
>
Received on 2025-11-27 08:53:37
