Date: Wed, 26 Nov 2025 00:57:52 +0000
If we force implementers to make long long long at least 128 bits, we risk the embedded community simply rejecting or not implementing this proposal as too burdensome and irrelevant. I am hoping for this to be sufficiently lightweight to be as universally accepted as long long.
Also, if we are to move forward with further consideration, we will need to decide whether the variant long long long int makes sense. Perhaps simply long long long (without the int) would be sufficient, although that would be another symmetry breakage.
From: JJ Marr <jjmarr_at_gmail.com>
Sent: Tuesday, November 25, 2025 7:48 PM
To: std-proposals_at_[hidden]
Cc: Lev Minkovsky <levmink_at_outlook.com>
Subject: Re: [std-proposals] Extended precision integers
> As a complementary alternative, we can address the third bucket by adding yet another integer type, long long long, with an implementation-defined width no less than that of long long. For platforms where extended precision is irrelevant (embedded targets, possibly freestanding systems), implementers can define it as an alias for long long. Desktop and server targets can opt to make long long long 128 bits.
The standard guarantees `int` is at least 16 bits, `long int` is at least 32 bits, and `long long int` is at least 64 bits.
Would there be disadvantages to breaking the symmetry that implies `long long long int` would be at least 128 bits?
On Tue, Nov 25, 2025, 7:40 p.m. Lev Minkovsky via Std-Proposals <std-proposals_at_[hidden]<mailto:std-proposals_at_[hidden]>> wrote:
Hello all,
I wanted to float an idea of an additional extended precision integral type.
The C++26 assortment of fundamental integer types (bool, char, short, int, long, and long long) is designed as a set of thin abstractions over the underlying CPU instructions. As such, these types are easy to implement and provide very good performance. There are, however, use cases that necessitate different kinds of types. They can be classified into three broad “buckets”.
1. Arbitrary-precision computing. For example, commonly used RSA cryptography keys range from 1024 to 4096 bits.
2. Known-precision calculations. We already have some support for this in the form of the fixed width integer types<https://en.cppreference.com/w/cpp/types/integer.html> (since C++11).
3. Extended-precision calculations. Some CPU architectures make it possible to do math at higher precision than long long without a significant performance penalty.
The bit-precise integers introduced in P3666<https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3666r1.html> aim at all of these scenarios. I foresee, however, that they may be difficult to standardize well and even more difficult to implement well. Some implementations may well be reluctant to commit to them, because their user base would not consider them a high priority.
As a complementary alternative, we can address the third bucket by adding yet another integer type, long long long, with an implementation-defined width no less than that of long long. For platforms where extended precision is irrelevant (embedded targets, possibly freestanding systems), implementers can define it as an alias for long long. Desktop and server targets can opt to make long long long 128 bits.
With this new type and its unsigned variant, we would also get new literal suffixes, LLL and ULLL. Thus, instead of writing a 128-bit zero as static_cast<_BitInt(128)>(0), we could write it simply as 0LLL.
If there is an interest in exploring this approach further, I can write a proposal for C++ 29.
Thank you all for your attention –
Lev Minkovsky
--
Std-Proposals mailing list
Std-Proposals_at_[hidden]<mailto:Std-Proposals_at_[hidden]>
https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
Received on 2025-11-26 00:57:56
