Date: Wed, 26 Nov 2025 08:06:27 +0000
long long long is dead on arrival. "long long" is already considered a ridiculous repetition of keywords; just give it a proper name.
In any case, implementing a type that has no hardware equivalent is a bad move, especially because this is an X/Y problem.
What you want is the ability to do multi-precision arithmetic, something computers have been able to do for years, and to do that you think you need a bigger type, instead of actually making multi-precision operations available. To make things worse, a bigger type doesn't actually let you do multi-precision arithmetic any better.
You will soon realize that RSA-1024 is 1024 bits, so you also need 256-bit and 512-bit types, and then 1024-bit numbers.
And next you will be asking for a “long long long long long long” (did I get the number of longs right?… I think so)
There is nothing special you can solve with 128 bits that you couldn't have solved with 64; the only reason you don't do it with 64 bits is that you don't know how to implement higher-bit-count operations with a lower bit count. The number 128 isn't magical. It will not all of a sudden make things work.
What you want is this: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3161r3.html
Give access to what computers have been able to do for decades and this problem is over.
From: Std-Proposals <std-proposals-bounces_at_[hidden]> On Behalf Of Lev Minkovsky via Std-Proposals
Sent: Wednesday, November 26, 2025 01:40
To: std-proposals_at_[hidden]
Cc: Lev Minkovsky <levmink_at_[hidden]>
Subject: [std-proposals] Extended precision integers
Hello all,
I wanted to float an idea of an additional extended precision integral type.
The C++26 assortment of fundamental integer types (bool, char, short, int, long, and long long) is designed as a thin abstraction over the underlying CPU instructions. As such, these types are easy to implement and provide very good performance. There are, however, use cases that necessitate different kinds of types. They can be classified into three broad "buckets".
1. Arbitrary precision computing. For example, commonly used RSA cryptography keys vary from 1024 bits to 4096 bits.
2. Known precision calculations. We already have some support for this in the form of the fixed width integer types introduced in C++11 (https://en.cppreference.com/w/cpp/types/integer.html).
3. Extended precision calculations. Some CPU architectures make it possible to do math with higher precision than long long without significant performance penalty.
The bit-precise integers introduced in P3666 (https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3666r1.html) aim at all these scenarios. I foresee, however, that they may be difficult to standardize well and even more difficult to implement well. Some implementations may well be reluctant to commit to them, because their user base would not consider them high priority.
As a complementary alternative, we can address the third bucket by adding yet another integer type, long long long, with an implementation-defined width no less than that of long long. For platforms where extended precision is irrelevant (embedded targets, possibly freestanding systems), implementers can define it as an alias for long long. Desktop and server targets can opt for a 128-bit long long long.
With this new type and its unsigned variant, we will also get new literal suffixes LLL and ULLL. Thus, instead of writing a 128-bit zero as static_cast<_BitInt(128)>(0), we would be able to write it simply as 0LLL.
If there is interest in exploring this approach further, I can write a proposal for C++29.
Thank you all for your attention –
Lev Minkovsky
Received on 2025-11-26 08:06:36
