Date: Thu, 2 Apr 2026 10:32:56 +0200
Hey folks,
During the Croydon meeting, I talked to a few other committee members,
and there seems to be a bit of enthusiasm for a "big int" type. That is, an
infinite-precision integer, as compared to _BitInt(N), which has fixed
width and does not "type erase" the width.
I think the motivation is obvious at this point. Unbounded integers are a
feature of standard libraries in many languages, are incredibly useful, and
are a vocabulary type. Specifically in C++, it may also be possible to make
std::big_int an intrinsic for the compiler's infinite-precision integer
type (such as llvm::APInt) during constant evaluation. This has the
potential to make a standard std::big_int dramatically faster than any
constexpr library implementation could ever be.
I believe that there are a few key design decisions (most also made in
Boost.Multiprecision) which are desirable to standardize:
- Make it constexpr.
- Separate the sign bit from the rest of the representation. This makes
negation and absolute value incredibly cheap, regardless of the dynamic size
of the integer. It also makes it cheap to query whether the integer is
negative.
- Use a small object optimization, at least for up to 64 bits. In
standardese, it should be possible to represent any standard signed or
standard unsigned integer type as std::big_int without having to perform
memory allocation. This covers the common use case of merely using
std::big_int for the eventuality that a value is big, while rarely working
with big values.
- Perform reference counting (and copy on write) for the dynamic
representation. This has key benefits:
- std::big_int would typically be passed by value because it is
cheaply copyable and cheaply movable, which is really natural in
mathematical code. It also means there is no need to pass std::big_int
by forwarding reference or to add extra rvalue overloads to avoid extra
allocations. Whenever a function is given a std::big_int by value, it
can check the reference counter and repurpose the allocation if it holds
the sole reference.
    - It is possible to implement abs as (x < 0 ? -x : x) without
    dynamic allocation. Negation is just creating a reference-counted
    copy with the sign bit flipped.
    - std::big_int can be cheaply returned by value from a getter. If the
    getter's class holds an int, it can create a std::big_int without
    allocating, thanks to SSO. If it holds a std::big_int internally, it
    can cheaply copy that std::big_int. By comparison, returning a
    const std::string& or std::string_view from a getter leaks the class's
    internal representation details.
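To make the combination concrete, here is a minimal sketch of how sign-magnitude, a 64-bit small-value representation, and a reference-counted dynamic buffer could interact. The class layout and names are my own assumptions for illustration (a real implementation would use an intrusive reference count rather than std::shared_ptr), not proposed wording:

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical layout sketch; not the proposed std::big_int interface.
class big_int {
    // Sign stored separately from the magnitude (sign-magnitude form),
    // so negation never touches the limb data.
    bool negative_ = false;
    // Small-value representation: magnitudes up to 64 bits live inline.
    std::uint64_t small_ = 0;
    // Dynamic representation: shared, copy-on-write limb buffer.
    std::shared_ptr<const std::vector<std::uint64_t>> limbs_;

public:
    big_int() = default;
    big_int(std::int64_t v)
        : negative_(v < 0),
          small_(v < 0 ? 0 - static_cast<std::uint64_t>(v)
                       : static_cast<std::uint64_t>(v)) {}

    // Copying is cheap: a bool, a word, and a refcount bump.
    big_int(const big_int&) = default;

    bool is_negative() const { return negative_; }

    // Negation flips the sign bit and shares the magnitude buffer;
    // no allocation, regardless of how large the magnitude is.
    big_int operator-() const {
        big_int r = *this;
        if (r.small_ != 0 || r.limbs_) r.negative_ = !r.negative_;
        return r;
    }

    // abs never allocates either.
    friend big_int abs(const big_int& x) {
        return x.is_negative() ? -x : x;
    }

    // A mutating operation would check whether it holds the sole
    // reference and, if so, reuse the existing allocation.
    bool uniquely_owned() const {
        return !limbs_ || limbs_.use_count() == 1;
    }
};
```

Note how operator- and abs stay allocation-free for arbitrarily large values precisely because the sign lives outside the shared magnitude buffer.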
One design aspect I'm not so sure about is whether std::big_int would be
allocator-aware and whether the "limb type" should be configurable. If
neither of these is true, it could be a non-templated class, which is
attractive.
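Purely to illustrate that trade-off (the names here are invented, not proposed): the allocator-aware, configurable-limb design would presumably follow the basic_string pattern, while the alternative is a single concrete class:

```cpp
#include <cstdint>
#include <memory>

// Hypothetical: basic_string-style parameterization (sketch only).
template <typename Limb = std::uint64_t,
          typename Alloc = std::allocator<Limb>>
class basic_big_int { /* ... */ };

// The type users would actually spell.
using big_int_v1 = basic_big_int<>;

// Hypothetical alternative: one non-templated class, which keeps error
// messages, ABI, and teachability simpler.
class big_int_v2 { /* ... */ };
```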
It's worth noting that I walk on a sea of corpses. N1692, N1744, and N4038
tried to add a big integer type previously, but all died in the process.
N4038 got the furthest, but died in SG6. In any case, if we want to have
this in C++29, we need to act soon. This proposal is a massive effort, and
I don't think I could do it by myself given how many papers I already have,
so if anyone else is interested in supporting the effort, please reach out.
Jan
Received on 2026-04-02 08:33:10
