
Re: [std-proposals] 128-bit integers

From: Jan Schultke <janschultke_at_[hidden]>
Date: Sun, 11 Feb 2024 11:52:37 +0100
> I would be in support of 128bit int as so far as there is hardware support for it, and as far as I am aware there isn't one.

There is hardware support for 64/128-bit mixed operations.
std::float128_t can also be used to implement any integer operation up
to 113 bits, and RISC-V has an RV128I variant.
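For illustration, here is a minimal sketch of what that mixed 64/128-bit support looks like in practice. The function name is my own, and `unsigned __int128` is the GCC/Clang extension, not the proposed standard type; on x86-64 this typically compiles to a single MUL instruction:

```cpp
#include <cstdint>

// Sketch: 64x64 -> 128-bit widening multiply via the GCC/Clang
// unsigned __int128 extension. On x86-64 the full 128-bit product
// comes out of one MUL (RDX:RAX), illustrating the existing
// 64/128-bit mixed hardware support.
unsigned __int128 mul_wide_u64(std::uint64_t a, std::uint64_t b) {
    return static_cast<unsigned __int128>(a) * b;
}
```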

> Given that no platform has native support for 128bit integer this implies you think platforms should support it despite the fact that they don't.

I'm not sure what you think qualifies for "native support". There is
more than enough native support. While it is not typical to choose a
128-bit general purpose register size (and likely won't be common in
the foreseeable future), some degree of 128-bit arithmetic support is
already common.

> And if they do, then uint128_t now becomes the largest supported int meaning that now uintmax_t must also be at least 128bits width (the impact of which is catastrophic).

Literally the first paragraph of the introduction explains that this
is no longer an issue.

> You would also need integer literals to be bale to explicit define constants, which the paper doesn't mention.

It does. See https://eisenwave.github.io/cpp-proposals/int-least128.html#possible-semantic-changes

> And the motivation suffers from the typical X Y problem ...


> But what you really want is an easier way to perform multi-word arithmetic. I'm currently writing a paper on this: https://kaotic.software/cpp_papers/overflow_arithmetic.html

Why would you conclude that? I want 128-bit operations. Whether they
are implemented through multi-word arithmetic or direct hardware
operations is ultimately not important. It's merely an implementation
detail. A more general solution such as _BitInt(N) would also enable
multi-word arithmetic but I explain in great detail why I have not
decided to propose it.

I guess if you're writing a multi-word arithmetic proposal, it's easy
to view everything through that lens, whether it makes sense or not.

> Cryptography cited many cyphers that use wider that 64bit numbers, that go beyond 128bits, and they way that they do this is by "widening", i.e. casting 64bit integers to 128 integers in order to perform arithmetic operations preserving all overflow bits.

I know all of that. The proposal explains that there is hardware
support for these widening operations, and that this extends further
to multi-precision arithmetic.
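To make the "extends to multi-precision arithmetic" point concrete, here is a sketch of a 2-limb by 1-limb multiply built entirely from 64-to-128-bit widening, the same building block those ciphers use. The helper name is my own, and it assumes the GCC/Clang `unsigned __int128` extension:

```cpp
#include <array>
#include <cstdint>

// Multiplies the 128-bit value (hi:lo) by a 64-bit value m, producing
// three 64-bit limbs. Each step is one 64x64 -> 128 widening multiply
// plus a carry; this is exactly how 64/128-bit mixed hardware support
// scales up to multi-precision arithmetic.
std::array<std::uint64_t, 3> mul_2x1(std::uint64_t lo, std::uint64_t hi,
                                     std::uint64_t m) {
    unsigned __int128 p0 = (unsigned __int128)lo * m;
    // hi*m fits in 128 bits and adding the carry cannot overflow,
    // since (2^64-1)^2 + (2^64-1) < 2^128.
    unsigned __int128 p1 = (unsigned __int128)hi * m + (std::uint64_t)(p0 >> 64);
    return { (std::uint64_t)p0, (std::uint64_t)p1, (std::uint64_t)(p1 >> 64) };
}
```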

> Plus, the cited cyphers that are exactly 128bit long are also considered unsafe.

AES, SHA-2, and SHA-3 are not considered unsafe, to my knowledge. The
only fishy one on that list is MD5, and MD5 hashing is still useful
for comparing against hashes in existing databases and for other such
historical purposes.
> Random number generator. Where the problem is exactly the same, you need more that 128bits ...

Just because you need more than 128 bits for *some* PRNGs doesn't mean
that it's bad to have 128-bit types for those PRNGs that need only 128
bits. Similarly, just because you need 128 bits in some cases doesn't
mean that 64 bits aren't useful, and just because 64 bits are needed
in some use case doesn't mean that 32 bits aren't useful, and so on.
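For what it's worth, a 128-bit-state generator in the Lehmer/MCG family is only a handful of lines once a 128-bit type exists. This is a sketch, assuming the GCC/Clang `unsigned __int128` extension; the multiplier is the one commonly used for the lehmer64 generator:

```cpp
#include <cstdint>

// Sketch of a Lehmer-style PRNG with 128 bits of state: one 128-bit
// multiply per step, returning the high 64 bits of the state.
struct lehmer64 {
    unsigned __int128 state;  // must be seeded to an odd/nonzero value in real use

    std::uint64_t next() {
        state *= 0xda942042e4dd58b5ULL;               // lehmer64 multiplier
        return static_cast<std::uint64_t>(state >> 64);  // high half
    }
};
```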

> Widening operations. Which again boils down to multi-word arithmetic.

I think you're focusing too much on implementation details, and that
applies to most of the points you're bringing up in general. The fact
that multiplying two long long values would not be a single mul
instruction on 32-bit machines has not stopped C from introducing the
type.

It's ultimately irrelevant whether the operation is multi-word. A
128-bit integer gives you a clean interface for performing 128-bit
computation. It is much simpler to cast one operand to std::uint128_t
prior to the multiplication than to deal with std::mul_wide and the
struct it returns. For the common use case of 64-to-128-bit widening
operations, std::int128_t is superior. It would also be much faster in
debug builds and constant evaluations, presumably.
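To illustrate the contrast, here is a sketch in which both spellings are hypothetical: `mul_wide` mimics a struct-returning multi-word API of the kind your paper proposes, and `uint128` stands in for the proposed std::uint128_t via the GCC/Clang extension:

```cpp
#include <cstdint>

using uint128 = unsigned __int128;  // stand-in for the proposed std::uint128_t

struct wide_result { std::uint64_t lo, hi; };

// Hypothetical struct-returning widening multiply, in the style of a
// multi-word arithmetic API.
wide_result mul_wide(std::uint64_t a, std::uint64_t b) {
    uint128 p = (uint128)a * b;
    return { (std::uint64_t)p, (std::uint64_t)(p >> 64) };
}

// With a 128-bit type, the same widening multiply is one cast, and the
// result composes directly with further 128-bit arithmetic.
uint128 mul_cast(std::uint64_t a, std::uint64_t b) {
    return (uint128)a * b;
}
```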

> Fixed-point/Financial systems. That could best be fulfilled with a fixed-point library.

... which is easier to implement if you have 128-bit integers.
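For example, a Q32.32 fixed-point multiply is one widening multiply and a shift once a 128-bit intermediate exists. This is a sketch with my own naming, assuming the GCC/Clang `unsigned __int128` extension:

```cpp
#include <cstdint>

// Q32.32 fixed-point multiply: take the full 64x64 -> 128-bit product,
// then shift right by the fractional width to renormalize. Without a
// 128-bit intermediate, the high product bits are lost.
std::uint64_t q32_32_mul(std::uint64_t a, std::uint64_t b) {
    return (std::uint64_t)(((unsigned __int128)a * b) >> 32);
}
```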

> 6. Double-wide atomic operations. Which isn't a use case. Just because a platform provides an operation, it is not in of itself a useful.

That's a good point, and it's one of the weakest parts of the
motivation. I might cut that section out.

> High-precision time calculations. Which doesn't make sense unless you also have a time facility to support it (which there isn't).

There are existing interfaces such as POSIX clock_gettime() that yield
nanosecond-precision timestamps where possible, though the actual
hardware clocks may not be that precise. So I don't get what you mean
here.

> Networking. Specifically, IPv6 addresses, 2 ints.

Possible, but not as ergonomic. Why represent it as two 64-bit
integers instead of one 128-bit integer, when it is a 128-bit address?
Just think about the interface you want the language to give you, not
how you can compensate for the lack of a good interface.
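As a sketch of that ergonomics point: with a single 128-bit value, an IPv6 prefix check is a shift and a compare. The helper name is my own, and `uint128` is the GCC/Clang extension standing in for the proposed type:

```cpp
#include <cstdint>

using uint128 = unsigned __int128;  // stand-in for the proposed std::uint128_t

// Does `addr` fall within net/prefix_len? With two 64-bit halves this
// needs a case split on whether the prefix straddles the limb boundary.
bool in_prefix(uint128 addr, uint128 net, int prefix_len) {
    if (prefix_len <= 0) return true;  // shifting a 128-bit value by 128 would be UB
    int shift = 128 - prefix_len;
    return (addr >> shift) == (net >> shift);
}
```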

> Future proofing. May suggestion here, don't try to predict the future. When it comes, then we can look at it.

Given that RISC-V already has a 128-bit variant, I don't think it's
too far-fetched to future-proof for 128-bit arithmetic. Not to
mention, we already have 64/128-bit mixed ops and 128-bit floating
point, where 128-bit floating point can do 113-bit integer division
for example.

But to be fair, 64K ought to be enough for anybody :)

> Every single time I have seen this idea [128-bit] float around, a better solution was always something else.

Like what? How come every major compiler has 128-bit support,
libatomic supports 128-bit atomic ops, NVIDIA now has a 128-bit type
in CUDA, etc.

It doesn't look like there was a "better solution" for these implementations.

> Given that it would break people's code overnight, the amount of effort required to support it, and not good reason to do it.

The impact on existing code would be little or none. The amount of
effort required to support it is low. I have gone to great lengths to
investigate the impact on implementations. There are literally more
than twelve sections' worth of motivation, and you seem awfully
dismissive of it: "no good reason". NVIDIA, GCC, LLVM, and many others
obviously disagree with the idea that there is "no good reason" for
128-bit.

> The math says no.

What math?

> That's my opinion.

Thanks for your feedback.

Received on 2024-02-11 10:52:50