Date: Wed, 26 Nov 2025 09:42:33 +0100
> long long long is dead on arrival. “long long” is already considered a
> ridiculous repetition of keywords; just give it a proper name.
>
People would just use std::int_least128_t or std::int128_t in practice, so
the aesthetics of the "long long long" spelling don't matter that much.
> You will soon realize that RSA-1024 is 1024 bits, so you also need the
> 256-bit/512-bit and then the 1024-bit numbers.
>
> And next you will be asking for a “long long long long long long” (did I
> get the number of longs right?… I think so)
>
That's why the proposal doesn't make much sense despite 128-bit being
well-motivated; _BitInt can be used for any width.
What OP is proposing doesn't even seem to be a mandatory minimum of 128
bits, but a type with the same minimum as long long that is merely
recommended to be wider. I think this will just lead to an unreliable,
non-portable type.
> In any case implementing a type that has no hardware equivalent is a bad
> move. Especially because this is an A/B problem.
> What you want is the ability to do multi-precision arithmetic, something
> that computers have been able to do for years, and to do that you think you
> need a bigger type, instead of actually making multi-precision available.
> To make things worse a bigger type doesn’t actually allow you to do
> multi-precision any better.
>
> There is nothing special you can solve with 128bits that you couldn’t have
> solved with 64, the only reason you just don’t do it with 64bits it’s
> because you don’t know how to implement higher bit count operations with a
> lower number of bits. The number 128 isn’t magical. It will not all of a
> sudden make things work.
>
> What you want is this:
> https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3161r3.html
>
There are plenty of problems that need exactly 128 bits and will never
need more: implementing 64-bit modular arithmetic, implementing 128-bit
decimal floating-point, time and currency calculations (64 bits often
isn't enough, while 128 bits is ample), etc.
Having to do 128-bit arithmetic by gluing together two 64-bit integers is
operating at the wrong level of abstraction anyway. It makes many
optimizations that operate on integers totally impossible because the
middle-end is robbed of the ability to tell that something is a 128-bit
operation, rather than a long sequence of 64-bit operations. It's extremely
important that LLVM has an i128, i256, etc. type so that you only lower to
64-bit in the compiler backend, while enabling all the N-bit integer
mathematical optimizations.
By your logic, int64_t should also not exist on a 32-bit architecture, and
int16_t shouldn't exist on an 8-bit architecture because people should just
use multi-precision arithmetic. This would be disastrous for writing
portable code, just like it's disastrous for portable 128-bit arithmetic
not to have a 128-bit type. Target-specific lowering should happen deep in
the compiler backend, not in a high-level programming language targeting
the abstract machine.
Received on 2025-11-26 08:42:49
