Date: Wed, 26 Nov 2025 09:43:12 +0000
> By your logic, int64_t should also not exist on a 32-bit architecture, and int16_t shouldn't exist on an 8-bit architecture because people should just use multi-precision arithmetic.
Yes.
> This would be disastrous for writing portable code, just like it's disastrous for portable 128-bit arithmetic not to have a 128-bit type. Target-specific lowering should happen deep in the compiler backend, not in a high-level programming language targeting the abstract machine.
Kind of. Not really.
It’s a complex subject, hard to cover in detail in a short answer. But it’s partly about using “plastic” types with predictable rules to solve most of the portability concerns, and partly that “C++ already is not portable”, so this would not make it much worse in that respect.
We often like to pretend that computers don’t have limitations and that resources are unlimited, but that is simply not true and becomes ever more apparent the less of it you have.
C++ isn’t what I would consider a “high-level programming language”. It may have complex constructs, but what it aims to produce is code that runs on bare metal. It’s not code that you can compile once and run on every machine; that’s why you have to explicitly specify ints of different sizes instead of a “generic number”.
While the goal of making code portable is noble, my philosophy for achieving it isn’t to “give a man a fish”, but to “give him the tools to fish for himself”.
I would be more than capable of solving this problem myself if I had the right tools.
I have lost count of how many times the topic of 128-bit ints has been brought up, how many times it was never enough, and how many times it has failed.
And yet there seems to be a consensus on insisting on going down the path of providing a type that does magical operations inaccessible to regular programmers, instead of giving users access to the operations that CPUs have been designed to do (to address this exact problem) for decades.
How many more decades do we need to realize that this is not working? How much longer do we need to wait for C++ to catch up on being able to do something your computer could do even before C++ was a thing?
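For illustration, the kind of carry-propagating building block being argued for here can already be sketched with compiler intrinsics (this assumes GCC/Clang’s __builtin_add_overflow; the function name and array layout are my own, not from any proposal):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Multi-word addition with explicit carry propagation: the primitive that
// the CPU's add-with-carry instruction implements directly. Words are
// stored least-significant first.
template <std::size_t N>
std::array<std::uint64_t, N> add_wide(const std::array<std::uint64_t, N>& a,
                                      const std::array<std::uint64_t, N>& b)
{
    std::array<std::uint64_t, N> out{};
    bool carry = false;
    for (std::size_t i = 0; i < N; ++i) {
        std::uint64_t t;
        bool c1 = __builtin_add_overflow(a[i], b[i], &t);
        bool c2 = __builtin_add_overflow(t, std::uint64_t{carry}, &out[i]);
        carry = c1 || c2; // at most one of c1/c2 can be set per word
    }
    return out;
}
```

With a primitive like this exposed, any width (128, 256, 1024 bits) is just a loop over words, which is the point being made above.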
From: Jan Schultke <janschultke_at_[hidden]>
Sent: Wednesday, November 26, 2025 09:43
To: std-proposals_at_[hidden]
Cc: Tiago Freire <tmiguelf_at_[hidden]>
Subject: Re: [std-proposals] Extended precision integers
long long long is dead on arrival. “long long” is already considered a ridiculous repetition of keywords; just give it a proper name.
People would just use std::int_least128_t or std::int128_t in practice, so the aesthetics of the "long long long" spelling don't matter that much.
You will soon realize that RSA-1024 needs 1024 bits, so you will also need 256-bit/512-bit and then 1024-bit numbers.
And next you will be asking for a “long long long long long long” (did I get the number of longs right?… I think so)
That's why the proposal doesn't make much sense despite 128-bit being well-motivated; _BitInt can be used for any width.
What OP is proposing doesn't even seem to be a mandatory minimum of 128 bits, but a type that has the same minimum as long long and is merely recommended to be wider. I think this will just lead to an unreliable, non-portable type.
In any case, implementing a type that has no hardware equivalent is a bad move. Especially because this is an A/B problem.
What you want is the ability to do multi-precision arithmetic, something computers have been able to do for years, and to get it you think you need a bigger type, instead of actually making multi-precision operations available. To make things worse, a bigger type doesn’t actually let you do multi-precision arithmetic any better.
There is nothing special you can solve with 128 bits that you couldn’t have solved with 64; the only reason you don’t just do it with 64 bits is that you don’t know how to implement higher-bit-count operations with a lower number of bits. The number 128 isn’t magical. It will not all of a sudden make things work.
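As a concrete instance of that claim, a full 64×64 → 128-bit multiply can be written with nothing but 64-bit operations by splitting each operand into 32-bit halves (schoolbook method; the function name is my own):

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Full 64x64 -> 128-bit product using only 64-bit arithmetic.
// Returns {high 64 bits, low 64 bits}.
std::pair<std::uint64_t, std::uint64_t> mul_64x64(std::uint64_t a,
                                                  std::uint64_t b)
{
    std::uint64_t a_lo = a & 0xFFFFFFFFu, a_hi = a >> 32;
    std::uint64_t b_lo = b & 0xFFFFFFFFu, b_hi = b >> 32;

    std::uint64_t p0 = a_lo * b_lo; // contributes to bits  0..63
    std::uint64_t p1 = a_lo * b_hi; // contributes to bits 32..95
    std::uint64_t p2 = a_hi * b_lo; // contributes to bits 32..95
    std::uint64_t p3 = a_hi * b_hi; // contributes to bits 64..127

    std::uint64_t mid = (p0 >> 32) + (p1 & 0xFFFFFFFFu) + (p2 & 0xFFFFFFFFu);
    std::uint64_t lo  = (mid << 32) | (p0 & 0xFFFFFFFFu);
    std::uint64_t hi  = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
    return {hi, lo};
}
```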
What you want is this: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3161r3.html
There are plenty of problems that you need 128 bits for, and definitely won't need any more for. For instance, implementing 64-bit modular arithmetic, implementing 128-bit decimal floating-point, time and currency calculations (64-bit often isn't enough but 128-bit is almost too much), etc.
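The 64-bit modular arithmetic case can be sketched in a few lines (this assumes GCC/Clang, which provide unsigned __int128 as an extension on 64-bit targets; the function name is my own):

```cpp
#include <cstdint>

// 64-bit modular multiplication: (a * b) % m.
// The product a * b can be up to 128 bits wide, so without a 128-bit
// intermediate this requires a hand-rolled multi-word routine.
std::uint64_t mulmod(std::uint64_t a, std::uint64_t b, std::uint64_t m)
{
    return static_cast<std::uint64_t>(
        static_cast<unsigned __int128>(a) * b % m);
}
```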
Having to do 128-bit arithmetic by gluing together two 64-bit integers is operating at the wrong level of abstraction anyway. It makes many optimizations that operate on integers totally impossible because the middle-end is robbed of the ability to tell that something is a 128-bit operation, rather than a long sequence of 64-bit operations. It's extremely important that LLVM has an i128, i256, etc. type so that you only lower to 64-bit in the compiler backend, while enabling all the N-bit integer mathematical optimizations.
By your logic, int64_t should also not exist on a 32-bit architecture, and int16_t shouldn't exist on an 8-bit architecture because people should just use multi-precision arithmetic. This would be disastrous for writing portable code, just like it's disastrous for portable 128-bit arithmetic not to have a 128-bit type. Target-specific lowering should happen deep in the compiler backend, not in a high-level programming language targeting the abstract machine.
Received on 2025-11-26 09:43:17
