Re: [std-proposals] Extended precision integers

From: David Brown <david.brown_at_[hidden]>
Date: Wed, 26 Nov 2025 12:46:34 +0100
I find this very hard to relate to. I have used int64_t (well, probably
uint64_t) on 8-bit microcontrollers. Much of the code I write is very
target specific because it deals with low-level hardware interactions
that are for a particular microcontroller or a particular electronics
board. But I also have plenty of code that needs to compile and work on
a wide variety of systems - old 8-bit embedded systems, newer 32-bit
embedded systems, and 64-bit PCs for testing.

On 26/11/2025 10:43, Tiago Freire via Std-Proposals wrote:
> > By your logic, int64_t should also not exist on a 32-bit
> architecture, and int16_t shouldn't exist on an 8-bit architecture
> because people should just use multi-precision arithmetic.
>
> Yes.
>

I've worked in assembly on at least a dozen architectures - I would not
want to go back.

> > This would be disastrous for writing portable code, just like it's
> disastrous for portable 128-bit arithmetic not to have a 128-bit type.
> Target-specific lowering should happen deep in the compiler backend, not
> in a high-level programming language targeting the abstract machine.
>
> Kind of. Not really.
>
> It’s a complex subject to go into detail in a short answer. But it’s
> partly using “plastic” types with predictable rules to solve most of the
> portability concerns, and partly “C++ already is not-portable” and this
> would not make it much worse in that aspect.

/Parts/ of C++ (and the same thing here applies to C) are not portable -
other parts are. More importantly, C++ lets you write code that is
portable, code that is semi-portable (such as "works on compilers with
these extensions", "works on all systems with 32-bit int" or "works on
systems with more than X bytes of ram"), and code that is never intended
to be portable. It is fair to say that most C++ /programs/ are not
portable - but a large proportion of C++ /code/ is quite widely
portable. Programs usually consist of a mixture of non-portable and
portable parts.

>
> We often like to pretend that computers don’t have limitations and that
> resources are unlimited, but that is simply not true and becomes ever
> more apparent the less of it you have.

Yes, we programmers often like to pretend that computers do not have the
limitations that apply to their lowest level - because that is what
programming and programming languages are all about! We pretend that the
computer can handle strings, so that we can write our "Hello, world!"
programs and ignore the fact that deep down, it's all ones and zeros.
Abstraction is the key to getting anything done in programming.

And yes, we sometimes pretend that there are no limitations at all.
That can come at the cost of performance or efficiency - if you know
your target has certain limitations, then taking that into account when
writing your code can lead to higher performance on that target.
Pretending there are no limitations, on the other hand, can make code
simpler and more flexible at times. Both viewpoints have their merits
for different use-cases.

A language definition - the C++ standards - should not be concerned
with limitations. It should /allow/ limitations where those are
helpful to efficient implementations, but it should not impose arbitrary
limitations. Thus it allows "int" to be as limited as 16 bits, but it
does not place an upper limit on how big "int" can be.

>
> C++ isn’t what I would consider a “high-level programming language”. It
> may have complex constructs, but what it aims to produce is stuff that
> runs on bare metal. It’s not code that you can compile once and run
> every machine, that’s why you have to explicitly specify ints of
> different sizes instead of “generic number”.
>

I don't think there is any good agreed-upon definition of what is a
"high level programming language" and what is not. My own thinking is
that a language is "high level" if it is defined in terms of an abstract
machine rather than the underlying hardware. With that, both C and C++
are clearly high-level languages. C++ supports many more abstractions
and higher-level constructs than C - therefore it is (IMHO) a higher
level language than C. Python supports more high-level constructs and
does so in a simpler, more integrated way, thus Python is higher level
than C++. And so on.

C++ is also a compiled language, and a language intended to be suitable
for efficient, low-level and systems programming. Thus it also has to
support "close to the metal" concepts and programming. Remember the
justification for the design of C - it was not designed to be a
"portable assembly" or to replace assembly language, it was designed to
reduce the need to write assembly. It was designed as a high level
language that could be used to write portable code, and also used to
write low-level non-portable code. C++ follows and expands upon that.

AFAIK there is actually nothing (other than human resources) hindering
C++ from having a "generic number" type. It could have a class for
arbitrary precision integers, akin to the std::string class for
arbitrary length character arrays. And that new "number" type could be
used in code instead of "int". It would be less efficient, but it could
certainly be done.
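To make the analogy concrete, here is a minimal sketch of what such a
"number" class might look like - a hypothetical BigUint storing base-2^32
limbs the way std::string stores characters. The name and design are
illustrative assumptions, not an actual proposal; only addition is shown:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical arbitrary-precision unsigned integer, stored as base-2^32
// limbs, least significant first (illustrative sketch only).
class BigUint {
public:
    BigUint(std::uint64_t v = 0) {
        limbs_.push_back(static_cast<std::uint32_t>(v));
        limbs_.push_back(static_cast<std::uint32_t>(v >> 32));
        trim();
    }

    BigUint& operator+=(const BigUint& rhs) {
        limbs_.resize(std::max(limbs_.size(), rhs.limbs_.size()) + 1, 0);
        std::uint64_t carry = 0;
        for (std::size_t i = 0; i < limbs_.size(); ++i) {
            std::uint64_t sum = carry + limbs_[i]
                + (i < rhs.limbs_.size() ? rhs.limbs_[i] : 0);
            limbs_[i] = static_cast<std::uint32_t>(sum);  // keep low 32 bits
            carry = sum >> 32;                            // propagate the rest
        }
        trim();
        return *this;
    }

    // Truncate to the low 64 bits (for demonstration only).
    std::uint64_t to_u64() const {
        std::uint64_t hi =
            limbs_.size() > 1 ? (std::uint64_t(limbs_[1]) << 32) : 0;
        return hi | limbs_[0];
    }

private:
    void trim() {
        while (limbs_.size() > 1 && limbs_.back() == 0) limbs_.pop_back();
    }
    std::vector<std::uint32_t> limbs_;
};
```

The growth-on-demand behaviour is exactly the std::string analogy: the value
never overflows, it just acquires more limbs - and pays for that flexibility
in speed compared to a fixed-width "int".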

> While the goal of making code portable is noble, my
> perspective/philosophy to achieve this isn’t to “give a man a fish”, but
> “give them the tools to fish for themselves”.
>
> I’m more than capable of solving this problem by myself if I had the
> right tools.
>

I am all in favour of having the right tools. It would be good for C++
to gain functions designed for making multi-precision integers or other
related uses - add_with_carry() functions and the like.
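As an illustration of the kind of tool meant here - the add_with_carry()
name is taken from the paragraph above, and this portable sketch is an
assumption about what such a function might look like, not a proposed
signature - carry-propagating addition is the building block from which
any wider addition can be assembled:

```cpp
#include <cstdint>

// Hypothetical portable add_with_carry(): returns a + b + carry_in and
// reports the carry-out. A standard version could map directly onto
// ADC-style instructions on targets that have them.
inline std::uint64_t add_with_carry(std::uint64_t a, std::uint64_t b,
                                    unsigned carry_in, unsigned* carry_out) {
    std::uint64_t sum = a + b;
    unsigned c = sum < a;                 // carry out of a + b
    sum += carry_in;
    c |= sum < std::uint64_t(carry_in);   // carry out of adding carry_in
    *carry_out = c;
    return sum;
}

// 128-bit addition glued together from two 64-bit limbs.
struct U128 { std::uint64_t lo, hi; };

inline U128 add128(U128 x, U128 y) {
    unsigned c0, c1;
    U128 r;
    r.lo = add_with_carry(x.lo, y.lo, 0, &c0);
    r.hi = add_with_carry(x.hi, y.hi, c0, &c1);  // c1 discarded: wrap at 128
    return r;
}
```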

But it would be ridiculous to insist that C++ programmers use these
themselves in normal coding. In the C world, everyone had to make their
own linked-list types and other collections - in the C++ world, you can
use the standard library types and spend your time doing something more
useful than re-inventing the same wheel thousands of others have
invented before you.

It's fine to suggest that the C++ standard should have the tools needed
to make integers and efficient integer operations of any size. But it
must also provide integers of a wide range of sizes because that's what
programmers need.

When I am hungry, I want a fish - I don't want a fishing rod.

> I have lost count of how many times the topic of 128bit ints has been
> brought up, how many times this was never enough, and how many times
> this has failed.
>
> And yet there seems to be a consensus on insisting to go down the path
> of just providing a type that does magical operations that are
> inaccessible to regular programmers, instead of giving access to users
> to do those operations that your CPU have been designed to do (to
> address this exact problem) for decades.

This is not an either/or choice - we can do both. The OP is only
talking about one path, but it does not exclude the other.

> How many more decades do we need to realize that this is not working?
> How much longer do we need to wait for C++ to catch up on being able to
> do something your computer could do even before C++ was a thing?
>

I've been using C and C++ integer sizes bigger than int8_t quite happily
on 8-bit microcontrollers for decades. It seems to be working fine.
The "standard integer type" system in C and C++ scales badly, which has
put complications and roadblocks in the way of on moving beyond 64-bit
integer types, but there seems to be a way past that now with _BitInt in
place in C and coming to C++. There might not be any point in having
new extended integer types like a 128-bit type, but it might still be of
interest if some implementers can handle it better than _BitInt(128).
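For comparison, this is roughly the hand-rolled code that a built-in
128-bit type or _BitInt(128) replaces with a single '*' - a 64x64 -> 128
multiply assembled from 32-bit halves (an illustrative sketch, not any
particular implementation):

```cpp
#include <cstdint>

// Result of a widening 64x64 -> 128 multiply, as two 64-bit halves.
struct Mul128 { std::uint64_t lo, hi; };

inline Mul128 mul_64x64(std::uint64_t a, std::uint64_t b) {
    std::uint64_t a_lo = a & 0xFFFFFFFFu, a_hi = a >> 32;
    std::uint64_t b_lo = b & 0xFFFFFFFFu, b_hi = b >> 32;

    std::uint64_t p0 = a_lo * b_lo;   // contributes to bits 0..63
    std::uint64_t p1 = a_lo * b_hi;   // contributes to bits 32..95
    std::uint64_t p2 = a_hi * b_lo;   // contributes to bits 32..95
    std::uint64_t p3 = a_hi * b_hi;   // contributes to bits 64..127

    // Sum the three partial products that overlap bits 32..63.
    std::uint64_t mid = (p0 >> 32) + (p1 & 0xFFFFFFFFu) + (p2 & 0xFFFFFFFFu);

    Mul128 r;
    r.lo = (mid << 32) | (p0 & 0xFFFFFFFFu);
    r.hi = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
    return r;
}
```

A sequence like this is exactly what the compiler middle-end cannot easily
recognise as "one 128-bit multiplication", which is the optimisation
argument made elsewhere in this thread.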

David



> *From:*Jan Schultke <janschultke_at_[hidden]>
> *Sent:* Wednesday, November 26, 2025 09:43
> *To:* std-proposals_at_[hidden]
> *Cc:* Tiago Freire <tmiguelf_at_[hidden]>
> *Subject:* Re: [std-proposals] Extended precision integers
>
> long long long is dead on arrival. “long long” is already considered
> a ridiculous repetition of keywords, just give it a proper name.
>
> People would just use std::int_least128_t or std::int128_t in practice,
> so the aesthetics of the "long long long" spelling don't matter that much.
>
> You will soon realize that RSA 1024 is 1024 bits, so you also need
> the 256bits/512bits and then the 1024 bit numbers.
>
> And next you will be asking for a “long long long long long long”
> (did I get the number of longs right?… I think so)
>
> That's why the proposal doesn't make much sense despite 128-bit being
> well-motivated; _BitInt can be used for any width.
>
> What OP is proposing doesn't even seem to be a mandatory minimum of 128
> bits, but a type that has the same minimum as long long, but is
> recommended to be wider. I think this will just lead to an unreliable,
> non-portable type.
>
> In any case implementing a type that has no hardware equivalent is a
> bad move. Especially because this is an A/B problem.
>
> What you want is the ability to do multi-precision arithmetic,
> something that computers have been able to do for years, and to do
> that you think you need a bigger type, instead of actually making
> multi-precision available. To make things worse a bigger type
> doesn’t actually allow you to do multi-precision any better
>
> There is nothing special you can solve with 128bits that you
> couldn’t have solved with 64; the only reason you just don’t do it
> with 64bits is because you don’t know how to implement higher bit
> count operations with a lower number of bits. The number 128 isn’t
> magical. It will not all of a sudden make things work.
>
> What you want is this:
> https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3161r3.html
>
> There are plenty of problems that you need 128 bits for, and definitely
> won't need any more for. For instance, implementing 64-bit modular
> arithmetic, implementing 128-bit decimal floating-point, time and
> currency calculations (64-bit often isn't enough but 128-bit is almost
> too much), etc.
>
> Having to do 128-bit arithmetic by gluing together two 64-bit integers
> is operating at the wrong level of abstraction anyway. It makes many
> optimizations that operate on integers totally impossible because the
> middle-end is robbed of the ability to tell that something is a 128-bit
> operation, rather than a long sequence of 64-bit operations. It's
> extremely important that LLVM has an i128, i256, etc. type so that you
> only lower to 64-bit in the compiler backend, while enabling all the
> N-bit integer mathematical optimizations.
>
> By your logic, int64_t should also not exist on a 32-bit architecture,
> and int16_t shouldn't exist on an 8-bit architecture because people
> should just use multi-precision arithmetic. This would be disastrous for
> writing portable code, just like it's disastrous for portable 128-bit
> arithmetic not to have a 128-bit type. Target-specific lowering should
> happen deep in the compiler backend, not in a high-level programming
> language targeting the abstract machine.
>
>

Received on 2025-11-26 11:46:42