Re: [std-proposals] Extended precision integers

From: Jan Schultke <janschultke_at_[hidden]>
Date: Wed, 26 Nov 2025 11:01:59 +0100
> > By your logic, int64_t should also not exist on a 32-bit architecture,
> and int16_t shouldn't exist on an 8-bit architecture because people should
> just use multi-precision arithmetic.
>
> Yes.
>

Then you seem to be working in the wrong language; even LLVM IR isn't
low-level enough for you, because it still has abstract integers
independent of the target. Is x86_64 assembly low-level enough for you, or
is that still too distant from the microcode and transistors?


> > This would be disastrous for writing portable code, just like it's
> disastrous for portable 128-bit arithmetic not to have a 128-bit type.
> Target-specific lowering should happen deep in the compiler backend, not in
> a high-level programming language targeting the abstract machine.
>
> Kind of. Not really.
>
> It’s a complex subject to go into detail in a short answer. But it’s
> partly using “plastic” types with predictable rules to solve most of the
> portability concerns, and partly “C++ already is not-portable” and this
> would not make it much worse in that aspect.
>
> We often like to pretend that computers don’t have limitations and that
> resources are unlimited, but that is simply not true and becomes ever more
> apparent the less of it you have.
>

No one claims that resources are unlimited, but it's usually better to
emulate things not directly available in hardware at some cost rather than
leaking hardware limitations into the programming language.

By your logic, the "/" operator should not exist on a target without an
"idiv" instruction for integers, and "*" should not exist on a target with
no hardware multipliers. I'd rather write high-level code and have the
guarantee that these things work, possibly with a bit of cost.
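
To make that emulation cost concrete: a 64x64-to-128-bit multiply can be
assembled from 32-bit halves with the schoolbook method, which is roughly the
lowering a compiler backend emits when the target has no wide multiplier. A
minimal sketch (the function name is mine, not from any proposal):

```cpp
#include <cstdint>
#include <utility>

// Schoolbook 64x64 -> 128-bit multiply built from 32-bit halves,
// the kind of lowering a backend performs on narrow hardware.
// Returns {high 64 bits, low 64 bits}.
std::pair<std::uint64_t, std::uint64_t>
mul_64x64_128(std::uint64_t a, std::uint64_t b) {
    const std::uint64_t a_lo = a & 0xFFFFFFFFu, a_hi = a >> 32;
    const std::uint64_t b_lo = b & 0xFFFFFFFFu, b_hi = b >> 32;

    // Four 32x32 -> 64-bit partial products.
    const std::uint64_t lo_lo = a_lo * b_lo;
    const std::uint64_t hi_lo = a_hi * b_lo;
    const std::uint64_t lo_hi = a_lo * b_hi;
    const std::uint64_t hi_hi = a_hi * b_hi;

    // Sum the middle column; this cannot overflow 64 bits.
    const std::uint64_t cross = (lo_lo >> 32) + (hi_lo & 0xFFFFFFFFu) + lo_hi;

    const std::uint64_t high = hi_hi + (hi_lo >> 32) + (cross >> 32);
    const std::uint64_t low  = (cross << 32) | (lo_lo & 0xFFFFFFFFu);
    return {high, low};
}
```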

> C++ isn’t what I would consider a “high-level programming language”. It may
> have complex constructs, but what it aims to produce is stuff that runs on
> bare metal. It’s not code that you can compile once and run every machine,
> that’s why you have to explicitly specify ints of different sizes instead
> of “generic number”.
>

It is by design a high-level programming language targeting an abstract
machine. Even lower-level languages like LLVM IR abstract over the target's
native integers, and so do higher-level languages like Java. Fixed-width
integers are important for performance in any case; it is too difficult to
optimize around an arbitrary-precision integer like the one Python has. It's
not really about "bare metal"; otherwise Java wouldn't have fixed-width
integers.
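
For what it's worth, GCC and Clang already expose this abstraction on 64-bit
targets through the non-standard `__int128` extension, and the compiler
synthesizes the 128-bit operations out of 64-bit ones. A small sketch of what
that enables today (the helper name is mine):

```cpp
#include <cstdint>

// High 64 bits of a 64x64-bit product, via the GCC/Clang __int128
// extension. The compiler emits a single widening multiply where the
// hardware has one, and emulates it where it doesn't.
std::uint64_t mulhi(std::uint64_t a, std::uint64_t b) {
    return static_cast<std::uint64_t>(
        (static_cast<unsigned __int128>(a) * b) >> 64);
}
```

A standard extended-precision type would make exactly this kind of code
portable instead of compiler-specific.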

> While the goal of making code portable is noble, my perspective/philosophy
> to achieve this isn’t to “give a man a fish”, but “give them the tools to
> fish for themselves”.
>
> I’m more than capable of solving this problem by myself if I had the right
> tools.
>

The "fishing" in question here is generating target-specific assembly. Even
knowing how to massage C++ to do it, I would delegate that job to an
optimizer instead of trying to do that in my high-level language code.

> I have lost count of how many times the topic of 128bit ints have been
> brought up, how many times this was never enough, and how many times this
> has failed.
>

Is there any evidence for it having failed? I don't see Rust users
complaining about i128 being a feature.

> And yet there seems to be a consensus on insisting to go down the path of
> just providing a type that does magical operations that are inaccessible to
> regular programmers, instead of giving access to users to do those
> operations that your CPU have been designed to do (to address this exact
> problem) for decades.
>

Yes, we give developers multiplication, division, long long, std::popcount,
and a million other things that don't directly correspond to anything in
hardware and may need to be emulated. This is because the cost of recreating
these things in user code is absurdly higher than the few CPU cycles the
compiler spends figuring it out. If the design approach of the last 40 years
of C and C++ development isn't to your liking, feel free to hand-write
assembly. I'm not aware of anything lower-level than that which would satisfy
your bottomless hunger for eliminating abstractions.


> How many more decades do we need to realize that this is not working? How
> much longer do we need to wait for C++ to catch up on being able to do
> something your computer could do even before C++ was a thing?
>

It's worked for the last 4+ decades. Maybe it will stop working in a few
more, but I'd wager on the trend continuing.



Received on 2025-11-26 10:02:15