Re: [std-proposals] 128-bit integers

From: Thiago Macieira <thiago_at_[hidden]>
Date: Sun, 11 Feb 2024 10:06:42 -0800
On Sunday, 11 February 2024 09:20:09 PST Jan Schultke via Std-Proposals wrote:
> > _BitInt behaves very differently from other integer types, e.g. no
> > promotions or implicit conversions.
> Well, that's true, but what I meant is that the infrastructure for
> multi-precision arithmetic already has to exist in compilers. It had
> to exist to a limited extent so that long long could be
> software-emulated on 32-bit or 8-bit platforms, and _BitInt extended
> the required infrastructure even more.

Correct: it exists *to a limited extent*.

That extent is "twice the size of the platform register" (a.k.a., a double
word). That means both libgcc and libcompiler-rt support 64-bit on 32-bit
platforms and that's what enables 128-bit on 64-bit platforms "for free".

You're right that support for _BitInt has forced implementations to extend
that somewhat. But no one using those types expects them to be fast. Just look
at what Clang generates for multiplying two 128-bit integers on 32-bit and
compare it to 64-bit: two multiplications become ten.
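To see why the instruction count balloons, here is a hedged sketch (not the actual libgcc/compiler-rt code) of the "schoolbook" decomposition a narrow target must use: a 64x64 -> 128-bit multiply built from 32-bit halves. Four partial products plus carry propagation replace what a wide machine does in one or two instructions; doubling the width again repeats the same expansion.

```cpp
#include <cstdint>
#include <utility>

// Illustrative only: 64x64 -> 128-bit multiply using 32-bit limbs, the shape
// of code a 32-bit target must emit. Four partial products plus carry fixup.
std::pair<std::uint64_t, std::uint64_t>  // {high, low}
mul64x64_to_128(std::uint64_t a, std::uint64_t b)
{
    std::uint64_t a_lo = a & 0xFFFFFFFFu, a_hi = a >> 32;
    std::uint64_t b_lo = b & 0xFFFFFFFFu, b_hi = b >> 32;

    std::uint64_t p0 = a_lo * b_lo;  // contributes to bits 0..63
    std::uint64_t p1 = a_lo * b_hi;  // bits 32..95
    std::uint64_t p2 = a_hi * b_lo;  // bits 32..95
    std::uint64_t p3 = a_hi * b_hi;  // bits 64..127

    // Sum the middle column, keeping the carry that spills into the high word.
    std::uint64_t mid = (p0 >> 32) + (p1 & 0xFFFFFFFFu) + (p2 & 0xFFFFFFFFu);
    std::uint64_t lo  = (p0 & 0xFFFFFFFFu) | (mid << 32);
    std::uint64_t hi  = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
    return {hi, lo};
}
```

Count the multiplies: four here for one doubling of the width, and a 128x128 multiply on a 32-bit machine nests this again, which is roughly where Clang's instruction count comes from.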

If you change that to a division, Clang emits a very long loop that,
fortunately, contains no actual division instructions.
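The loop in question has the shape of classic shift-subtract ("restoring") division. As a hedged sketch, here is that algorithm at 64-bit width, assuming no hardware divider covers the operand size: one compare-and-subtract per bit, and no divide instruction anywhere.

```cpp
#include <cstdint>

// Illustrative shift-subtract division: one iteration per dividend bit.
// Precondition: d != 0. This is the general shape of the fallback loop a
// compiler's runtime library uses for oversized operands, not its exact code.
std::uint64_t shift_subtract_div(std::uint64_t n, std::uint64_t d,
                                 std::uint64_t* rem_out = nullptr)
{
    std::uint64_t q = 0, r = 0;
    for (int i = 63; i >= 0; --i) {
        r = (r << 1) | ((n >> i) & 1);   // bring down the next dividend bit
        if (r >= d) {                    // trial subtraction succeeds
            r -= d;
            q |= std::uint64_t{1} << i;  // set the corresponding quotient bit
        }
    }
    if (rem_out) *rem_out = r;
    return q;
}
```

At 128 bits that is 128 iterations of compare/subtract/shift, which is why the emitted code is long and slow but division-free.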

I should also note that current GCC trunk still has no support for _BitInt
(that I can see).

> Does GCC support any targets where bytes are not octets? If not, the
> implementation effort is still relatively limited because at least the
> type is padding-free and a power-of-two-multiple of the byte size.

As far as I know, no. But even if it did, it would be no different from
supporting int_least64_t: it would be the smallest contiguous block that
matches or exceeds 64 bits. So on a hypothetical 9-bit CPU, it would likely
still be an 8-byte type (72 bits).

Note: there would be no problem on a 16-bit- or 32-bit-byte CPU. I think there
are DSPs like that out there.
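The sizing rule being described is just a ceiling division by the byte width. A small sketch, with the hypothetical 9-bit and 16-bit byte sizes from the text passed in explicitly:

```cpp
#include <climits>

// How many bytes does the smallest type of at least `bits` value bits need,
// given a byte of `char_bit` bits? Plain ceiling division.
constexpr int bytes_for_least(int bits, int char_bit = CHAR_BIT)
{
    return (bits + char_bit - 1) / char_bit;
}

// The hypothetical platforms from the discussion above:
static_assert(bytes_for_least(64, 9)  == 8,  "9-bit bytes: 8 bytes = 72 bits");
static_assert(bytes_for_least(64, 16) == 4,  "16-bit bytes: exactly 64 bits");
static_assert(bytes_for_least(64, 32) == 2,  "32-bit bytes: exactly 64 bits");
```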

> I still fail to see a strong argument against making it mandatory.
> Even if it has to be software-emulated, that's more of a performance
> issue that the user has to keep in mind, than a problem for compilers.
> Why is long long not optional on 32-bit targets if software emulation
> is not acceptable?

It used to be, before C99 made it mandatory. I'm old enough to remember having
to work with a compiler that had no 64-bit integer type, though not
professionally. By the time I began coding for a living, 64-bit types were a
given... just not under that name. That's also why long is 64 bits on 64-bit
Unix: those OSes existed before long long did.

> > I don't think that's conforming. I don't think the standard gives
> > permission for those large differences to not work.
> Based on what Niebler says in
> https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1522r1.pdf
> 4.2, as far as I understand it, implementers can just define the
> difference_type to not cover the whole range and ignore the problem.

Isn't the difference type of the standard allocators required to be ptrdiff_t?
Then it's an architectural decision, not the Standard Library's.
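The premise can be checked directly: std::allocator's difference_type is std::ptrdiff_t, so containers using the default allocator inherit the architecture's pointer-difference width.

```cpp
#include <cstddef>
#include <memory>
#include <type_traits>

// The default allocator reports ptrdiff_t as its difference_type, so the
// range of representable differences is fixed by the architecture's
// pointer-difference width, not chosen by the Standard Library.
static_assert(std::is_same_v<
    std::allocator_traits<std::allocator<int>>::difference_type,
    std::ptrdiff_t>);
```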

Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
   Software Architect - Intel DCAI Cloud Engineering

Received on 2024-02-11 18:06:44