Re: [std-proposals] Slow bulky integer types (128-bit)

From: Alejandro Colomar <alx.manpages_at_[hidden]>
Date: Thu, 30 Mar 2023 11:26:03 +0200
On 3/30/23 10:58, David Brown via Std-Proposals wrote:
> On 29/03/2023 17:18, Arthur O'Dwyer via Std-Proposals wrote:
>
>> A new `uintmax_extended_t` (or whatever) can communicate properly from
>> the get-go: "Hey! This type will change in the future! Don't build it
>> into your APIs!"
>> But then, if you aren't using this type in APIs, then where /*are*/ you
>> using it, and why does it need to exist in the standard library at all?
>>
> That, I think, is the key point - /why/ would you want a "maximum size
> integer type" ?

For example to printf(3) an off_t variable.
Or to write a [[gnu::always_inline]] function that accepts any integer.
Or to write a type-generic macro that handles any integer.
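The first of those uses can be sketched like this: since off_t has no printf
conversion specifier of its own, the portable idiom is to convert to intmax_t
and print with the "j" length modifier (format_offset is a hypothetical helper
name, not a libc function):

```c
#include <inttypes.h>   /* intmax_t */
#include <stdio.h>      /* snprintf, size_t */
#include <sys/types.h>  /* off_t (POSIX) */

/* Format an off_t portably by going through intmax_t and "%jd". */
static int format_offset(char *buf, size_t size, off_t off)
{
    return snprintf(buf, size, "%jd", (intmax_t) off);
}
```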

The addition of functions that handle [u]intmax_t was the design mistake.
If they had been added as macros, we wouldn't be discussing ABI issues,
because macros don't have an ABI. Of course, the problem is that _Generic
was only added in C11, while intmax_t was added in C99, so they had to be
functions. History sucks.
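A minimal sketch of the macro alternative: a _Generic macro dispatches on the
operand type at compile time, so no intmax_t-shaped function signature (and
hence no ABI) is involved (my_abs is a hypothetical name, not a standard
macro):

```c
#include <stdlib.h>  /* abs, labs, llabs */

/* Type-generic absolute value as a macro: each call compiles down to
   the function matching the operand's type, so widening the widest
   integer type later wouldn't break any binary interface here. */
#define my_abs(x) _Generic((x),         \
        int:       abs,                 \
        long:      labs,                \
        long long: llabs)(x)
```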

Still, I think the point of the reform that made intmax_t the widest
*standard* integer type (or however they call it technically), as opposed
to the widest integer type including extended ones, is that it should be
suitable for printing most libc variables of odd types such as id_t or
off_t. And that can have a fixed ABI, because I don't expect libc to
start using int128 for anything anyway. Then, if we get a new
intwidest_t that covers all extended integer types (and maybe, or maybe
not, the bit-precise ones), that one would be useful for other things,
but it would come with a strong requirement of not being usable in
anything that sets an ABI.

>
> I hope that intmax_t gets deprecated, and I hope never to see any
> replacement versions in the C or C++ standards, regardless of how
> clearly they are defined. The size of an integer that you want in your
> code is determined by what you want to store in it - not the biggest
> size some implementation happens to support.
>
> In my eyes, the appropriate starting point for C++ integer types would
> be templates:
>
> Int_t<N>
> Int_fast_t<N>
> Int_least_t<N>

What's the use of int_least_t? Now that C23 will require two's
complement (and char is 8 bits essentially everywhere), it's hard to
think of an implementation that doesn't have the fixed-width types.

Even the int_fast_t types are already dubious, since what is fastest
depends on context. Usually, if you don't care about the width of your
ints, you can just use int or long and you'll be fine. Why would you
type int_fast16_t instead of int? I bet you're not going to notice when
int is not the size of a word.
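To make the dubiousness concrete: the only portable guarantees for these
typedefs are minimum widths; the concrete sizes are an ABI choice made once
per platform, not per use. On x86-64 glibc, for instance, int_fast16_t is 8
bytes wide while int is 4 (a platform-specific observation, not a guarantee):

```c
#include <limits.h>  /* CHAR_BIT */
#include <stdint.h>  /* int_least16_t, int_fast16_t */
#include <stdio.h>

/* Print the actual sizes the ABI picked for the "16-bit" typedefs.
   Only the minimum widths (>= 16 bits) are guaranteed by the standard. */
static void show_16bit_type_sizes(void)
{
    printf("int_least16_t: %zu bytes\n", sizeof(int_least16_t));
    printf("int_fast16_t:  %zu bytes\n", sizeof(int_fast16_t));
    printf("int:           %zu bytes\n", sizeof(int));
}
```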

>
> matching the standard int32_t, int_fast32_t, int_least32_t types (and
> all similar types). They should support N as powers of 2 from 8
> upwards, but implementations could support other sizes if they want (the
> "fast" and "least" types will always support other sizes, rounding up as
> needed).
>
> (I'd also like variants with different overflow behaviours, but that's
> an orthogonal issue.)
>
>
> That would give a simple and consistent system that lets you have types
> the size you want, and use them safely in API's.

Nothing is really safe, because of the default integer promotions. That's
the original sin, especially when you want 16-bit bitwise operations.
_BitInt() is somewhat better than the fundamental types, since it doesn't
promote to int. But you can't typedef it.
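The original sin in action: operands narrower than int are promoted to int
before ~ is applied, so a "16-bit" bitwise operation silently happens in int
width. This sketch assumes int is wider than 16 bits (true on mainstream
hosted platforms); the helper names are hypothetical:

```c
#include <stdint.h>  /* uint16_t */

/* Surprising: ~x is computed after x is promoted to int, so for
   x == 0x00FF the result is ~0x000000FF == -256, not 0xFF00. */
static int naive_invert_matches(uint16_t x, uint16_t expected)
{
    return ~x == expected;
}

/* Casting back to uint16_t truncates to 16 bits, restoring the
   result the programmer probably intended. */
static int truncated_invert_matches(uint16_t x, uint16_t expected)
{
    return (uint16_t) ~x == expected;
}
```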

> The same type would
> have the same size whether you are using an 8-bit AVR or a 128-bit
> RISC-V (if anyone actually makes one).
>
> Things are more difficult in C, since there are no templates (_Generics
> are not extensible or recursive, and cannot be parametrised by
> integers). But again, the maximum size integer types were a mistake IMHO.
>
>

Received on 2023-03-30 09:26:06