Re: [std-proposals] 128-bit integers

From: <yang.franklin9_at_[hidden]>
Date: Sun, 11 Feb 2024 00:07:39 -0800
> That way, if there isn't a "fast" version that fits in a register (or has a corresponding subset of instructions), a warning would be emitted. Otherwise, the code can express a more relaxed width requirement just using "int_least128_t", allowing an emulation without warning.
On every non-64-bit system (most commonly 32-bit) that I know of, an emulated int_fast64_t doesn't trigger a warning (the same goes for 32- and 16-bit integers on 8-bit systems). I don't see why a "fast" 128-bit type should warn when emulated.
 
From: Std-Proposals <std-proposals-bounces_at_[hidden]> On Behalf Of Chris Gary via Std-Proposals
Sent: Saturday, February 10, 2024 6:38 PM
Cc: Chris Gary <cgary512_at_[hidden]>; std-proposals_at_[hidden]
Subject: Re: [std-proposals] 128-bit integers
 
I did a quick skim of the proposal, then searched for "int_fast128_t". Nothing showed up, though I had also moved the cursor to the end of the page.
It looks as though they're addressed as well as they could be.
 
The way I use standard integers is as I described: if I want something as close to native performance as possible, the "fast" types are appropriate, and so on. However, the lack of actual diagnostics here means that tracking down the cause of a performance issue requires not just profiling, but direct inspection of the code as well. In most cases, plain "intXXX_t" would serve equally well.
 
> They are required to be aliases in the C++ standard. How else would
> you provide them?
 
What I meant there was that they are aliases for the standard types, and that the distinction between them has little practical use despite the wording in the standard.
 
For example, one might think int64_t is an alias for __int64 in Microsoft's stdint.h, but it's a typedef for long long instead. In that same header, all of the "least" and "fast" variants are identical as well. I thought the original idea behind stdint was that its aliases could correspond to compiler intrinsics wherever that is more appropriate. There also still seems to be a problem with assuming any given architecture has a single unambiguous notion of "char", "long", "long long", etc., rather like "long double" in a non-x87 context.
 
Really, my opining here was due more to my failure to notice a blinking caret at the end of a webpage...
 
On Sat, Feb 10, 2024 at 6:16 PM Jan Schultke <janschultke_at_[hidden]> wrote:
> IMO, fixed-width multiple precision support for any reasonable bit width ought to be provided automatically

That sounds like _BitInt.
https://eisenwave.github.io/cpp-proposals/int-least128.html#bit-precise-integers
explains why I didn't choose to propose it this way. In short, it's a
massive language change and doesn't even get you 128-bit integers in
the form you want.

> The proposal might be better refined by being a bit more specific with something along the lines of "int_fast128_t".

It's unclear to me what the difference between that hypothetical
approach and my proposal is. Mandatory int_least128_t gets you
int128_t on the architectures we care about, and that's ultimately
what matters. This *is* the fast 128-bit integer type.

> That way, if there isn't a "fast" version that fits in a register (or has a corresponding subset of instructions), a warning would be emitted.

A warning saying what? "You're using int_fast128_t but it's not
fast!"? How would this be worded, and is this even in the scope of the
standard? Quality-of-implementation diagnostics generally aren't.

> That said, I haven't seen an implementation that provides the "least" or "fast" variations as anything but aliases.

They are required to be aliases in the C++ standard. How else would
you provide them?

Received on 2024-02-11 08:07:44