Re: [std-proposals] 128-bit integers

From: Jonathan Wakely <cxx_at_[hidden]>
Date: Sun, 11 Feb 2024 11:03:08 +0000
On Sun, 11 Feb 2024, 02:40 Chris Gary via Std-Proposals, <std-proposals_at_[hidden]> wrote:

> I did a quick skim of the proposal, then searched for
> "int_fast128_t" - nothing showed up, and I also moved the cursor to the end
> of the page to check.
> It looks as though they're addressed as well as they could be.
>
> The way I use standard integers is as I described: if I want something as
> close to native performance as possible, the "fast" types are appropriate,
> and so on. However, the lack of actual diagnostics here means that tracking
> down the cause of performance issues requires not just profiling, but also
> directly inspecting the code. In most cases, plain "intXXX_t" would work
> just as well.
>
> > They are required to be aliases in the C++ standard. How else would
> > you provide them?
>
> What I meant there was that they are aliases for the standard types, and
> that the distinction is not practically useful despite the wording in the
> standard.
>
> For example, one might think int64_t is an alias for __int64 in the
> Microsoft stdint.h, but it's "typedef long long" instead. In that same
> header, all of the "least" and "fast" variants are identical as well. I
> thought the original idea behind stdint was to ensure aliases might
> correspond to compiler intrinsics
>

No, they're just typedefs for existing types. The "fast" variants are not
supposed to be some magical new type, because that would be silly. Why
would the implementation not use the magical fast types for int, long, etc.
if such magical types existed?

Your mental model for those typedefs is wrong.

The fast types are just regular types. The reason they exist is that on a
given platform the instructions for operating on 32-bit types might be
faster than the instructions for 16-bit types, so int_fast16_t could be a
32-bit type on that platform. But it's just a normal integer type, not some
magic intrinsic that's faster than normal integers (because there's no such
thing).
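
To make that concrete, a small probe (illustrative only; it assumes C++17
and a mainstream implementation where the fast typedefs alias the standard
integer types rather than extended ones):

#include <climits>
#include <cstdint>
#include <cstdio>
#include <type_traits>

int main()
{
    // The width is implementation-defined: often 64 bits on 64-bit Linux,
    // 32 bits on Windows. Those widths are typical, not guaranteed.
    std::printf("int_fast16_t: %zu bits\n",
                sizeof(std::int_fast16_t) * CHAR_BIT);

    // On mainstream implementations it is an alias for one of the ordinary
    // integer types (the standard also allows extended integer types here,
    // but the major compilers don't use them for this).
    static_assert(std::is_same_v<std::int_fast16_t, short>
                  || std::is_same_v<std::int_fast16_t, int>
                  || std::is_same_v<std::int_fast16_t, long>
                  || std::is_same_v<std::int_fast16_t, long long>,
                  "int_fast16_t is just a typedef for an ordinary type");
}

Typically that prints 64 on x86-64 Linux, where glibc makes int_fast16_t an
alias for long, and 32 with MSVC, where it's an alias for int. Either way it
behaves exactly like the underlying type it names.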




> wherever this is more appropriate. There seems to still be a problem
> assuming any given architecture has a single unambiguous notion of "char",
> "long", "long long", etc. - sort of like "long double" in a non-x87
> context.
>
>
>
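
As a concrete illustration of that last point, a sketch (the widths in the
comments reflect the usual LP64 vs LLP64 conventions, not guarantees):

#include <climits>
#include <cstdio>

int main()
{
    // Under LP64 (typical 64-bit Linux/macOS) long is 64 bits, so int64_t
    // can be an alias for long; under LLP64 (64-bit Windows) long is 32
    // bits, so int64_t must be long long. "long double" likewise varies:
    // 80-bit x87 extended on Linux/x86, the same as double with MSVC.
    std::printf("long:        %zu bits\n", sizeof(long) * CHAR_BIT);
    std::printf("long long:   %zu bits\n", sizeof(long long) * CHAR_BIT);
    std::printf("long double: %zu bytes of storage\n", sizeof(long double));
}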

Received on 2024-02-11 11:04:25