
Re: [std-proposals] 128-bit integers

From: Jonathan Wakely <cxx_at_[hidden]>
Date: Sun, 11 Feb 2024 16:56:31 +0000
On Sun, 11 Feb 2024, 16:39 Jan Schultke, <janschultke_at_[hidden]> wrote:

> > There's a typo in the language-agnostic search: /int128|int128 is
> > missing an underscore (it's there in the actual search, just not the
> > link text in the paper).
> Thanks, I've fixed it and the other typo.
> > Better support for bitcoin is an anti-feature IMHO.
> It's not supporting Bitcoin per se, to be fair; Bitcoin is just a good
> example of a currency where the fraction is very small. What's the
> matter with "supporting Bitcoin" anyway?

It's environmentally disastrous and has no sensible use cases. The net
value to the world from Bitcoin is massively negative.

> > Existing implementations do not support 128-bit integers on all
> > platforms (and MSVC doesn't support them at all).
> Fair point. I should have pointed out that __int128 is not mandatory.
> I don't believe that this is a major problem though. The
> infrastructure for _BitInt already exists and is mandated by the C
> standard, so something like that could be used on other platforms, or
> __int128 should be made available on more.

_BitInt behaves very differently from other integer types, e.g. no
promotions or implicit conversions.

> > Adding a mandatory int_least128_t doesn't seem trivial at all.
> I agree that the effort is not trivial. However, Clang and GCC already
> support _BitInt(N) for N well over 128, so the work is essentially
> done. I'm not sure when Microsoft intends to support _BitInt and to
> what extent though. Also, it's still tremendously easier to support
> 128-bit integers than 128-bit floating-point. The overwhelming
> majority of standard library code is already robust against arbitrary
> bit sizes. Also, the infrastructure for multi-word operations must
> already exist to support long long on 32-bit platforms. I don't see
> any outrage over having to support 64-bit long long on 8-bit targets,
> so I don't see why 128-bit would be that big of an issue either.
> > std::timespec is only a 128-bit type when std::time_t is 64-bit, which
> > is not universal.
> I was under the impression that time_t is required to be unsigned long
> at this point, and that's 64-bit for Unix-like systems. Thanks for the
> correction.

Nope, unsigned long is 32-bit on 32-bit Unix-like systems, except for
little-used outliers like the x32 ABI on x86_64.

> > we already support __int128 on some platforms, but that doesn't mean we
> > can support it on all platforms
> What's the hurdle to supporting a 128-bit+ type on all platforms (not
> necessarily an exactly-128-bit type)?

It has to be emulated in software, so the performance is bad.

> > Your discussion of iota_view fails to mention that a standard int128_t
> > type would be a valid template argument for iota_view, and so would
> > require a new integer-like class type like _Signed129 (or more).
> > Libstdc++ already has such a type because we already allow iota_view
> > using 128-bit integers, so we have a difference type that's even wider.
> It doesn't mention it because implementers are free to do what they
> want in this case. I imagine that libc++ maintainers will continue
> doing what they're doing and define difference_type to be __int128.
> This is compliant, but does produce UB for large differences.

I don't think that's conforming. I don't think the standard gives
permission for those large differences to not work.

> I see it as QoI. I should have made that issue more clear in my
> proposal though and investigated further.
> > And implementations would need to do 256-bit arithmetic, e.g. for the
> > product of two 128-bit values in the transition function of
> > std::linear_congruential_engine<uint128_t, ...>. That might not require
> > wording changes to the standard but the additional implementation
> > effort is worth calling out.
> Thanks, I will take a closer look at how the RNGs in different
> libraries would be affected.
> > I don't agree with illegal. The implementation can add overloads of any
> > standard function as long as they don't affect the semantics of valid
> > ISO C++ programs.
> It would not be permitted. See the newly added example in
> https://eisenwave.github.io/cpp-proposals/int-least128.html#lifting-library-restrictions
> The heart of the issue is that the implementation can already provide
> std::int128_t, so adding overloads is altering the result of overload
> resolution from <ambiguous overload> to <success>. If std::int128_t
> was not allowed to exist in prior standards, this would not be
> relevant.

But it's implementation-defined whether they provide that typedef, and
none do. So the only way to use the type today is as an extension, and so
it's perfectly fine (in terms of conformance) to change the meaning of code
using that extension. Any implementation that doesn't provide std::int128_t
is free to support to_string(__int128).

> Of course, there are no implementations that provide std::int128_t
> yet, so no code is affected. However, if future implementations
> provided a 128-bit integer type, then it would presumably be available
> in C++11 mode and upwards. Therefore, this isn't a non-issue; it's a
> practical one.

I still disagree. No existing ISO C++ code uses that type, so adding the
overload for it does not change the meaning of any valid code.

> > Would adding bitset(same_as<unsigned __int128> auto) prevent any valid
> > ISO C++ programs from compiling?
> Yes, and the problem is the same as with to_string. Making it a
> template doesn't solve the problem.

What valid code would change meaning if I added that constructor to
libstdc++ tomorrow?

Received on 2024-02-11 16:57:49