Date: Sun, 11 Feb 2024 18:32:36 +0000
On Sun, 11 Feb 2024, 17:22 Jan Schultke, <janschultke_at_[hidden]> wrote:
>
> > _BitInt behaves very differently from other integer types, e.g. no
> promotions or implicit conversions.
>
> Well, that's true, but what I meant is that the infrastructure for
> multi-precision arithmetic already has to exist in compilers. It had
> to exist to a limited extent so that long long could be
> software-emulated on 32-bit or 8-bit platforms, and _BitInt extended
> the required infrastructure even more.
>
> Keep in mind that the proposal does not make uint128_t mandatory, only
> uint_least128_t, so the type could be _BitInt(128) in disguise (with
> traditional conversion/promotion rules).
>
> Does GCC support any targets where bytes are not octets?
No
> If not, the
> implementation effort is still relatively limited because at least the
> type is padding-free and a power-of-two-multiple of the byte size.
>
> I still fail to see a strong argument against making it mandatory.
> Even if it has to be software-emulated, that's more of a performance
> issue that the user has to keep in mind, than a problem for compilers.
> Why is long long not optional on 32-bit targets if software emulation
> is not acceptable?
>
Yup, fair point. But implementing int128 with 32-bit registers is going to
be even slower than implementing long long. You need 16 32-bit
multiplications for a single 128-bit multiplication, I think?
> > I don't think that's conforming. I don't think the standard gives
> permission for those large differences to not work.
>
> Based on what Niebler says in
> https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1522r1.pdf
> §4.2, as far as I understand it, implementers can just define the
> difference_type to not cover the whole range and ignore the problem.
>
Ugh, yuck. I guess it was just wishful thinking on my part that we require
it to work.
> > I still disagree. No existing ISO C++ code uses that type, so adding the
> overload for it does not change the meaning of any valid code.
>
> The ISO C++ code that I've provided in the example uses that type and
> its meaning would be changed. There may not be a single C++ compiler
> that would accept the program, but still.
>
> We're really getting deep into technicalities here though. Yes, the
> implementation could provide those extra overloads without breaking
> any *compiling* code. Regardless, the wording changes in the proposal
> should be made so that it's not just *effectively* OK, but *actually*
> OK.
>
I think I might have misunderstood the original point, which was that
adding to_string(int128_t) is not allowed on an implementation that does
provide that type. I am claiming that it's OK to add to_string(__int128)
today on implementations that don't provide std::int128_t. But I think
we're actually agreeing, because my point is only true for an __int128
extension and not for std::int128_t. Which I think is the point you were
making, sorry!
You can only add overloads for the type if you don't support the type with
a standard name.
Received on 2024-02-11 18:33:57