Date: Sun, 16 Feb 2025 19:12:19 +0100
On Sun, Feb 16, 2025, 6:45 PM Jan Schultke <janschultke_at_[hidden]>
wrote:
> > I think "without further work" is not true. If you implement these types
> in library, you need to put in the work to come up with a fast
> implementation. It probably requires platform intrinsics or assembly
> language, and a lot of experimentation if you don't have an existing fast
> implementation you're allowed to copy from.
>
> There's certainly a point to be made there, yeah.
> I will incorporate that into the draft.
>
> Nonetheless, it seems a lot easier to implement in a library using
> intrinsics than to come up with the codegen yourself. There are also
> architectural similarities. The general technique for addition is
> going to be the same on any architecture that has ADC instructions,
> for example. If you just implement __builtin_add_overflow and
> libstdc++ uses that in its implementation of operator+, you're
> basically done. Adding support for a __builtin_add_overflow intrinsic
> is much easier than implementing N-bit operation support.
>
If you're convinced that the library implementation can be as efficient as
an in-compiler implementation but with significantly less effort, then you
can implement it in the library and then special-case it in the compiler,
the way we do std::initializer_list. This is, of course, assuming that we
actually want, e.g., the overload-resolution properties that a fundamental
type would have. Yes, there is extra implementation effort, but of course
there is, because there are simply more core-language rules in this case.
The real question is whether or not those overload-resolution properties
are actually desirable.
> > Do we impose a disproportionate burden on freestanding compiler vendors
> every time we approve a new core language feature, regardless of what that
> feature is? I wasn't under the impression that that was the case, but I
> could very well be wrong.
>
> It feels disproportionate to me when the feature could be implemented
> in library with absolute certainty, but we choose not to. It would be
> a bit like requiring that the implementation supports
> multi-dimensional arrays as built-in types instead of std::mdspan and
> std::mdarray or whatever.
>
> > If you were going to implement it in library but WG21 decides to make it
> a fundamental type, couldn't you just make the ABI whatever the ABI of the
> class implementation would have been? Sorry if these are very basic
> questions, but I am sure that I am not alone in my ignorance.
>
> Yes, you could, but then you're de-facto creating an ABI for one of
> the fundamental types in your language. This is pretty consequential.
> The ABI debates about whether __int128 should have 16-byte or 8-byte
> alignment, and what the alignment of _BitInt(128) and above should be
> were pretty extensive.
>
Those debates will have to happen on a platform-by-platform basis in either
case, right? If you implement it in the library, you can't change its
alignment later without breaking ABI compatibility, just as if it were a
fundamental type.
> After all, if you're going to make a fundamental type for this, it's
> presumably going to be THE _BitInt type that is also found in C, not
> just some run-of-the-mill type. std::bit_int should be ABI-compatible
> with C's _BitInt, similar to _Atomic and std::atomic. It certainly
> feels more consequential to define an ABI for a C type than for a C++
> library class.
>
If you don't plan on ever supporting _BitInt(N) for N greater than 64, then
you can make the ABI of std::bit_int<128> (or any other N > 64) whatever you
want. It's not "defining an ABI for a C type" in that case.
If you might eventually support _BitInt(N) for N > 64, then you can't
implement std::bit_int<128> until you've decided what ABI those _BitInt
types are going to have, since once you do add them, you'll want them to
share std::bit_int's ABI.
In neither case can I see that making std::bit_int<128> a class type buys
you anything here.
> If you only ever do this as a library class and you're sure you'll
> never have _BitInt, or you don't care about breaking ABI in the
> standard library, then class std::bit_int's ABI is rather
> inconsequential. You just wing it.
>
Received on 2025-02-16 18:12:36