Date: Sun, 11 Feb 2024 19:19:26 +0100
> If you change that to a division, Clang inserts a very long, looping
> code that fortunately has no actual divisions.
That's basically what I would have expected, and I don't see it as a
problem. The user has the agency to decide which integer width they
use. If they're on a 32-bit platform and decide to use 128-bit
arithmetic, it's their fault if it's slow.
It's not like _BitInt(N) was rejected because someone could perform a
_BitInt(1024) division and it would be slow.
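To make that concrete, here is a sketch of my own (not from the quoted
mail) of the kind of code presumably under discussion, written with the
_BitInt extension, which Clang accepts in C and, as an extension, in
C++:

    // Per the observation quoted above, on a 32-bit target Clang
    // lowers this to a long shift-and-subtract style loop with no
    // actual divide instructions.
    unsigned _BitInt(128) div128(unsigned _BitInt(128) a,
                                 unsigned _BitInt(128) b) {
        return a / b;
    }

The codegen just gets longer for wider operands such as _BitInt(1024);
the principle is the same.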
> I should also note that current GCC trunk still has no support for _BitInt (that I can see).
GCC only supports _BitInt in C mode at this time (for up to
_BitInt(65535)). It's simply not enabled in C++ mode.
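For reference, a sketch (mine) of what that C-mode support covers;
compile as C, e.g. gcc -std=c23:

    // GCC accepts these only when compiling as C; it currently
    // rejects _BitInt in C++ mode.
    _BitInt(1024) big = 1;             // any width up to the limit
    unsigned _BitInt(65535) huge = 0;  // the upper bound mentioned above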
> So on a hypothetical 9-bit CPU, it would likely still be an 8-byte type (72 bits).
Well, a 72-bit type would be different from supporting 128-bit
integers, but that's not what I'm proposing anyway. On a 9-bit CPU,
int_least128_t could be a 16-byte type (144 bits), so the approach is
the same in principle.
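In code, the only promise a least-width alias makes could be checked
along these lines (int_least128_t is the proposed name; the alias to
__int128 is just a placeholder for illustration and needs a compiler
that provides __int128):

    #include <climits>

    // Placeholder for whatever an implementation would pick; on the
    // hypothetical 9-bit-byte CPU a 16-byte choice gives 144 bits,
    // which still satisfies the check below.
    using int_least128_t = __int128;

    static_assert(sizeof(int_least128_t) * CHAR_BIT >= 128,
                  "int_least128_t only promises at least 128 bits");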
> Then it's an architectural decision, not the Standard Library's.
I think what Jonathan was sceptical of is whether you can have a
difference_type that is unable to represent all possible differences.
Indeed, https://eel.is/c++draft/iterator.requirements#iterator.concept.random.access
does not seem to mandate that (a - b) is a valid way to obtain the
difference between iterators, even for a random access iterator. The
Cpp17RandomAccessIterator requirements also do not require this to be
possible. They only assume that you already have a valid difference,
and mandate that you can use it for iterator arithmetic, etc.
Therefore, it's totally valid for iota_view<long> to have signed char
as its difference_type; it would just be dumb and evil.
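On that note, here is a quick way to see what an implementation
actually picks for the difference type of an unbounded iota over a
64-bit integer (the output is implementation-specific, but in practice
implementations use a 128-bit type here, which is why 128-bit
arithmetic comes up at all):

    #include <iostream>
    #include <ranges>

    int main() {
        auto v = std::views::iota(0LL);  // unbounded range over long long
        using D = std::ranges::range_difference_t<decltype(v)>;
        // Typically prints 16: either __int128 or a 128-bit
        // integer-class type, depending on the implementation.
        std::cout << "sizeof(difference_type) = " << sizeof(D) << "\n";
    }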
Received on 2024-02-11 18:20:08