>> _BitInt behaves very differently from other integer types, e.g. no promotions or implicit conversions.
>
> Well, that's true, but what I meant is that the infrastructure for
> multi-precision arithmetic already has to exist in compilers. [...]
> the proposal does not make uint128_t mandatory, only
> uint_least128_t, so the type could be _BitInt(128) in disguise (with
> traditional conversion/promotion rules).
I'm slightly confused by your and Jonathan's phrasing here.
What I'd like to see is that a vendor on a sane platform could do literally
using int64_t = long long;
using int128_t = _BitInt(128);
Are you saying that such a library implementation (even on a sane platform) would not be conforming to your proposal?
If that's what you're saying, then I'd oppose the proposal. I'd like the proposed `int128_t` and/or the (proposed/existing) `_BitInt(128)` each to move a little, so that they arrive at the same place and can be made synonymous.
Ditto the sure-to-be-proposed-in-the-future
using int256_t = _BitInt(256);
and so on. I don't want to have two subtly different hierarchies of integer types with the same object representations but subtly different behaviors!
my $.02,
–Arthur