Date: Sun, 30 Mar 2025 16:37:15 -0400
On Sun, Mar 30, 2025 at 12:22 PM Frederick Virchanza Gotham via
Std-Proposals <std-proposals_at_[hidden]> wrote:
>
> Jonathan wrote:
> > This certainly looks like it's checking for promotion:
> >
> > std::is_same< uint_fast32_t, decltype(uint_fast32_t() + uint_fast32_t()) >
>
>
> Yeah that's exactly what I meant.
>
>
> Jonathan also wrote:
> >> So anyway my code looks a little ridiculous in places
> >> as I'm super-paranoid about stuff such as a 32-Bit
> >> unsigned int promoting to a 64-Bit signed int.
> >
> > As discussed, there are no platforms in existence where that can happen,
> > and that's unlikely to change.
>
>
> You're saying there will never be a computer with a 64-Bit int?
> Because on such a computer, a 32-Bit integer type (for instance 'short
> unsigned') would be promoted to a 64-Bit int.
>
> Maybe in a decade or two we'll have computers as follows:
>
> char - 8
> short - 32
> int - 64
> long - 128
> long long - 128
Do you know how much code that would break? There are tons of code out
there that assume that `int` is 32 bits.
Not only that, it would break ABI compatibility; you would be
completely unable to use a library that was compiled against prior
versions of the ABI.
This is the sort of thing that would only be done for an entirely new
system/ABI that had no prior code. And even then, it wouldn't be done
for no reason. When people moved to 64-bit native CPUs, `int` in those
ABIs could have been defined to be 64 bits. But they didn't do that
precisely because a *lot* of code expects 32-bit `int`s. Indeed, a lot
of code expected 32-bit `long`, which is why we needed `long long` to
access 64-bit values.
It is just impractical to ever do that.
Received on 2025-03-30 20:37:27