Date: Sun, 30 Mar 2025 19:42:02 -0400
On Sun, Mar 30, 2025 at 4:55 PM Frederick Virchanza Gotham via
Std-Proposals <std-proposals_at_[hidden]> wrote:
>
> On Sun, Mar 30, 2025 at 9:37 PM Jason McKesson wrote:
> >
> > Do you know how much code that would break? There is tons of code out
> > there that assumes that `int` is 32 bits wide.
> >
> > Not only that, it would break ABI compatibility; you would be
> > completely unable to use a library that was compiled against prior
> > versions of the ABI.
> >
> > This is the sort of thing that would only be done for an entirely new
> > system/ABI that had no prior code. And even then, it wouldn't be done
> > for no reason. When people moved to 64-bit native CPUs, `int` in those
> > ABIs could have been defined to be 64 bits. But they didn't do that
> > precisely because a *lot* of code expects 32-bit `int`s. Indeed, a lot
> > of code expected 32-bit `long`, which is why we needed `long long` to
> > access 64-bit values.
> >
> > It is just impractical to ever do that.
>
>
> Surely people said the same things back in 1987 when int was bumped
> from 16 bits up to 32 bits?
>
> Maybe we won't see a 64-bit int in the next 10 years . . . but 25
> years from now, things may well have changed.
>
> Having 32-bit ints on x86_64 is fine because the instruction set
> accommodates them, but on another CPU architecture that deals solely
> with 64-bit integers, a lot of CPU cycles would be wasted
> bitwise-AND'ing with 0xffffffff.
Which sounds like a good reason not to make such CPUs. Or to give them
instructions so that such operations don't cost any performance.
Hardware design is often a complex dance between what hardware vendors
would like and what users can tolerate. If your new CPU is 10% slower
for no good reason, people won't buy it.
And if it can't work with existing ABIs, then nobody can use it.
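For concreteness, the masking being described above would look roughly
like this. This is a purely illustrative sketch, assuming a
hypothetical target that only has 64-bit integer registers, so every
32-bit result has to be truncated by hand:

    #include <cstdint>

    // Hypothetical: emulating 32-bit unsigned arithmetic with only
    // 64-bit operations, as a compiler for such a CPU might have to.
    // The extra AND after every operation is the "wasted cycles".
    std::uint64_t add_u32(std::uint64_t a, std::uint64_t b)
    {
        return (a + b) & 0xFFFFFFFFu;
    }

    std::uint64_t mul_u32(std::uint64_t a, std::uint64_t b)
    {
        return (a * b) & 0xFFFFFFFFu;
    }

Whether that cost actually materializes depends on the ISA; many 64-bit
instruction sets simply provide 32-bit forms of the same instructions.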
> I feel the need to make the point that any code that uses 'int' where
> a 32-bit integer is required is badly written.
Irrelevant. It still exists, it is still being used, and it is still
*being written* today despite having an alternative. The makers of
implementations and platforms have to deal with the codebases and
programmers we have, not the ones we might wish we had.
The overall point is this: the premise of this feature is predicated
on a future compatibility break so severe that it will basically never
be practical. So if it won't actually happen in the foreseeable
future... what's the point of it?
> I started programming
> in C++ by borrowing the book "C++ for Dummies" from my local library
> back in 2002, and in the 23 years since then, I've never written code
> that assumes a 32-bit int. For convenience I have at times assumed the
> existence of uint32_t, but if I had been less lazy I could have just
> used uint_least32_t and been very conscientious about bitwise-AND'ing
> with 0xffffffff everywhere (just in case I compile for a 36-bit int
> later on).
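For what it's worth, the "conscientious" style being described would
look something like this. A minimal, purely illustrative sketch (the
helper name is made up):

    #include <cstdint>

    // Illustrative sketch: uint_least32_t is guaranteed to exist and to
    // be at least 32 bits wide, so the result is masked back down to 32
    // bits in case the type is wider (e.g. on a hypothetical platform
    // with a 36-bit int).
    using u32 = std::uint_least32_t;

    u32 add32(u32 a, u32 b)
    {
        return (a + b) & 0xFFFFFFFFu;  // keep only the low 32 bits
    }

On common platforms where uint_least32_t is exactly 32 bits, the AND is
a no-op and a decent compiler will drop it.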
> --
> Std-Proposals mailing list
> Std-Proposals_at_[hidden]
> https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
Received on 2025-03-30 23:42:14