Date: Tue, 02 Sep 2025 13:21:50 -0700
> On Sep 2, 2025, at 12:56 PM, Julien Villemure-Fréchette via Std-Proposals <std-proposals_at_[hidden]> wrote:
>
> > That is the mathematically correct answer for a modular arithmetic ring.
>
> But the signed integer types are not intended to model modular arithmetic on a finite ring of characteristic 2^N. An operation "a + b" is meant to express the mathematical operation on the integers, not modular arithmetic.
>
This _might_ be a reasonable argument for `int` or `int32_t` (basically because of legacy, and the widespread use of int32 induction variables on 64-bit machines, which at this point seems to be the only meaningfully performance-impacting application of signed overflow being UB).
But for `_BitInt(N)` that legacy is not relevant and the optimization is not relevant (the stated use cases are things like FPGAs, where `_BitInt(N)` is likely always going to mean an N-bit integer unit rather than an M-bit unit with N <= M), so the use cases are more reasonably assumed to be operating on the “finite ring” definition. The reality is that developers already assume that, and the only reason the assumption is incorrect is that the specification says so.
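To make that assumption concrete, here is a minimal C sketch (compiles with Clang’s `-std=c23`; the specific values are mine, chosen for illustration) of what developers expect from `_BitInt(8)` arithmetic today:

    #include <stdio.h>

    int main(void) {
        /* Unsigned _BitInt(N) is already specified to wrap modulo 2^N. */
        unsigned _BitInt(8) u = 255;
        unsigned _BitInt(8) uone = 1;
        u = u + uone;                 /* well-defined: wraps to 0 */
        printf("%u\n", (unsigned)u);

        /* _BitInt is exempt from integer promotion, so this addition is
           performed at 8 bits. Signed overflow here is UB per the current
           wording, even though every two's-complement 8-bit unit would
           produce -128. */
        signed _BitInt(8) s = 127;
        signed _BitInt(8) sone = 1;
        s = s + sone;                 /* UB today; developers assume -128 */
        printf("%d\n", (int)s);
        return 0;
    }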
Trapping, wrapping, saturating - going to infinity? (I recall early GPU shader languages had “int” but implemented it with floats, and I don’t know whether such an implementation could possibly be conforming) - are all reasonable definitions, and maybe there are a few other options. But no hardware I’m aware of gives a non-deterministic result on overflow for any integral operation, so continuing to pretend otherwise for _new_ types is not a reasonable path forward.
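Each of those deterministic semantics is trivially expressible today. A sketch, in portable C, of wrapping, saturating, and trapping 8-bit signed addition (the helper names are mine, not from any proposal):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Wrapping: compute modulo 2^8, then map back into [-128, 127]. */
    static int8_t add_wrap(int8_t a, int8_t b) {
        unsigned r = ((unsigned)(uint8_t)a + (uint8_t)b) & 0xFFu;
        return (int8_t)(r < 0x80u ? (int)r : (int)r - 256);
    }

    /* Saturating: clamp to the representable range on overflow. */
    static int8_t add_sat(int8_t a, int8_t b) {
        int sum = a + b;                    /* exact: int is wider */
        if (sum > INT8_MAX) return INT8_MAX;
        if (sum < INT8_MIN) return INT8_MIN;
        return (int8_t)sum;
    }

    /* Trapping: abort (or raise a signal) on overflow. */
    static int8_t add_trap(int8_t a, int8_t b) {
        int sum = a + b;
        if (sum > INT8_MAX || sum < INT8_MIN)
            abort();
        return (int8_t)sum;
    }

    int main(void) {
        printf("%d %d\n", add_wrap(127, 1), add_sat(127, 1)); /* -128 127 */
        add_trap(127, 1);                                     /* aborts */
        return 0;
    }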
_If_ a compiler (or a user) really wanted an “assume this cannot overflow” path, there are a number of options as language extensions (a pragma, a type attribute, …), or by restructuring the code: the UB advantage for overflow in induction variables can be resolved simply by using the machine word type explicitly, removing the need for UB to justify that widening when lowering. A sketch of both patterns follows.
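For the induction-variable case specifically, a hedged sketch of the restructuring (the function names and the choice of `size_t` as the machine word type are mine, for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Classic pattern: a 32-bit induction variable indexing on a 64-bit
       machine. The compiler may keep `i` widened in a 64-bit register
       across iterations only because int32_t overflow is UB, so `i`
       provably never wraps. */
    void scale_int32(double *a, int32_t n, double k) {
        for (int32_t i = 0; i < n; ++i)
            a[i] *= k;
    }

    /* The restructuring suggested above: use the machine word type
       explicitly. No widening is needed, so no UB is needed to justify
       the lowering. */
    void scale_word(double *a, size_t n, double k) {
        for (size_t i = 0; i < n; ++i)
            a[i] *= k;
    }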
—Oliver
Received on 2025-09-02 20:22:04