Date: Thu, 4 Sep 2025 06:05:09 +0000
I think you might be interested in this paper: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p3161r3.html
And also this code:
#include <bit>      // std::bit_cast
#include <cstdint>  // std::uint8_t, std::int8_t

// Addition and multiplication agree modulo 2^8 whether the bits are read as
// signed or unsigned, so this returns true for every input.
bool test(std::uint8_t const a, std::uint8_t const b, std::uint8_t const c)
{
    std::int8_t const s_a = std::bit_cast<std::int8_t>(a);
    std::int8_t const s_b = std::bit_cast<std::int8_t>(b);
    std::int8_t const s_c = std::bit_cast<std::int8_t>(c);
    std::uint8_t const ur = a + b * c;       // computed in int, converted back mod 2^8
    std::int8_t const sr = s_a + s_b * s_c;  // likewise; modular conversion since C++20
    return ur == std::bit_cast<std::uint8_t>(sr);
}
(division is really the only problem)
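To make the division caveat concrete, here is a small illustration of my own (not from the paper): the same bit pattern divides differently depending on signedness, so division cannot be lowered to one shared operation the way addition and multiplication can.

#include <bit>
#include <cstdint>

// 0xFF is 255 as uint8_t but -1 as int8_t, so the quotients differ:
// 255 / 2 == 127 (0x7F), while -1 / 2 == 0 (0x00).
bool test_div(std::uint8_t const a, std::uint8_t const b)  // b assumed non-zero
{
    std::int8_t const s_a = std::bit_cast<std::int8_t>(a);
    std::int8_t const s_b = std::bit_cast<std::int8_t>(b);
    std::uint8_t const ur = a / b;
    std::int8_t const sr = s_a / s_b;
    return ur == std::bit_cast<std::uint8_t>(sr);          // false for a = 0xFF, b = 2
}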
From: Std-Proposals <std-proposals-bounces_at_[hidden]> On Behalf Of connor horman via Std-Proposals
Sent: Thursday, September 4, 2025 03:59
To: std-proposals_at_[hidden]
Cc: connor horman <chorman64_at_[hidden]>
Subject: [std-proposals] Signed Overflow
I'm going to spin this off from the bit-precise integer thread, because I don't think this is restricted to bit-precise integers (although there is an argument that this could only apply to extended integer types like bit-precise integers).
Currently, in the standard, signed overflow is UB. Various arguments are made about that over on the other thread, which can be summarized as:
* Any program that is doing computation on signed integers is incorrect if overflow occurs,
* Compilers like optimizations,
* Debuggability,
* Mathematical correctness (which I disagree with as someone who does modular arithmetic).
The optimizations point is potentially reasonable: both GCC and LLVM perform a decent number of optimizations on signed integers in C and C++. However, I don't find the other arguments persuasive, for the reasons listed below.
Instead I would propose the following:
* Convert signed integer overflow from undefined behaviour to erroneous behaviour (as suggested in the bit-precise integer thread),
* Guarantee wrapping mod 2^w for a w-bit integer type when the EB is not trapped at runtime,
* Add a type `std::wrapping<T>` to the numerics header, which defines its operations to wrap mod 2^w for a w-bit integer (not erroneous behaviour). This would allow programs that do advanced mathematical calculations to opt into wrapping behaviour without potential EB traps (and without switching to `unsigned` integers, which do not have the correct comparison behaviour, and division/modulus is also more expensive to implement without compiler support). A rough sketch follows this list.
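For concreteness, here is a rough sketch of what `std::wrapping<T>` could look like; the interface and names here are placeholders of mine, not proposed wording. The operations are performed on the corresponding unsigned type and converted back, which is exactly wrapping mod 2^w.

#include <concepts>
#include <type_traits>

// Sketch only: a wrapper whose arithmetic wraps mod 2^w instead of being erroneous.
template <std::signed_integral T>
struct wrapping
{
    T value{};

    friend constexpr wrapping operator+(wrapping a, wrapping b)
    {
        using U = std::make_unsigned_t<T>;
        // Unsigned arithmetic wraps; the conversion back to T is modular
        // (well-defined since C++20's two's-complement requirement).
        return wrapping{static_cast<T>(static_cast<U>(a.value) + static_cast<U>(b.value))};
    }

    friend constexpr wrapping operator*(wrapping a, wrapping b)
    {
        using U = std::make_unsigned_t<T>;
        return wrapping{static_cast<T>(static_cast<U>(a.value) * static_cast<U>(b.value))};
    }
    // operator- etc. follow the same pattern; comparisons and division keep
    // their ordinary signed meaning.
};

With this sketch, wrapping<int>{INT_MAX} + wrapping<int>{1} is INT_MIN rather than erroneous.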
I would also note that most of the arguments that apply to a signed integer will also apply to an unsigned integer. Computing `len - 1` is probably going to do wrong things if `len` is 0, and there are optimizations that can be done on unsigned wrapping as well.
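A minimal illustration of that unsigned footgun, for completeness (my own example):

#include <cstddef>

// If len == 0, len - 1 wraps to SIZE_MAX and the loop reads far out of bounds.
void zero_all_but_last(unsigned char* p, std::size_t len)
{
    for (std::size_t i = 0; i < len - 1; ++i)   // wrong when len == 0
        p[i] = 0;
}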
If the above is adopted, most optimizations on signed integers will be pessimized by the wrapping behaviour. An example is `x + 3 < 0` being turned into `x < -3`, which is only correct if `x` cannot be greater than `INT_MAX - 3` under 2's-complement wrapping. Another example is folding `x * y / y` into `x`.
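To spell out the first example (my own demonstration, assuming wrapping becomes the non-trapping behaviour): at x == INT_MAX the two forms disagree, so the rewrite would no longer be valid.

#include <climits>

bool lhs(int x) { return x + 3 < 0; }   // with wrapping: true for x == INT_MAX
bool rhs(int x) { return x < -3; }      // false for x == INT_MAX
// INT_MAX + 3 wraps to INT_MIN + 2, which is negative, so lhs(INT_MAX) would be
// true while rhs(INT_MAX) is false; today the compiler may fold lhs into rhs
// precisely because the overflow is UB.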
However, because it remains erroneous behaviour, sanitizers or opt-in compilation flags that check for signed overflow can still raise runtime errors on overflow, and such programs would also be guaranteed to be diagnosed in required constant expressions. Thus a program or routine that is incorrect on signed overflow can still receive debugging aids. In fact, debugging can improve: even without these tools enabled, a program checked with simple step-through or print debugging can observe the potentially absurd values passing through, without worrying about the further state corruption that undefined behaviour (but not erroneous behaviour) is allowed to cause.
With the `std::wrapping<T>` template (which could also be provided as free functions on primitive types, mirroring the saturating_ functions added in C++26), programs can decide whether they are more correct with explicitly wrapping arithmetic or not.
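The free-function form could mirror the C++26 saturating functions (std::add_sat and friends in <numeric>). `add_wrap` below is a hypothetical name of mine, not an existing or proposed facility:

#include <climits>
#include <type_traits>

// Hypothetical counterpart to std::add_sat: wraps mod 2^w instead of clamping.
template <class T>
    requires std::is_integral_v<T> && std::is_signed_v<T>
constexpr T add_wrap(T x, T y) noexcept
{
    using U = std::make_unsigned_t<T>;
    return static_cast<T>(static_cast<U>(x) + static_cast<U>(y));
}

static_assert(add_wrap(INT_MAX, 1) == INT_MIN);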
Regarding optimization, a potential alternative would be to say that if an arithmetic subexpression may overflow, it is erroneous behaviour, and (if not trapping) the implementation has the choice of either (it is unspecified which):
* Wrapping immediately, or
* Evaluating as an implementation-defined (or unspecified) signed integer type that is at least as wide, then wrapping at the end of the complete arithmetic expression (either the full expression, or an operand of something other than an arithmetic operation, such as a cast or a function call)
This would require more complex standards language, but would allow most optimizations that assume non-wrapping to be preserved.
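To illustrate what that choice would mean in practice (my example, not proposal wording): for `(x + 3) / 2` with x == INT_MAX, a conforming implementation could produce either of two results depending on which option it takes.

#include <climits>

// Choice 1: wrap each operation immediately.
int choice_wrap_each(int x)
{
    int t = static_cast<int>(static_cast<unsigned>(x) + 3u);  // INT_MAX -> INT_MIN + 2
    return t / 2;                                             // -> -1073741823
}

// Choice 2: evaluate in a wider signed type, wrap once at the end.
int choice_wide_then_wrap(int x)
{
    long long t = (static_cast<long long>(x) + 3) / 2;        // INT_MAX -> 1073741825
    return static_cast<int>(t);                               // fits in int, no wrap needed
}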
Received on 2025-09-04 06:05:15