Date: Tue, 02 Sep 2025 12:42:23 -0700
> On Sep 2, 2025, at 7:14 AM, David Brown via Std-Proposals <std-proposals_at_[hidden]> wrote:
>
> On 02/09/2025 15:38, Marcin Jaczewski wrote:
>> wt., 2 wrz 2025 o 14:49 David Brown via Std-Proposals
>> <std-proposals_at_[hidden]> napisał(a):
>>>
>>> On 02/09/2025 14:24, Hans Åberg via Std-Proposals wrote:
>>>>
>>>>
>>>>> On 2 Sep 2025, at 14:14, Jan Schultke <janschultke_at_[hidden]> wrote:
>>>>>
>>>>> You seem to be confusing some mostly unrelated concepts.
>>>>>
>>>>>>> 1. C does not allow _BitInt(1); should C++ to make generic programming
>>>>>>> more comfortable?
>>>>>>
>>>>>> The ring ℤ/2ℤ of integers modulo 2, also a field, is isomorphic to the Boolean ring 𝔹 having exclusive or as addition and logical conjunction as multiplication.
>>>>>>
>>>>>> If bool 1+1 is defined to 0, then it is already in C++.
>>>>>
>>>>> Whether there is some other C++ thing that works mathematically the
>>>>> same doesn't say anything about whether _BitInt(1) is valid or should
>>>>> be valid. The issue is regarding a specific type.
>>>>
>>>> It could have been defined to be the same as bool.
>>>
>>> No, it could not. _BitInt(1), if it is to exist, has the two values -1
>>> and 0. Like all signed integer types, arithmetic overflow on it is
>>> undefined, and like all _BitInt types, there is no integer promotion.
>>> Thus for _BitInt(1), (-1) + (-1) is UB.
>>>
>> Do we need UB here? This is not `int` and we could have all operations
>> defined. What would we gain from it?
>
> Do we /need/ UB on signed arithmetic overflow? No. Do we /want/ UB on signed arithmetic overflow? Yes, IMHO. I am of the opinion that it makes no sense to add two negative numbers and end up with a positive number. There are very, very few situations where wrapping behaviour on signed integer arithmetic is helpful - making it defined as wrapping is simply saying that the language will pick a nonsensical result that can lead to bugs and confusion, limit optimisations and debugging, and cannot possibly give you a mathematically correct answer, all in the name of claiming to avoid undefined behaviour.
This is an argument for unspecified or implementation-defined behavior, not for introducing a brand-new type with *all* of the known security issues of `int` (which we decided were not necessary for the performance of unsigned integers).
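For concreteness, a minimal sketch of that bug class (illustrative only; the names and sizes are mine, not from any real codebase):

    #include <cstdlib>
    #include <cstring>

    // Hypothetical: `count` is attacker-controlled. `count * 4` can
    // overflow `int`; that is UB, so the compiler may assume it never
    // happens, while on real hardware it wraps to a small value - a
    // short allocation followed by a multi-gigabyte copy.
    void copy_items(const int *src, int count) {
        int bytes = count * 4;                      // UB on overflow
        int *dst = (int *)std::malloc(bytes);       // short buffer if wrapped
        if (dst == nullptr) return;
        std::memcpy(dst, src, (std::size_t)count * 4);  // heap overflow
        std::free(dst);
    }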
“It might be possible to optimize this” needs to stop being a justification for UB.
The fact that we were willing to adopt defined wrapping (modulo 2^N) semantics for unsigned arithmetic implies that the real-world benefit of leaving overflow UB for signed arithmetic is extremely limited.
This also ignores the increasing prevalence of bounds checks, which by definition cannot pretend that overflow is anything other than well defined on all real hardware.
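A minimal sketch of the problem (my example, reflecting GCC/Clang behavior): the textbook post-hoc overflow test is only sound for unsigned operands, precisely because unsigned overflow is defined:

    bool overflows_unsigned(unsigned a, unsigned b) {
        return a + b < a;  // well defined: unsigned arithmetic wraps mod 2^N
    }

    bool overflows_signed(int a, int b) {
        return a + b < a;  // UB on overflow: GCC and Clang fold this to
                           // `b < 0`, silently deleting the check
    }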
At the same time, every new systems language of the last two-plus decades has been able to say that overflow has defined behavior; we need to stop acting as though introducing new security flaws and gotchas that turn the compiler into an adversary is acceptable just because they might make optimizations possible.
We need to stop pretending that adding fundamentally unsound features doesn’t introduce costs outside of the system: we’re already at the point of needing hardware-level mitigations, in addition to the copious software-level mitigations already present, because of the huge amount of existing C and C++ being compromised through unnecessary UB edge cases in the language.
In new features, UB should be reserved solely for things that cannot reasonably be given deterministic semantics.
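And deterministic semantics here are cheap. A minimal sketch, assuming GCC or Clang (both provide `__builtin_add_overflow`):

    #include <cstdint>

    // Wrapping add that also reports whether the mathematical result fit:
    // on mainstream targets this is one add instruction plus a flags
    // check - no UB anywhere.
    bool checked_add(std::int32_t a, std::int32_t b, std::int32_t &out) {
        return !__builtin_add_overflow(a, b, &out);  // true on success
    }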
We are trying very hard to convince the outside world that we are taking security seriously; introducing *new* UB is _much_ more effective at undermining those claims than anything we do in the other direction. No amount of “we have added new features to help mitigate the safety weaknesses in the existing language” stands up against “but we’re going to continue adding new footguns and UB” - that sends a very clear message to the outside world: “we don’t actually care about improving the language and consider this to be just a marketing problem”.
—Oliver
Received on 2025-09-02 19:42:57