Re: [std-proposals] D3666R0 Bit-precise integers

From: David Brown <david.brown_at_[hidden]>
Date: Tue, 2 Sep 2025 16:14:00 +0200
On 02/09/2025 15:38, Marcin Jaczewski wrote:
> On Tue, 2 Sep 2025 at 14:49, David Brown via Std-Proposals
> <std-proposals_at_[hidden]> wrote:
>>
>> On 02/09/2025 14:24, Hans Åberg via Std-Proposals wrote:
>>>
>>>
>>>> On 2 Sep 2025, at 14:14, Jan Schultke <janschultke_at_[hidden]> wrote:
>>>>
>>>> You seem to be confusing some mostly unrelated concepts.
>>>>
>>>>>> 1. C does not allow _BitInt(1); should C++ allow it, to make
>>>>>> generic programming more comfortable?
>>>>>
>>>>> The ring ℤ/2ℤ of integers modulo 2, also a field, is isomorphic to the Boolean ring 𝔹 having exclusive or as addition and logical conjunction as multiplication.
>>>>>
>>>>> If bool 1+1 is defined to be 0, then it is already in C++.
>>>>
>>>> Whether there is some other C++ thing that works mathematically the
>>>> same doesn't say anything about whether _BitInt(1) is valid or should
>>>> be valid. The issue is regarding a specific type.
>>>
>>> It could have been defined to be the same as bool.
>>
>> No, it could not. _BitInt(1), if it is to exist, has the two values -1
>> and 0. Like all signed integer types, arithmetic overflow on it is
>> undefined, and like all _BitInt types, there is no integer promotion.
>> Thus for _BitInt(1), (-1) + (-1) is UB.
>>
>
> Do we need UB here? This is not `int` and we could have all operations
> defined. What would we gain from it?

Do we /need/ UB on signed arithmetic overflow? No. Do we /want/ UB on
signed arithmetic overflow? Yes, IMHO. I am of the opinion that it
makes no sense to add two negative numbers and end up with a positive
number. There are very, very few situations where wrapping behaviour
on signed integer arithmetic is helpful. Defining overflow as wrapping
simply means the language picks a nonsensical result that can lead to
bugs and confusion, limits optimisation and debugging, and cannot
possibly give you a mathematically correct answer, all in the name of
claiming to avoid undefined behaviour.
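
To make that concrete, here is a small sketch (my own illustration, not
from the thread). It uses unsigned arithmetic, which is defined to
wrap, to show the result a mandatory wrapping rule would force onto
signed int; it assumes a 32-bit, two's-complement int:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* Unsigned wrap-around is defined, so we can compute the value
           a "wrapping by definition" rule would mandate for
           INT_MAX + 1. */
        unsigned u = (unsigned)INT_MAX + 1u;   /* wraps modulo 2^32 */

        /* On a 32-bit two's-complement int this prints -2147483648:
           two positive operands, a negative "sum". */
        printf("%d\n", (int)u);
        return 0;
    }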

Remember, if you first /define/ a behaviour in the standards, you are
stuck with it. Leave it as UB, and the compiler can do what it wants -
perhaps helped by developer choices. Thus in gcc, you can choose to
give signed integer arithmetic modulo behaviour by using the "-fwrapv"
flag. You can choose to have the compiler generate run-time checks and
stop the program with an error message on overflow using the
"-fsanitize=signed-integer-overflow" flag. A compiler for a DSP could
choose to use saturating arithmetic. A C interpreter could choose to
use arbitrary-precision arithmetic and give a run-time error for
out-of-range values when assigning the result to a variable. A C++ compiler
could have an option to throw an exception on overflow. And of course C
and C++ compilers can optimise code on the assumption that overflow,
being UB, does not occur, leading to smaller and faster code in some
situations (typically in loops, and also noticeable in some kinds of
array indexing on 64-bit machines).
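
As one concrete sketch of the loop and indexing point (again my own
illustration, assuming a typical optimising compiler): with i += 2,
i can step past INT_MAX when n is close to it. Because that would be
UB, the compiler may assume i never wraps, derive the trip count
directly from n, and keep the index in a 64-bit register; a mandated
wrapping rule would oblige it to also handle i wrapping to negative
values.

    /* Sketch only. With overflow as UB the compiler can assume the
       increment never wraps, so the loop runs (n + 1) / 2 times and
       the index can be widened to 64 bits once on a 64-bit target.
       If i += 2 were defined to wrap, i could become negative and
       the loop's behaviour would have to account for that wrap. */
    void zero_even(int *a, int n)
    {
        for (int i = 0; i < n; i += 2)
            a[i] = 0;
    }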

But if the standards say overflow is wrapping, you are stuck with it.
Your code gives results that are almost certainly incorrect and
unhelpful on overflow, but your tools can't help you find your mistake.

This is why the C and C++ standards committees have both refused to
define the behaviour of signed arithmetic overflow, even though both
have restricted the latest language versions to requiring two's
complement representation of signed integers.

In C23, signed integer arithmetic overflow with _BitInt types is UB,
just like for other signed integer types. That makes perfect sense to
me. Hopefully the same will apply to _BitInt in C++.
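
For reference, a minimal C23 sketch of those semantics (it assumes a
compiler with _BitInt support, such as a recent GCC or Clang in C23
mode):

    /* C23: signed _BitInt widths must be at least 2, so _BitInt(8)
       is a valid signed type holding -128..127. Bit-precise integer
       types are not subject to integer promotion, so this addition
       happens in _BitInt(8) itself, not in int. */
    typedef _BitInt(8) i8bp;

    i8bp add(i8bp a, i8bp b)
    {
        return a + b;   /* UB if the mathematical sum is outside
                           -128..127, as for any signed type */
    }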


(Sorry for the long and slightly ranting answer to a two-line question!)

Received on 2025-09-02 14:14:02