Re: [std-proposals] D3666R0 Bit-precise integers

From: Sebastian Wittmeier <wittmeier_at_[hidden]>
Date: Thu, 4 Sep 2025 00:38:00 +0200
I will try not to fully repeat David's or my own arguments, but still answer the points.

Assumption: for the typical usage of signed integers, in the cases where signed overflow happens, the program does something not intended by the programmer. That means there was a bug; the program has a logic error and should have caught it instead of letting it happen. Either just a resulting number is wrong without much consequence, or the whole program goes haywire (even without the compiler exploiting the UB), depending on how the result is used.

What is the best (most secure) reaction to a detected bug (if we accept a loss of performance)? Either throw an exception, which may or may not be caught, or terminate the program at once (-ftrapv). So that is the safest option.

But if you are sure that your program is bug-free and no signed overflow happens, e.g. because you manually checked the inputs or the numbers used have a small range, then you can deactivate the checks and get full performance (no option needed), with all the consequences if it happens (or would happen) after all and the compiler does not detect it at compile time.

If you want wrap-around behaviour, because you add large numbers and subtract them again, and you accept strange effects like the sign of the result suddenly flipping, you also have an option (-fwrapv), but then you really rely on implementation- (and flag-)defined behaviour. In that case, better use a different type.

It is the same with range checks of containers: one can always do them (and if the compiler can prove the index is in range, it can optimize the check out), or one can put the onus on the programmer to make sure the index is in range.

So the whole argument is actually about:
- Should preconditions always be checked in the called function?
- How should the program react if it detects that preconditions are not fulfilled?

With a small side argument about:
- Should we implicitly fulfill preconditions by removing them altogether and giving some unintended, but predictable, outcome for the whole range of possible input values?

"This code is wrong, because we have said it is wrong" (and we said so because it calls functions whose preconditions are not met, and we defined those preconditions) is a valid argument in my opinion. Should we instead say that code is correct because whatever the code does is correct?

The question is whether we accept a distinction between correct and incorrect code: between code that not only has to be syntactically correct, but also has to run bug-free in the sense that it cannot reach certain disallowed states or call functions in a disallowed way.

You said yourself (at least paraphrased) that UB in this situation is bad in your opinion, because it is difficult to ensure that signed overflow does not happen, and that you prefer erroneous behaviour or making contract violations an error. So we agree that programs are often incorrect in practice.
- You would make it wrapping, or an error prescribed in the standard.
- I would make it UB, or an error enabled by build flags.

With wrapping or with UB the error is not detected and has bad consequences (perhaps less bad with wrapping). But on the other hand, if the programmer is sure it does not happen, we can go for full performance. And (as long as you don't rely on wrapping for specific computations; then better use a different type) why should all compilers do the same thing in case of an (undetected) error? Why sacrifice a bit of performance for it?

If you are not sure whether it could happen in your code, or you want to really harden it, then fully detect it.
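A minimal sketch of the two modes described above, assuming GCC or Clang; `checked_add` and `unchecked_add` are illustrative names, and `__builtin_add_overflow` is a compiler builtin, not standard C++:

```cpp
#include <climits>
#include <cstdio>
#include <cstdlib>

// Detect the precondition violation and terminate at once,
// roughly what -ftrapv buys you implicitly.
int checked_add(int a, int b) {
    int result;
    if (__builtin_add_overflow(a, b, &result)) {
        std::fputs("signed overflow detected: terminating\n", stderr);
        std::abort();
    }
    return result;
}

// Trust the caller and run at full speed.
// Default build: UB on overflow. With -fwrapv: wraps around.
int unchecked_add(int a, int b) {
    return a + b;
}

int main() {
    std::printf("%d\n", checked_add(1, 2));   // fine: prints 3
    std::printf("%d\n", unchecked_add(1, 2)); // fine: prints 3
    checked_add(INT_MAX, 1);                  // bug detected: aborts here
}
```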
-----Original Message-----
From: Oliver Hunt <oliver_at_[hidden]>
Sent: Wed 03.09.2025 23:08
Subject: Re: [std-proposals] D3666R0 Bit-precise integers
To: std-proposals_at_[hidden]
CC: Sebastian Wittmeier <wittmeier_at_[hidden]>

I've replied by points below, but I just want to address the core of your argument.

Your core argument is "signed overflow is always an error/bug". Declaring "signed overflow is UB" does not support that point of view. The correct way to say "signed overflow is an error" is to say "signed overflow is erroneous behavior". That makes it explicit that the overflow is an error, and it permits developers to rely on consistent and deterministic behavior, rather than dealing with an adversarial compiler that blindly assumes it cannot happen.

On Sep 3, 2025, at 1:13 PM, Sebastian Wittmeier via Std-Proposals <std-proposals_at_[hidden]> wrote:

> If something is UB in the standard, the implementations can define it to be something specific instead. That is standards-compliant and not a new dialect, as long as the code does not depend on that outcome.

Yes, but the code is wrong even if it is correct under your specific dialect. You cannot say "I am relying on this specific behavior" and then say "I am writing C++", because your "correct" code is incorrect per standard C++; another compiler results in your code invoking UB.

>> Because of that, code that is correct with that flag is not correct C++.
>
> But that is the point of it. Signed overflow has no correct inputs. The preconditions are wrong. An implementation (and its flags) can choose what happens under those circumstances.

I know that C++ says there is no correct input. Literally the entire point of what I have been saying is that the only reason you are able to say "signed overflow has no correct inputs" is because the specification has defined it that way. That is a tautological argument: "this is undefined because we have said it is undefined". I am saying "we should not be saying well-defined behavior is undefined behavior".

> That is similar to a contract violation with the contracts feature.

Yes, and I argued strongly for contract violations to be an error.

> A program error was detected, and it can be configured what happens in that case.

I'm not sure if you're talking about contracts (with their own issues w.r.t. permitting UB) or overflow UB, where the entire point is that the overflow is not being detected; the compiler is just pretending it cannot happen.

> The program is not correct in the first place if it comes to such an error.

Again, the program is only wrong because we have said it is wrong. You are saying we should continue to repeat choices made in the past that are demonstrably bad for program safety, even though there is literally no reason to repeat the mistake.

> There is no imaginable use for wrap-around instead (except that you get a wrong, but consistent, behavior between different compilers). There is no use for wrap-around specifically; you could just as well have a result of 0.

That's an argument for giving defined trapping behavior, not UB.

> But the error is not caught either with wrap-around behavior, whereas with UB the compiler may at least give an error if it detects it at compile time.

This is a misunderstanding of what UB means. The compiler does not prove that your program has invoked UB; it simply assumes that the UB cannot happen.
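A minimal sketch of that assumption in action, assuming GCC or Clang at -O2; the function name is illustrative:

```cpp
// Because signed overflow is UB, the optimizer may reason:
// "if x + 1 overflowed, that would be UB, so it cannot happen",
// and fold the whole comparison to "return true". No overflow is
// ever proven or detected; it is simply assumed away.
bool always_true(int x) {
    return x + 1 > x;
}
```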
You are arguing "I believe this is an error, and therefore we should make even more errors possible". Consider how compilers silently changed a pile of secure code into exploitable code: developers wrote `if (a + b < a) { .. }`. The _only_ reason this is "incorrect" is the language defining it as incorrect; the only reason it became exploitable is that the specification declares behavior that is well defined and deterministic everywhere to be incorrect, and in the early to mid 2000s compilers started applying this to overflow.

There is no reason to keep doing that. Repeating "this code is wrong because we have said it is wrong" is not a valid counter argument: the only reason it is wrong is *because* we have said it is.

—Oliver
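A minimal sketch of the pattern in question and a well-defined rewrite, assuming two's-complement int; the function names are illustrative:

```cpp
#include <limits>

// The historical idiom: deterministic on every two's-complement
// machine, but UB per the standard, so for b > 0 an optimizer may
// fold "a + b < a" to false and silently delete the security check.
bool overflows_ub(int a, int b) {
    return a + b < a;
}

// A rewrite with no UB: test against the limit before adding
// (positive direction only, matching the original check).
bool overflows_checked(int a, int b) {
    return b > 0 && a > std::numeric_limits<int>::max() - b;
}
```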

Received on 2025-09-03 22:49:15