Date: Mon, 12 Mar 2018 14:36:49 -0700
On 3/12/18, Myria <myriachan_at_[hidden]> wrote:
> On Mon, Mar 12, 2018 at 13:32 Lawrence Crowl <Lawrence_at_[hidden]> wrote:
>> On 3/12/18, Myria <myriachan_at_[hidden]> wrote:
>>> The severity of the current situation is that I generally avoid signed
>>> integers if I intend to do any arithmetic on them whatsoever, lest the
>>> compiler decide to make demons come out of my nose.
>>
>> So why not specify the option to turn on trapping?
>>
>>> And even then, I'm not safe:
>>>
>>> std::uint16_t x = 0xFFFF;
>>> x *= x; // undefined behavior on most modern platforms
>>
>> How? The C++ standard defines unsigned arithmetic as
>> modular arithmetic.
>
> But that's the catch: it's double secret signed arithmetic. The
> promotion rules of C, inherited by C++, state that in any arithmetic
> operation, integer types of rank less than int promote to int (or to
> unsigned int when int cannot represent all of their values). This
> promotion happens regardless of signedness.
>
> On a "typical modern platform", std::uint16_t is unsigned short. That
> is of lesser rank than signed int, so it promotes to signed int on any
> arithmetic operation, resulting in the following:
>
> int promoted_x = x;
> x = static_cast<std::uint16_t>(promoted_x * promoted_x);
>
> 65535 * 65535 overflows a signed int on a typical 32-bit int platform,
> which is undefined behavior.
Good example.
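For readers following along, the usual workaround is to force the
multiplication into unsigned arithmetic before the operands can promote
to int; a minimal sketch, using the common 1u-multiplicand idiom (one of
several equivalent spellings):

    #include <cstdint>

    int main() {
        std::uint16_t x = 0xFFFF;
        // Multiplying by 1u first converts x to unsigned int, so the
        // whole expression stays unsigned and the wrap is well-defined.
        x = static_cast<std::uint16_t>(1u * x * x);
        return 0;
    }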
>> More importantly, what happens to your program when x*x < x?
>
> The code that led me to finding this was a 16-bit variant of the FNV
> hash function, so the wrap was intended, and it worked properly after
> the correct casts were added to allow it.
So the application intended modular arithmetic? I was concerned about
the normal case where 'unsigned' is used to constrain the value range,
not to get modular arithmetic.
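To make the modular-arithmetic case concrete, here is a hedged sketch of
what such a hash loop might look like once the casts are in place. The
function name, offset basis, and prime below are placeholders, not the
constants from the original code:

    #include <cstddef>
    #include <cstdint>

    std::uint16_t hash16(const unsigned char* p, std::size_t n) {
        std::uint16_t h = 0x811Cu;  // placeholder offset basis
        for (std::size_t i = 0; i != n; ++i) {
            h = static_cast<std::uint16_t>(h ^ p[i]);
            // The 1u operand keeps the multiply in unsigned int, so the
            // intended wrap is well-defined rather than undefined.
            h = static_cast<std::uint16_t>(1u * h * 0x0193u);  // placeholder prime
        }
        return h;
    }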
>>> My code has to do silly things like this in order to safeguard against
>>> such potential compiler abuses:
>>>
>>> typedef decltype(std::uint16_t() + 0u) promoted_uint16;
>>
>> How does this typedef help?
>
> Arithmetic between any unsigned type and unsigned int yields an
> unsigned type at least as wide as unsigned int (and thus at least as
> wide as the first type), which cannot be promoted to a signed type in
> further arithmetic with other unsigned types.
>
> It's like uint_fast16_t, except that it guarantees that all operations
> performed will be well-defined to wrap, with just the inconvenience of
> potentially being larger than the actual intended type.
So, you intend that folks assign to temporary variables of this type, do
their arithmetic, and then convert to the final value? That should work
for everything but division.
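A minimal sketch of that pattern, with a hypothetical helper name:

    #include <cstdint>

    using promoted_uint16 = decltype(std::uint16_t() + 0u);

    // Widen, compute, narrow: the arithmetic happens in a type that
    // never promotes to signed int, so the wrap is well-defined, and
    // the final cast truncates back to 16 bits.
    std::uint16_t mul_wrap16(std::uint16_t a, std::uint16_t b) {
        promoted_uint16 wa = a;
        promoted_uint16 wb = b;
        return static_cast<std::uint16_t>(wa * wb);
    }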
>>> I would be happy if an option like -fwrapv were supported everywhere,
>>> but Visual Studio doesn't have such an option, and Microsoft has
>>> already denied requests for such an option to be implemented.
>>
>> What about -ftrapv?
>
> If I were working on something where signed int overflow were a problem,
> then sure, in debug builds. In release builds, I wouldn't use that for
> performance reasons (except where it's mostly free, like on MIPS).
As long as your test runs use the option and have some value-extreme
tests, that seems like a fine strategy.
I'm curious about the performance difference on modern systems. The
branch that tests the overflow condition bit is highly predictable, so
the net cost should be low.
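As a hedged illustration of such a value-extreme test: built with
-ftrapv on GCC or Clang, the signed overflow below aborts at run time
instead of silently producing an undefined value. (The argc operand is
only there to keep the overflow from being folded away at compile time.)

    #include <climits>

    int main(int argc, char**) {
        int x = INT_MAX;
        x += argc;  // argc >= 1, so this overflows: a trap under -ftrapv
        return x;
    }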
-- Lawrence Crowl