Re: [ub] [c++std-ext-14592] Re: Re: Sized integer types and char bits

From: Jean-Marc Bourguet <jm_at_[hidden]>
Date: Sun, 27 Oct 2013 09:06:51 +0100
On 26/10/2013 22:51, John Regehr wrote:
>> ... there is no
>> representation change when converting a signed int value to unsigned int
>> or when converting an unsigned int value to signed int.
> Wow-- anyone care to guess what fraction of existing C programs run
> correctly under these conditions?
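To make the quoted condition concrete, a minimal sketch (my own
illustration, not taken from that implementation): the standard
requires the conversion to preserve the value modulo 2^N, which on a
non-two's-complement machine means changing the representation; an
implementation that keeps the bits unchanged gives a different value
there.

    #include <cassert>
    #include <climits>

    int main() {
        int s = -1;
        unsigned u = static_cast<unsigned>(s);
        // Required by [conv.integral]: the result is the value of s
        // reduced modulo 2^N, so u must equal UINT_MAX.
        assert(u == UINT_MAX);
        // A 32-bit sign-magnitude machine that merely reinterpreted
        // the bits would give 0x80000001 instead.
    }
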
Define correctly. Note that they have a switch to get a more
conforming behaviour, but they preferred to provide a non-conforming
one by default, probably because it is more useful for their
customers; meeting the expectations of the platform's users often
beats fulfilling the requirements of the standard. They are not alone
in doing that; everybody does it to a greater or lesser extent -- from
delaying the release of something because it would break the ABI, to
not implementing a standard feature and lobbying to remove it because
it isn't convenient. The results are just more unexpected here because
their constraints are less familiar and less well taken into account
in the standard. I've heard about C compilers for micro-controllers
using an 8-bit int, and I've used a compiler which had a 7-bit char by
default (so an array of char didn't cover all the bits of memory).
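
(As an aside, a hedged sketch of how one can guard against such
targets at compile time; note that an 8-bit int is in fact
non-conforming, since the standard requires INT_MAX >= 32767.)

    #include <climits>

    // Fail the build, rather than misbehave, on exotic targets.
    static_assert(CHAR_BIT == 8, "this code assumes 8-bit chars");
    static_assert(INT_MAX >= 2147483647,
                  "this code assumes at least a 32-bit int");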

The question is "how much do we want to take those situations into
account in the standard?", knowing very well that
1/ implementers will do whatever they want, influenced first by their
customers and then by the standard
2/ programmers will still assume that everything is a
<del>System/360</del>, <del>PDP11</del>, <del>VAX</del>, ... whatever
they are using now
3/ there is an inherent conflict of views between those who want a
language like C++ to expose the underlying machine and not to impose a
performance hit for the sake of portability, and those who want the
same program to give bit-for-bit the same results on widely different
machines (the sketch after this list illustrates the tension).
Most probably we are not even consistent about which side we take; it
depends on the issue (Java went back on strict FP, for instance).
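
A hedged sketch of that tension (my own example, not from this
thread): because signed overflow is undefined, a compiler may fold the
comparison below to true, which is fast on the native machine but not
bit-for-bit reproducible against implementations that wrap or trap on
overflow.

    // Signed overflow is undefined, so the compiler may assume
    // x + 1 never wraps and fold the whole body to `return true;`.
    bool always_bigger(int x) {
        return x + 1 > x;
    }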


I wholly agree that undefined behaviour has been overused in the C and
C++ descriptions. I'm not at all sure that fixing the sizes and
requiring two's-complement, wrap-around arithmetic would be the
correct solution. (For overflow, I wonder whether saying that it gives
an unspecified result or raises an unspecified signal wouldn't leave
enough latitude for variation among implementations, without the
wide-open door of undefined behaviour.)
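
As a minimal sketch of that model (my own, and using
__builtin_add_overflow, a GCC/Clang extension rather than anything in
the standard): overflow is surfaced as a detectable condition instead
of undefined behaviour.

    #include <cstdio>

    int checked_add(int a, int b) {
        int result;
        if (__builtin_add_overflow(a, b, &result)) {
            // Under the wording above, an implementation could instead
            // produce an unspecified value or raise a signal here.
            std::fprintf(stderr, "overflow in checked_add\n");
            return 0;  // stand-in for an unspecified result
        }
        return result;
    }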

Yours,

-- 
Jean-Marc
