
Re: [ub] [c++std-ext-14592] Re: Re: Sized integer types and char bits

From: Lawrence Crowl <Lawrence_at_[hidden]>
Date: Wed, 30 Oct 2013 14:48:57 -0700
On 10/28/13, Ion Gaztañaga <igaztanaga_at_[hidden]> wrote:
> On 28/10/2013 2:32, Lawrence Crowl wrote:
>>> I think requiring 2's complement in the long term would be a
>>> good idea, even in C, as no new architecture is using other
>>> representation and this simplifies teaching and programming in
>>> C/C++.
>> Why do you think intN_t is not sufficient? The interpretation is
>> that int means "representation is not relevant, add traps if you
>> want" while (e.g.) int32_t means "I care about representation".
> It's not enough because they don't cover some types like 24 bit or
> 128 bit integers with no padding, available in some modern systems.
> And more importantly, 2's complement simplifies the language with
> little loss in portability (unisys machines currently don't support
> C++). C could have supported systems with CHAR_BIT < 8 but the
> committee chose not to allow that. Not supporting the (rare and
> nearly extinct) one's complement or sign-magnitude representations
> is a similar decision for C++.

My concern is not the loss in platforms, but the loss in distinction
between code operating on mathematical integers as opposed to the
particular representation. This distinction matters because we can
tell when a program strays from the mathematical integers, but we
cannot tell when it strays from a particular representation. The
resulting diagnostics can avoid bugs that would otherwise have a
good chance of making it through to deployed code.
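
As a hedged sketch of that distinction (not from the original post,
and assuming a typical implementation): overflow on plain int is
undefined behavior, so a checker such as -fsanitize=undefined can
diagnose a program that strays from the mathematical integers, while
arithmetic on an unsigned fixed-width type is defined to wrap, so no
tool can distinguish intended wrapping from an accident.

```cpp
#include <cstdint>

// Plain int models a mathematical integer: overflowing it is undefined
// behavior, which is exactly what makes it diagnosable by tools.
int mathematical(int x) {
    return x + 1;  // UB if x == INT_MAX -- a checker can flag this
}

// std::uint32_t pins down a representation: the addition is defined to
// wrap modulo 2^32, so "overflow" is simply not an error here.
std::uint32_t representational(std::uint32_t x) {
    return x + 1;  // well-defined: 0xFFFFFFFF + 1 == 0
}
```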

>>> We could start by having an ISO C macro (for C compatibility) to
>>> detect 2's complement at compile time and deprecate 1's complement
>>> and sign-magnitude representations for C++. If no one objects then
>>> only 2's complement could be allowed for the next standard.
>> WG21 cannot add a macro to ISO C. Did you mean liaison work or
>> did you mean simply adding a macro?
> I meant liaison work.

That is where it would need to be, I think.
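
For illustration only (my sketch, not a committee proposal): even
without a dedicated macro, the representation can be detected at
compile time in both languages, because the three representations C
historically allowed disagree on the bit pattern of -1.

```cpp
// Hedged sketch: masking the low two bits of -1 distinguishes the
// three signed representations C and C++ historically allowed:
//   two's complement:  ...1111 & 011 == 3
//   one's complement:  ...1110 & 011 == 2
//   sign-magnitude:    ...0001 & 011 == 1
constexpr bool twos_complement = (-1 & 3) == 3;
static_assert(twos_complement, "this code assumes two's complement");
```

In C the same expression works in a preprocessor-visible form, which
is roughly what a liaison proposal would need to standardize.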

>>> I think banning or deprecating systems with CHAR_BIT != 8 would be
>>> a very bad idea as C++ is a natural choice for high-performance
>>> data/signal processors.
>> Agreed. But also, it turns out, UTF-12 and UTF-24 are pretty
>> good at representing Unicode.
> char_least12_t & char_least24_t to the rescue? ;-)


Well, of course, you might want to know whether char_least12_t is
smaller than char_least24_t. Hm. We don't have any such relationships
now.
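
As a hedged illustration with today's least-width types: the standard
guarantees each int_leastN_t holds at least N bits, but states no
size ordering between distinct least-width types, so the "obvious"
relationship can only be asserted per platform.

```cpp
#include <cstdint>

// Sketch only: these hold on every common platform, but they are
// platform checks, not guarantees the standard actually makes.
static_assert(sizeof(std::int_least16_t) <= sizeof(std::int_least32_t),
              "platform check, not a standard guarantee");
static_assert(sizeof(std::int_least32_t) <= sizeof(std::int_least64_t),
              "platform check, not a standard guarantee");
```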

Oh, for the good old days when all you got was a word. :-)

Lawrence Crowl

Received on 2013-10-30 22:48:58