Re: Reserve good keywords for floating point types

From: Andrey Semashev <andrey.semashev_at_[hidden]>
Date: Wed, 16 Oct 2019 10:17:28 +0300
On 2019-10-16 01:16, Tony V E wrote:
> What I've found over and over again in the past is that code that used
> fixed-sizes was later upgraded to "default" sizes, or upgraded to the
> next size up.
> ie code that used int16 later used int32 or just int. (Unless it was
> writing to a file format, etc)
> Often people think "I know this is N bits, I should 'lock it in' to be N
> bits forever, so that behaviour doesn't change". But circumstances
> change. You run on a bigger machine, it uses bigger numbers (ie more
> elements in a vector). You run on a smaller machine, you tackle smaller
> problems with smaller numbers. But the original idea was that int would
> scale (someone broke that when it didn't move to 64bits unfortunately).

I don't think int could ever become 64-bit because in that case we
wouldn't have a 32-bit or 16-bit integer type. long could become 64-bit,
and it actually did in most environments.

> I can see cases for int_least_N, saying "it won't work otherwise". But
> lots of code can generically scale to machine size.

(u)int_leastN_t types are good for guaranteeing a minimum size, but they
don't offer much compared to (u)intN_t. Given that (u)intN_t are
universally available in practice, people don't see the point in using
the least types.
> Maybe times have changed. Maybe we will never upgrade to 128bit
> pointers/ints/floats. But I went through both the 16 -> 32 and the 32
> -> 64 transitions, and I had to fix much much more fixed-sized code than
> default-sized code.
> Although maybe that experience is no longer valid, and is just tainting
> my judgement.

When your code needs to scale, you obviously don't want to use
fixed-size types like uint32_t. You want to use size_t, ptrdiff_t and
uintptr_t, as Thiago suggested. However, some things don't scale and
should be the same on every platform you run on. For example, if you set
a limit for URIs of no more than 64 KiB, you can store their sizes as
uint16_t. That won't change whether you're running on an embedded system
or a mainframe. But in no case will you store those sizes as unsigned
int, because it either doesn't guarantee the capacity or doesn't offer a
size advantage compared to size_t.

When/if the transition to 128-bit pointers happens, I suspect there will
be much stronger resistance to promoting existing integer sizes to 128
bits. We already saw this when we switched to 64 bits - some software
preferred to stay 32-bit despite the advantages of 64-bit architectures,
the x32 ABI appeared, etc. In my code, I quite often purposely use
uint32_t for sizes because I know I don't want these objects and
containers larger than 4 GiB. If size_t ever becomes 128-bit, I bet
quite a lot of code will want to switch to some other integer type just
to avoid wasting memory. We'll need a set of uint_leastN_mostM_t types.

Going back to floating point types, I suspect changing the sizes of
float and double would be even more detrimental, since besides the space
implications this would also change the results of computations. In some
places that won't matter, but I'm sure there are places where it will.
Basically, most if not all code written today believes that float is
binary32, and will probably break if it becomes binary16 or bfloat16. It
may not break if it becomes binary64, but it will likely suffer in
memory consumption and performance. (On the performance side, note that
even if the CPU implements operations on 32 and 64-bit FP numbers in the
same number of cycles, vector registers fit twice as many 32-bit
elements, so vectorized code will execute faster with 32-bit types.)

Received on 2019-10-16 02:19:46