Date: Tue, 15 Oct 2019 21:56:18 +0200
On Tue, Oct 15, 2019 at 7:33 PM Thiago Macieira via Std-Proposals
<std-proposals_at_[hidden]> wrote:
>
> The standard does give minimum maximum values for short, int and long, so you
> can be assured they are at least that big. You also know the relationship
> between them, so you know you can promote without loss if you go up the
> ladder.
>
> Also, don't forget uintptr_t, size_t and ptrdiff_t. You *really* shouldn't use
> a specific bit count when dealing with object sizes and pointers.
>
> That means you don't need uint32_t for counting the elements in an array that
> you declared in your function. int, size_t or ptrdiff_t would be just fine.
>
> I firmly believe you should use types that are good enough for your needs, not
> coerce them to specific sizes. Back in the 1990s, when I started coding,
> float, double and long double had the same performance due to the i387 FPU
> always operating in 80-bit extended precision. So I wrote all my code with
> long doubles. But after 2004, long doubles are MUCH slower than floats and
> doubles. Even today, some operations (like division) are faster on floats than
> on doubles.
While I agree on the general principle ("use types that are good
enough for your needs"), we don't really have that in C-like languages
where types are concrete with a defined size (rather than being an
abstract list of orthogonal requirements that they must fulfill).
In practice, we end up just choosing a general int/float type for code
that doesn't care. As soon as precise semantics (as in I/O) or
performance (not just raw speed on a particular chip, but also
size and cache effects) matter, we choose explicitly sized types.
Therefore, having only explicitly sized types makes sense: it is a
simpler system which covers all use cases anyway.
Cheers,
Miguel
Received on 2019-10-15 14:58:45