Date: Sat, 18 Sep 2021 23:19:37 +0200
On 18/09/2021 22.36, David Olsen via Liaison wrote:
> Matthias Kretz pointed out that the proposed rules for C and C++ have different arithmetic conversion rules:
>>> float + _Float32 -> _Float32
>>> float + std::float32_t -> float.
>
> As Jens said, "Not good." It's also unfortunate that I didn't notice this difference sooner. But before I start to panic about this, I want to understand what the impact of this difference is.
>
> When float and _Float32 have the same representation and sets of values, can well-formed C code detect the difference between "float + _Float32 -> _Float32" and "float + _Float32 -> float" without using a _Generic expression? Are there situations where the result type of "float + _Float32" affects the correctness of the program or the behavior of a correct program? It would be very easy to detect this in C++. I can't think of how to detect the difference in C without resorting to _Generic, though I don't know C as well.
First, what's wrong with _Generic?
Second, it would be surprising if the implicit conversion rules for
these types differed between C and C++, even if it is just a
teaching problem ("I learned the conversions in C; now I upgrade to
C++ and the conversions are different, and because it's C++, it even
makes a difference.")
> If the user writes code that is valid in both C and C++ (assuming that the user has dealt with the fact that _Float32 and std::float32_t are different names for essentially the same type in the two languages), will the fact that "float + _Float32/std::float32_t" have different result types in the two languages cause any differences in behavior?
Side note: In order to be interoperable, we have to solve the naming problem.
It's seriously user-unfriendly to require #ifdef to choose between the
_FloatN spelling and the std::floatN spelling.
(Note: _Float32_t has different semantics vs. _Float32; we should thus
reconsider _t on the std:: names.)
Jens
Received on 2021-09-18 16:19:49