Date: Thu, 24 Oct 2013 09:02:38 +0000
| > In that case, int32_t et.al. require a two's complement
| > representation. The programmers that care should use those
| > typedefs.
|
| But they can't because these typedefs are optional.
The "optional" here is to be understood as "conditionally supported":
C++ provides necessary abstraction tools to assert at translation time
(with diagnostics in case of failure) the presence of these typedefs,
e.g. using static_assert. The assertions will fail only on those platforms
that do not support two's complement no-padding bit binary
representation of sized integer types. But then, programmers who
insist on two's complement representation do not care about those
platforms. So, no harm there. Once the programmer has
(static_)asserted support for intN_t, she is free to use that knowledge
in the remaining of the program, free of any worry about any gotchas
from integer arithmetic on unfamiliar architectures she would
never see in her programmer life.
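For instance, a minimal sketch of such a translation-time check (an
illustration only; it relies on the fact that <cstdint> defines
INT32_MAX exactly when int32_t is provided):

    #include <cstdint>   // exact-width typedefs, when the platform has them
    #include <climits>   // CHAR_BIT

    // If int32_t is absent, <cstdint> does not define INT32_MAX, and
    // translation stops here with a diagnostic.
    #ifndef INT32_MAX
    #  error "this program requires the exact-width type int32_t"
    #endif

    // Once the typedef exists, exactly 32 bits, two's complement, and no
    // padding bits are already guaranteed; asserting the width anyway
    // documents the assumption at translation time.
    static_assert(sizeof(std::int32_t) * CHAR_BIT == 32,
                  "int32_t must be exactly 32 bits");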
A point of concern, though, is that these intN_t are typedefs, so
overloading on those types is unwise. Therefore, the thing to do is
either not to use those types in interfaces, or to define wrapper
classes -- these days, we can expect near-overhead-free wrappers
with good ABIs that support modern C++.
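As a rough sketch of what such a wrapper might look like (Int32 is just
a hypothetical name chosen here for illustration):

    #include <cstdint>

    // A distinct class type holding an int32_t; unlike the typedef itself,
    // it participates in overload resolution and name mangling as its own
    // type.  It is trivially copyable, so a good ABI can pass it in a
    // register just like the underlying integer.
    class Int32 {
    public:
        constexpr explicit Int32(std::int32_t v) : val(v) { }
        constexpr std::int32_t get() const { return val; }

        friend constexpr Int32 operator+(Int32 a, Int32 b) {
            return Int32(a.val + b.val);
        }
    private:
        std::int32_t val;
    };

    // Genuinely distinct overloads -- which they would not be if int32_t
    // happened to be a typedef for int on the platform at hand:
    void process(Int32);
    void process(int);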
Requiring these intN_t to be builtin types wouldn't necessarily be
a simplification of the standard's specification -- in fact, it is
most likely the opposite, for one would need to specify
conversion details, etc. [ Xavier, I understand that is not what
you were arguing for; I am just noting this for completeness. ]
| > Net result, no change to the standard.
|
| And no improvements for programmers who want to write portable code.
Lawrence, I would like to understand the kind of optimizations
you have in mind that we would lose if signed integer arithmetic
overflow were 'unspecified' as opposed to 'undefined behavior'.
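For concreteness, the example usually cited in such discussions (shown
here as an illustration, not as a quote from the thread) is of the form:

    bool always_greater(int x) {
        // With signed overflow as undefined behavior, the compiler may
        // assume the overflow never happens and fold this to 'return true'.
        // If overflow were merely unspecified, x + 1 could wrap to INT_MIN
        // and the comparison could be false, blocking that folding.
        return x + 1 > x;
    }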
-- Gaby