Date: Wed, 14 Mar 2018 23:10:10 +0000
On Wed, Mar 14, 2018 at 2:30 PM Hyman Rosen <hyman.rosen_at_[hidden]> wrote:
> On Wed, Mar 14, 2018 at 3:44 PM, Lawrence Crowl <Lawrence_at_[hidden]>
> wrote:
>>
>> The phrase "trust the programmer" was directed to language designers
>> and compiler writers to not be nit-picking or straight-jacketing the
>> program. The long form of the phrase is "trust the programmer to
>> write correct programs". If the programmer writes incorrect programs,
>> it's on the programmer to deal with the consequences.
>>
>
> No, that's wrong: <http://beza1e1.tuxen.de/articles/spirit_of_c.html>
> It really is just "trust the programmer".
>
The compiler is "trusting the programmer" the programmer to never run
undefined behaviour. The language trusts the programmer in the sense that
casts are not checked.
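For example, a minimal sketch of an unchecked cast (the names are mine):

    void demo(void) {
        double d = 1.0;
        int *p = (int *)&d;  /* the cast compiles; nothing verifies an int lives here */
        (void)p;             /* whether dereferencing it makes sense is on the programmer */
    }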
I don't see anything in your link that defends the interpretation of "trust
the programmer" as meaning that the compiler should emit the obvious
assembly. Do you have a supplemental reference?
> As far as incorrect programs, such programs were deliberately designated
> as incorrect so that the optimizationists could break them. There is no
> reason why a program that says int x; ... x = ~x + 1; should do
> anything besides the obvious operations on ordinary 2's-complement
> hardware, even when x is INT_MIN.
>
There are a large number of reasons that a compiler should not emit the
obvious operations on ordinary 2's-complement hardware, as I have wide
freedom to choose the contents of the "...". They range from the very
obvious ("x = 0;", so constant folding blocks emission of the negation; or
'x' is never used again, so dead code elimination blocks it) to the less
obvious (~x is used in multiple calculations in different expressions) to
the downright hard to reason about.
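A minimal sketch of the obvious cases (the function names are mine):

    int folded(void) {
        int x;
        x = 0;       /* the "..." */
        x = ~x + 1;  /* ~0 + 1 == 0: constant folding leaves no negation to emit */
        return x;
    }

    int dead(int x) {
        x = ~x + 1;  /* x is never read again: dead code elimination removes this */
        return 0;
    }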
The other reason to do something other than emit the "obvious operations"
is to detect unintended overflow. For one large industry example, Android
security is deploying unsigned integer overflow checking:
https://android-developers.googleblog.com/2016/05/hardening-media-stack.html
Instead of letting an overflow produce a wrapped value, their compiler
emits an overflow check and a trap instruction.
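Expressed in source form, the check is roughly this (a sketch of what
-fsanitize=unsigned-integer-overflow with a trap-on-error mode arranges;
the real sanitizer rewrites the compiler's IR, not the source):

    unsigned add(unsigned a, unsigned b) {
        unsigned r;
        if (__builtin_add_overflow(a, b, &r))  /* GCC/Clang overflow builtin */
            __builtin_trap();                  /* trap instead of returning a wrapped value */
        return r;
    }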
> It is discouraging, looking at the linked C 2003 Rationale, how most of the
> principles listed in the introduction are blithely violated by the C (and
> C++)
> standards, with absolutely no sense of irony or self-awareness:
>
> - Existing code is important, existing implementations are not.
> - C code can be non-portable.
> - Avoid “quiet changes.”
> - A standard is a treaty between implementor and programmer.
> - Trust the programmer.
> - Don’t prevent the programmer from doing what needs to be done.
>
>> Undefined behavior has meant "and your anchovy pizza will arrive tomorrow"
>> for decades. Partly the meaning is there because once a pointer goes
>> haywire, anything can happen.
>
> Then the error lies in characterizing certain behavior as undefined, when
> it should instead be unspecified or implementation-dependent. Signed
> integer arithmetic should mean "issue the underlying machine instructions
> and return whatever result they provide." Even if some platforms trap on
> overflow, that does not mean other platforms should have license to assume
> that overflow never happens in valid programs.
>
What semantics does "issue the underlying machine instructions and return
whatever result they provide" have? What about template non-type arguments?
Constant expression evaluation? A result used as the size of an array?
What if the compiler uses "whatever the machine does" when determining the
size of an array as part of compilation, and we then copy the resulting
program to a different computer that claims to have the same ISA, but where
"whatever the machine does" happens to be different? (This situation would
be surprising for integer arithmetic, but it did occur for x86 floating-point
arithmetic.) Are you okay with the compiler evaluating one size for an
array at compile time, but calculating a different size for the same array
at run time?
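A sketch of the dilemma, deliberately not valid ISO C today (which is the
point):

    #include <limits.h>
    /* Under "whatever the machine does": does INT_MAX + 1 wrap to INT_MIN
       or saturate at INT_MAX?  The array's size hinges on the answer. */
    enum { WRAPS = INT_MAX + 1 < 0 };  /* rejected today: overflow in a constant expression */
    char buf[WRAPS ? 8 : 4];           /* and must the run-time answer agree? */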
What if I tell you, for the sake of argument, that compilers today are
already following your proposed rule: since you didn't specify which
instructions to issue, you have no standing to complain about the
instructions the compiler chose. Can you fix this without adding a listing
of CPU instructions to the language standard and without fully defining the
behavior?
> Indirecting a pointer should mean "refer to the memory pointed to as if
> there is an object there of the pointer type" and should be undefined
> only if the pointer does not point to correctly aligned memory owned by
> the (entire) program. And so on.
>
Suppose I have a function with two local variables, "int x, y;", and I take
&x. Can I then use (&x)[1] (or (&x)[-1]) to make changes to 'y'?
And similarly, can I construct a pointer to a stack variable in another
call frame whose address was never taken, as long as I cast the right
integer to a pointer?
Given these rules, when would it be valid to move automatic local variables
into registers? Only when there are no opaque pointers used and no opaque
functions called?
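For concreteness, a sketch of the consequence (hypothetical code):

    void f(void) {
        int x, y;    /* adjacent locals, or so the programmer hopes */
        int *p = &x;
        p[1] = 42;   /* under the proposed rules, may this store reach y? */
        /* If yes, no local can safely live in a register across any store
           through a pointer the compiler cannot see through. */
    }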
Nick
> *No behavior should ever be designated undefined in order to allow
> optimizers to generate code by assuming that the behavior never occurs.*
>
>> And BTW, the development and implementation of Ada cost way more than
>> most contemporary organizations could have spent. In particular, a
>> small research team at Bell Labs did not have that budget.
>>
>
> What does that have to do with anything? Jean Ichbiah's design of Ada
> was a work of genius, and it was done on paper. The free GNAT Ada
> compiler, part of GCC, was developed along with the Ada95 standard
> revision.
>
>> That does not, however, change the fact that the code has always been
>> wrong.
>
> You may believe that, but I believe you are wrong.
> _______________________________________________
> ub mailing list
> ub_at_[hidden]
> http://www.open-std.org/mailman/listinfo/ub
>
Received on 2018-03-15 00:10:25