Date: Thu, 15 Mar 2018 02:05:40 -0400
On Wed, Mar 14, 2018 at 7:10 PM, Nick Lewycky <nlewycky_at_[hidden]> wrote:
>
> The compiler is "trusting the programmer" the programmer to never run
> undefined behaviour.
>
The language trusts the programmer in the sense that casts are not checked.
The concept of undefined behavior is erroneous except for things like
dereferencing pointers that do not point into owned memory, where there is
literally no meaningful thing that can happen.
> I don't see anything in your link that defends the interpretation of
> "trust the programmer" as meaning that the compiler should emit obvious
> assembly. Do you have a supplemental reference?
>
"Trust the programmer" must mean to make the actions of the compiled code
follow the actions of the written code, or act as if it does. Anything
else is
literally not trusting the programmer.
> There are a large number of reasons that a compiler should not emit the
> obvious operations on ordinary 2's complement hardware, as I have wide
> freedom to choose the contents of the "...". It ranges from the very
> obvious ("x = 0;" so constant folding occurs and blocks emission of the
> negation, or 'x' is never used again so dead code elimination blocks
> emission of the negation) to the less obvious (~x is used in multiple
> calculations in different expressions) to the downright hard to reason
> about.
>
The compiler still has as-if latitude. As-if transformations maintain the
actions of the written program. Assuming undefined behavior doesn't happen
does not.
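To make the distinction concrete, here is a minimal sketch (the function
names are mine, purely for illustration):

    // As-if: folding this to "return 0;" preserves the written program's
    // behavior on every input, so the transformation is unobjectionable.
    int zero(int x) { return x - x; }

    // Assuming UB away: folding this to "return true;" does not preserve
    // the written program's behavior, because on 2's-complement hardware
    // x + 1 > x is false when x == INT_MAX. The compiler gets there only
    // by assuming that signed overflow never happens.
    bool always(int x) { return x + 1 > x; }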
> The other reason to do something other than emit the "obvious operations"
> is to detect unintended overflow. For one large industry example, Android
> security is deploying unsigned integer overflow checking:
> https://android-developers.googleblog.com/2016/05/hardening-media-stack.html
> Instead of overflow, their compiler emits an overflow check and trap
> instruction.
>
That's fine. Ada has generated exceptions on overflow and other subtype
constraint violations since its inception. It simply means that the
underlying computer isn't a 2's-complement arithmetic machine, and programs
written for 2's-complement arithmetic won't work there. C and C++ programs
do not have to be portable to be valid. It's going to be up to the users of
the non-2's-complement platform to decide if it fits their needs.
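To be concrete, such a trapping add can be sketched with the
__builtin_add_overflow intrinsic that GCC and Clang provide (checked_add
is my name, for illustration):

    #include <cstdlib>

    unsigned checked_add(unsigned a, unsigned b) {
        unsigned r;
        if (__builtin_add_overflow(a, b, &r))
            std::abort();   // trap instead of wrapping around
        return r;           // otherwise the ordinary modular sum
    }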
> What semantics does "issue the underlying machine instructions and return
> whatever result they provide" have?
>
The compiler has a mapping from the abstract machine to its target computer,
whereby it represents program objects as machine objects. The target
computer has its own semantics for arithmetic operations. By far the most
common semantics for machine integer arithmetic is fixed-width
2's-complement, and when that's the case, that's what the quoted phrase
means: do the arithmetic in 2's-complement and return the result.
That is what the standard explicitly requires everywhere for atomic ints,
so I don't see why you would find this surprising or difficult.
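Both requirements are visible side by side in a small sketch (demo is my
name for it):

    #include <atomic>
    #include <climits>

    void demo() {
        std::atomic<int> a{INT_MAX};
        a.fetch_add(1);   // well-defined: wraps to INT_MIN, because the
                          // standard requires 2's-complement behavior here
        int b = INT_MAX;
        b += 1;           // undefined behavior: the very same arithmetic,
                          // performed non-atomically
    }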
> What about template non-type arguments? Constant expression evaluation?
> Used in the size of an array?
>
What is difficult to understand? For those things, the compiler's target
platform is an artificial target platform embedded within the compiler
itself. For ease of programmer understanding, that platform should behave
in the same way as the actual target platform.
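A sketch of where that embedded platform already shows itself, assuming a
typical 32-bit int:

    #include <climits>

    // Evaluated on the compiler's internal model of the target; current
    // compilers must reject it because the constant expression overflows:
    // constexpr int bad = INT_MAX + 1;   // ill-formed

    constexpr int n = 2 * 4;   // evaluated at compile time
    int table[n];              // the array size comes from that evaluation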
> What if the compiler uses "whatever the machine does" when determining the
> size of an array as part of compilation, then we copy the resulting program
> to a different computer that claims to have the same ISA, but where
> "whatever that machine does" happens to be different. (This situation would
> be surprising for integer arithmetic, but did occur for x86 floating point
> arithmetic.) Are you okay with the compiler evaluating one size for an
> array at compile time, but calculating a different size for the array at
> run time?
>
As I said, it would be extremely confusing for the compiler to do that, so
it shouldn't. But C and C++ already have the maximum possible stupidity of
allowing floating-point expressions to be calculated at different
precisions, so that it is literally possible for the expression
a + b == a + b to be false when the operands are normal floating-point
values. And everyone seems to accept that with equanimity. Optimization,
naturally.
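A sketch of the failure mode (whether it reproduces depends on compiler,
flags, and target; it was classically visible on 32-bit x86 using the x87
unit):

    #include <cstdio>

    // With excess precision (FLT_EVAL_METHOD == 2), each a + b may be
    // computed in 80-bit extended precision; if one result is spilled to
    // a 64-bit double while the other is compared straight from the x87
    // register, the two sides of == can disagree.
    void compare(double a, double b) {
        if (a + b == a + b)
            std::printf("equal\n");
        else
            std::printf("not equal\n");   // possible under excess precision
    }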
> What if I tell you, for the sake of argument, that compilers today are
> already following your proposed rule: since you didn't specify which
> instructions to issue, you have no standing to complain about the
> instructions the compiler chose.
>
You don't get to tell me whether or not I have standing. I'm going to
complain about this forever, so that all the people who made the bad
decisions will feel bad about themselves, and not sleep at night because of
all the grief they've caused programmers. You also don't get to be
immunized from critics who weren't there when the decisions were made.
Learning from bad decisions of the past helps prevent bad decisions in the
future, for those willing to learn.
> Can you fix this without adding a listing of CPU instructions to the
> language standard and without fully defining it?
>
I don't need to. The C++ Standard already specifies that atomic integers
obey the rules of 2's-complement arithmetic, and it doesn't spell out what
those rules are.
>> Indirecting a pointer should mean "refer to the memory pointed to as if
>> there is an object there of the pointer type" and should be undefined
>> only if the pointer does not point to correctly aligned memory owned by
>> the (entire) program. And so on.
>>
>
> Suppose I have a function with two local variables, "int x, y;" and I
> take &x. I can use x[1] (or x[-1]) to make changes to 'y'?
>
Yes, provided you know that the memory layout corresponds to that.
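In code, the claim looks like this (a sketch; it does what the comment says
only if the implementation really does place y directly after x):

    void f() {
        int x, y;
        int *p = &x;
        // Under the rules I'm describing, this stores to y provided the
        // implementation lays y out directly after x. Today's standard
        // calls it undefined behavior no matter what the layout is.
        p[1] = 42;
        (void)y;
    }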
> And similarly, I can construct a pointer to stack variables in another
> call stack frame whose address was never taken? As long as I can cast the
> right integer to a pointer?
>
Yes.
> Given these rules, when would it be valid to move automatic local
> variables into registers?
>
Always. If a variable is not in memory, then a pointer can't be made to
point to it, no matter how the pointer is manipulated. As I said, you must
know the memory layout in order to do tricks with pointers.
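For example (a minimal sketch):

    int sum_to_ten() {
        int n = 0;                      // address never taken, so under
        for (int i = 0; i < 10; ++i)    // these rules n may live entirely
            n += i;                     // in a register; no pointer trick
        return n;                       // can ever reach it
    }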
> Only when there are no opaque pointers used and no opaque functions called?
>
No, always. In fact, that's why we had the register keyword. That told the
compiler that the address of such variables could never be taken.
Received on 2018-03-15 07:06:03