On Fri, Mar 16, 2018 at 2:09 PM, Lawrence Crowl <Lawrence@crowl.org> wrote:
> There are machines that do not define the behavior of some instructions
> with some arguments and/or in some contexts.  Are those machines

It depends?  (There was the notorious Intel Pentium FDIV bug, for example.)
But generally speaking, no.  It's the compiler's job to model the abstract
machine on the target computer, and that would generally mean avoiding
such constructs.  (But see below.)

> The meaning of "x trusts y" is "x relies on the (promised) behavior of
> y".  I see no trust in your interpretation because the compiler does
> not rely on any behavior on the part of the programmer.  Your view is
> blind obedience.

Yes.  I'm glad you understand.  Programs are written later than the compiler,
and therefore programmers have a better model of what the compiler will do
than compilers will have about what programmers will do.  (And programmers
are intelligent people while compilers are stupid machines.)

Therefore, when a compiler reaches a point where it does not understand
what a program is doing, it must trust that the programmer correctly
expressed a computational intention, and produce instructions that (with
blind obedience) model what the program contains.

> In contrast, in "the compiler trusts the programmer to not write wrong
> programs", there is demonstrable trust on the part of the compiler.

That is the exact opposite of trust.

> The compiler still has as-if latitude.  As-if transformations maintain
> the actions of the written program.

> As-if only has meaning with respect to an abstract machine.  The issue
> here is what you think the abstract machine should be.

That was clear from K&R; when the abstract machine maps cleanly onto
CPU instruction sets, perform the obvious mapping and then produce the
results from that mapping.

>> Assuming undefined behavior doesn't happen does not.
> That assumption is the trust.

That assumption is the literal opposite of trust.  The compiler says to the
programmer that despite the fact that the programmer has written a section
of code, the compiler will refuse to compile it, often silently.

> In this phrase you are trying to impose a change to the C/C++ abstract
> machine.  The standard defines the abstract machine; you do not.  If you
> want to change that machine, you need to change the standard.

Choosing not to define behavior is not defining behavior.

Compilers choosing to not compile any code path that provably has undefined
behavior is stupid and pernicious and driven by optimizationism.  When the
compiler detects that some piece of code must execute undefined behavior,
it should map that code onto the actual machine in the way the code is written.

That does not involve changing the standard, since the standard already says
that it has no requirements for this code.  It's the compilers that need to change.
(Of course, every time more undefined behavior is added to the standard, it
only encourages the misguided compiler writers to do more of the same, so that
should be avoided.)

> Atomic ints are a very special case because programmers cannot anticipate
> or avoid undefined behavior.  The operations on atomic ints must be
> fully closed.  Those constraints do not apply elsewhere.

But they should.  They very, very should.

> But C and C++ already have the maximum possible stupidity of allowing
> floating-point expressions to be calculated at different precisions, so
> that it is literally possible for the expression a + b == a + b to be
> false when the operands are normal floating-point values.  And everyone
> seems to accept that with equanimity.  Optimization, naturally.

> No, not optimization.  Some floating-point hardware had operations that
> were not commutative.  IIRC, one was IBM 360 floating point.  Machines
> still run this instruction set.  This license in the standard merely
> reflects the underlying hardware space.

Why are you mentioning commutativity?  I didn't say a + b == b + a,
I said a + b == a + b.  And you're wrong.  The exact reason for this
is that the Intel 8087 floating-point processor had extended-precision
registers, and compilers, for the sake of optimization (of course!) chose
to keep intermediate results in extended precision rather than truncate
them to the lower precision.  But when they needed to spill those registers
to memory, they would spill them in reduced-precision mode.  So even
when two parts of an expression are identical, it can be that one is in a
register and one is in memory, and they therefore have different values.

SSE instructions do away with the extended precision and this problem.
See <https://gcc.gnu.org/onlinedocs/gcc-4.5.3/gcc/i386-and-x86_002d64-Options.html#index-mcpu-1294>.

> This statement seems inconsistent.  Since I can make a pointer to any
> variable, then it must be in memory.

No.  You cannot make a pointer to any variable, you can make a pointer
to any variable that's in memory.  And to make that pointer, you need to
know or figure out where that variable is.  And it's still legitimate for the
underlying machine to deal with pointers in such a way that pointers
can only move within a particular region of storage and not between
regions, although that's also a dying artifact of Intel architecture.

No, that's not why we had the register keyword.  We had the register
keyword because some programmers were not happy with the simple
mapping of the abstract machine to the hardware.  They wanted a means
to override the mapping.  The 'cannot take the address' part is a
consequence, not a cause.

Just as inline expresses the intent that a function or variable should be
expanded inline, but in the language means only that the definition must
appear in every compilation unit that uses it, so too does register express
the intent that a variable should be kept in a register (or at least that it will be
very frequently accessed) but in the language means only that its address
may not be taken.