Date: Wed, 14 Mar 2018 11:23:37 -0700
On 3/14/18, Hyman Rosen <hyman.rosen_at_[hidden]> wrote:
> On Mar 14, 2018, Lawrence Crowl <Lawrence_at_[hidden]> wrote:
>>> Optimizationism is the bane of C and C++.
>> It is also a major reason for the use of the language. Yin and Yang.
>
> I believe you're wrong. The major reason for the use of the languages
> is that (in the old days) their mapping of programming language to
> underlying machine was transparent, and looking at the code could tell
> you relatively easily what the compiled code would look like.
That was not the reason. There were many languages that had the same
level of transparency, Fortran and Pascal to name two prominent ones.
There were two reasons that C became popular.
First, it was the implementation language of a free operating system
called Unix. If you wanted to work on Unix, you needed to use C.
Second, and most important, C's pointer and array model enabled
programmers to write in source code the kinds of optimizations that
compilers of the day were not doing. In particular, compilers were
not doing strength reduction in loops. In practice, a C program
could run in half the time of a Pascal program. Consequently,
programmers needed to drop down to assembler less often, avoiding
its significant development cost. That optimization ability is exactly
why C became popular.
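To make that concrete, here is a sketch (my illustration, a
hypothetical strided sum, not code from the era). The first loop is
what a compiler of the day would translate literally, multiply and
all; the second is the strength-reduced form, which C's pointer model
let programmers write directly:

    /* Assumes a points to at least n * stride ints; stride > 0. */
    long sum_indexed(const int *a, int n, int stride)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += a[i * stride];   /* a multiply on every iteration */
        return sum;
    }

    /* The same loop with the strength reduction written by hand:
       the per-iteration multiply becomes a pointer increment. */
    long sum_pointer(const int *a, int n, int stride)
    {
        long sum = 0;
        const int *p = a;
        for (int i = 0; i < n; i++, p += stride)
            sum += *p;
        return sum;
    }

Today any optimizing compiler performs this transformation itself; at
the time, being able to write the second form by hand was a real
advantage.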
> The languages did not contain hidden surprises whereby a seemingly
> simple line of code could expand into huge or slow behavior.
In this regard, programming languages of the time were a mixed bag.
Some languages, like Cobol, could have those traps. Many others did
not.
> Then the optimizationists attacked the languages by pretending that
> speed was the same thing as transparency, and destroyed transparency
> by insisting that common behavior should be treated as undefined and
> then that compilers could pretend that undefined behavior never
> happened.
You have the history entirely wrong.
The language has undefined behavior because it was targeted to systems
in which instruction sets did radically different things. Signed
integer overflow and right shift are undefined behavior because some
machines were ones' complement, some were signed magnitude, and of
course, some were two's complement. The effect of a logical operation
on the sign bit varied wildly between machines. For instance, a shift
may or may not affect the sign bit. There are other examples of such
differences.
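For concreteness, a small sketch (my illustration) of why no single
behavior could be written into the language:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = -2;

        /* Right shift of a negative value: hardware with an
           arithmetic shift keeps the sign (-2 >> 1 == -1); hardware
           with only a logical shift clears it, yielding a large
           positive value; ones'-complement and signed-magnitude
           machines start from a different bit pattern for -2
           altogether. */
        printf("-2 >> 1 == %d on this machine\n", x >> 1);

        /* Signed overflow: two's-complement hardware wraps
           INT_MAX + 1 to INT_MIN, ones'-complement wraps to
           -INT_MAX, and some machines trap. */
        printf("INT_MAX == %d; INT_MAX + 1 is the undefined case\n",
               INT_MAX);
        return 0;
    }

Mandating any one of those results would have penalized every machine
that worked the other way.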
C has undefined behavior because there was no common behavior and
undefined behavior was the only way to get portable and performant
programs. The compilers of the day were not doing anything tricky.
It was hard enough to fit a relatively simple compiler into the
machine.
On occasion, new features have undefined behavior. Most of the time,
that behavior is undefined because recognizing the situation is
computationally infeasible. In some cases, the problem is equivalent
to the halting problem. In the absence of undefined behavior in the
feature, you would not have the feature.
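C99's restrict qualifier is a concrete instance (my choice of
example). Whether two pointers alias can depend on every call site in
the entire program, which is undecidable in general, so violating the
no-alias promise is undefined behavior rather than a required
diagnostic:

    /* The compiler may assume dst and src do not overlap, and can
       vectorize or reorder the loop freely on that assumption. */
    void scale(double *restrict dst, const double *restrict src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 2.0 * src[i];
    }

A call such as scale(a, a, n) breaks the promise, but no compiler can
be required to catch every such call; making the violation undefined
is what makes the feature possible at all.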
> Now compilers take seemingly obvious code and obliterate the intent
> of the programmers. Then the compiler writers gleefully show off
> how wonderful their snippet of code looks now that they have
> miscompiled it.
Compiler writers are often under intense pressure to squeeze more
performance out of programs. They have made legal code faster at the
expense of illegal programs. In doing so, they did not change the
language. On the surface, that is reasonable. The major problem was
that they started performing an entirely new class of optimizations
without warning anyone.
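Two classic instances of that class (my examples, both of which
mainstream compilers are known to perform at ordinary optimization
levels):

    #include <stddef.h>

    /* A legal program never overflows a signed int, so a compiler
       may fold this entire function to 'return 1;'.  Code that
       relied on two's-complement wraparound finds its overflow
       test optimized away. */
    int never_overflows(int x)
    {
        return x + 1 > x;
    }

    /* The dereference happens first, so in any legal program p is
       non-null by the time of the test; the compiler may delete
       the check as dead code. */
    int first_byte(const char *p)
    {
        char c = *p;
        if (p == NULL)
            return -1;
        return c;
    }

The programs these transformations break were already executing
undefined behavior; the complaint is that the breakage arrived
silently.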
-- Lawrence Crowl