Re: [wg14/wg21 liaison] Designated initializers in C++ and C

From: Niall Douglas <s_sourceforge_at_[hidden]>
Date: Thu, 13 Aug 2020 20:40:08 +0100
On 13/08/2020 16:09, Rajan Bhakta via Liaison wrote:

> Niall, we disagree on a lot of things, but in this I agree with you!

It pleases me that we have actually found something substantial to agree
upon. As you mention, very rare in our case!

> I see two others:

Yeah, I really have to disagree on these ...

> 2) What You See Is What You Get (macros notwithstanding). C code can be
> *read* easily and understood almost 1-1 to your machine language,
> whatever that happens to be. (Ex. No auto, no namespaces, no templates
> generating code you don't see)

I know we've disagreed on this in the past, but to restate my position:
in my opinion there is a yawning and ever-growing gulf between C source
code and the implementation. As a rough schematic:

1. The C compiler rewrites your C source code to your compiler's choice
of interpretation of the C abstract machine for some target architecture.

2. The CPU rewrites the assembly opcodes into your CPU vendor's choice
of interpretation of that architecture's abstract machine for some
target microarchitecture.

3. The target silicon is increasingly a stochastic stream computer which
pseudo-emulates a PDP-11 with a floating point extension. In this
respect, modern general-purpose CPUs are becoming ever closer to GPUs,
which are also increasingly emulating a PDP-11 so that software
targeting C runs well on them.

Personally speaking, I don't find that C++ adds much to that gulf in
aggregate, at least if you don't choose to write obfuscated C++. Modern
software and hardware are just layers upon layers of emulation. Same as
your IBM mainframes, Rajan!

Speaking for myself, I think that when Moore's Law eventually goes
completely linear, there will finally be an incentive to design a
programming language that maps to the bare metal of a stream processor.
This will unlock perhaps thirty years of further exponential improvement
in computer efficiency as software gets increasingly rewritten into the
new abstraction. But I suspect all that will belong to a younger
generation than any of us.

> - Makes it perfect for the embedded world where there are no surprises
> like exceptions causing stack unwinding to callers who had no idea there
> could be a return that way.

C++, especially as the safety-critical folk gain more traction, will
continue to eat C's lunch on this. And that's a good thing: a subset of
C++ which is fully compatible with standard C++, but which avoids the
footguns you don't get in C, is an explicit goal for MISRA and many
other folk.

From the other end of things, Rust probably will eat some of C's lunch
here too. I don't see many *major* projects genuinely considering a
transition from modern C++ into Rust, but I have seen some genuinely
considering a transition from C or legacy C++ into Rust.

> 3) Backwards compatibility (Existing C code will continue to work 10,
> 20, 100 years from now).
> - This gives rise to a large body of C code that keeps it in the top 3
> on TIOBE for years. It lets people invest in C and not be worried.

I know you'll disagree, but I think you overestimate this. I know of
plenty of C code which once worked but does not on modern systems,
whether due to compiler changes or hardware changes. Correctly written
code shouldn't do this, of course, but there is also plenty of C code
which was very efficient on the hardware of the 1990s and is now
hideously inefficient on current hardware. I personally have rewritten
bits of code last touched in 1992 and achieved 500x performance gains
with very little work (tip: don't chase long chains of pointer
indirection!).

In my opinion, all software is like a broom: it wears out over time, and
requires constant renewal and replacement to remain fit for use. This is
due to the ever shifting foundations upon which everything is built.


Received on 2020-08-13 14:43:35