On Thu, 9 Jun 2022 at 18:29, Jason McKesson via Std-Proposals <std-proposals@lists.isocpp.org> wrote:
On Thu, Jun 9, 2022 at 11:45 AM Hyman Rosen via Std-Proposals
<std-proposals@lists.isocpp.org> wrote:
>
> The problem started with code like
>
>     volatile char v[4] = {0};
>     ++v[2];
>
> On architectures that do not provide byte-level memory access, the abstract machine cannot be followed exactly. Reading or writing one byte of that array could require reading or writing all four. Rather than trying to come up with wording to cover such situations, the C standard made volatile access implementation-defined, with the understanding that compilers should implement volatile semantics as best they can in weird situations. But once the optimizationists captured the standardization process, they took that implementation-defined behavior as permission to disregard volatile semantics altogether, even on normal platforms.
>
> Here is an interesting case.
>
>     void foo(int *p) {
>         if (!p) { printf("null pointer\n"); }
>         volatile bool b = false;
>         if (b) { abort(); }
>         *p = 0;
>     }
>
> It's implementing a poor man's contract check that the argument is not null, trying to print a message if it is (but keep going). With correctly implemented volatile semantics, the message will print, because printing is a side effect that must happen before the volatile access, which is also a side effect. But the Microsoft compiler elides the volatile variable and its test altogether, then sees that if the initial test were true, undefined behavior would result, and eliminates that test and the print.

OK, I'm a normal C++ programmer and I read over that code. My first
thought would be "why is there a pointless variable here?" The idea
that the presence of `if(b)` should change *anything* about the
execution of this code is absurd. This only makes sense to those who
already know way too much about the language.

It would make far more sense if you could just stick some kind of
syntax that obviously says, "Follow the abstract machine exactly"
there. Like:

>     void foo(int *p) {
>         [[exact_code]] {
>           if (!p) { printf("null pointer\n"); }
>           *p = 0;
>         }
>     }

That is readable. It makes it much clearer not just what is happening
but why it is there. And the scope applied to the attribute tells you
how much of the function it covers.

I am not sure what it means to "follow the abstract machine exactly". Could you explain in a manner that could be followed by a compiler?

As for scope, in the benchmarking case we really aren't interested in covering more than a single read or write of an object (which may involve more than one read or write of scalars). Quite possibly this could be accomplished by std::<bikeshed>_load and std::<bikeshed>_store functions acting as an optimizer barrier.
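For instance (only a sketch of one possible library shape: bikeshed_load and bikeshed_store are placeholder names for the <bikeshed> functions, it is shown for scalar types only, and how much latitude compilers actually take with volatile accesses to objects that are not themselves declared volatile is of course part of the very problem under discussion):

    #include <type_traits>

    // Placeholder names for std::<bikeshed>_load / std::<bikeshed>_store.
    // Each access is performed through a volatile glvalue so that the
    // compiler has to treat it as observable behaviour.
    template <class T>
    T bikeshed_load(const T& obj) noexcept {
        static_assert(std::is_scalar_v<T>);
        const volatile T& v = obj;  // view the object through a volatile glvalue
        return v;                   // this read is a volatile access
    }

    template <class T>
    void bikeshed_store(T& obj, T value) noexcept {
        static_assert(std::is_scalar_v<T>);
        volatile T& v = obj;        // likewise for the write
        v = value;
    }

In a benchmarking loop one would then write something like bikeshed_store(sink, computation_under_test()) to keep the result from being thrown away.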

Or semantics could be given to volatile automatic variables, if there is indeed a consensus among the people who use them as to what those semantics are. I think it would be sufficient to specify that each volatile-qualified automatic object (and function parameter) has a single storage location that is consistent throughout its lifetime, distinct from the storage location of every other volatile-qualified object concurrently within its lifetime and from that of every object of static storage duration. It would then follow that reads and writes of that volatile-qualified object must result in reads and writes of that storage location.
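If that is roughly the right specification, then in Hyman's example the read of b would have to be a real read of b's storage location:

    void foo(int *p) {
        if (!p) { printf("null pointer\n"); }
        volatile bool b = false;   // b gets its own storage location for its lifetime
        if (b) { abort(); }        // this read must be a read of that location
        *p = 0;
    }

the idea being that the read of b is then a genuine side effect sequenced after the printf, which is what the original example relied on.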

I'm not arguing against the desire for this. But spelling it `volatile`
is clearly an artifact, a thing people do because it work(ed), not
because it makes any kind of obvious sense that it does what it did.

Important aspects of the language should not be buried under layers of
obfuscation.

Yes, `volatile` automatic variables are a bit obscure. But we should consider backwards compatibility, and compatibility with C.