
Re: [std-proposals] Standardising 0xdeadbeef for pointers

From: Jason McKesson <jmckesson_at_[hidden]>
Date: Sat, 26 Jul 2025 15:31:51 -0400
On Sat, Jul 26, 2025 at 2:55 PM Frederick Virchanza Gotham via
Std-Proposals <std-proposals_at_[hidden]> wrote:
>
> On Sat, Jul 26, 2025 at 2:38 PM Tiago Freire wrote:
> >
> >
> > This was your stated use case:
> >
> > ~Governor(void) noexcept
> > {
> >     try
> >     {
> >         delete this->p;
> >     }
> >     catch(...){}
> >
> >     this->p = (void*)0xdeadbeef;
> > }
> >
> > > If another part of the code goes to delete the object a second time, it will segfault on 0xdeadbeef
> >
> > No!
> > That! is a destructor! "p" is a member of this object.
>
>
> We are in agreement that "p" is a member variable inside the class 'Governor'.
>
>
> > By the time it returns the objects lifetime is over.
>
>
> Agreed.
>
>
> > Whatever expectations you now have for that memory is now UB.
>
>
> Mostly agreed. (But maybe I created the object with 'placement new'
> and so the char array is still valid after the object's destruction).
> But for simplicity I'll say Agreed here.
>
>
> > Accessing it in the first place is the problem, the value you will find there is irrelevant.
>
>
> Disagreed.
>
> Maybe somewhere else in the program there is a pointer to the object
> that just got destroyed.
>
>
> > If the access was valid null would already be a perfect indication that
> > the pointer doesn't point to anything, no other value required.
> >
> > This is an exercise in pure nonsense, as usual.
>
>
> Sometimes when we write a bug in C++ code, the behaviour is
> well-defined by the C++ Standard. I remember one time I wrote a
> program that worked fine on Little Endian, but entered an eternal loop
> and froze up on Big Endian. The behaviour of the code was perfectly
> well-defined, but the logic of my code was wrong.

I'm kinda curious how you can possibly write code that is
simultaneously endian-dependent *and* "well-defined". Note that,
because endianness is implementation-specific, code that computes a
value in a way that makes the result endian-dependent must be living
in the realm of "implementation-defined" or "unspecified" behavior.

Which aren't the same thing as "well-defined" behavior.
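For illustration: the canonical way to write endian-dependent code without UB is to inspect an object's representation, at which point the result is implementation-dependent rather than undefined. A minimal sketch (the helper name `first_byte` is mine, not anything from the thread):

```cpp
#include <cstdint>
#include <cstring>

// Hypothetical helper: returns the lowest-addressed byte of the object
// representation of 0x01020304. Reading the representation via memcpy
// is well-formed; *which* byte you see depends on the implementation.
unsigned char first_byte()
{
    std::uint32_t value = 0x01020304u;
    unsigned char byte;
    std::memcpy(&byte, &value, 1);  // copy the first byte of the representation
    return byte;  // 0x04 on a little-endian target, 0x01 on a big-endian one
}
```

The program has no UB either way; the value it observes simply isn't fixed by the standard, which is exactly the distinction being made above.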

> When we want the compiler and the debugger to help us find buggy code,
> sometimes the code is ill-formed, sometimes the code has
> implementation-defined behaviour, sometimes the code has undefined
> behaviour, and sometimes the code has well-defined behaviour.
>
> In cases where I'm using 0xdeadbeef, I'm forcing a predictable segfault
> in the Debug build of my program. Sometimes this will be in lieu of a
> harmless "delete nullptr", and sometimes it will be in lieu of a
> double-delete. The former is not undefined behaviour. The latter is
> undefined behaviour.
>
> But "undefined behaviour" doesn't mysteriously mean that the compiler
> and debugger just give up and go crazy. 9 times out of 10, undefined
> behaviour is very predictable,

... no, it isn't. And the fact that you *still* refuse to recognize
the difference between "what the standard says" and "what
implementations do" is an ongoing problem.

10 out of 10 times, undefined behavior is "behavior for which this
document imposes no requirements". It does not matter what
implementations may "predictably" do in those situations; anything
those implementations do is outside of the boundary of what the
standard will impose.
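(For the record, the standard does offer a well-defined way to get the "harmless second delete" mentioned above: deleting a null pointer is a no-op, so nulling the pointer after the first delete removes the double-delete UB entirely. A minimal sketch, not the quoted Governor code:)

```cpp
// Sketch: after delete, null out the pointer so any later delete through
// it is a well-defined no-op rather than an undefined double-delete.
bool delete_twice_safely()
{
    int* p = new int(42);
    delete p;
    p = nullptr;   // defuse any subsequent delete
    delete p;      // deleting a null pointer does nothing: well-defined
    return p == nullptr;
}
```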

> like in the following line of code:
>
> for ( int i = 0; i >= 0; ++i ) DoSomething();
>
> If you look at the assembler generated for the above line of code, you
> can predict it will be written in one of two ways. There won't be a
> third way -- it's not that mysterious.
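To make the "two ways" concrete without actually invoking UB: the analogous loop written with a narrow signed type wraps on the conversion back from int, which is well-defined since C++20 (implementation-defined before that, and two's-complement wrap in practice), so its trip count is fixed. With plain int, the overflow is undefined, and an optimizer is free to assume it never happens and compile the loop as infinite. A sketch, assuming two's-complement conversion:

```cpp
// Analogue of the quoted loop using signed char: ++i is computed in int
// and converted back, wrapping 127 -> -128 (well-defined since C++20).
// The loop therefore runs exactly 128 times (i = 0 through 127) and exits.
int count_until_negative()
{
    int count = 0;
    for (signed char i = 0; i >= 0; ++i)
        ++count;
    return count;
}
```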

Received on 2025-07-26 19:32:04