Date: Sat, 26 Jul 2025 22:02:17 +0200
The issue with the example is different from whether it is UB or not.
As soon as the Governor object goes out of scope, its memory is typically reclaimed and overwritten.
That means there is no longer 0xdeadbeef at the location of .p.
You should have just written an example with a pointer that does *not* go out of scope.
Then somebody may legally access the invalid pointer and illegally dereference it.
But as mentioned in that thread,
- setting it to nullptr is already a good option
- you can reserve an invalid virtual page, and set the pointer to that dynamic address
-----Original Message-----
From: Frederick Virchanza Gotham via Std-Proposals <std-proposals_at_[hidden]>
Sent: Sat 26.07.2025 20:55
Subject: Re: [std-proposals] Standardising 0xdeadbeef for pointers
To: std-proposals_at_[hidden];
CC: Frederick Virchanza Gotham <cauldwell.thomas_at_[hidden]>;
On Sat, Jul 26, 2025 at 2:38 PM Tiago Freire wrote:
>
>
> This was your stated use case:
>
> ~Governor(void) noexcept
> {
>     try
>     {
>         delete this->p;
>     }
>     catch(...){}
>
>     this->p = (void*)0xdeadbeef;
> }
>
> > If another part of the code goes to delete the object a second time, it will segfault on 0xdeadbeef
>
> No!
> That! is a destructor! "p" is a member of this object.
We are in agreement that "p" is a member variable inside the class 'Governor'.
> By the time it returns the objects lifetime is over.
Agreed.
> Whatever expectations you now have for that memory is now UB.
Mostly agreed. (But maybe I created the object with 'placement new'
and so the char array is still valid after the object's destruction).
But for simplicity I'll say Agreed here.
> Accessing it in the first place is the problem, the value you will find there is irrelevant.
Disagreed.
Maybe somewhere else in the program there is a pointer to the object
that just got destroyed.
> If the access was valid null would already be a perfect indication that
> the pointer doesn't point to anything, no other value required.
>
> This is an exercise in pure nonsense, as usual.
Sometimes when we write a bug in C++ code, the behaviour is
well-defined by the C++ Standard. I remember one time I wrote a
program that worked fine on Little Endian, but entered an infinite loop
and froze up on Big Endian. The behaviour of the code was perfectly
well-defined, but the logic of my code was wrong.
When we want the compiler and the debugger to help us find buggy code,
sometimes the code is ill-formed, sometimes the code has
implementation-defined behaviour, sometimes the code has undefined
behaviour, and sometimes the code has well-defined behaviour.
In cases where I'm using 0xdeadbeef, I'm forcing a predictable segfault
in the Debug build of my program. Sometimes this will be in lieu of a
harmless "delete nullptr", and sometimes it will be in lieu of a
double-delete. The former is not undefined behaviour. The latter is
undefined behaviour.
But "undefined behaviour" doesn't mysteriously mean that the compiler
and debugger just give up and go crazy. 9 times out of 10, undefined
behaviour is very predictable, like in the following line of code:
for ( int i = 0; i >= 0; ++i ) DoSomething();
If you look at the assembler generated for the above line of code, you
can predict it will be written in one of two ways. There won't be a
third way -- it's not that mysterious.
--
Std-Proposals mailing list
Std-Proposals_at_[hidden]
https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
Received on 2025-07-26 20:12:12