Date: Sat, 26 Jul 2025 19:55:22 +0100
On Sat, Jul 26, 2025 at 2:38 PM Tiago Freire wrote:
>
>
> This was your stated use case:
>
> ~Governor(void) noexcept
> {
> try
> {
> delete this->p;
> }
> catch(...){}
>
> this->p = (void*)0xdeadbeef;
> }
>
> > If another part of the code goes to delete the object a second time, it will segfault on 0xdeadbeef
>
> No!
> That! is a destructor! "p" is a member of this object.
We are in agreement that "p" is a member variable inside the class 'Governor'.
> By the time it returns the objects lifetime is over.
Agreed.
> Whatever expectations you now have for that memory is now UB.
Mostly agreed. (Though maybe I created the object with 'placement new'
inside a char array I own, in which case the storage is still valid
after the object's destruction.) But for simplicity I'll say Agreed here.
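To make the placement-new point concrete, here's a minimal sketch (the struct and function names are my own illustration, not code from this thread):

```cpp
#include <new>

// Sketch: construct an object inside a char buffer we own. After the
// destructor runs, the object's lifetime is over, but the underlying
// 'buf' storage itself remains valid.
struct Obj { int x; };

int demo()
{
    alignas(Obj) unsigned char buf[sizeof(Obj)];
    Obj *o = new (buf) Obj{7};   // placement new: construct in our storage
    int v = o->x;
    o->~Obj();                   // object gone; buf is still valid storage
    return v;
}
```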
> Accessing it in the first place is the problem, the value you will find there is irrelevant.
Disagreed.
Maybe somewhere else in the program there is a pointer to the object
that just got destroyed.
> If the access was valid null would already be a perfect indication that
> the pointer doesn't point to anything, no other value required.
>
> This is an exercise in pure nonsense, as usual.
Sometimes when we write a bug in C++ code, the behaviour is still
well-defined by the C++ Standard. I remember one time I wrote a
program that worked fine on Little Endian but entered an infinite loop
and froze on Big Endian. The behaviour of the code was perfectly
well-defined; the logic of my code was simply wrong.
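The kind of endian-dependent check involved might look something like this minimal sketch (the function and details are my own illustration, not the original program):

```cpp
#include <cstdint>
#include <cstring>

// Illustrative only: a fully well-defined function whose result depends
// on byte order. A loop that assumes one answer can spin forever when
// the other answer comes back on a Big Endian machine.
bool is_little_endian()
{
    std::uint32_t v = 1u;
    unsigned char first_byte;
    std::memcpy(&first_byte, &v, 1);  // inspect the lowest-addressed byte
    return first_byte == 1u;          // 1 on Little Endian, 0 on Big Endian
}
```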
When we want the compiler and the debugger to help us find buggy code,
sometimes the code is ill-formed, sometimes the code has
implementation-defined behaviour, sometimes the code has undefined
behaviour, and sometimes the code has well-defined behaviour.
In cases where I'm using 0xdeadbeef, I'm forcing a predictable segfault
in the Debug build of my program. Sometimes this will be in lieu of a
harmless "delete nullptr", and sometimes it will be in lieu of a
double-delete. The former is not undefined behaviour. The latter is
undefined behaviour.
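As a sketch of the idiom (the helper name is mine, not from the code under discussion): after deleting, overwrite the pointer with a trap value so a later use faults immediately instead of silently touching freed memory.

```cpp
#include <cstdint>

// Hypothetical helper illustrating the poisoning idiom. Deleting through
// 'p' a second time is UB either way; the poison value just makes the
// mistake crash predictably in a Debug build.
inline void delete_and_poison(int *&p)
{
    delete p;  // well-defined even if p is already null
    p = reinterpret_cast<int *>(std::uintptr_t{0xdeadbeefu});  // trap value
}
```

Dereferencing or deleting the poisoned pointer afterwards is of course still undefined behaviour; the point is only that on typical platforms 0xdeadbeef is an unmapped address, so the Debug build faults at the site of the bug.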
But "undefined behaviour" doesn't mysteriously mean that the compiler
and debugger just give up and go crazy. 9 times out of 10, undefined
behaviour is very predictable, like in the following line of code:
for ( int i = 0; i >= 0; ++i ) DoSomething();  // signed overflow of 'i' is UB
If you look at the assembler generated for the above line of code, you
can predict it will be written in one of two ways. There won't be a
third way -- it's not that mysterious.
Received on 2025-07-26 18:55:33