Date: Fri, 25 Jul 2025 12:17:32 +0100
I'm writing some code at the moment, and the destructor looks like this:
    ~Governor(void) noexcept
    {
        try
        {
            delete this->p;
        }
        catch(...){}

        this->p = reinterpret_cast<decltype(this->p)>(0xdeadbeef);
    }
I've set the pointer to 0xdeadbeef because I want to make sure that
no other part of the code deletes this particular object. If another
part of the code goes to delete the object a second time, it will
segfault on 0xdeadbeef -- which is exactly what I want, because then
the debugger catches it.
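Here's a minimal self-contained sketch of the idea (the 'int' payload
and the 32-bit constant are just stand-ins, and the program
deliberately crashes on the second delete):

    #include <cstdint>

    int main(void)
    {
        int *p = new int(42);

        delete p;
        p = reinterpret_cast<int *>(std::uintptr_t{0xdeadbeefu});  // poison

        delete p;  // double-delete: faults loudly on 0xdeadbeef
                   // instead of silently passing or corrupting the heap
    }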
If alternatively I were to do:

    this->p = nullptr;

then a double-delete won't be caught at run time, because deleting a
null pointer is a harmless no-op.
We already have "std::nullptr_t", but what if we also had "badptr_t"
and the constant expression "badptr"?
So my destructor would become:
    ~Governor(void) noexcept
    {
        try
        {
            delete this->p;
        }
        catch(...){}

        this->p = badptr;
    }
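For what it's worth, something close to this can already be
approximated as a little library helper. Here's a sketch -- the names
are of course hypothetical, and the bit pattern is a placeholder
(choosing it well is the subject of the rest of this post):

    #include <cstdint>

    struct badptr_t
    {
        template<typename T>
        operator T *() const noexcept
        {
            // Placeholder pattern -- see below for why the exact
            // value needs choosing carefully.
            return reinterpret_cast<T *>(std::uintptr_t{0xdeadbeefu});
        }
    };

    inline constexpr badptr_t badptr{};

With that helper, "this->p = badptr;" compiles for any object pointer
type, just as "this->p = nullptr;" does.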
"badptr" coould be a pointer with all bits set to 1 (instead of
0xdeadbeef). I think it should be more elaborate than this though, and
I'll explain why. I remember I was assigned a bug at work one time, a
program was segfaulting because the memory address 0xFFFFFFFE was
being accessed inside "libc". It turns out that the code did this:
    char *p = std::strchr( str, 'Z' ) - 2;
    strcpy( str2, p );
The call to 'strchr' was failing to find the letter 'Z', and so it
returned a null pointer. But then 2 got subtracted from that null
pointer, giving 0xFFFFFFFE (pointers were 32-bit on that system), and
so the implementation of "strcpy" inside "libc" was segfaulting on
accessing memory address 0xFFFFFFFE.
That bug is the reason why I think "nullptr" and "badptr" should be
far away from each other -- it's not good enough if incrementing or
decrementing one of them a few times can yield the other.
And I suppose that in my own program, running on x86_64, I'm taking
the risk that 0xdeadbeef could be a valid address. Maybe make sure the
top bit is set: 0xdeadbeefdeadbeef.
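On x86_64 specifically, user-space addresses must be "canonical": on
current 48-bit implementations, bits 48 through 63 must all equal bit
47, and 0xdeadbeefdeadbeef is non-canonical, so the hardware faults on
any attempt to dereference it. That's an architecture detail rather
than a C++ guarantee, but the pattern is easy to sanity-check at
compile time:

    static_assert( (0xdeadbeefdeadbeefuLL >> 47) != 0x00000uLL &&
                   (0xdeadbeefdeadbeefuLL >> 47) != 0x1ffffuLL,
                   "pattern is canonical -- could be a real x86_64 address" );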