Date: Thu, 31 Jul 2025 17:09:08 +0100
It doesn't matter whether the top bits are the same or not, just that there
is a range of definitely invalid addresses with NULL at the centre, for
standardised communication between libraries and software. What goes on
inside said libraries to detect invalid pointers doesn't matter: they could
use the void const *badptr = (void*)badptr; idea, they could use something
from the standard, they could use anything, so long as, when they are passed
external pointers, they can check them against a known range and optionally
avoid segfaulting. Likewise, they need to be able to signal to external
software when they encounter an invalid pointer in their own code.
malloc/calloc/realloc/new/new[], for instance, could make use of that range
if they detect a buggy pointer while searching for spare memory to allocate.
None of that means the range needs valid upper bits, only that the range
centres on NULL.
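
To make that concrete, here's a rough sketch of the kind of check a library
could run on pointers it is handed. MAX_INVALID_ADDRESS and the bound of 512
are hypothetical, taken from the discussion in this thread, not from anything
that exists today:

    #include <cstdint>

    // Hypothetical: addresses within this many bytes of NULL, in either
    // direction, are treated as definitely invalid.
    constexpr std::uintptr_t MAX_INVALID_ADDRESS = 512;

    // True if p falls inside the invalid range centred on NULL. The "below
    // NULL" half wraps around to the top of the address space.
    bool is_definitely_invalid(void const *p)
    {
        auto u = reinterpret_cast<std::uintptr_t>(p);
        return u < MAX_INVALID_ADDRESS
            || u > UINTPTR_MAX - MAX_INVALID_ADDRESS;
    }

A library that spots a buggy pointer internally could likewise hand a value
from that range back to its caller as a "definitely invalid" marker instead
of segfaulting on it.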
On Thu, 31 Jul 2025 at 16:47, Thiago Macieira <thiago_at_[hidden]> wrote:
> On Thursday, 31 July 2025 08:51:11 Pacific Daylight Time zxuiji wrote:
> > My bad on the name, misread it. As for the "doesn't solve it at all": it
> > does. Think about it for a moment: if MAX_INVALID_ADDRESS is always
> > defined as at least 512 and the 1st & last page (using page for
> > convenience, always relative to NULL) is always assigned as sealed
> > non-access pages, then any time those pages are referenced a segfault is
> > guaranteed to occur, if it isn't caught beforehand for release/server
> > stability purposes. That in itself is useful for catching erroneous code
> > using uninitialised (as in no allocation made, not as in not assigned any
> > solid value) buffer pointers during debugging.
>
> Again, disagree. An *uninitialised* pointer is unlikely to have the top
> 52-55 bits set exactly the same. It's not impossible, because those are
> "small integers", but the likelihood is too low to be useful for debugging.
>
> The original proposal in this thread was to catch arithmetic on nullptr,
> because offsetting pointers by ±512 bytes or less is reasonable. So I agree
> on *recommending* that the first and last pages be marked inaccessible,
> because it's good practice, but I don't see the need to *mandate* it.
> Manipulating the nullptr is already UB, so marking those pointer values
> around it as also invalid doesn't make anything less UB. Likewise for using
> uninitialised memory as a pointer... or as anything else.
>
> We have other memory debugging tools. And the first and last pages *are*
> inaccessible in any modern OS anyway. For example, on FreeBSD:
> http://fxr.watson.org/fxr/source/kern/kern_exec.c?im=10#L150
>
> --
> Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
> Principal Engineer - Intel Platform & System Engineering
Received on 2025-07-31 15:55:07