
Re: [std-proposals] Standardising 0xdeadbeef for pointers

From: Julien Villemure-Fréchette <julien.villemure_at_[hidden]>
Date: Fri, 01 Aug 2025 12:07:32 -0400
> My bad on the name, misread it. As for the "doesn't solve it at all", but it does. Think about it for a moment, if the MAX_INVALID_ADDRESS is always defined as at least 512 and the 1st & last page (using page for convenience, always relative to NULL) is always assigned as sealed non-access pages then any time those pages are referenced then a segfault is guaranteed to occur if it's not caught before hand for release/server stability purposes. That in itself is useful to catch erroneous code using uninitialised (as in no allocation made, not as in not assigned any solid value) buffer pointers during debugging.

Indirection through a null pointer value (or an otherwise invalid pointer value) in no way guarantees that a SIGSEGV will be generated. A compiler may infer that a branch is unreachable on the grounds that executing it would evaluate an indirection through a null pointer value (or possibly an invalid pointer value, such as a dangling one). The following code could compile to no code at all:

```
    if (p == nullptr)
        i = *p;
```
This is not "lala land"; it does happen with modern compilers on modern hardware. As far as I recall, clang does this when optimizations are enabled (-O2).
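For reference, here is a self-contained variant of that snippet (the function `f` and its signature are my own framing, purely for illustration). Because the dereference in the branch would be undefined behaviour, an optimizer is entitled to treat the branch as dead and compile the function as if it simply returned 0:

```
// The branch dereferences the very pointer that was just found to be null.
// Indirection through a null pointer is undefined behaviour, so the
// optimizer may assume the branch is never taken and remove it entirely.
int f(int* p) {
    int i = 0;
    if (p == nullptr)
        i = *p;   // UB if reached, hence "unreachable" to the optimizer
    return i;     // typically reduced to just "return 0;"
}
```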

Also, SIGSEGV will almost never catch other kinds of access through an invalid pointer value. In particular, accesses out of lifetime (e.g. a dangling pointer or reference to a local variable, or a pointer to deleted memory) and accesses out of bounds (past the end or before the beginning of an object) will normally land within the process's own address space and may go unnoticed, or sometimes trigger a SIGILL if the stack gets corrupted.
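To make that concrete, here is a minimal sketch (the program, names and sizes are mine, for illustration only) of two accesses through invalid pointers that are undefined behaviour yet will typically run to completion without raising any signal:

```
#include <cstdio>
#include <cstdlib>

int main() {
    // Access out of lifetime: read through a pointer to freed memory.
    int* p = static_cast<int*>(std::malloc(sizeof(int)));
    if (!p)
        return 1;
    *p = 42;
    std::free(p);
    // UB, but the allocator usually keeps the page mapped for reuse,
    // so this read rarely raises SIGSEGV.
    std::printf("%d\n", *p);

    // Access out of bounds: read one element past the end of an array.
    int a[4] = {1, 2, 3, 4};
    // UB, but the byte usually lies on the same stack page as 'a',
    // so again no fault is likely to be observed.
    std::printf("%d\n", a[4]);
}
```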




On July 31, 2025 11:51:11 a.m. EDT, zxuiji via Std-Proposals <std-proposals_at_[hidden]> wrote:
>My bad on the name, misread it. As for the "doesn't solve it at all", but
>it does. Think about it for a moment, if the MAX_INVALID_ADDRESS is always
>defined as at least 512 and the 1st & last page (using page for
>convenience, always relative to NULL) is always assigned as sealed
>non-access pages then any time those pages are referenced then a segfault
>is guaranteed to occur if it's not caught before hand for release/server
>stability purposes. That in itself is useful to catch erroneous code using
>uninitialised (as in no allocation made, not as in not assigned any solid
>value) buffer pointers during debugging.
>
>On Thu, 31 Jul 2025 at 16:30, Ville Voutilainen <ville.voutilainen_at_[hidden]>
>wrote:
>
>> On Thu, 31 Jul 2025 at 18:27, zxuiji <gb2985_at_[hidden]> wrote:
>> >
>> > villy that email was in response to the sysconf(_SC_PAGESIZE) being as
>> little as 1, anyone programming for something embedded (which I presume the
>> MMU would be) would not bother with using that call in the first place
>>
>> Try to spell my name correctly when responding to me.
>>
>> Yes, you also wrote
>> "Then the answer to the potential null+-1 being valid is simple,
>> mandate in the next standard that the 0+-PAGE_SIZE be premapped as
>> sealed no rwx pages. That resolves the problem completely. They don't
>> need to have anything mapped to them, just that they be mapped as
>> invalid."
>>
>> which doesn't solve the problem "completely", or in fact, at all.
>>

Received on 2025-08-01 16:07:44