Date: Thu, 31 Jul 2025 10:22:56 -0700
On Thursday, 31 July 2025 10:00:36 Pacific Daylight Time zxuiji wrote:
> How's it undefined? Take my MAX_INVALID_ADDRESS for example: let's say NULL
> is defined as 0xdeadbeef and nullptr is likewise defined to use 0xdeadbeef.
> 0xdeadbeef +- MAX_INVALID_ADDRESS would be the range for the inline to
> check against. With or without a 0-based NULL/nullptr, the compiler can
> optimise out the addition/subtraction applied to NULL & nullptr to check
> the range when compiling it for a library. Granted, I prefer being able to
> check the upper bits, but that's something I would leave to a glibc/ucrt
> function to provide an extension function for.
The standard defines this as UB:
    char *ptr = nullptr;
    ptr + 1;
It doesn't matter that you did not dereference. It doesn't matter that the
result was not stored. It's UB and anything past that point is UB.
Adding a *Standard* function that can only detect something after UB has
already happened is pointless. Just like you can't detect a signed integer
overflow *after* the overflow has happened. You need to detect the problem
before UB has happened.
I would however like:
- guaranteed ability to round-trip small numbers through pointer variables
- guaranteed that such small numbers never be returned by memory allocation
It is similar to what you're asking, but avoids the UB by not doing arithmetic
on the null pointer or an invalid pointer. Instead, such pointers are formed
by casting an integer to pointer and only used by casting back from pointer to
integer. The first request is de facto universal right now, and the second
nearly so, though there can be outlier architectures with valid near-null
pointers (usually in kernel mode).
--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
  Principal Engineer - Intel Platform & System Engineering
Received on 2025-07-31 17:23:00