Date: Sat, 26 Jul 2025 08:17:02 +0200
> On Jul 25, 2025, at 10:50 PM, Bo Persson via Std-Proposals <std-proposals_at_[hidden]> wrote:
>
> On 2025-07-25 at 20:27, Frederick Virchanza Gotham via Std-Proposals wrote:
>> On Friday, July 25, 2025, Oliver Hunt wrote:
>> > Not really. There isn't a computer in existence today -- I don't
>> > think -- that uses more than 49 bits for a memory address. 64-bit
>> > ARM uses 48 bits, but it can be extended by 1 bit to 49 bits.
>> >
>> > So you can mark a pointer as 'bad' by manipulating the top 15
>> > bits. Or even just set the top bit high.
>> This is nonsense.
>> High bits are 100% valid on numerous platforms.
>> Numerous platforms make use of the high bits: CHERI, ARMv8.3 with
>> PAC extensions, MTE, etc.
>> In addition, many OSes use the high bits in kernel addresses, e.g.
>> 0xFFFF… is kernel space and 0x0000… is user space.
>> Gonna make an attempt at deductive reasoning here.
>> Computers nowadays have 32-bit or 64-bit pointers. Some microcontrollers have 8-bit or 16-bit pointers.
>> Talking about 64-bit pointers . . . if each individual increment is one 8-bit byte, then a 64-bit pointer can address 18 million terabytes.
>> But nobody has that much memory.
> Where have we heard that before? 640k, 16M, 4G, 18 Gazillion. Never happens. Ever!
>
It doesn’t actually matter whether there will ever be a machine with 18 exabytes of physical memory. There are good reasons for CPUs to support 18 exabytes of *virtual* address space. Arena allocators recently reminded me of this: reserve a large address range for each arena and let the physical memory grow within it as needed. With special network cards you might even map other computers’ memory into your own address space, or map entire hard drives (or file servers) into memory. It is easy to construct scenarios that could, in the near future, use the full virtual address space of 64-bit pointers.
However, C++ could still standardize a second value, sufficiently distinct from nullptr, that has a special meaning; the standard just should not prescribe that pointer's actual bit pattern. On the other hand, it might be sufficient to convince GCC or Clang to implement such a feature (without standardization) and use that compiler for debugging.
>
>
> --
> Std-Proposals mailing list
> Std-Proposals_at_[hidden]
> https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
Received on 2025-07-26 06:17:20