Date: Sun, 14 Jul 2024 07:30:45 -0700
On Sunday 14 July 2024 06:53:33 GMT-7 zxuiji wrote:
> I agree it's pointless to argue it now, though I do question why there's a
> need to switch modes when x86_64 is a superset of x86.
Because it isn't. The same instruction bytes are interpreted differently in the
two modes and decode to different instructions, or to different behaviours of
the same instruction. The same applies to AArch64 vs AArch32, BTW.

That's different from MIPS32 vs MIPS64, where a MIPS32 instruction byte stream
is a perfectly valid MIPS64 instruction byte stream and decodes to exactly the
same instructions. I think PPC64 vs PPC32 is also like this. Maybe RISC-V 32-
and 64-bit too. Those are the exception, not the rule, and are usually the
result of the 64-bit ISA being designed at the same time as the 32-bit one.
That's not the case for 64-bit x86 or 64-bit ARM.
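
To make that concrete, here's a tiny sketch of my own (C, not from anything
above) showing the same bytes meaning different things in the two modes: the
0x40-0x4F opcodes that were single-byte inc/dec in 32-bit mode were repurposed
as the REX prefixes in 64-bit mode.

#include <stdio.h>

/* The same machine-code bytes decode differently depending on the CPU mode:
 *
 *   32-bit mode:  48        -> dec eax
 *                 FF C0     -> inc eax
 *   64-bit mode:  48        -> REX.W prefix (not an instruction by itself)
 *                 48 FF C0  -> inc rax
 */
static const unsigned char code[] = { 0x48, 0xFF, 0xC0 };

int main(void)
{
    for (size_t i = 0; i < sizeof code; ++i)
        printf("%02X ", code[i]);
    printf("\n");
    return 0;
}
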
> Shouldn't x86_64
> functions just map to 32bit address when linked to 32bit applications?
If that had been the design, maybe. It wasn't. There's no mechanism on ELF
platforms to even load a 64-bit piece of code into a 32-bit application or
vice-versa: the loader refuses to open and interpret an ELF binary of the
wrong class or data encoding.
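
As a rough sketch of what that check looks like (my illustration, using the
glibc <elf.h> constants; the real logic lives in the kernel's and ld.so's ELF
loaders):

#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Report the ELF class and data encoding of a file -- the e_ident fields a
 * loader compares against its own expectations before it will even consider
 * mapping the binary. */
int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;

    unsigned char ident[EI_NIDENT];
    FILE *f = fopen(argv[1], "rb");
    if (!f || fread(ident, 1, sizeof ident, f) != sizeof ident
           || memcmp(ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "%s: not an ELF file\n", argv[1]);
        return 1;
    }
    fclose(f);

    printf("class: %s\n",
           ident[EI_CLASS] == ELFCLASS64 ? "ELFCLASS64 (64-bit)" :
           ident[EI_CLASS] == ELFCLASS32 ? "ELFCLASS32 (32-bit)" : "invalid");
    printf("data:  %s\n",
           ident[EI_DATA] == ELFDATA2LSB ? "ELFDATA2LSB (little-endian)" :
           ident[EI_DATA] == ELFDATA2MSB ? "ELFDATA2MSB (big-endian)" : "invalid");
    return 0;
}

A 32-bit dynamic linker will reject an ELFCLASS64 file at this point (and vice
versa), so there's nowhere for a "mapped down" 64-bit function to even come from.
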
> As
> long as the application knows to expect bigger addresses from pointer
> functions defined by x86_64 code then there'd be no issue.
Do you know many 32-bit applications written in the late 80s and early 90s
that did that? Even if people had the foresight to predict 64-bit (because the
transition to 32-bit on x86 was less than 10 years old at that time), would
they have taken this recommendation? Computers at the time had a handful of MB
of total RAM; they'd have balked at wasting bytes in structures like that.
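
To put a number on "wasting bytes", here's a hypothetical sketch of what
reserving 64 bits for a pointer would have cost in a 32-bit struct (names and
layout are mine, assuming typical ILP32 sizes):

#include <stdint.h>
#include <stdio.h>

/* A typical linked-list node as actually written for 32-bit targets... */
struct node32 {
    struct node32 *next;    /* 4 bytes on ILP32 */
    int value;              /* 4 bytes */
};                          /* 8 bytes total */

/* ...versus one that reserves a full 64 bits for the link so the layout
 * could survive a future 64-bit ABI unchanged. */
struct node_future_proof {
    uint64_t next;          /* 8 bytes reserved for a future pointer */
    int value;              /* 4 bytes */
};                          /* 12 or 16 bytes on ILP32, depending on the
                             * ABI's alignment for 64-bit integers --
                             * either way 50-100% bigger than node32 */

int main(void)
{
    printf("node32: %zu bytes, future-proof: %zu bytes\n",
           sizeof(struct node32), sizeof(struct node_future_proof));
    return 0;
}
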
For that matter, why are we not writing 128-bit-pointer-safe code today? Let's
take the newest architecture: RISC-V 64-bit. Why isn't its ABI using 128-bit
pointers? It's not like we are constrained for memory now.
-- 
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
  Principal Engineer - Intel DCAI Platform & System Engineering