Date: Sun, 11 Feb 2024 09:23:20 +0000
Hi,
I would be in support of a 128-bit int insofar as there is hardware support for it, and as far as I am aware there isn't any.
And although support for the fixed-width integer types is optional, what isn't optional is uintmax_t.
Given that no platform has native support for a 128-bit integer, proposing one implies you think platforms should support it despite the fact that they don't.
And if they do, then uint128_t becomes the largest supported integer type, meaning that uintmax_t must also be at least 128 bits wide (the impact of which is catastrophic, since it changes every interface built around uintmax_t out from under existing code).
The paper's impact analysis completely overlooks this point.
You would also need integer literals to be able to explicitly define constants, which the paper doesn't mention.
And the motivation suffers from the typical XY problem: what you want to do is X, you find a way in which Y gives you X, but you end up proposing Y instead of X.
Quick rundown:
1. Cryptography. The paper cites many ciphers that use numbers wider than 64 bits, some going beyond 128 bits, and the way they do this is by "widening", i.e. casting 64-bit integers to 128-bit integers in order to perform arithmetic operations while preserving all the overflow bits.
You can pull off the exact same trick by splitting the numbers into 32-bit chunks and doing the arithmetic with the 64-bit integers that already exist (see the sketch after this list).
Plus, the cited ciphers that are exactly 128 bits long are also considered unsafe.
But what you really want here is an easier way to perform multi-word arithmetic.
I'm currently writing a paper on this: https://kaotic.software/cpp_papers/overflow_arithmetic.html
2. Random number generators. The problem is exactly the same: you need more than 128 bits, and what you actually use to get there is multi-word arithmetic.
3. Widening operations. Which again boil down to multi-word arithmetic.
4. Multi-precision operations, which is another term for multi-word arithmetic.
5. Fixed-point/financial systems. That need would be better served by a fixed-point library.
6. Double-wide atomic operations. Which isn't a use case: just because a platform provides an operation doesn't make it useful in and of itself.
7. High-precision time calculations. Which don't make sense unless you also have a time facility to support them (and there isn't one).
8. Floating-point operations. Which should use floating point facilities.
9. Float-to-string/string-to-float conversion. Which already exists, and those algorithms don't require a 128-bit integer.
10. Networking. Specifically IPv6 addresses, which fit in two 64-bit ints (see the second sketch after this list).
11. Future proofing. My suggestion here: don't try to predict the future. When it comes, we can look at it then. Again, I'm not opposed to this if hardware support exists, but it doesn't.
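To make item 1 concrete, here is a minimal sketch of the 32-bit-chunk trick: a 64x64 -> 128-bit multiplication written entirely with the 64-bit integers we already have. The struct and function names are mine, purely for illustration; they come from neither paper.

```cpp
#include <cstdint>

// 128-bit product expressed as two 64-bit words.
struct u128_result {
    std::uint64_t hi;
    std::uint64_t lo;
};

// Multiply two 64-bit values into a full 128-bit result using only
// 64-bit arithmetic on 32-bit chunks (the "widening" trick from item 1).
u128_result mul_64x64(std::uint64_t a, std::uint64_t b) {
    const std::uint64_t mask = 0xFFFFFFFFull;
    const std::uint64_t a_lo = a & mask, a_hi = a >> 32;
    const std::uint64_t b_lo = b & mask, b_hi = b >> 32;

    // Four 32x32 -> 64-bit partial products; none of these can overflow.
    const std::uint64_t p0 = a_lo * b_lo;
    const std::uint64_t p1 = a_lo * b_hi;
    const std::uint64_t p2 = a_hi * b_lo;
    const std::uint64_t p3 = a_hi * b_hi;

    // Fold the middle terms, carrying into the high word as we go.
    const std::uint64_t mid  = p1 + (p0 >> 32);   // cannot overflow
    const std::uint64_t mid2 = p2 + (mid & mask); // cannot overflow

    return u128_result{
        p3 + (mid >> 32) + (mid2 >> 32), // hi
        (mid2 << 32) | (p0 & mask)       // lo
    };
}
```

Add-with-carry, subtract-with-borrow, and wider multiplications compose out of the same building block, which is exactly the multi-word arithmetic that items 2 through 4 reduce to.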
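And for item 10, a small illustration of the "two ints" representation of an IPv6 address; the type and operator are hypothetical, just to show that no 128-bit integer type is needed for this.

```cpp
#include <cstdint>

// An IPv6 address is 128 bits: two 64-bit words cover it.
struct ipv6_address {
    std::uint64_t hi; // most significant 64 bits
    std::uint64_t lo; // least significant 64 bits
};

// Typical operations (comparison, prefix masking) fall out naturally.
bool operator==(const ipv6_address& a, const ipv6_address& b) {
    return a.hi == b.hi && a.lo == b.lo;
}
```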
Every single time I have seen this idea floated around, the better solution has always been something else.
Given that it would break people's code overnight, the amount of effort required to support it, and the lack of a good reason to do it, the math says no.
That's my opinion.
-----Original Message-----
From: Std-Proposals <std-proposals-bounces_at_[hidden]> On Behalf Of Jan Schultke via Std-Proposals
Sent: Sunday, 11 February 2024 01:19
To: std-proposals_at_[hidden]
Cc: Jan Schultke <janschultke_at_[hidden]>
Subject: [std-proposals] 128-bit integers
Hi,
I've essentially finished my proposal for 128-bit integers:
https://eisenwave.github.io/cpp-proposals/int-least128.html
Please share your thoughts :)
-- Std-Proposals mailing list Std-Proposals_at_[hidden] https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals
Received on 2024-02-11 09:23:23