Date: Sun, 11 Feb 2024 14:51:20 +0100
> So, do you agree that crypto is not a justification then?
No. It's just not 100% of the motivation and 100% of the use case, as
you have suggested.
> Adding an additional n-bit width integer doesn't stop the problem, it just makes it 2x times wider.
Once again, why bother with 16-bit then? It doesn't solve the problem,
it just makes it 2x wider. Why not just stick with 8-bit?
The answer, obviously, is that you directly support the widths that
provide sufficient utility. As you have pointed out, there always
remains an infinite set of problems that a finite solution doesn't
solve. However, that has never been a compelling argument against
providing concrete, finite implementations.
The committee has standardized 128-bit floating-point numbers
(std::float128_t in C++23), and 128 is obviously not infinite. Why
wasn't that finite solution dismissed on the same grounds?
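For what it's worth, here is a minimal sketch of how that standardized
type surfaces today (assuming a C++23 toolchain; std::float128_t is
optional and only defined where the implementation sets
__STDCPP_FLOAT128_T__):

    // C++23 <stdfloat>: std::float128_t where the platform supports it.
    #if __has_include(<stdfloat>)
    #include <stdfloat>
    #endif
    #include <iostream>

    int main() {
    #if defined(__STDCPP_FLOAT128_T__)
        // 128-bit binary floating point: a concrete, finite width.
        std::float128_t x = 1.0f128 / 3.0f128;
        std::cout << static_cast<double>(x) << '\n';
    #else
        std::cout << "no std::float128_t on this implementation\n";
    #endif
    }

A concrete, finite width got standardized there; the same logic applies
to integers.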
> From a modern perspective the set of "char, short, int, long int, long long int" are terrible (hindsight is 20-20), but right now must be kept so that our infrastructure can keep functioning.
I agree. The design hasn't aged well, and the set of standard integers
really cannot be touched without breaking a lot of code and ABI.
> But that is a separate discussion from, should std::int128_t be mandatory in the first place?
Yes. Multiple compilers already provide 128-bit support, and there is
large-scale standard library support as well. However, many developers
cannot use this rich ecosystem because it is simply not standard, and
not yet portable between compilers. That's obviously a waste of
potential.
Roughly 150K C++ files on GitHub already use 128-bit integers in some
form, just not as a standard feature. Let's give them a common,
portable, teachable type instead of forcing people to reinvent the
wheel for the millionth time with hand-rolled class types or
non-standard extensions.
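To make the portability problem concrete, here is a minimal sketch of
the detection dance every such project repeats today. __SIZEOF_INT128__
is the macro GCC and Clang use to advertise __int128; MSVC provides no
equivalent type. The names uint128 and mul_64x64 are hypothetical,
purely for illustration:

    // Status quo: each project detects compiler extensions by hand.
    #include <cstdint>

    #if defined(__SIZEOF_INT128__)  // GCC/Clang extension marker
    using uint128 = unsigned __int128;
    #else
    #error "no 128-bit extension here; time to hand-roll a class type"
    #endif

    // The classic use case: full 64x64 -> 128 multiplication
    // without losing the high bits to overflow.
    uint128 mul_64x64(std::uint64_t a, std::uint64_t b) {
        return static_cast<uint128>(a) * static_cast<uint128>(b);
    }

A standard std::uint128_t would collapse all of that boilerplate into a
single portable typedef.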
Received on 2024-02-11 13:51:32