The issue is not that `long long` shouldn't be a valid type for `int_least128_t`; it's that we don't specify any guarantee that it is, which is different from every other `int_leastN_t`, each of which (as I mentioned previously) has at least one standard integer type that is guaranteed to satisfy it. In effect, we're specifying a completely new standard type, and IMO the proposal should not hide that fact behind a type alias to an unspecified type. The fixed width floating-point types were mentioned, but they are optional and mandate the use of an extended floating-point type - they are never allowed to be a standard type. And since they are optional, to my knowledge there is no wording in the standard that requires an implementation to provide an extended integer type *or* an extended floating-point type.


> Yes. Multiple compilers provide 128-bit support already, and there is
> large-scale standard library support as well. However, many developers
> will be unable to use this rich ecosystem because it's simply not
> standard, and not really portable between compilers yet. That's
> obviously a waste of potential.

FTR, mandating int128_t effectively mandates that `CHAR_BIT` be a power of two no greater than 128, since the fixed width integer types have no padding bits (thus the size must be exactly `128/CHAR_BIT`, where `/` is exact division, not truncating integer division). It should also be considered whether the proposal wants to change the requirements on `CHAR_BIT` at the same time.

On Sun, 11 Feb 2024 at 08:51, Jan Schultke via Std-Proposals <std-proposals@lists.isocpp.org> wrote:
> So, do you agree that crypto is not a justification then?

No. It's just not 100% of the motivation and 100% of the use case, as
you have suggested.

> Adding an additional n-bit width integer doesn't stop the problem, it just makes it 2x times wider.

Once again, why bother with 16-bit then? It doesn't solve the problem,
it just makes it 2x wider. Why not just stick with 8-bit?

The answer is obviously that you support the widths directly which
provide sufficient utility. As you have pointed out, there always
remains an infinite set of problems that a finite solution doesn't
solve. However, that has never been a compelling argument against
providing concrete, finite implementations.

The committee has standardized 128-bit floating-point numbers, and 128
is obviously not infinite. How come a finite solution was not just
dismissed then?

> From a modern perspective the set of "char, short, int, long int, long long int" are terrible (hindsight is 20-20), but right now must be kept so that our infrastructure can keep functioning.

I agree. The design hasn't aged well, and the set of standard integers
really cannot be touched without breaking a lot of code and ABI.

> But that is a separate discussion from, should std::int128_t be mandatory in the first place?

Yes. Multiple compilers provide 128-bit support already, and there is
large-scale standard library support as well. However, many developers
will be unable to use this rich ecosystem because it's simply not
standard, and not really portable between compilers yet. That's
obviously a waste of potential.

150K C++ files on GitHub use 128-bit integers in some form, just not
as a standard feature. Let's give them a common, portable, teachable
type instead of forcing them to reinvent the wheel for the millionth
time with their own class types, or to rely on non-standard
extensions.
--
Std-Proposals mailing list
Std-Proposals@lists.isocpp.org
https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals