On Tue, 28 Mar 2023 at 22:36, Frederick Virchanza Gotham via Std-Proposals <std-proposals@lists.isocpp.org> wrote:
When doing cryptography with the GNU compiler, I like to make use of
the __uint128_t integer type, as the blocks are 16 bytes in size. I'd
like it if there were also a __uint256_t integer type for dealing with
digests from the SHA-256 algorithm (actually, come to think of it, I
might try patching g++ myself).

The __uint128_t integer type is implemented as efficiently as possible
on GNU g++, but of course mathematical operations on it take more time
and use more code space than if a 64-bit or 32-bit type were used.
This is one of the reasons why uintmax_t is 64-bit instead of 128-bit
on GNU g++.

There seems to be some confusion about whether or not it's okay
for uintmax_t to be 64-bit when the compiler provides a 128-bit integer
type. Take the following program:

#include <cstdint>

int main(void)
{
    __uint128_t huge = UINTMAX_MAX;
    if ( ++huge > UINTMAX_MAX )
    {
        // Should this be possible?
    }
}

You're assuming that __uint128_t is an integral type. If it's not, then it's fine for it to be wider than uintmax_t.

With GCC it's not an integral type for strict -std=c++NN modes, but is considered an integral type for -std=gnu++NN modes.

Try static_assert(std::is_integral_v<__int128>);

So GCC in strict mode is fully conforming to the definition of uintmax_t. There is no integral type wider than uintmax_t.

But as https://cplusplus.github.io/LWG/issue3828 explains, C23 and C++23 are changing the rules so that __int128 can be an extended integer type, and can be wider than intmax_t, even in strict modes.