On Tue, 28 Mar 2023 at 22:36, Frederick Virchanza Gotham via Std-Proposals <std-proposals@lists.isocpp.org> wrote:
When doing cryptography with the GNU compiler, I like to make use of
the __uint128_t integer type, as the cipher blocks are 16 bytes in
size. I'd like it if there were also a __uint256_t integer type for
dealing with digests from the SHA-256 algorithm (actually, come to
think of it, I might try to patch g++ myself).
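
For illustration (just a sketch of the usage; the load_block helper is
a made-up name, and __uint128_t is a GCC/Clang extension rather than
standard C++), a 16-byte block can be viewed as a single 128-bit value
like this:

#include <cstring>

// Illustrative helper: view a 16-byte block as one 128-bit integer.
static __uint128_t load_block(const unsigned char (&bytes)[16])
{
    __uint128_t block;
    std::memcpy(&block, bytes, sizeof block);   // host byte order
    return block;
}

int main()
{
    unsigned char buf[16] = { 0x01, 0x02 };     // remaining bytes are zero
    __uint128_t block = load_block(buf);
    return block == 0;                          // keep the result observable
}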

The __uint128_t integer type is implemented as efficiently as possible
on GNU g++, but of course arithmetic operations will take more time
and use more code space than if a 64-bit or 32-bit type were used.
This is one of the reasons why uintmax_t is 64-bit instead of 128-bit
on GNU g++.
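
To see roughly why (this is a sketch, not what the compiler actually
emits), a 128-bit addition on a 64-bit target has to be synthesised
from two 64-bit additions plus carry propagation, along these lines:

#include <cstdint>

// Illustrative only: a 128-bit value split into two 64-bit halves.
struct u128 { std::uint64_t lo, hi; };

u128 add128(u128 a, u128 b)
{
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);   // carry out of the low half
    return r;
}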

There seems to be a bit of confusion about whether it's okay for
uintmax_t to be 64-bit when the compiler provides a 128-bit integer
type. Take the following program:

#include <cstdint>

int main(void)
{
    __uint128_t huge = UINTMAX_MAX;   // the widest value uintmax_t can hold
    if ( ++huge > UINTMAX_MAX )       // true on g++: __uint128_t is wider, so no wrap-around
    {
        // Should this be possible?
    }
}

Should future C++ standards be more explicit about this issue? If
compilers are to be allowed to provide slow, bulky integer types that
are wider than uintmax_t, then perhaps the Standard's definition of
uintmax_t should be changed?

It already has been:
https://cplusplus.github.io/LWG/issue3828


Maybe there should be a new category of integer type, something like
"oversized integer types". Any given integer type would then be either
a "compact integer type" or an "oversized integer type". uintmax_t
could be defined as the widest of all the compact integer types, and
there could be a new type, uintmax_oversized_t, which is the widest
integer type of all.
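
As a rough sketch of what that might look like in a header (the
feature-test macro and everything apart from the uintmax_oversized_t
name are only illustrative):

#include <cstdint>

// Hypothetical only: uintmax_t stays the widest "compact" integer type,
// while uintmax_oversized_t names the widest integer type of all.
#if defined(__SIZEOF_INT128__)   // GCC/Clang define this when __int128 exists
using uintmax_oversized_t = __uint128_t;
#else
using uintmax_oversized_t = std::uintmax_t;
#endif
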
--
Std-Proposals mailing list
Std-Proposals@lists.isocpp.org
https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals