The correct form is std::to_string(std::size_t).
On 32-bit machines it is uint32_t:
std::to_string(std::uint32_t)
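
A minimal sketch of that, just to show that such a call already resolves to one of the existing unsigned overloads (which one is picked is platform-dependent):

    #include <cstddef>
    #include <iostream>
    #include <string>

    int main() {
        // std::size_t is an alias for one of the standard unsigned integer
        // types, so this call resolves to an existing overload such as
        // to_string(unsigned long) or to_string(unsigned long long),
        // depending on the platform.
        std::size_t n = sizeof(long double);
        std::cout << std::to_string(n) << '\n';
    }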

std::to_string(std::int128_t) - this cannot be done, because a uint128_t may happen to hold a number bigger than size_t.
Such a big number is not "addressable"; e.g. you cannot make an array with more than size_t elements. This sounds strange, but on a 32-bit machine,
if you make an array of UINT32_MAX elements (that macro comes from <stdint.h>) of one byte each, it will "eat" all your memory.

The same applies to 64-bit machines: they can address up to UINT64_MAX bytes of memory, and such an array of one-byte elements would "eat" all of it.
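
A small sketch to illustrate, assuming a 32-bit target where SIZE_MAX equals UINT32_MAX:

    #include <cstdint>
    #include <cstdio>
    #include <new>

    int main() {
        // Object and array sizes are expressed in std::size_t, so no single
        // object can exceed SIZE_MAX bytes.  On a 32-bit target SIZE_MAX is
        // UINT32_MAX (about 4 GiB), so this request asks for the whole
        // address space and is normally rejected with std::bad_alloc.
        try {
            char* p = new char[UINT32_MAX];  // UINT32_MAX one-byte elements
            delete[] p;
        } catch (const std::bad_alloc&) {
            std::puts("allocating UINT32_MAX bytes failed, as expected");
        }
    }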

-----

Some other thoughts: I would love to be able to print 128-bit numbers on the screen using standard means (std::cout, printf, etc.), and also to do bit operations faster.
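
To make the first wish concrete, here is a rough sketch of the status quo (it assumes the non-standard GCC/Clang __int128 extension; u128_to_string is just a made-up helper name):

    #include <iostream>
    #include <string>

    // Neither operator<< nor printf understands 128-bit integers today, so
    // printing one means extracting the decimal digits by hand.
    std::string u128_to_string(unsigned __int128 value) {
        std::string digits;
        do {
            digits.insert(digits.begin(),
                          static_cast<char>('0' + static_cast<int>(value % 10)));
            value /= 10;
        } while (value != 0);
        return digits;
    }

    int main() {
        unsigned __int128 big = static_cast<unsigned __int128>(1) << 100;
        std::cout << u128_to_string(big) << '\n';  // prints 2^100 in decimal
    }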

For example, I have a class that converts an 8-character string into a 64-bit number and then compares keys as numbers instead of via memcmp() derivatives like operator<=>.
A year ago I tried to do the same with GCC's __uint128_t and the result was really slow.
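
For what it's worth, a rough reconstruction of that idea (Key8 is a made-up name, and std::byteswap requires C++23):

    #include <bit>
    #include <compare>
    #include <cstdint>
    #include <cstring>

    // Pack eight characters into one std::uint64_t up front, then compare
    // keys with a single integer comparison instead of memcmp.  On a
    // little-endian target, a byte swap keeps the integer ordering equal to
    // the lexicographic byte ordering.
    struct Key8 {
        std::uint64_t packed;

        explicit Key8(const char (&s)[9]) {   // exactly 8 characters plus '\0'
            std::uint64_t v;
            std::memcpy(&v, s, 8);
            if constexpr (std::endian::native == std::endian::little)
                v = std::byteswap(v);          // C++23
            packed = v;
        }

        auto operator<=>(const Key8&) const = default;
    };

With that, Key8("AAAAAAAB") < Key8("AAAAAAAC") holds and matches the lexicographic order of the original strings.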



On Sun, Feb 11, 2024 at 2:28 PM Jan Schultke via Std-Proposals <std-proposals@lists.isocpp.org> wrote:
> 1) Is that talking about a possible alternative of trying to add such extended as pure library types ...

No, as fundamental types. The core language already permits the
implementation to provide additional extended integers such as
std::int128_t. However, the standard library does not allow an
implementation to define std::to_string(std::int128_t) in that event.
This makes no sense; it's an artificial restriction. The problem is
that the current wording defines these overloads exclusively for int,
long, long long, etc.
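
For reference, paraphrasing the declarations in [string.conversions] (just the shape of the overload set, not code meant to be compiled):

    std::string to_string(int val);
    std::string to_string(unsigned val);
    std::string to_string(long val);
    std::string to_string(unsigned long val);
    std::string to_string(long long val);
    std::string to_string(unsigned long long val);
    std::string to_string(float val);
    std::string to_string(double val);
    std::string to_string(long double val);
    // ...and nothing permits the implementation to additionally provide, say,
    // std::string to_string(std::int128_t val);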

Basically, the implementation can provide as many fundamental types as it
wants, but it would be illegal to give them library support. That's
dumb.

Similarly to std::to_string, the standard defines std::bitset to have
a constructor which takes only unsigned long long. Even if the
implementation provides a 128-bit type, it would not comply with the
standard to add a bitset(std::uint128_t) constructor.
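
A short illustration, again assuming the non-standard __int128 extension:

    #include <bitset>
    #include <iostream>

    int main() {
        // bitset's converting constructor takes unsigned long long, so the
        // upper 64 bits of a 128-bit value are silently dropped on conversion.
        unsigned __int128 v =
            (static_cast<unsigned __int128>(0xABCD) << 64) | 0x1234;
        std::bitset<128> b(v);               // converted to unsigned long long first
        std::cout << b.to_ullong() << '\n';  // prints 4660 (0x1234); the 0xABCD half is gone
    }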


> 2) ...

Yes, I mean that the wording in the core part of the C++ standard
would not be affected. The existing overload resolution, type
conversion, etc. rules have to work for extended integers already.
This is because the implementation could provide those if it wanted
to, and the core wording must be robust against that.

> In the same section you list some "oddities" ...

I cover the impact on existing code in a separate section as well.
Firstly, the introduction of a new type doesn't break any existing
code unless the user uses that type. New features can have breaking
changes when you opt into them explicitly; this is okay.

Secondly, the same issue happened when 8-bit developers had to
suddenly account for 16-bit types, etc. etc. It's ultimately the
developer's fault for not writing a generic implementation. C++ has
provided the means for 30 years.

If a user has written an overload set that covers only the standard
integers, then their code was already not covering all types because
implementations could have provided any number of extended integers,
and __int128 and _BitInt(N) already exist in some compilers. I guess
this proposal throws such code under the bus and I don't care.
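
A minimal sketch of what such a generic implementation could look like (stringify is a hypothetical name, not something from the proposal):

    #include <concepts>
    #include <string>

    // One constrained template instead of one overload per standard integer
    // type, so any extended integers the implementation provides are covered
    // automatically.
    template <std::integral T>
    std::string stringify(T value) {
        const bool negative = value < 0;
        std::string out;
        do {
            // Negate per digit so the most negative value does not overflow.
            int digit = static_cast<int>(value % 10);
            out.insert(out.begin(), static_cast<char>('0' + (negative ? -digit : digit)));
            value /= 10;
        } while (value != 0);
        if (negative)
            out.insert(out.begin(), '-');
        return out;
    }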

You can't make an omelette without breaking some eggs, and we're
cooking the mother of all omelettes here.
--
Std-Proposals mailing list
Std-Proposals@lists.isocpp.org
https://lists.isocpp.org/mailman/listinfo.cgi/std-proposals