Date: Thu, 16 Apr 2026 06:19:53 +0200
On Thu, 16 Apr 2026 at 05:51, Steve Weinrich via Std-Proposals <
std-proposals_at_[hidden]> wrote:
> I find it very interesting that folk think the argument should be signed.
> I have found that a very large percentage of ints should really be unsigned
> as negative values are not permitted in normal usage.
>
> As an example, I have used, and seen, this an uncountable number of times:
>
> T array[10];
>
> for (int i = 0; i < 10; ++i) { stuff }
>
> I have been embracing unsigned ints more and more in an effort to make it
> clear that negative values are not permitted.
>
I guess people will never stop using unsigned integers to give their
functions a wide contract because of how tempting it is, but it's not a
good thing to do. Unsigned integers have modular arithmetic, which is
clearly wrong for sizes and quantities.
- Subtracting 100 from an amount of 10 should give you a deficit of 90
elements (as a negative number), but unsigned integers would give you some
huge positive number.
- Comparing (x > -1) should be true for any non-negative x, but with
unsigned integers it is always false, because the -1 is converted to a
huge unsigned value by the usual arithmetic conversions.
- Many other such bugs.
The silliness of unsigned integers as a way to skip input validation also
becomes apparent when you try to apply the same strategy to floating-point
types. We don't have a float that is always finite, a float that is always
in [0, 1], a float that is always in [0, 1), a float that is always in [0,
inf], a float that is non-NaN, a float that is nonzero, a float ...
All these preconditions should be handled by the function rather than
creating a distinct type for every possible precondition in existence.
Unsigned integers attempt the latter, and to be fair, they're not doing a
bad job considering how common the precondition of positivity is. Still,
there is a good chance your precondition is (x > 0), not (x >= 0), and then
you could have just as well used signed integers; it's a single check
either way.
As for the proposal of adding a std::signed_integral overload though ... I
don't think it's worth it at this point. It means that all sorts of code
doing reserve(123) now goes through an extra template instantiation, and at
least for literals, this doesn't meaningfully increase safety. In general,
we don't have a standard solution to preventing non-value-preserving
implicit conversions, and that's really what should be done there. You're
also not fixing operator[] and all sorts of other functions that take
size_type, and you're not changing the fact that size() and capacity()
return unsigned integers, which forces you either to avoid signed integers
in the first place or to put up with poor ergonomics.
tl;dr: unsigned integers suck, but the train for fixing the problem has
left the station
Received on 2026-04-16 04:20:10
