Date: Thu, 16 Apr 2026 11:40:26 +0200
On Thu, 16 Apr 2026 at 11:11, Jonathan Wakely <cxx_at_[hidden]> wrote:
>
>
>
> On Thu, 16 Apr 2026 at 10:01, Marcin Jaczewski via Std-Proposals <std-proposals_at_[hidden]> wrote:
>>
>> On Thu, 16 Apr 2026 at 06:20, Jan Schultke via Std-Proposals
>> <std-proposals_at_[hidden]> wrote:
>> >
>> >
>> >
>> > On Thu, 16 Apr 2026 at 05:51, Steve Weinrich via Std-Proposals <std-proposals_at_[hidden]> wrote:
>> >>
>> >> I find it very interesting that folk think the argument should be signed. I have found that a very large percentage of ints should really be unsigned as negative values are not permitted in normal usage.
>> >>
>> >> As an example, I have used, and seen, this an uncountable number of times:
>> >>
>> >> T array[10];
>> >>
>> >> for (int i = 0; i < 10; ++i) { stuff }
>> >>
>> >> I have been embracing unsigned ints more and more in an effort to make it clear that negative values are not permitted.
>> >
>> >
>> > I guess people will never stop using unsigned integers to give their functions a wide contract because of how tempting it is, but it's not a good thing to do. Unsigned integers have modular arithmetic, which is clearly wrong for sizes and quantities.
>> >
>> > Subtracting 100 from an amount of 10 should give you a deficit of 90 elements (as a negative number), but unsigned integers would give you some huge positive number.
>> > Comparing (x > -1) should always be true for positive numbers, but it's always false for unsigned integers.
>> > Many other such bugs.
>> >
>> > The silliness of unsigned integers as a way to skip input validation also becomes apparent when you try to apply the same strategy to floating-point types. We don't have a float that is always finite, a float that is always in [0, 1], a float that is always in [0, 1), a float that is always in [0, inf], a float that is non-NaN, a float that is nonzero, a float ...
>> >
>> > All these preconditions should be handled by the function rather than creating a distinct type for every possible precondition in existence. Unsigned integers attempt the latter, and to be fair, they're not doing a bad job considering how common the precondition of positivity is. Still, there is a good chance your precondition is (x > 0), not (x >= 0), and then you could have just as well used signed integers; it's a single check either way.
>> >
>> > As for the proposal of adding a std::signed_integral overload though ... I don't think it's worth it at this point. It means that all sorts of code doing reserve(123) now goes through an extra template instantiation, and at least for literals, this doesn't meaningfully increase safety. In general, we don't have a standard solution to preventing non-value-preserving implicit conversions, and that's really what should be done there. You're also not fixing operator[] and all sorts of other functions that take size_type, and you're not changing the fact that size() and capacity() return unsigned integers, which forces you into not using signed integers in the first place (or have poor ergonomics).
>> >
>> > tl;dr unsigned integers suck, but the train for fixing the problem has left the station
>>
>> I do not think they are bad; if you limit the allowed range (effectively
>> abandoning their only "advantage"), then:
>> ```
>> (int)x < (unsigned)size
>> ```
>> is always correct: you do not need to check for `x < 0`, because a
>> negative `x` rolls over to a very large unsigned value, which is
>> greater than `size`.
>
>
> For some value of correct :-)
>
> It's not mathematically correct, but it does give the intended answer to "is x a non-negative number less than size?"
>
Yes, but this is the same problem as `x + 1 < x`: we leave the realm
of standard mathematical integers :)
>
>>
>> This means the standard should guarantee that we can't allocate more
>> than `INT_MAX` elements. I recall that C had something like this:
>> a macro for the "maximum allocation size".
>> With that guarantee, most rollovers would not be a problem.
>>
>
> I'm not aware of any such macro.
>
> Implementations typically prevent creating any object larger than PTRDIFF_MAX bytes, and that's a practical upper limit on a malloc result, even if you have more memory than that.
I read about this a long time ago and may be confusing something, but
after googling I found `RSIZE_MAX`. It is not linked directly to
`malloc`, though; it is meant for the safer C string functions.
Received on 2026-04-16 09:40:38
