Date: Wed, 10 Sep 2025 11:10:18 +0100
On 10/09/2025 05:07, Jan Schultke via Std-Proposals wrote:
>> I understand that it's unlikely that such conversions will be removed. However,
>> I still think it's a bad idea to try and do this. _BitInt isn't meant to be a
>> replacement for the existing integer types, which it seems like you're
>> suggesting by saying programmers can opt into using these new types. They're
>> meant to complement the existing set of integer types. Trying to fix some of the
>> unfortunate design decisions every time a new feature gets added is not that
>> helpful and causes more problems than it is worth IMO. If for example a new type
>> long long long were to be added should we also try and disable some conversions
>> specifically for it? Maybe we could also add extra rules to the declaration
>> specifiers to prevent wacky stuff like long unsigned long int long.
>
> Note that the topic of implicit conversions is discussed in great
> detail here, and marked as a contentious decision.
> https://isocpp.org/files/papers/D3666r0.html#permissive-implicit-conversions
To add to that reasoning, implicit conversions are inherently part of an API. One compelling
aspect of accepting _BitInt as-is is that headers which use it stay usable from both C and
C++, which matters chiefly for C libraries.
If a C library declares something like:
void foo(_BitInt(16) x);
then its documentation and example code can plausibly contain foo(42) or foo(some_int).
It is confusing, and adds friction, when the same C library presents subtly different
interfaces to C and to C++.
Putting C compatibility aside, we have different options for disallowing narrowing:
1. Disallow the conversion always, even for constant expressions.
This makes even trivial cases, like foo(0), ill-formed.
2. Do it like {}: well-formed for constant expressions, ill-formed otherwise. For overload
resolution it is treated similarly to foo({expr_with_integral_type}), i.e. narrowing is
checked after overload resolution.
* This makes the type impossible to perfect-forward to.
void bar(auto&& x) {
    foo(FWD(x));  // FWD(x) is the usual shorthand for std::forward<decltype(x)>(x)
}
foo(0) is well-formed, but bar(0) is ill-formed, because the forwarded argument is no
longer a constant expression.
It's established that we can't forward {} (which is not an expression), but normally we
can forward expressions, with only a few unfortunate exceptions.
* It plays a bit weirdly with overload resolution.
Although foo({runtime_int}) does not produce a SFINAE-able failure during overload
resolution, the call still fails afterwards, similarly to calling a deleted function.
This can effectively introduce unwanted deleted-function-like behavior when designing an
overload set.
* std::is_convertible_v<int, _BitInt(16)> does not tell the full story: some int
expressions (fitting constant expressions) do convert, and the conversion is considered
viable during overload resolution, only to potentially fail afterwards. This might throw
off some generic code.
> A large number of other respondents said they'd want the implicit
> conversions to be more restrictive, so there's a good chance it will
> happen. I don't buy the rationale of "_BitInt isn't meant to be a
> replacement for the existing integer types". It wasn't originally
> designed this way, but there's no reason why it couldn't be a
> replacement in C++ (at least in most situations), and a lot of people
> are interested in using it this way.
>
> It certainly doesn't make my job easier when other people try to "just
> fix all problems with integers real quick" while we're adding _BitInt
> to C++, but oh well.
>
> A new kind of specifier that makes integers more strict may not find
> favor because it makes the system even more complicated, and at least
> for _BitInt, we don't technically need such a bifurcation.
>
>> My point being: instead of trying to design new types to be perfect, just
>> understand that the language isn't perfect and probably won't be. If every new
>> feature is an opportunity to fix some minor issue then adding anything new will
>> be an uphill battle to convince everyone that it's perfect. Also, every fix is a
>> new inconsistency in the language. For example, imagine if every new container
>> had to use the name is_empty instead of empty to check if the container is
>> empty. is_empty is certainly a better name than empty, but it would make generic
>> code using is_empty/empty much more cumbersome.
>
> The generic code argument may be the most convincing C++-internal
> argument that I have against restricting implicit conversions. See
> https://isocpp.org/files/papers/D3666r0.html#difficult-special-cases
> (the example with div_ceil).
Received on 2025-09-10 10:10:24