Let me rephrase my proposal. In fact I’m beginning to think the verb “to cast” is misleading in this context, as it suggests some kind of transformation, a change of representation, whereas there is none here.
Using a bit of Platonic wording, let’s consider some ideal idea of a type, with its set of representations and operations.
Then we can say T is a kind of named “view” of this ideal type.
When we declare:
using U = T;
…then we say that U is another “view” of the same ideal type as
T. The standard says “U is a synonym for T” (paraphrasing 10.1.3 a bit).
Therefore after such a declaration, we can say:
a U can be seen as a T and a T can be seen as a
U.
No “cast” involved: it’s only just a different view of the same thing.
The proposal is to allow the user to restrict the “range” of the newly defined view.
When I declare:
using U = new T;
…then just as before, U is another “view” of the same ideal type as
T, but this time we have:
a U can be seen as a T but a T can’t be seen as a
U.
Again, no “cast” involved.
Now back to your questions and remarks. You defined the context as follows:
using U = new T;
void assign(T& dest, T& src);
T t;
U u;
Then:
t = u;
Indeed this is OK. A U can be seen as a T.
u = t;
Indeed this is not OK: a T can’t be seen as a U. However the restriction is “artificial”, purely semantic, in the same way the following would not be OK:
int x = 0;
int const y = 0;
y = x; // not OK, but not because of any representation incompatibility
To do the assignment then a cast should be used:
u = static_cast<U>(t);
…however no “real cast” (no representation transformation) would be performed; at the binary level it would just call the equivalent of operator=().
assign(t, t);
assign(u, u);
Indeed OK, will call assign(T,T) because a
U can be seen as a T.
assign(t, u);
assign(u, t);
Both OK, will call assign(T,T).
If we assume that your assign() function is semantically equivalent to an operator=(), then indeed it may seem there’s some inconsistency. But I’d argue it’s
your choice to create such a semantic inconsistency
😉 Give this function another, totally different semantics, or rename it to something like
displaySideBySide() with the exact same signature, and there’s no longer any
apparent inconsistency. At least I don’t see any.
void assign(T& dest, std::type_identity_t<T>& src);
void assign(std::type_identity_t<T>& dest, T& src);
Only discussing the first case, as the two are symmetrical. Since std::type_identity_t<T> is T, I don’t see any difference from the situation above.
Now supposing we have this template:
template<typename X>
void generic_assign(X& dest, std::type_identity_t<X>& src);
Then:
generic_assign(t, u);
Deduced to be generic_assign<T>(T&, T&), a
U can be seen as a T, so OK.
generic_assign(u, t);
Deduced to be generic_assign<U>(U&, U&), a
T can’t be seen as a U, so not OK.
About std::hash<U>: there’s no magic, just the usual rules. When the compiler sees
std::hash<U>, the process could roughly be as follows.
Given:
class A { … };
template<> struct std::hash<A> { … };
using B = A;
…then in standard C++ the compiler can “magically” produce
std::hash<B>, because a B can be seen as an
A, so it will happily use std::hash<A>. Or my compiler is severely broken, which would be very embarrassing
😊
In this situation there’s one important difference though. Because the suggested declaration breaks the symmetry between
T and U, they now can be distinguished – whereas A and B above
can’t be distinguished. Because a T can’t be seen as a U, it is possible to provide a
std::hash<U> “specific specialization”, whereas it’s not possible to provide a
std::hash<B> specialization.
Now the question is, if we have:
template<> struct std::hash<T> { … };
void foo(std::unordered_set<T>& st) { … }
// ... (1)
std::unordered_set<U> su; // (2)
foo(su); // (3)
The declaration in (2) is OK because, as we’ve seen, the compiler can find
std::hash<U> as being std::hash<T>, provided it has not been specialized.
For the call in (3) we have two cases, depending on whether std::hash<U> has been explicitly specialized.
More generally, given:
template<typename X> class G { … };
using U = new T;
…then a G<U> can be seen as a
G<T> because a U can be seen as a
T, unless there’s an explicit specialization G<U>.
Example:
#include <unordered_set>
#include <string>
void print(std::unordered_set<std::string> const&) { ... }
using Name = new std::string;
std::unordered_set<Name> names;
print(names);
OK, std::hash<Name> is found to be
std::hash<std::string> here.
template<> struct std::hash<Name> { /* case-insensitive hashing */ };
std::unordered_set<Name> ci_names;
print(ci_names);
Not OK, std::hash<Name> isn’t
std::hash<std::string> anymore here.
Although here, if the purpose is to provide a general “case-insensitive string”, then
maybe the other suggested notation, “using ci_string = explicit std::string”, would be preferable as it’s more constrained, but that may be for another discussion
😉
Last but not least:
struct T {
using self = T;
T(const self&) = default; // copy constructor
};
using U = new T;
This boils down to the relationship between T and U.
CppReference states that std::is_same<> is commutative, which seems implied by the standard. While I defined
T and U as two views on the same “ideal type”, they are not equivalent, since they can be distinguished, which is the whole point of the proposal. This implies breaking that commutativity.
Taking the traits one by one:
The effect of the declaration using U = new T when
T is a class type is to declare a new view named U onto the same “ideal type” as
T, as if we had copy-pasted the definition of T and replaced every
T with U.
Therefore:
Regards,
Yves Bailly
Development engineer
Manufacturing Intelligence division
Hexagon
M: +33 (0) 6.82.66.09.01
HexagonMI.com