On 05/05/2020 08.34, Jens Maurer via SG16 wrote:
> On 05/05/2020 08.15, Jens Maurer via SG16 wrote:
>> On 05/05/2020 07.58, Tom Honermann via SG16 wrote:
>>> P1949R3 <https://wg21.link/p1949r3> presents the following code
>>> that, assuming I accurately captured the discussion during the
>>> April 22nd SG16 telecon in
>>> https://github.com/sg16-unicode/sg16-meetings#april-22nd-2020, we
>>> intend to make well-formed (it is currently ill-formed because
>>> \u0300 doesn't match /identifier/ and is therefore lexed as '\'
>>> followed by 'u0300'):
>>>
>>>   #define accent(x) x##\u0300
>>>   constexpr int accent(A) = 2;
>>>   constexpr int gv2 = A\u0300;
>>>   static_assert(gv2 == 2, "whatever");
>>
>> (Did I mention I hate HTML e-mails?)

Especially when presentation like that affects the semantics of the text :(
>> The proposed wording does not attempt to make this example
>> well-formed, assuming that a combining character is not in
>> XID_Continue. (Please check me on the latter.)
>
> That's wrong. UAX#31 says XID_Continue is ID_Continue with a few
> (enumerated) exceptions.
> https://www.unicode.org/reports/tr31/tr31-33.html#NFKC_Modifications

That section discusses NFKC, not NFC. I don't think it is applicable to
our intents.
> ... and ID_Continue includes combining characters.

It would be rather restrictive if it didn't :)
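As a quick sanity check of the property claims: Python's identifier rules are defined by PEP 3131 in terms of XID_Start/XID_Continue, so `str.isidentifier` can be used to probe whether a combining character is accepted in continue position (illustrative only, not normative for C++):

```python
# COMBINING GRAVE ACCENT (U+0300) is General_Category Mn, hence in
# ID_Continue, and it is not removed by the XID_Continue modifications.
print("A\u0300".isidentifier())  # True: U+0300 accepted in continue position
print("\u0300".isidentifier())   # False: U+0300 is not in XID_Start
```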
>> When we preprocess accent(A), we perform A ## \u0300, which becomes
>> A\u0300, which is not a (single) preprocessing token (because \u0300
>> is not in XID_Continue, so this is not an identifier, and none of
>> the other kinds in [lex.pptoken] matches)
>
> So, this argument must become "which is lexed as an /identifier/
> preprocessing token, and then immediately rejected because 'The
> program is ill-formed if an identifier does not conform to the NFC
> normalization specified in ISO/IEC 10646.'"
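For the first example, the NFC rejection can be checked concretely with Python's unicodedata module (an illustration of the Unicode facts, not of any proposed wording):

```python
import unicodedata

# A + COMBINING GRAVE ACCENT is not in NFC: NFC composes the pair into
# the single code point U+00C0 (LATIN CAPITAL LETTER A WITH GRAVE),
# which is a different sequence.
ident = "A\u0300"
print(unicodedata.is_normalized("NFC", ident))          # False
print(unicodedata.normalize("NFC", ident) == "\u00c0")  # True
```

So the pasted identifier A\u0300 does not conform to NFC and would be rejected under the quoted requirement.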
Agreed for that example. But for the other example I provided, the
resulting identifier (if lexed such that \u0300\u0327 produces a
single preprocessing token) is in NFC, since there is no precomposed
character for a capital letter A with grave and cedilla. Do we believe
that that example should be well-formed?
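A cross-check of that sequence with Python's unicodedata (illustrative; note that, if I read UAX #15 correctly, canonical ordering sorts the cedilla, ccc 202, before the grave, ccc 230, and canonical composition can still combine A with the grave across the cedilla, which may bear on the "is in NFC" claim):

```python
import unicodedata

# A + COMBINING GRAVE ACCENT (U+0300) + COMBINING CEDILLA (U+0327).
# There is no single precomposed code point for "A with grave and
# cedilla", but NFC reorders the marks and composes A with the grave.
s = "A\u0300\u0327"
print(unicodedata.is_normalized("NFC", s))
print([f"U+{ord(c):04X}" for c in unicodedata.normalize("NFC", s)])
```

If that output is to be believed, the NFC form is U+00C0 U+0327 (two code points), so the sequence as written above is not itself in NFC.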
> And since UAX#31 actually specifies XID_Continue and ID_Continue, not
> UAX#44, we need to make our reference to UAX#31 normative and the
> reference to UAX#44 informative (bibliography). Also, the normative
> text should refer to UAX#31, not UAX#44.
I don't think that is correct. I believe UAX#44 does define the
XID_Start and XID_Continue properties; UAX#31 provides some
informational context for why they are defined as they are.
Tom.
> Jens

>> and we get undefined behavior per [cpp.concat] p3. We decided not to
>> address the undefined behavior case here, because that's SG12
>> territory.
>>
>> Jens
>>
>>> However, the proposed wording would reject the following case
>>> involving multiple combining characters:
>>>
>>>   #define accent(x) x##\u0300\u0327
>>>   constexpr int accent(A) = 2;
>>>   constexpr int gv2 = A\u0300\u0327;
>>>   static_assert(gv2 == 2, "whatever");
>>>
>>> The rejection occurs because the proposed wording
>>> <http://wiki.edg.com/pub/Wg21summer2020/SG16/uax31.html> results in
>>> each /universal-character-name/ that is not lexed as part of one of
>>> the existing /preprocessing-token/ cases being lexed as its own
>>> preprocessing token; the attempted concatenation produces two
>>> preprocessing tokens (A\u0300 and \u0327). I don't know of a
>>> principled reason for such rejection, though it isn't clear to me
>>> what characters should be permitted to be munched together. One
>>> approach would be to introduce another new /preprocessing-token/
>>> category to match the proposed /identifier-continue/; max munch
>>> would still always prefer /identifier/ when such a sequence is
>>> preceded by a character in XID_Start. We would still want to retain
>>> the proposed new "each /universal-character-name/ ..." category as
>>> a way to avoid tearing of /universal-character-name/s that name a
>>> character not in XID_Start or XID_Continue.
>>>
>>> I'm not convinced that this scenario is worth addressing. It
>>> strikes me as approximately as valuable as the first example.
>>>
>>> Tom.
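To illustrate the token-splitting argument, here is a toy model of that lexing rule (a hypothetical sketch, not the proposed wording: it approximates XID_Start/XID_Continue with Python's str.isidentifier per PEP 3131, and all names are invented for illustration):

```python
def is_xid_start(c):
    # Approximation: a lone character that is a valid Python
    # identifier is in XID_Start (PEP 3131).
    return c.isidentifier()

def is_xid_continue(c):
    # "a" is XID_Start, so this tests c for XID_Continue membership.
    return ("a" + c).isidentifier()

def lex(s):
    # Toy max munch: an identifier is XID_Start XID_Continue*; any
    # other code point becomes its own "preprocessing token".
    tokens, i = [], 0
    while i < len(s):
        if is_xid_start(s[i]):
            j = i + 1
            while j < len(s) and is_xid_continue(s[j]):
                j += 1
            tokens.append(s[i:j])
            i = j
        else:
            tokens.append(s[i])
            i += 1
    return tokens

# In the replacement list, \u0300\u0327 lexes as two separate
# one-character tokens, because U+0300 is not in XID_Start:
tokens = lex("\u0300\u0327")
print(tokens)

# Token pasting (##) joins A with the first token only, and tokens are
# not re-lexed afterwards, so \u0327 remains a separate token:
pasted = ["A" + tokens[0]] + tokens[1:]
print(pasted)

# Yet the same characters written directly in source would max-munch
# into a single identifier token:
print(lex("A\u0300\u0327"))
```

Under such a model, A ## \u0300\u0327 can never form one identifier even though A\u0300\u0327 written in source would, which is the asymmetry the "no principled reason" remark points at.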