I have reviewed a version of D1885R8 that was fresh as of a few hours ago.

I appreciate the prose additions. Thank you, Corentin and Jens. Much progress has been made.

Some feedback below:

The prose should not state that the "C" locale is associated with US-ASCII.
Big5 and Extended_UNIX_Code_Fixed_Width_for_Japanese use fixed-width two-byte sequences. That is not equivalent to using 16-bit code units. In particular, a correct implementation of either encoding places the lead byte at the lower address, so the same character would appear as different 16-bit values depending on the endianness of the system.
The statement that "the size of code units of individual encodings is not exposed by IANA" overlooks RFC 2978 (which still provides the framework for registration), which clearly indicates that all charsets operate on sequences of octets. The question with which I started a whole thread was how the octet values are to be extracted from C++ types that are not 8 bits wide. I think I got my answer from various reflector responses, but the paper does not answer that question; it gives a conclusion based on an answer that is assumed rather than stated. I think it would be accurate to say that the paper advocates some sort of object-representation-based model, so the paper should say "under an object-representation-based model, 0-padded forms are distinct from the unpadded encoding".
To be consistent, the wide EBCDIC example should also include the size in the associated name.

Concern for SG 16 to evaluate:
The recommended practice re: UTF-16 and UTF-32 is not consistent with getting correct treatment from interfaces that read the wide character data as a byte stream (e.g., iconv) when invalid characters appear in a position where they can be confused with a byte-order mark of the reverse-from-native endianness.

re: "antecedent", please use "precedent" or "antecedent example for such"

The recommended practice regarding (non-)use of registered encodings having single byte code units for the description of wide encodings ignores the fact that the UTF-16 encoding scheme has single byte code units (it is the encoding form that has 16-bit code units).

The recommended practice regarding "byte-order agnostic encodings" presumably means that the appropriately sized C++ types are expected to have the same value for a character on different platforms (regardless of the platform endianness). See above in this note re: why the non-Unicode entries don't really fit.

Concern for SG 16 to evaluate:
COMP_NAME is not agnostic to Unicode normalization differences. I strongly suggest making the name-accepting constructor reject characters outside the basic character set.

With respect to the prohibition on using id::unknown with `wide_literal`, I ask that it be lifted. There is existing implementation practice in which a compiler accepts a (host system) locale name and uses `mbstowcs` to encode wide literals (and host locales can be user-defined).

On Fri, Oct 1, 2021 at 1:40 PM Tom Honermann via SG16 <sg16@lists.isocpp.org> wrote:

Please note that there has been a schedule change. The previously scheduled telecon for 2021-10-13 has been moved earlier to 2021-10-06. This change was made to accommodate schedule restrictions for the author of the two papers on the agenda below. The shared calendar has been updated (which triggered the sending of new meeting invitations).

SG16 will hold a telecon on Wednesday, October 6th (not the 13th) at 19:30 UTC (timezone conversion).

The agenda is:

D2460 is first on the agenda because establishing consensus on it will reduce complications for P1885. We'll plan to spend 30 minutes on D2460 and the remainder of our time on P1885.

D2460R0 seeks to address SG16 issue 9 (Requiring wchar_t to represent all members of the execution wide character set does not match existing practice). Please read through the comments in that issue.

P1885 is back on the agenda to discuss issues raised on the LEWG and SG16 mailing lists. The relevant email threads are linked below; there have been a lot.

The above threads probe fundamental concerns about the IANA registry and the goals that P1885 strives to fulfill. It probably isn't realistic to expect to resolve them all in a single telecon. Given the amount of discussion that has taken place and the range of perspectives offered, I'm no longer confident that we have a shared deep understanding of the design and intent. Specific points I want to cover include the following.

  • Is the IANA registry sufficient and appropriate for the identification of both the ordinary and wide literal encodings?
  • How is the IANA registry intended to be applied? Which IANA encoding would be considered a match for each of the following cases?
    • Wide literal encoding is UTF-16, sizeof(wchar_t) is 2, CHAR_BIT is >= 8, little endian architecture.
    • Wide literal encoding is UTF-16, sizeof(wchar_t) is 1, CHAR_BIT is >= 16, architecture endianness is irrelevant since code units are a single byte.
    • Wide literal encoding is UTF-16LE, sizeof(wchar_t) is 1, CHAR_BIT is >= 8, architecture endianness is irrelevant since code units are a single byte.
  • How are conflicts between the IANA registered encoding names and other names recognized by implementations to be resolved?

Please feel free to suggest other topics.

Tom.

--
SG16 mailing list
SG16@lists.isocpp.org
https://lists.isocpp.org/mailman/listinfo.cgi/sg16