If implementors aren't going to be willing to change these tables once we ship, then I think we have a fairly serious issue.
Some have adamantly stated that these widths are estimates only and should not be counted on to remain stable. Code that is sensitive to the formatted size of the output should be calling std::formatted_size and allocating appropriately. I take it your concern is regarding code that calls std::format_to with an assumption that the provided output buffer is large enough? (or, code that calls std::format and assumes the size of the resulting std::string).
On 11/12/19 8:58 PM, Billy O'Neal (VC LIBS) wrote:
My only point was that the specified behavior gives grapheme clusters a width of 1 or 2, but there exist characters like U+FDFD that are wider than 2 (and many that have a width of 0). I would be very nervous about changing the constants used after std::format ships, because that could introduce unexpected buffer overruns or underruns in user programs. This is the kind of thing that becomes contractual very quickly (which is one of the reasons I was weakly against trying to open this can of worms).
On 11/12/19 6:11 PM, Billy O'Neal (VC LIBS) via Lib-Ext wrote:
It came up in the context of that width thing in format and I was asking if I had permission to make wider-than-2 characters format properly, and the forwarded text doesn’t seem to allow that (which is OK, I just wanted to understand at the time); I was thinking of U+FDFD (﷽).
Can you elaborate? My understanding of the forwarded wording is that the assumed encoding for the input text is implementation defined (though not locale sensitive) and that implementors are encouraged to use the Unicode code point ranges indicated in the wording, but are not required to (that is my interpretation of the use of the word "should" in the proposed wording).
It does look like the provided code point ranges don't handle U+FDFD correctly.
I don't know how much confidence should be placed on the listed code point ranges. But I think it is important that we consider them amenable to change. I suspect that U+FDFD is not the last code point we'll find that is not correctly handled.
On Tue, 12 Nov 2019 at 16:58, Billy O'Neal (VC LIBS) via Lib-Ext <email@example.com> wrote:
During review of some Unicode stuff in LWG we had a mini discussion with some folks about grapheme clusters, and I mentioned that everyone who touches this stuff might understand the complexities better if they read this:
FYI, SG-16 is aware of that blog post and I think there is pretty strong agreement with it.
Codepoints have some uses (notably, the Unicode Character Database is really the Unicode Codepoint Database, and most Unicode algorithms work on codepoints), but any kind of user-facing UX should deal in EGCs (extended grapheme clusters).
That is not always what applications choose to do, for a variety of reasons. Notably, Twitter's character count deals in codepoints, web browsers' search functions use codepoints so as to ignore diacritics, and comparisons can be done on (normalized) codepoint sequences.
There is also not always a 1-1 mapping between what people understand as a "character", grapheme clusters, and glyphs.
Lib-Ext mailing list
Link to this post: http://lists.isocpp.org/lib-ext/2019/11/13606.php
_______________________________________________
Lib-Ext mailing list
Subscription: https://lists.isocpp.org/mailman/listinfo.cgi/lib-ext
Link to this post: http://lists.isocpp.org/lib-ext/2019/11/13609.php