How is managing this database different from the time-zone database?
Why specify values when you can specify functions that query the database?
Why not specify that the database is updatable within a particular standard release and thus its results are not fixed across time?
We should say _something_ somewhere.
In many areas Unicode is purposefully not making any commitment to stability (it turns out that organizing the world's cultures is hard), and that particular proposal is harder still.
Notably, the width of an emoji sequence depends on the vendor and the Unicode version, and some clusterization depends on locale, although by default format should not do locale tailoring.
Anyway, promising anything but a best effort (with the expectation that both the standard and implementations will improve/evolve) backs us into a corner that I don't think anyone in SG-16 wants to be in.
This issue will arise for many Unicode/locale related proposals.
On Wed, 13 Nov 2019 at 06:56, Titus Winters <firstname.lastname@example.org> wrote:
SD-8 is *appropriate* if we want to tell the public "The committee probably won't consider anything like X a breaking change, if your code gets in the way of that you may have a difficult time upgrading."
It's never *necessary*, nor does it *limit* us - we might still decide to do things that are outside of that scope. It's just trying to set general expectations.
(This doesn't sound like a case that falls into that category.)
On Tue, Nov 12, 2019 at 10:15 PM Billy O'Neal (VC LIBS) <email@example.com> wrote:
Sorry, I added Titus to ask if we need to talk about this in SD-8 somehow.
I haven’t seen how customers will use this API enough to go so far as to make the statement “implementers aren’t going to be willing to change […]” at this time. It is certainly a possibility. Changes to that table are breaking changes. Whether we’re going to be willing to make such changes is a value judgement weighing potential breaks against the benefit that might be attained from them.
> I take it your concern is regarding code that calls std::format_to with an assumption that the provided output buffer is large enough?
More or less, yes. Certainly we see people do that with sprintf today.
If implementors aren't going to be willing to change these tables once we ship, then I think we have a fairly serious issue.
Some have adamantly stated that these widths are estimates only and should not be counted on to remain stable. Code that is sensitive to the formatted size of the output should be calling std::formatted_size and allocating appropriately. I take it your concern is regarding code that calls std::format_to with an assumption that the provided output buffer is large enough? (or, code that calls std::format and assumes the size of the resulting std::string).
On 11/12/19 8:58 PM, Billy O'Neal (VC LIBS) wrote:
My only point was that the specified behavior gives grapheme clusters a width of 1 or 2, but there exist characters like U+FDFD that are wider than 2 (and many that have a width of 0). I would be very nervous about changing the constants used after std::format ships, because that could introduce unexpected buffer overruns or underruns in user programs. This is the kind of thing that becomes contractual very quickly (which is one of the reasons I was weakly against trying to open this can of worms).
On 11/12/19 6:11 PM, Billy O'Neal (VC LIBS) via Lib-Ext wrote:
It came up in the context of that width thing in format and I was asking if I had permission to make wider-than-2 characters format properly, and the forwarded text doesn’t seem to allow that (which is OK, I just wanted to understand at the time); I was thinking of U+FDFD (﷽).
Can you elaborate? My understanding of the forwarded wording is that the assumed encoding for the input text is implementation defined (though not locale sensitive) and that implementors are encouraged to use the Unicode code point ranges indicated in the wording, but are not required to (that is my interpretation of the use of the word "should" in the proposed wording).
It does look like the provided code point ranges don't handle U+FDFD correctly.
I don't know how much confidence should be placed on the listed code point ranges. But I think it is important that we consider them amenable to change. I suspect that U+FDFD is not the last code point we'll find that is not correctly handled.
On Tue, 12 Nov 2019 at 16:58, Billy O'Neal (VC LIBS) via Lib-Ext <firstname.lastname@example.org> wrote:
During review of some Unicode stuff in LWG, a few of us had a mini discussion about grapheme clusters, and I mentioned that everyone who touches this stuff might understand the complexities better if they read this:
FYI, SG-16 is aware of that blog post and I think there is pretty strong agreement with it.
Codepoints have some uses (notably, the Unicode Character Database is really the Unicode Codepoint Database, and most Unicode algorithms work on codepoints), but any kind of user-facing UX should deal with EGCs (extended grapheme clusters).
That is not always what applications choose to do, for a variety of reasons. Notably, Twitter's character count deals in codepoints, web browsers' search functions use codepoints so as to ignore diacritics, and comparisons can be done on (normalized) codepoint sequences.
There is also not always a 1:1 mapping between what people understand as a "character", grapheme clusters, and glyphs.
Lib-Ext mailing list
Link to this post: http://lists.isocpp.org/lib-ext/2019/11/13606.php
_______________________________________________
Lib-Ext mailing list
Subscription: https://lists.isocpp.org/mailman/listinfo.cgi/lib-ext
Link to this post: http://lists.isocpp.org/lib-ext/2019/11/13609.php