Document Number: DXXXXR0 Draft
Reply-to: Tom Honermann <firstname.lastname@example.org>
The SG16 Unicode study group was officially formed at the 2018 WG21 meeting in Jacksonville, Florida. We have not yet had our inaugural meeting (that is planned to be held during the upcoming meeting in San Diego), but we've had an active group of WG21 members meeting via video conference regularly since August of 2017, well before our formation as an official study group. Summaries of these meetings are available at https://github.com/sg16-unicode/sg16-meetings/blob/master/README.md.
This paper discusses a set of constraints, guidelines, directives, and non-directives intended to guide our efforts as we pursue improvements to Unicode support in C++. Paper authors proposing Unicode or text processing features are encouraged to consider the perspectives and guidelines discussed here in their designs, or to submit papers arguing against them.
C++ has a long history and, as unfortunate as it may be at times, the past remains stubbornly immutable. As we work to improve the future, we must remain cognizant of the many billions of lines of C++ code in use today and how we will enable past work to retain its value in the future. The following limitations reflect constraints on what we cannot affordably change, at least not in the short term.
UTF-8 has conquered the web, but no such convergence has yet occurred for the execution and wide execution character encodings. Popular and commercially significant platforms such as Windows and z/OS continue to support a wide array of ASCII- and EBCDIC-based encodings for the execution character encoding, as required for compatibility by their long-time customers. Microsoft does not yet offer full support for UTF-8 as the execution encoding for its compiler (the recently introduced /utf-8 option does not affect the behavior of their standard library implementation, nor is UTF-8 selectable as the execution encoding at run-time via environment settings or by calling setlocale()). Standardizing UTF-8 as the execution character encoding is simply not an option for the foreseeable future.
The char16_t and char32_t encodings are also currently implementation defined. However, all existing implementations use UTF-16 and UTF-32 respectively for these encodings, so their implementation definedness is not a practical constraint. P1041 proposes officially standardizing UTF-16 and UTF-32 for these encodings.
The execution and wide execution encodings are not static properties of programs, and therefore not fully known at compile-time. These encodings are determined at run-time and may be dynamically changed by calls to setlocale(). At compile-time, character and string literals are transcoded (in translation phase 5) from the source encoding to an encoding that is expected to be compatible with whatever encoding is selected at run-time. If the compile-time selected encoding turns out not to be compatible with the run-time encoding, then encoding confusion (mojibake) ensues.
The dynamic nature of these encodings is not theoretical. On Windows, the execution encoding is determined at program startup based on the current active code page. On POSIX platforms, the run-time encoding is determined by the LANG, LC_ALL, or LC_CTYPE environment variables. A recent proposal to WG14 (N2226) proposes allowing the current locale settings to vary by thread. Existing programs depend on the ability to dynamically change the execution encoding (within reason) in order for a server process to concurrently serve multiple clients with different locale settings. Attempting to restrict the ability to dynamically change the execution encoding would break existing code.
Since the char16_t and char32_t encodings are currently implementation defined, they too could vary at run-time. However, as noted earlier, all implementations currently use UTF-16 and UTF-32 and do not support such variance. P1041 will solidify current practice and ensure these encodings are known at compile-time.
On POSIX derived systems, the primary interface to the operating system is via the ordinary character encoding. This contrasts with Windows where the primary interface is via the wide encoding and interfaces defined in terms of char are implemented as wrappers around their wide counterparts. Unfortunately, such wrappers are often poor substitutes for use of their wide cousins due to transcoding limitations; it is common that the ordinary execution encoding is unable to represent all of the characters supported by the wide execution encoding.
The designers of the C++17 filesystem library had to wrestle with this issue and addressed it via abstraction; std::filesystem::path has an implementation defined value_type that reflects the primary operating system encoding. Member functions provide access to paths transcoded to any one of the five standard mandated encodings (ordinary, wide, UTF-8, char16_t, and char32_t). This design serves as a useful precedent for future design.
The wide execution encoding was introduced to provide relief from the constraints of the (typically 8-bit) char based ordinary execution encodings by enabling a single large character set and trivial encoding that avoided the need for multibyte encoding and ISO-2022 style character set switching escape sequences. Unfortunately, the size of wchar_t, the character set, and its encoding were all left as implementation defined properties resulting in significant implementation variance. The present situation is that the wide execution encoding is only widely used on Windows where its implementation is actually non-conforming (https://github.com/sg16-unicode/sg16/issues/9).
Pointers to char may be used to inspect the underlying representation of objects of any type with the consequence that lvalues of type char alias with other types. This restricts the ability of the compiler to optimize code that uses char. std::byte was introduced in C++17 as an alternative type to use when char's aliasing abilities are desired, but it will be a long time, if ever, before we can deprecate and remove char's aliasing features.
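A minimal illustration of the two forms of representation access; both functions read the same byte, and the names are illustrative:

```cpp
#include <cstddef>
#include <cstdint>

// Reading an object's representation through char is well-defined, but the
// aliasing permission it relies on also pessimizes unrelated char based code.
unsigned char first_byte_via_char(const std::uint32_t& v) {
    return static_cast<unsigned char>(*reinterpret_cast<const char*>(&v));
}

// std::byte (C++17) expresses the "raw memory" intent without pressing a
// character type into service.
std::byte first_byte_via_byte(const std::uint32_t& v) {
    return *reinterpret_cast<const std::byte*>(&v);
}
```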
ICU powers Unicode support in most portable C++ programs today due to its long history, impressive feature set, and friendly license. When considering standardizing Unicode related features, we must keep in mind that the Unicode standard is a large and complicated specification, and many C++ implementors simply cannot afford to reimplement what ICU provides. In practice this means that we'll need to ensure that proposals for new Unicode features are implementable using ICU.
Mistakes happen and will continue to happen. Following a few common guidelines will help to ensure we don't stray too far off course and help to minimize mistakes. The guidelines here are in no way specific to Unicode or text processing, but represent areas where mistakes would be easy to make.
C++ has some catching up to do when it comes to Unicode support. This means that there is ample opportunity to investigate and learn from features added to other languages. A great example of following this guideline is found in the P1097 proposal to add named character escapes to character and string literals.
C and C++ continue to diverge and that is ok when there is good reason for it (e.g., to enable better type safety and overloading). However, gratuitous departure creates unnecessary interoperability and software composition challenges. Where it makes sense, proposing features that are applicable for C to WG14 will help to keep the common subset of the languages as large as it can reasonably be. P1097 is again a great example of a feature that would be appropriate to propose for inclusion in C.
Given the constraints above, how can we best integrate support for Unicode following time honored traditions of C++ design including the zero overhead principle, ensuring a transition path, and enabling software composition? How do we ensure a design that programmers will want to use? The following explores design choices that SG16 participants have been discussing.
The ordinary and wide execution encodings are not going away; they will remain the bridge that text must cross when interfacing with the operating system and with users. Unless otherwise specified, I/O performed using char and wchar_t based interfaces in portable programs must abide by the encodings indicated by locale settings. But internally, it is desirable to work with a limited number of encodings (preferably only one) that are known at compile time, and optimized for accordingly. This suggests a design in which transcoding is performed from dynamically determined external encodings to an internal encoding at program boundaries; when reading files, standard input/output streams, command line options, environment variables, etc... This is standard practice today.
There are two primary candidates for use as internal encodings today: UTF-8 and UTF-16. The former is commonly used on POSIX based platforms while the latter remains the primary system encoding on Windows. There is no encoding that is the best internal encoding for all programs. We face a choice here: do we design for a single well known (though possibly implementation defined) internal encoding? Or do we continue the current practice of each program choosing its own internal encoding? Active SG16 participants have not yet reached consensus on these questions.
Use of the type system to ensure that transcoding is properly performed at program boundaries helps to prevent errors that lead to mojibake. Such errors can be subtle and only manifest in relatively rare situations, making them difficult to discover in testing. For example, failure to correctly transcode input from ISO-8859-1 to UTF-8 only results in negative symptoms when the input contains characters outside the ASCII range.
This is where the char8_t proposal (P0482) comes into play. Having a distinct type for UTF-8 text, like we do for UTF-16 and UTF-32, enables use of any of UTF-8, UTF-16, or UTF-32 as a statically known internal encoding, without the implementation defined signedness and aliasing concerns of char, and with protection against accidental interchange with char or wchar_t based interfaces without proper transcoding having been performed first. Solid support in the type system, combined with statically known encodings, provides the flexibility needed to design safe and generic text handling interfaces, including ones that can support constexpr evaluation. Why might constexpr evaluation be interesting? Consider the std::embed proposal (P1040) and the ability to process a file loaded at compile time.
Distinct code unit types (char8_t, char16_t, char32_t, etc...) usable for the internal encoding is a good start, but std::basic_string isn't a great foundation for working with Unicode text since no support is provided for code point or grapheme cluster based enumeration. The text_view proposal (P0244) provides a method for layering encoding aware code point support on top of std::basic_string or any other string like type that provides a range of code units. SG16 has been discussing the addition of a std::text family of types that provide similar capabilities, but that also own the underlying data. Zach Laine has been prototyping such a type in his Boost.Text library.
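To make concrete what code point enumeration involves, here is a minimal hand-rolled sketch over UTF-8 code units. Validation is deliberately omitted; text_view, Boost.Text, and ICU all handle ill-formed input properly, which is precisely why this should not be left to each user:

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Decode a UTF-8 code unit sequence into code points. Assumes well-formed
// input; a real implementation must detect and report ill-formed sequences.
std::vector<char32_t> code_points(const std::string& u8) {
    std::vector<char32_t> out;
    for (std::size_t i = 0; i < u8.size();) {
        unsigned char lead = static_cast<unsigned char>(u8[i]);
        // Sequence length from the lead byte's high bits.
        int len = lead < 0x80 ? 1 : lead < 0xE0 ? 2 : lead < 0xF0 ? 3 : 4;
        // Mask off the length marker, then accumulate continuation bytes.
        char32_t cp = lead & (len == 1 ? 0x7F : 0xFF >> (len + 1));
        for (int k = 1; k < len; ++k)
            cp = (cp << 6) | (static_cast<unsigned char>(u8[i + k]) & 0x3F);
        out.push_back(cp);
        i += len;
    }
    return out;
}
```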
Introducing new types that potentially compete with std::string and std::string_view creates a possible problem for software composition. How do components that traffic in std::string vs std::text interact? Discussions in SG16 have identified several strategies for dealing with this: 1) std::text could be convertible to std::string_view and, potentially, const std::string & if it holds an actual std::string object, and 2) std::text and std::string could allow their buffers to be transferred back and forth (and potentially to other string types).
New text containers and views help to address support for UTF encoding and decoding, but Unicode provides far more than a large character set and methods for encoding it. Unicode algorithms provide support for enumerating grapheme clusters, word breaks, line breaks, performing language sensitive collation, handling bidirectional text, case mapping, and more. Exposing interfaces for these algorithms is necessary to claim complete Unicode support. Exposing these in a generic form that allows their use with the large number of string types used in practice is necessary to enable their adoption. Enabling them to be used with segmented data types (e.g., ropes) is a desirable feature.
Per the general design discussion above, the following directives identify activities for SG16 to focus on. Papers exploring and proposing features within their scope are encouraged.
This is the topic that SG16 participants have so far spent the most time discussing. We have consensus on the desire for new std::text and std::text_view types/templates, but do not yet have papers that explore or propose particular designs. Our discussions have focused on questions such as:
There is much existing practice to consider here. Historically, most string classes have provided either code unit access (like std::string) or code point access (possibly with means for code unit access as well). Swift made the bold move of making extended grapheme clusters the basic element of its strings. There are many design options and tradeoffs to consider. Papers exploring the design options are strongly encouraged.
SG16 participants have not yet spent much time discussing interfaces to Unicode algorithms, though Zach Laine has blazed a trail by implementing support for all of them in his Boost.Text library. Papers exploring requirements would be helpful here. Some questions to explore:
We've got a start on this with P1097, but there are no doubt many text handling features in other languages that would be desirable in C++. Papers welcome.
C++ currently includes interfaces for transcoding between the ordinary and wide execution encodings and between the UTF-8, UTF-16, and UTF-32 encodings, but not between these two sets of encodings. This poses a challenge for support of the external/internal encoding model.
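A sketch of one side of what the standard does provide today (ordinary to wide via the C library), illustrating the locale-dependent half of the gap; the function name is illustrative:

```cpp
#include <cstdlib>
#include <string>
#include <vector>

// Ordinary execution encoding -> wide execution encoding, using std::mbstowcs.
// The result depends on the locale selected at run-time; no analogous standard
// facility converts between these encodings and UTF-8/UTF-16/UTF-32 (the
// deprecated <codecvt> facets cover only conversions among the UTF encodings
// and UTF-8 <-> wide).
std::wstring to_wide(const std::string& s) {
    std::vector<wchar_t> buf(s.size() + 1); // wide count never exceeds byte count
    std::size_t n = std::mbstowcs(buf.data(), s.c_str(), buf.size());
    if (n == static_cast<std::size_t>(-1))
        return {}; // s is ill-formed in the current locale's encoding
    return std::wstring(buf.data(), n);
}
```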
Accurately and portably handling command line arguments (which may include file names that are not well formed for the current locale encoding) and environment variables (likewise) can be challenging. The design employed for std::filesystem::path, providing access to native data as well as to that data transcoded to various encodings, could be applied to solve portability issues with command line arguments and environment variables.
An open question is whether transcoding between external and internal encodings should be performed implicitly (convenient, but hidden costs) or explicitly (less convenient, but with apparent costs).
While not an SG16 priority, it will sometimes be necessary to resolve existing issues or improve wording to accommodate new features. Issues that pertain to SG16 are currently tracked in our github repo at https://github.com/sg16-unicode/sg16/issues.
The C++ standard currently lacks the necessary foundations for obtaining or displaying Unicode text to computer users. Until that changes, addressing user input and graphical rendering of text will remain out of scope for SG16.