That's what I'm thinking. Raw data to scalar values for decode, the reverse for encode, and connect the two for transcode (or attempted transcode).
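Roughly like this, just as a sketch (every name here is invented, and Latin-1 stands in because its code units map one-to-one to scalar values):

    #include <span>
    #include <vector>

    // decode: raw code units -> scalar values
    std::vector<char32_t> decode_latin1(std::span<const unsigned char> in) {
        std::vector<char32_t> out;
        out.reserve(in.size());
        for (unsigned char b : in)
            out.push_back(b);  // each Latin-1 byte is already a scalar value
        return out;
    }

    // encode: scalar values -> raw code units
    // (scalars above U+00FF would need error handling, elided here)
    std::vector<unsigned char> encode_latin1(std::span<const char32_t> in) {
        std::vector<unsigned char> out;
        out.reserve(in.size());
        for (char32_t c : in)
            out.push_back(static_cast<unsigned char>(c));
        return out;
    }

    // transcode: a decode step connected to an encode step
    template <class Decode, class Encode>
    auto transcode(std::span<const unsigned char> in, Decode d, Encode e) {
        return e(d(in));  // raw -> scalar values -> raw
    }

So transcode(bytes, decode_latin1, encode_latin1) round-trips through scalar values.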

Maybe include some queries for whether a mapping is known to be a pure transcoding.
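Something like this, maybe (hypothetical trait; by "pure" I mean every scalar value the source can produce is representable in the target, so the transcode can't fail):

    #include <type_traits>

    struct ascii {};  // tag types standing in for real encoding types
    struct utf8  {};
    struct utf16 {};

    // Default: assume nothing, the transcode may fail.
    template <class From, class To>
    struct is_pure_transcoding : std::false_type {};

    // Identity is always pure.
    template <class E>
    struct is_pure_transcoding<E, E> : std::true_type {};

    // The UTFs can represent every scalar value...
    template <> struct is_pure_transcoding<utf8, utf16> : std::true_type {};
    template <> struct is_pure_transcoding<utf16, utf8> : std::true_type {};
    // ...and ASCII is a subset of UTF-8.
    template <> struct is_pure_transcoding<ascii, utf8> : std::true_type {};

    static_assert(is_pure_transcoding<utf8, utf16>::value);
    static_assert(!is_pure_transcoding<utf8, ascii>::value);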

And some ability to take fast-path shortcuts for particular pairs of encodings.
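E.g., let overload resolution pick the shortcut for a known pair (sketch only; the encoding types the general template expects aren't fleshed out here):

    #include <span>
    #include <vector>

    struct ascii {};
    struct utf8  {};

    // General path: round-trip through scalar values.
    template <class From, class To>
    std::vector<unsigned char>
    transcode(From from, To to, std::span<const unsigned char> in) {
        return to.encode(from.decode(in));
    }

    // Fast path: ASCII is a strict subset of UTF-8, so the bytes
    // pass through untouched. The non-template overload wins for
    // this particular pair.
    std::vector<unsigned char>
    transcode(ascii, utf8, std::span<const unsigned char> in) {
        return {in.begin(), in.end()};
    }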

This is the normal implementation strategy, so it shouldn't be controversial. I also want hooks, or at least the capability, for transliteration, which is really just more interesting error handling on the "to" side of the charset mapping.
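In sketch form, the hook is just the encode-side error handler, and a transliterating handler returns replacement scalar values instead of throwing or substituting U+FFFD (all names hypothetical):

    #include <span>
    #include <string>
    #include <vector>

    // Encode to ASCII, delegating unrepresentable scalar values to
    // a handler that returns replacement scalars (assumed to be
    // representable themselves).
    template <class Handler>
    std::string encode_ascii(std::span<const char32_t> in, Handler on_error) {
        std::string out;
        for (char32_t c : in) {
            if (c < 0x80)
                out.push_back(static_cast<char>(c));
            else
                for (char32_t r : on_error(c))  // the transliteration hook
                    out.push_back(static_cast<char>(r));
        }
        return out;
    }

    // A transliterating handler: a couple of ASCII approximations,
    // with '?' as the fallback.
    std::vector<char32_t> translit(char32_t c) {
        switch (c) {
            case U'ß': return {U's', U's'};
            case U'é': return {U'e'};
            default:   return {U'?'};
        }
    }

A plain "replace with '?'" handler and this transliterating one then go through exactly the same interface.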

Because the encoding may be determined at runtime, this probably means virtual functions and a base interface, but I think particular encodings can be types with final functions, so when the encoding is known statically the compiler will devirtualize.
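Roughly (again, invented names):

    #include <span>
    #include <vector>

    // Base interface, for when the encoding is only known at runtime.
    struct encoding {
        virtual ~encoding() = default;
        virtual std::vector<char32_t>
        decode(std::span<const unsigned char> in) const = 0;
    };

    // A particular encoding as a final type.
    struct latin1_encoding final : encoding {
        std::vector<char32_t>
        decode(std::span<const unsigned char> in) const override {
            return {in.begin(), in.end()};
        }
    };

A call like latin1_encoding{}.decode(bytes) never needs the vtable, and even through an encoding& the compiler can devirtualize when it can prove the dynamic type, since final closes off further overrides.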

On Sat, Mar 30, 2019, 18:29 Lyberta <lyberta@lyberta.net> wrote:
Steve Downey:
> I would like to standardise the encoding and decoding interfaces
What about transcoding? Do we want to require all encodings be
convertible to Unicode scalar values so we can have a universal
transcoding algorithm that will use scalar values under the hood?

_______________________________________________
SG16 Unicode mailing list
Unicode@isocpp.open-std.org
http://www.open-std.org/mailman/listinfo/unicode