Date: Fri, 06 Sep 2019 07:51:52 -0700
On Friday, 6 September 2019 05:46:51 PDT Niall Douglas wrote:
> > Which of the two inodes is the JSON file referring to?
>
> Absolutely right. If you delete the binary path representation, you get
> problems like this. But, equally, you have to allow third party tooling
> to modify the paths in the JSON for whatever reason. The cost is exactly
> the problem you describe.
I think that's dangerous. If the choice is to generate binary but allow
consumption of text, then I would recommend choosing Option 1 in the first
place and saying "damn the torpedoes". We are, after all, talking about a
corner case.
> > Using the UTF-8 encoded text is Option 1 in my proposal. I don't have a
> > problem with it, but if adopted, then implementers need to understand the
> > problems shown above in the ls outputs will happen (note how there's a
> > second issue).
[snip]
> The text form is to handle different native filesystem encodings. A
> platform is permitted to have one native filesystem encoding in one
> program, and a different native filesystem encoding in another program.
> For example, ANSI vs UNICODE Windows programs. Both programs may work on
> the same JSON file. They need some common mechanism to communicate if
> they use dissimilar native filesystem encodings, and a UTF8-attempt as a
> fallback is as good as any.
Indeed. And Windows applications belonging to group (a) [see Windows analysis
section of my OP] must not use fopen() or equivalent currently-standardised
API. They must use _wfopen().
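A minimal sketch of that rule, assuming a hypothetical open_utf8() wrapper (the name and the fall-back behaviour are illustrative, not part of any proposal): on Windows the narrow fopen() interprets its path in the ANSI code page, so a path stored as UTF-8 text (e.g. in the JSON file) must be converted to UTF-16 and handed to _wfopen(); on POSIX the UTF-8 bytes pass straight through.

```cpp
// Hypothetical helper (a sketch, not code from this thread): open a file
// whose path arrives as UTF-8 text, such as a path read back from JSON.
#include <cstdio>
#include <cstring>
#include <string>

#ifdef _WIN32
#include <windows.h>
#endif

static FILE* open_utf8(const std::string& utf8_path, const char* mode) {
#ifdef _WIN32
    // Narrow fopen() would go through the ANSI code page, so convert the
    // UTF-8 path to UTF-16 and use the wide-character API instead.
    int len = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                                  utf8_path.c_str(), -1, nullptr, 0);
    if (len == 0)
        return nullptr;  // not valid UTF-8: caller falls back to binary form
    std::wstring wide(static_cast<size_t>(len), L'\0');
    MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                        utf8_path.c_str(), -1, &wide[0], len);
    std::wstring wmode(mode, mode + std::strlen(mode));  // mode is plain ASCII
    return _wfopen(wide.c_str(), wmode.c_str());
#else
    // POSIX paths are byte strings; the UTF-8 bytes pass through unchanged.
    return std::fopen(utf8_path.c_str(), mode);
#endif
}
```

Note that MB_ERR_INVALID_CHARS makes the conversion fail on ill-formed UTF-8, which is exactly the point at which a consumer would have to fall back to the binary path representation discussed above.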
-- 
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
   Software Architect - Intel System Software Products
Received on 2019-09-06 16:51:56