Date: Thu, 11 Apr 2019 11:56:26 +0100
>> To that end, if your papers spoke more about how compilers might refuse
>> to compile problem pieces of code found in the real world, with example
>> potential diagnostic messages, rather than reduced examples showing UB
>> vs DB, I think that would help "sell" the value vs fear in these
>> proposals.
>
> A lot of effort has been going into detecting these sorts of bugs
> at compile time, but it's not possible to find all of them. In
> fact, those behind some of the hardest cases of "working" code
> ceasing to work after a compiler upgrade can be impossible to
> detect statically. Besides false negatives, their detection is
> also almost always subject to false positives, so issuing errors
> is not appropriate (at least not without something like -Werror).
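A textbook illustration of the kind of case being described here is the
signed-overflow guard below; whether a given compiler actually folds it
away depends on its optimiser and flags, so treat it as a sketch of one
possible failure mode rather than a guaranteed outcome:

    #include <climits>

    int increment(int x) {
        // Signed integer overflow is undefined behaviour, so an
        // optimiser may assume x + 1 never wraps and reduce this guard
        // to "false".  Code that "worked" under an older compiler can
        // then silently lose its check after an upgrade, and no static
        // diagnostic is guaranteed to catch it, because whether the
        // overflow ever happens depends on runtime values.
        if (x + 1 < x)
            return INT_MAX;   // intended wrap-around guard
        return x + 1;
    }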
Sure, but what I'm asking for here is that future compilers default to
refusing to compile ambiguous code. The programmer then has the
following options:
1. Make the code not ambiguous.
2. Annotate the problem parts of the code to say "trust me", like Rust's
unsafe blocks.
3. Mark entire sections of code (e.g. functions, classes, namespaces,
files) as whitelisted (a rough sketch of options 2 and 3 follows
below).
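The attribute spellings in the sketch below, [[trust_me]] and
[[whitelisted]], are invented purely for illustration; nothing like
them exists or is being proposed here, the point is only the shape of
the opt-out:

    #include <cstddef>

    int read_through(int* p) {
        // Option 2: annotate just the problem statement, analogous to
        // a Rust unsafe block.  Inside the marked block the compiler
        // accepts the dereference without having to prove that p is
        // non-null and dereferenceable.
        [[trust_me]] {
            return *p;
        }
    }

    // Option 3: whitelist an entire function (or class, namespace,
    // file) so that the stricter default checks do not apply to it.
    [[whitelisted]]
    void legacy_bit_twiddle(unsigned char* dst,
                            const unsigned char* src, std::size_t n);

Either way the important property is that the strict behaviour is the
default and the opt-out is explicit and greppable.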
I have seen nothing yet which suggests that this can't be done. There
*are* other, good, arguments why we should not, especially the breaking
of copy-paste idempotency (the same code would mean different things
inside and outside an annotated region). But I consider those syntax
concerns rather than "is this a good idea?" concerns.
Now, if you are actually telling me this can't be usefully done because
the rate of false positives and negatives would be too high to be
practical, my next question is "what needs to be added to the language
to reduce that rate to near-zero?"
Because the alternative is slower, less optimised, more surprising
executable binaries. And I don't want that. I want to know that code
which compiles will work as I wrote it, now and in all future compilers,
and will get faster and more optimised, without surprise breakage, as
compilers improve. Which is not the case right now.
Niall