Greetings again, SG12,

Here's another email that I think was held for moderation yesterday but apparently can get through now.  Sorry for the additional noise.


On Mon, Jan 28, 2019 at 10:13 AM Scott Schurr <s.scott.schurr@gmail.com> wrote:
Hi John,

Nice to hear from you.  Thanks for your interest in the paper.

Regarding whether "can't happen" is a behavior, I'm expecting to hear varied, well-studied opinions (like yours) during the paper discussion.  My perspective is that "can't happen" cannot be defined in a mechanistic way.  It's not like, "The gear used to turn left but now turns right."  However, I think that "can't happen" can be defined in the sense that the result of an explosion can be quite well defined.  Given an explosion of sufficient yield at a certain location, there is a high probability that the building will collapse, but we're uncertain where the individual bricks will fall.  We still understand the outcome well enough to be able to talk about it.

My feeling is that it's not that different from talking about race conditions.  We really don't know what the consequence of a race condition will be.  But we can define a race condition.

Thanks for your thoughts.

On Sat, Jan 26, 2019 at 2:13 AM John McFarlane <john@mcfarlane.name> wrote:
When I read that title, I also had the reaction "you have the wrong target" but in my case, I assumed the target was SG20. I think it's a great paper to send to that study group.

My main concern, though, boils down to the idea that "Can't happen" is somehow a behaviour. It seems to be suggested, for example, that overflow as it occurs in current implementations can realistically be defined. I'm not sure that's practical, because such behaviour is highly sensitive to many non-obvious factors. Overflow at a single point in the code may produce wildly different 'behaviours' depending on factors such as where it's invoked from, various toolchain options, and minor revisions to the implementation. And even with this information at hand, the write-up might well be onerous for the implementer and worse than useless for the user, because it would involve describing the optimization algorithms involved in considerable detail. So I don't think implementation-defined is a straightforward solution to the problem.
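
To illustrate that sensitivity, here's a sketch of my own (not the paper's example) where the very same overflowing expression gives different answers purely as a function of optimization level:

#include <climits>
#include <cstdio>

bool increment_wraps(int x)
{
    return x + 1 < x;        // undefined behaviour when x == INT_MAX
}

int main()
{
    // With current gcc/clang this typically prints 1 at -O0 (the add wraps)
    // and 0 at -O2 (the comparison is folded to false): same source,
    // different 'behaviour'.
    std::printf("%d\n", increment_wraps(INT_MAX));
}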

Generally, I agree: I'd like to see effort put toward finding the right way to convey the idea of UB. It's the wrong wording to give to users of the language. It's implementor speak. Like a bailiff using legalese to explain why somebody has just lost their home, it causes confusion and anger.

On Sat, 26 Jan 2019 at 08:54 Marc Glisse <marc.glisse@inria.fr> wrote:
Hello,

just a couple of points missing from the paper:

1) with g++-7 -O2 -Wall, the motivating example on the left produces:

<source>: In function 'int32_t add_100_without_wrap(int32_t)':
<source>:8:3: warning: assuming signed overflow does not occur when assuming that (X + c) < X is always false [-Wstrict-overflow]
    if (ret < a)

However, we removed the warning from gcc-8 because it was too noisy and
impossible to work around when the optimization is what you actually want.
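
For reference, the kind of code that triggers that diagnostic looks roughly like this (my reconstruction from the warning text, not necessarily the paper's exact example):

#include <cstdint>

int32_t add_100_without_wrap(int32_t a)
{
    int32_t ret = a + 100;   // undefined behaviour when a > INT32_MAX - 100
    if (ret < a)             // gcc assumes "(X + c) < X" is always false here
        return INT32_MAX;
    return ret;
}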

2) At least with gcc, -ftrapv doesn't really work. You need
-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error for
something roughly equivalent to what -ftrapv is supposed to do.
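
A small test case (mine, not from the paper) to see the difference:

// overflow.cpp -- forces a signed overflow at run time: x starts at
// INT_MAX, so x += argc overflows for any argc >= 1.
#include <climits>

int main(int argc, char**)
{
    int x = INT_MAX;
    x += argc;
    return x & 1;
}

Building it with "g++ -O2 -ftrapv overflow.cpp" is supposed to abort on the overflow, but as noted above that can't be relied on; "g++ -O2 -fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error overflow.cpp" is the spelling that reliably traps.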


Now my opinion: you have the wrong target. Compilers that have a -fwrapv
option (or -ftrapv, or ubsan, or ...) already indirectly describe the
default behavior as undefined (and the standard already describes it as
undefined), so it is already documented. Adding a sentence or two to the
standard and to pages that nobody reads won't help. It seems that you want
to talk either to teachers, so they warn their students more about the
properties of signed overflow, or to compiler writers, to convince them to
change the default to -fwrapv or -ftrapv (I hope they don't) or to add more
warnings.

--
Marc Glisse