> >> Many portable C programs assume that signed integer overflow wraps around
> >> reliably using two's complement arithmetic.
> >
>
> I was looking for an adjective that means the programs work on a wide
> variety of platforms, and "portable" seems more appropriate than
> "widely-used".
Maybe jus
A few comments:
> Many portable C programs assume that signed integer overflow wraps around
> reliably using two's complement arithmetic.
I'd replace "portable C programs" with "widely-used C programs". The normal
use of "portable" means that it conforms to the standard.
> Conversely, in at lea
> Well, while the effect of -fstrict-aliasing is hard to describe
> (TBAA _is_ a complex part of the standard), -fno-strict-aliasing
> rules are simple. All loads and stores alias each other if they
> cannot be proven not to alias by points-to analysis.
Yes, the rules are "simple", but are writte
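For concreteness, a hedged sketch (not taken from the thread) of what the two modes mean for a store through an incompatible pointer type:
    /* Under the TBAA rules enabled by -fstrict-aliasing the compiler may
       assume the store through f does not modify *i and fold the return
       value to 1; under -fno-strict-aliasing the two accesses are assumed
       to alias unless points-to analysis proves otherwise.  */
    int
    punned_read (int *i, float *f)
    {
      *i = 1;
      *f = 2.0f;    /* aliases *i if both pointers target the same memory */
      return *i;
    }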
> We do that with -fstrict-aliasing, which also changes language semantics.
Well, yes, but not quite in the same way. Indeed it's rather hard to
describe in what way it changes the language semantics but easier to
describe the effect it has on optimization. I think -fwrapv is the other
way aroun
> Then we have two switches:
>
> -fstandard
>
> which allows all optimizations (name can be changed, I
> don't care about the name)
>
> -fwrapv
>
> which changes the semantics to require wrapping in
> all cases (including loops)
What about -fno-wrapv for the first?
> | >|> for (i = 1; i < m; ++i)
> | >|> {
> | >|> if (i > 0)
> | >|> bar ();
> | >|> }
>
> I suspect part of Richard K.'s questions has been to determine, based
> on data, what improvements we actually gain from doing that kind of
> elimination predicated on undefined-ness o
> I can do this. What I also will do is improve VRP to still fold comparisons
> of the for a - 10 > 20 when it knows there is no overflow due to available
> range information for a (it doesn't do that right now).
I thought fold-const.c optimizes that right now and has been doing so for a long time?
If tha
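To make the fold concrete, a hedged sketch assuming a 32-bit two's complement int (this is not GCC code): rewriting a - 10 > 20 as a > 30 is only an equivalence when the subtraction cannot wrap.
    #include <limits.h>
    #include <stdio.h>
    int
    main (void)
    {
      int a = INT_MIN + 5;
      /* The wrapping interpretation, computed in unsigned arithmetic,
         where wrap-around is well defined.  */
      int wrapped = (int) ((unsigned int) a - 10u);
      printf ("wrapping view: a - 10 > 20 is %d\n", wrapped > 20);  /* 1 */
      /* The folded form a compiler may use when overflow is undefined, or
         when range information shows a - 10 cannot overflow.  */
      printf ("folded view:   a > 30      is %d\n", a > 30);        /* 0 */
      return 0;
    }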
> I don't think -frisky is a good name for that option. A better name
> would be -fstrict.
Or -pedantic? ;-)
> 4) We permit an exception to occur if there is a signed overflow. If
> we can prove that some expression causes signed overflow, we are
> permitted to assume that that case will
> VRP as currently written adjusts limits out to "infinity" of an
> appropriate sign for variables which are changed in loops. It then
> assumes that the (signed) variable will not wrap past that point,
> since that would constitute undefined signed overflow.
But isn't that fine since OTHER code i
> Here I'd like to demur, since I think it's useful to document
> something that users can rely on.
>
> I'm not asking that we document every possible wrapv-assuming code
> that happens to work. I'm only asking for enough so that users can
> easily write code that tests for signed integer overflow
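One such rule can be illustrated with a hedged sketch (the helper name is invented): test against the limits before the operation, so that no signed overflow is ever evaluated.
    #include <limits.h>
    /* Nonzero if a + b would overflow int.  Valid whatever the compiler
       assumes about overflow, because the addition is never performed
       when it would be out of range.  */
    int
    int_add_overflows (int a, int b)
    {
      return b > 0 ? a > INT_MAX - b : a < INT_MIN - b;
    }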
> No offense, but all enabling wrapv at O2 or less would do is cause
> more bug reports about
> 1. Getting different program behavior between O2 and O3
> 2. Missed optimizations at O2
> It also doesn't fit with what we have chosen to differentiate
> optimization levels based on.
>
> IMHO, it's jus
> Richard Kenner wrote:
> > I'm not sure what you mean: there's the C standard.
> We have many standards, starting with K&Rv1 through the current draft.
> Which do you call, "the C standard"?
The current one. All others are "previous C standards".
> Currently our documentation on -fwrapv is rather short and does not
> provide examples or anything to provide such a feel:
>
> This option instructs the compiler to assume that signed arithmetic
> overflow of addition, subtraction and multiplication wraps around
> using twos-complement representation.
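A short, hedged example of the behaviour the option is meant to guarantee (compile with -fwrapv; assumes a 32-bit two's complement int):
    #include <limits.h>
    #include <stdio.h>
    int
    main (void)
    {
      int x = INT_MAX;
      /* Defined to wrap to INT_MIN under -fwrapv; undefined behaviour
         otherwise, in which case the compiler may assume it cannot happen.  */
      printf ("INT_MAX + 1 = %d\n", x + 1);
      return 0;
    }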
> More important, we don't yet have an easy way to characterize the
> cases where (2) would apply. For (2), we need a simple, documented
> rule that programmers can easily understand, so that they can easily
> verify that C code is safe
I'm not sure what you mean: there's the C standard. That sa
> Still, in practical terms, it is true that overflow
> being undefined is unpleasant. In Ada terms, it would
> have seemed better in the C standard to rein in the
> effect of overflow, for instance, merely saying that
> the result is an implementation defined value of the
> type, or the program i
> the seemingly prevalent attitude "but it is undefined; but it is not
> C" is the opinion of the majority of middle-end maintainers.
Does anybody DISAGREE with that "attitude"? It isn't valid C to assume that
signed overflow wraps. I've heard nobody argue that it is. The question
is how far we
> But didn't this thread get started by a real program that was broken
> by an optimization of loop invariants? Certainly I got a real bug
> report of a real problem, which you can see here:
>
> http://lists.gnu.org/archive/html/bug-gnulib/2006-12/msg00084.html
I just thought of something intere
> > But how would that happen here? If we constant-fold something that would
> > have overflowed by wrapping, we are ELIMINATING a signed overflow, not
> > INTRODUCING one. Or do I misunderstand what folding we're talking about
> > here?
>
> http://gcc.gnu.org/PR27116 is what led to the patch.
> > > Note that -fwrapv also _enables_ some transformations on signed
> > > integers that are disabled otherwise. We for example constant fold
> > > -CST for -fwrapv while we do not if signed overflow is undefined.
> > > Would you change those?
> >
> > I don't understand the rationale for not wra
> http://gcc.gnu.org/ml/gcc/2006-12/msg00607.html
>
> If this doesn't count as "optimization of loop invariants"
> then what would count?
One where the induction variable was updated additively, not
multiplicatively. When we talk about normal loop optimizations,
that's what we mean. I agree tha
> This isn't just about old code. If you're saying that old code with
> overflow checking can't be fixed (in a portable manner...), then new
> code will probably use the same tricks.
I said there's no "good" way, meaning none as compact as the current tests. But
it's certainly easy to test for overfl
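For a concrete sense of that trade-off, a hedged sketch (the function name is invented) of a multiplication check that never evaluates a signed overflow; it is clearly less compact than the usual wrap-and-compare test:
    #include <limits.h>
    /* Nonzero if a * b would overflow int.  Only divisions of the limits
       are performed, so no signed overflow can occur in the test itself.  */
    int
    int_mul_overflows (int a, int b)
    {
      if (a == 0 || b == 0)
        return 0;
      if (a > 0)
        return b > 0 ? a > INT_MAX / b : b < INT_MIN / a;
      return b > 0 ? a < INT_MIN / b : b < INT_MAX / a;
    }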
> Are you volunteering to audit the present cases and argue whether they
> fall in the "traditional" cases?
I'm certainly willing to *help*, but I'm sure there will be some cases
that will require discussion to get a consensus.
> Note that -fwrapv also _enables_ some transformations on signed
> i
> I think this is a fragile and not very practical approach. How do
> you define these "traditional" cases?
You don't need to define the "cases" in advance. Rather, you look at
each place where you'd be making an optimization based on the non-existence
of overflow and use knowledge of the impor
> Funny you should say that, because the Ada front-end likes to do this
> transformation, rather than leaving it to the back-end. For example:
>
> turns into
>
> if ((unsigned int) ((integer) x - 10) <= 10)
The front end isn't doing this: the routine "fold" in fold-const.c is.
True, it's bein
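For readers not fluent in Ada, a hedged C rendering of the same transformation (names invented): a two-sided range test collapses to a single unsigned comparison, and doing the subtraction in unsigned keeps the rewrite free of undefined overflow.
    /* The two functions are equivalent for every int value of x.  In the
       second, values below 10 wrap to very large unsigned numbers and so
       fail the single comparison.  */
    int
    in_range_two_tests (int x)
    {
      return x >= 10 && x <= 20;
    }
    int
    in_range_one_test (int x)
    {
      return (unsigned int) x - 10u <= 10u;
    }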
> This won't break the code. But I'm saying that if the compiler assumes
> wrapping, even in some particular cases (e.g. code that *looks like*
> "overflow check"), it could miss some potential optimizations. That
> is, it is not possible to avoid breaking overflow checks *and*
> optimizing everyth
> Well, that's not equivalent. For instance, MPFR has many conditions
> that evaluate to TRUE or FALSE on some/many implementations (mainly
> because the type sizes depend on the implementation), even without
> the assumption that an overflow cannot occur.
Can you give an example of such a conditi
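As an invented illustration of the general shape (not taken from MPFR): a condition that depends only on type sizes is a constant on any one target, so the compiler folds it to TRUE or FALSE there with no overflow assumption involved.
    /* Folds to 1 on the usual LP64 targets and to 0 on ILP32 ones.  */
    int
    long_is_wider_than_int (void)
    {
      return sizeof (long) > sizeof (int);
    }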
> As I said earlier in this thread, people seem to think that the
> standards committee invented something new here in making overflow
> undefined, but I don't think that's the case.
I agree with that too.
However, it is also the case that between K&Rv1 and the ANSI C standard,
there was a langu
> Doing that in unsigned arithmetic is much more readable anyway.
If you're concerned about readability, you leave it as the two tests and
let the compiler worry about the optimal way to implement it.
> So I doubt that programmers would do that in signed arithmetic.
I kind of doubt that as wel
> If done in unsigned, this won't lead to any optimization, as unsigned
> arithmetic doesn't have overflows. So, if you write "a - 10" where a
> is unsigned, the compiler can't assume anything, whereas if a is
> signed, the compiler can assume that a >= INT_MIN + 10, reducing
> the range for a, and
> I would think that it would be a common consensus position that whatever
> the outcome of this debate, the result must be that the language we are
> supposed to write in is well defined.
Right. But "supposed to write in" is not the same as "what we do to avoid
breaking legacy code"!
I see it a
> In fact the wrap around range test is a standard idiom for "hand
> optimization" of range tests.
It's also one that GCC uses internally, but you do it in *unsigned* to
avoid the undefined overflow.
> And the idea that people were not used to thinking seriously about
> language semantics is very odd, this book was published in 1978,
> ten years after the algol-68 report, a year after the fortran
> 77 standard, long after the COBOL 74 standard, and a year before the
> PL/1 standard. It's not t
> And I doubt that GCC (or any compiler) could reliably detect code
> that checks for overflow.
It doesn't need to "detect" all such code: all it needs to do is
ensure that it doesn't BREAK such code. And that's a far easier
condition: you just have to avoid folding a condition into TRUE or FALSE
> The burden of proof ought to be on the guys proposing -O2
> optimizations that break longstanding code, not on the skeptics.
There's also a burden of proof that proposed optimizations will actually
"break longstanding code". So far, all of the examples of code shown
that assumes wrapv semantics
> On the other hand, C does not have a way to tell the compiler:
>
> "this is my loop variable, it must not be modified inside the loop"
>
> neither you can say:
>
> "this is the upper bound of the loop, it must not be modified"
>
> either.
No, but the compiler can almost always trivia
> | though I vaguely
> | recall some complaints that you couldn't build v7 Unix if your compiler
> | generated integer overflow traps.
>
> this matches what I've been told recently by some people who worked at bell
> labs, in the unix room.
I have
> > > I suppose there is
> > >
> > > *hv = (HOST_WIDE_INT) -(unsigned HOST_WIDE_INT) h1;
> > >
> > > to make it safe.
> >
> > Can't that conversion overflow?
>
> Not on a two's complement machine,
Then I'm confused about C's arithmetic rules. Suppose h1 is 1. It's cast
to unsigned, so
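A hedged sketch of the idiom, using plain int rather than HOST_WIDE_INT: the negation happens in unsigned arithmetic, where it cannot overflow, and the conversion back to a signed type is implementation-defined (typically wrap-around) rather than undefined.
    /* -(unsigned int) h1 is always defined, since unsigned arithmetic is
       modular.  Converting a value above INT_MAX back to int is
       implementation-defined, not undefined, and wraps on the usual two's
       complement targets, so negate_wrapping (INT_MIN) yields INT_MIN and
       negate_wrapping (1) yields -1 there.  */
    int
    negate_wrapping (int h1)
    {
      return (int) - (unsigned int) h1;
    }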
> Wait, though: K&Rv2 is post-C89.
Not completely: it was published in 1988, but the cover says "based on
draft-proposed ANSI C".
> Naturally K&Rv2 documents this, but if you want to know about
> traditional practice the relevant wording should come from K&Rv1,
> not v2.
>
> I don't know what K&R
> if (a - 10 < 20)
>
> Well that particular example is far fetched in code that people
> expect to run efficiently, but with a bit of effort you could
> come up with a more realistic example.
Not at all far-fetched. The normal way these things come up is macros:
#define DIGIT_TO_INT(D) (D - '0')
> I suppose there is
>
> *hv = (HOST_WIDE_INT) -(unsigned HOST_WIDE_INT) h1;
>
> to make it safe.
Can't that conversion overflow?
> Gaby said
>
> K&R C leaves arithmetic overflow undefined (except for unsigned
> types), in the sense that you get whatever the underlying hardware
> gives you. If it traps, you get trapped. If it wraps, you get wrapped.
>
> Is that really what the K&R book says, or just what compilers typical
> I am. I just now looked and found another example.
> gcc-4.3-20061223/gcc/fold-const.c's neg_double function
> contains this line:
>
> *hv = - h1;
OK, granted. But it is followed by code testing for overflow.
That means that (1) we can find these by looking for cases where we are
setti
> Note the interesting places in VRP where it assumes undefined signed
> overflow is in compare_values -- we use the undefinedness to fold
> comparisons.
Unfortunately, comparisons are the trickiest case because you have to
be careful to avoid deleting a comparison that exists to see if overflow
o
> What would you suggest this function to do, based on your comments?
I'm not familiar enough with VRP to answer at that level, but at a higher
level, what I'd advocate is that the *generator* of information would track
things both ways, assuming wrapping and non-wrapping semantics (of course, if
> [EMAIL PROTECTED] (Richard Kenner) writes:
>
> > Date: Sat, 30 Dec 2006 08:01:37 EST
> > I'd actually be very surprised if there were ANYPLACE where GCC has code
> > that's otherwise correct but which would malfunction if signed overflow
> > weren'
> Here's an example from the intprops module of gnulib.
These are interesting cases.
Note that all the computations are constant-folded.
And I think this points to the fact that we can "have our cake and eat it too"
in many cases. Everything we're seeing points to the fact that the cases
where as
> Where does GCC assume wrapv semantics?
The macro OVERFLOW_SUM_SIGN in fold-const.c.
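Without quoting the macro itself, a hedged sketch of the kind of sign-based test such a check stands for; it is only a valid overflow test if the addition that produced sum actually wrapped:
    /* After a wrapping (two's complement / -fwrapv) addition sum = a + b,
       overflow occurred exactly when a and b have the same sign and sum's
       sign differs.  With overflow undefined, a compiler may optimize a
       test like this away.  */
    int
    sum_sign_overflowed (int a, int b, int sum)
    {
      return (a < 0) == (b < 0) && (sum < 0) != (a < 0);
    }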
> Not so appalling really, given that relying on wrapping is, as has
> been pointed out in this thread, the most natural and convenient way
> of testing for overflow. It is really *quite* difficult to test for
> overflow while avoiding overflow, and this is something that is
> probably not in the le
> Paul Eggert wrote:
> > That's great, but GCC has had many other hands stirring the pot.
> > I daresay a careful scan would come up with many other examples of
> > undefined behavior due to signed integer overflow. (No doubt
> > you'll be appalled by them as well, but there they are.)
>
> That's
> But since you asked, I just now did a quick scan of
> gcc-4.3-20061223 (nothing fancy -- just 'grep -r') and the first
> example I found was this line of code in gcc/stor-layout.c's
> excess_unit_span function:
>
> /* Note that the calculation of OFFSET might overflow; we calculate it so
>
> Those questions are more for the opponents of -fwrapv, so
> I'll let them answer them. But why are they relevant?
> Having -fwrapv on by default shouldn't affect your SPEC
> score, since you can always compile with -fnowrapv if the
> application doesn't assume wraparound.
(1) If -fwrapv isn't t
> I'm not sure what data you're asking for.
Here's the data *I'd* like to see:
(1) What is the maximum performance loss that can be shown using a real
program (e.g., one in SPEC) and some compiler (not necessarily GCC) when
one assumes wrapping semantics?
(2) In the current SPEC, how many prog
> Wrong. Many people have relied on that "feature" because they thought it
was legal and haven't had the time to check every piece of code they
> wrote for conformance with the holy standard. And they don't have the time
now to walk through the work of their lifetime to see where they did wrong