Ian Lance Taylor wrote:
I believe there is a comprehensible distinction between "compiler will
not assume that signed overflow is undefined behaviour" and "compiler
will cause all arithmetic to wrap around."
In any case, I have no plans to continue working on this. I described
my work in consi
Ian Lance Taylor wrote:
You're right, I shouldn't have said "implementation defined."
What will happen with -fno-strict-overflow is whatever the processor
ISA happens to do when a signed arithmetic operation overflows. For
ordinary machines it will just wrap.
Given that all ordinary machines
Ian Lance Taylor wrote:
The new option -fstrict-overflow tells gcc that it can assume the
strict signed overflow semantics prescribed by the language standard.
This option is enabled by default at -O2 and higher. Using
-fno-strict-overflow will tell gcc that it can not assume that signed
overfl
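A minimal sketch of the practical difference (not from the original mail; assumes a 32-bit two's-complement int):

    #include <limits.h>
    #include <stdio.h>

    /* The test below can only be true if i + 1 overflows.  Under
       -fstrict-overflow (default at -O2) GCC may fold it to 0; under
       -fno-strict-overflow it keeps the processor's behavior, which
       on ordinary machines wraps, so it fires for i == INT_MAX. */
    static int next_wraps(int i)
    {
        return i + 1 < i;
    }

    int main(void)
    {
        printf("%d\n", next_wraps(INT_MAX));
        return 0;
    }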
Laurent GUERBY wrote:
On Sun, 2006-12-31 at 12:04 -0500, Robert Dewar wrote:
Duncan Sands wrote:
The C front-end performs this transformation too. I'm not claiming that the
back-end optimizers would actually do something sensible if the front-end
didn't transform this code (in
Richard Kenner wrote:
A few comments:
Many portable C programs assume that signed integer overflow wraps around
reliably using two's complement arithmetic.
I'd replace "portable C programs" with "widely-used C programs". The normal
use of "portable" means that it conforms to the standard.
Richard Guenther wrote:
On 1/2/07, Richard Kenner <[EMAIL PROTECTED]> wrote:
We do that with -fstrict-aliasing, which also changes language semantics.
Well, yes, but not quite in the same way. Indeed it's rather hard to
describe in what way it changes the language semantics but easier to
descr
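For contrast, the sort of code whose meaning -fstrict-aliasing changes; a standard illustration, not an excerpt from the thread:

    /* Under -fstrict-aliasing (default at -O2), GCC may assume an
       int * and a float * never refer to the same object, so the
       store through f need not invalidate the store through i. */
    int alias_demo(int *i, float *f)
    {
        *i = 1;
        *f = 0.0f;   /* assumed not to alias *i */
        return *i;   /* may be folded to 1 even if i and f overlap */
    }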
Richard Guenther wrote:
We do that with -fstrict-aliasing, which also changes language semantics.
-fstrict-aliasing is disabled for -O0 and -O1 and enabled for -O[23s].
Yes, and as others have said, this is a bad precedent, and should
not be followed further. Inclusion of -fwrapv would be much
Richard Kenner wrote:
What about -fno-wrapv for the first?
Actually I don't like this.
Why? Because it seems weird to have a flag that you
can turn on and off, but the default is neither on
nor off.
Richard Kenner wrote:
Then we have two switches:
-fstandard
which allows all optimizations (name can be changed, I
don't care about the name)
-fwrapv
which changes the semantics to require wrapping in
all cases (including loops)
What about -fno-wrapv for the first?
Possible .. my view was
Richard Guenther wrote:
On 1/1/07, Geert Bosch <[EMAIL PROTECTED]> wrote:
specfp.
I would support the proposal to enable -fwrapv for -O[01], but
not for -O2 as that is supposed to be "optimize for speed" and
as -O3 is not widely used to optimize for speed (in fact it may
make code slower). I
Ian Lance Taylor wrote:
I don't think -frisky is a good name for that option. A better name
would be -fstrict.
or perhaps -fstandard
which says "my program is 100% compliant ISO C. please mr. compiler
make any assumptions you like based on knowing this is the case. If
my claim that I am 100%
Andrew Pinski wrote:
Look at Fortran argument aliasing, we get almost no bugs about that
undefinedness. We have an option to change the way argument aliasing
works, in the same way we have an option for signed overflow. I don't
see why overflow will be any different from argument aliasing.
W
Geert Bosch wrote:
As undefined execution can result in arbitrary badness,
this is really at odds with the increasing need for many
programs to be secure. Since it is almost impossible to
prove that programs do not have signed integer overflow,
That seems a bit pessimistic, given the work Prax
Richard Kenner wrote:
the seemingly prevalent attitude "but it is undefined; but it is not
C" is the opinion of the majority of middle-end maintainers.
Does anybody DISAGREE with that "attitude"? It isn't valid C to assume that
signed overflow wraps. I've heard nobody argue that it is. The q
Bruce Korb wrote:
Changing that presumption without multiple years of -Wall warnings
is a Really, Really, Really Bad Idea.
I am still not ready to agree that this is a RRRBI for the case
of loop invariants. We have not seen ONE imaginary example, let
alone a real example, where the optimization
Paul Eggert wrote:
The question is not whether GCC should support wrapv
semantics; it already does, if you specify -fwrapv.
The question is merely whether wrapv should be the default
with optimization levels -O0 through -O2.
That oversimplifies, because it presents things as though
there are
Duncan Sands wrote:
The C front-end performs this transformation too. I'm not claiming that the
back-end optimizers would actually do something sensible if the front-end
didn't transform this code (in fact they don't seem to), but since the
optimal way of doing the check presumably depends on
Richard Kenner wrote:
Essentially, there are three choices: with -fwrapv, you must preserve wrapping
semantics and do NONE of those optimizations; with -fno-wrapv, you can do ALL
of them; in the default case, a heuristic can be used that attempts to
balance optimization quality against breakage
Vincent Lefevre wrote:
On 2006-12-31 10:08:32 -0500, Richard Kenner wrote:
Well, that's not equivalent. For instance, MPFR has many conditions
that evaluate to TRUE or FALSE on some/many implementations (mainly
because the type sizes depend on the implementation), even without
the assumption tha
Gerald Pfeifer wrote:
On Sun, 31 Dec 2006, Robert Dewar wrote:
If you do it in signed expecting wrapping, then the optimization
destroys your code. Yes, it is technically your fault, but this
business of telling users
"sorry, your code is non-standard, gcc won't handle it as you
expe
Vincent Lefevre wrote:
No, this isn't what I meant. The C standard doesn't assume wrapping,
so I don't either. If the compiler doesn't either, then it can do
some optimizations. Let's take a very simple example:
We perfectly understand that if the compiler does not assume
wrapping, but instead
Vincent Lefevre wrote:
If done in unsigned, this won't lead to any optimization, as unsigned
arithmetic doesn't have overflows. So, if you write "a - 10" where a
is unsigned, the compiler can't assume anything, whereas if a is
signed, the compiler can assume that a >= INT_MIN + 10, reducing
the
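A sketch of the deduction Lefevre describes (hypothetical function names; assumes -fstrict-overflow semantics):

    #include <limits.h>

    /* Signed case: evaluating a - 10 without undefined behavior
       implies a >= INT_MIN + 10, a range the optimizers may exploit. */
    int signed_case(int a)
    {
        int b = a - 10;          /* UB unless a >= INT_MIN + 10 */
        if (a < INT_MIN + 10)    /* may be folded away entirely */
            return -1;
        return b;
    }

    /* Unsigned case: a - 10 always wraps modulo 2^N, so the
       compiler learns nothing about the range of a. */
    unsigned unsigned_case(unsigned a)
    {
        return a - 10;
    }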
Richard Kenner wrote:
And the idea that people were not used to thinking seriously about
language semantics is very odd, this book was published in 1978,
ten years after the algol-68 report, a year after the fortran
77 standard, long after the COBOL 74 standard, and a year before the
PL/1 standa
Vincent Lefevre wrote:
My point was that if you see this in a source program, it is in
fact a possible candidate for code that can be destroyed by
the optimization.
Well, only for non-portable code (i.e. code based on wrap). I also
suppose that this kind of code is used only to check for over
Richard Kenner wrote:
The burden of proof ought to be on the guys proposing -O2
optimizations that break longstanding code, not on the skeptics.
There's also a burden of proof that proposed optimizations will actually
"break longstanding code". So far, all of the examples of code shown
that as
Vincent Lefevre wrote:
On 2006-12-30 20:07:09 -0500, Robert Dewar wrote:
In my view, this comparison optimization should not have been put in
without justification given that it clearly does affect the semantics
of real code. Indeed if you really see code like
if (a - 10 < 20)
in pl
Richard Kenner wrote:
Wait, though: K&Rv2 is post-C89.
Not completely: it's published in 1988, but the cover says "based on
draft-proposed ANSI C".
Naturally K&Rv2 documents this, but if you want to know about
traditional practice the relevant wording should come from K&Rv1,
not v2.
I don't
Richard Kenner wrote:
I found my copy of K&R (Second Edition). Page 200: "The handling of overflow,
divide check, and other exceptions in expression evaluation is not defined
by the language. Most existing implementations of C ignore overflow in
evaluation of signed integral expressions and as
Gaby said
K&R C leaves arithmetic overflow undefined (except for unsigned
types), in the sense that you get whatever the underlying hardware
gives you. If it traps, you get trapped. If it wraps, you get wrapped.
Is that really what the K&R book says, or just what compilers typically
did? My mem
Andrew Pinski wrote:
It does buy you something for code like:
if (a - 10 < 20)
Well that particular example is far fetched in code that people
expect to run efficiently, but with a bit of effort you could
come up with a more realistic example.
Compilers are always full of such optimizations wh
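For reference, the folding under discussion, with a wrapping counterexample (a sketch, not code from the thread):

    /* With signed overflow undefined, gcc may rewrite a - 10 < 20 as
       a < 30.  On a wrapping machine the two differ: for
       a = INT_MIN + 5, a - 10 wraps to a large positive value, so the
       original test is false while the folded one is true. */
    int original(int a) { return a - 10 < 20; }
    int folded(int a)   { return a < 30;      }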
Richard Kenner wrote:
Note the interesting places in VRP where it assumes undefined signed
overflow is in compare_values -- we use the undefinedness to fold
comparisons.
Unfortunately, comparisons are the trickiest case because you have to
be careful to avoid deleting a comparison that exists t
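A sketch of the hazard described here, assuming a two's-complement target:

    /* The comparison exists only to detect overflow.  VRP, reasoning
       that a + b cannot overflow in valid C, may fold sum < a to 0
       when b > 0 and silently delete the programmer's safety check. */
    int add_checked(int a, int b)
    {
        int sum = a + b;
        if (b > 0 && sum < a)   /* intended as an overflow check */
            return -1;
        return sum;
    }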
Andrew Pinski wrote:
-fwrapv-in-all-cases-except-loop-bounds
Again, please don't make this the default for Fortran as integer
overflow has been undefined since at least 1977, so I don't think
it is a good idea for GCC in general anyways, as evidenced by Fortran.
-- Pinski
Well the question is whet
Gabriel Dos Reis wrote:
I have been looking into infer_loop_bounds_from_signedness() called
from infer_loop_bounds_from_undefined().
At some places, nowrap_type_p() is used but this function operates
only on types, so there will be too many false positives there; yet we
will miss warning through
Richard Kenner wrote:
I can't speak for any other GCC developer, but I personally am quite
comfortable viewing any code that assumes wrapping semantics as broken
and needing fixing with the exception of these cases of checking for
overflow: there simply is no good way in C to do these checks in
Paul Eggert wrote:
For writing new code, it's easy: the C standard is all
that should be assumed. Old code should be modified, as
time allows, to be consistent with that standard.
This may be the policy of the GCC developers for the code
they maintain, but it's not a realistic policy for
ever
Richard Kenner wrote:
Not so appalling really, given that relying on wrapping is as has
been pointed out in this thread, the most natural and convenient way
of testing for overflow. It is really *quite* difficult to test for
overflow while avoiding overflow, and this is something that is
probably
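For comparison, a standard-conforming pre-check, sketched here to show why it is more awkward than the natural "add and see if it wrapped" idiom:

    #include <limits.h>

    /* Returns nonzero if a + b would overflow.  The bounds tests are
       arranged so that they cannot themselves overflow. */
    int add_would_overflow(int a, int b)
    {
        if (b > 0)
            return a > INT_MAX - b;
        else
            return a < INT_MIN - b;
    }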
Paul Eggert wrote:
Nor would I relish the prospect of keeping wrapv assumptions out of
GCC as other developers make further contributions, as the wrapv
assumption is so natural and pervasive.
It's neither natural nor pervasive to me! I would never write code
that way
That's great, but GCC has
Paul Eggert wrote:
In practice, I expect that most C programmers probably
assume wrapv semantics, if only unconsciously. The minimal
C Standard may not entitle them to that assumption, but they
assume it anyway. Part of this is the Java influence no
doubt. Sorry, but that is just the way the
Daniel Berlin wrote:
I'm sure no matter what argument I come up with, you'll just explain it away.
The reality is the majority of our users seem to care more about
whether they have to write "typename" in front of certain declarations
than they do about signed integer overflow.
I have no idea
Andrew Pinski wrote:
On Fri, 2006-12-22 at 17:08 +, Dave Korn wrote:
Misaligned accesses *kill* your performance!
Maybe on x86, but on PPC, at least for the (current) Cell's PPU,
misaligned accesses are optimal in most cases.
is that true across cache boundaries?
Thanks,
Andr
Dave Korn wrote:
On 22 December 2006 00:59, Denis Vlasenko wrote:
Or this, absolutely typical C code. i386 arch can compare
16 bits at a time here (luckily, no alignment worries on this arch):
Whaddaya mean, no alignment worries? Misaligned accesses *kill* your
performance!
is it really
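One portable middle ground, sketched (a memcpy-based load, not from the original mail): strict-alignment targets get byte accesses, while i386 typically gets a single 16-bit load.

    #include <stdint.h>
    #include <string.h>

    /* Safe on any alignment; compilers usually turn the memcpy into
       one 16-bit load where the target allows it. */
    static uint16_t load16(const void *p)
    {
        uint16_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }

    int equal16(const char *a, const char *b)
    {
        return load16(a) == load16(b);
    }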
Paul Brook wrote:
Who says the optimisation is valid? The language standard?
The example was given as something that's 100% safe to optimize. I'm
disagreeing with that assertion. The use I describe isn't that unlikely if
the code was written by someone with poor knowledge of C.
My point is
Paul Brook wrote:
On Friday 22 December 2006 00:58, Denis Vlasenko wrote:
On Tuesday 19 December 2006 23:39, Denis Vlasenko wrote:
There are a lot of 100.00% safe optimizations which gcc
can do. Value range propagation for bitwise operations, for one
Or this, absolutely typical C code. i386 ar
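An example of the bitwise value-range propagation Vlasenko has in mind (a sketch; the function name is made up):

    /* x & 0x0f is provably in [0, 15], so the comparison can be
       folded to 1 with no assumptions about signed overflow at all. */
    int masked_in_range(unsigned x)
    {
        return (x & 0x0f) < 16;
    }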
Gabriel Dos Reis wrote:
I don't believe this particular issue of optimization based on
"undefined behaviour" can be resolved by just telling people "hey
look, the C standard says it is undefined, therefore we can optimize.
And if you're not happy, just tell the compiler not to optimize".
For not
Zdenek Dvorak wrote:
actually, you do not even need (invalid) multithreaded programs to
realize that register allocation may change behavior of a program.
If the size of the stack is bounded, register allocation may
cause or prevent a program from running out of stack, thus turning a
crashing prog
Dave Korn wrote:
Why isn't that just a buggy program with wilful disregard for the use of
correct synchronisation techniques?
Right, I think most people would agree it is.
But for sure, if you consider that making the code go faster is itself
a change in behavior, then obviously all optimiz
Andrew Pinski wrote:
Actually they will with multi threaded program, since you can have a case
where it works and now it is broken because one thread has sped up so much it
writes to a variable which had a copy on another thread's stack. So the
argument
about it being too strong is wrong beca
Richard B. Kreckel wrote:
By the same token it would be wise to refrain from turning on any
optimization that breaks programs which depend on wrapping signed
integers. Silently breaking LIA-1 semantics is imprudent.
I am not so sure about that conclusion, which I why I would like to
see more d
Paul Brook wrote:
Compiler can optimize it any way it wants,
as long as result is the same as unoptimized one.
We have an option for that. It's called -O0.
Pretty much all optimization will change the behavior of your program.
Now that's a bit TOO strong a statement, critical optimizations l
Denis Vlasenko wrote:
I want sane compiler. One in which N-bit integer variables stay exactly N-bit.
Without "magic" N+1 bit which is there "somethimes". a*2/2:
If I say "multiply by 2 and _after that_ divide by 2,
I meant that. Compiler can optimize it any way it wants,
as long as result is the
Paolo Bonzini wrote:
We've optimized expressions such as (a*2)/2 on the basis of overflow
being undefined for a very long time, not just loops.
What is (a*2)/2 optimized to? certainly it has the value a if you wrap,
so you are not necessarily depending on undefined here.
it's interesting that
Paolo Bonzini wrote:
No, it has not. For example, if a is 0x40000000 in a 32 bit type and
arithmetic wraps, a*2 = -0x80000000 (overflow), and hence (a*2)/2 =
-0x40000000 = -1073741824.
Paolo
Yes indeed, my mistake, I was thinking unsigned :-(
and of course signed is the whole point here!
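Bonzini's counterexample, spelled out (compile with -fwrapv to get the wrapping result without undefined behavior; assumes 32-bit int):

    #include <stdio.h>

    int main(void)
    {
        int a = 0x40000000;
        /* Wrapping: a*2 gives -0x80000000, and dividing by 2 gives
           -0x40000000, not a.  Folding (a*2)/2 to a is therefore
           only valid when signed overflow is treated as undefined. */
        printf("(a*2)/2 = %d, a = %d\n", (a * 2) / 2, a);
        return 0;
    }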
Florian Weimer wrote:
Something like:
GCC does not use the latitude given in C99 only to treat
certain aspects of signed @samp{<<} as undefined: If the right
operand @var{n} is non-negative and less than the width of the
left operand @var{val}, the resulting valu
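Concretely, under the documented GCC latitude (a sketch assuming 32-bit int):

    #include <stdio.h>

    int main(void)
    {
        int val = 1, n = 31;
        /* ISO C99 leaves this overflowing signed shift undefined,
           but GCC promises the two's-complement result for
           0 <= n < width, so this prints INT_MIN (-2147483648). */
        printf("%d\n", val << n);
        return 0;
    }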
Joseph S. Myers wrote:
On Tue, 19 Dec 2006, Florian Weimer wrote:
* Paolo Bonzini:
Interesting read. I agree with the proposed fix; however, note that
GCC does not make the result of overflowing signed left-shifts
undefined, exactly because in this case the overflow is relied upon by
too man
Zdenek Dvorak wrote:
IMHO, using loops relying on the behavior of overflow of an
induction variable (*) is an ugly hack and whoever writes such a code
does not deserve for his program to work.
I suspect everyone would agree on this, and in practice I would
guess that
a) there are no programs
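The kind of loop at issue, sketched (assumes wrapping 32-bit int; not code from the thread):

    /* "Terminates" only because the programmer expects i to wrap
       negative.  Under -fstrict-overflow the compiler may conclude
       that i > 0 always holds and produce an endless loop. */
    int count_doublings(void)
    {
        int n = 0;
        for (int i = 1; i > 0; i *= 2)
            n++;
        return n;   /* 31 on a wrapping 32-bit machine */
    }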
Florian Weimer wrote:
* Paolo Bonzini:
Interesting read. I agree with the proposed fix; however, note that
GCC does not make the result of overflowing signed left-shifts
undefined, exactly because in this case the overflow is relied upon by
too many existing programs
Is this documented somew
Andrew Haley wrote:
> I suspect the actual argument must be somewhere else.
I'm sure it is. The only purpose of my mail was to clarify what I
meant by "nonstandard", which in this case was "not strictly
conforming". I didn't intend to imply anything else.
But a compiler that implements wra
Gabriel Dos Reis wrote:
Andrew Haley <[EMAIL PROTECTED]> writes:
| Robert Dewar writes:
| > Andrew Haley wrote:
| >
| > > We've already defined `-fwrapv' for people who need nonstandard
| > > arithmetic.
| >
| > Nonstandard implies that the r
Andrew Haley wrote:
Robert Dewar writes:
> Andrew Haley wrote:
>
> > We've already defined `-fwrapv' for people who need nonstandard
> > arithmetic.
>
> Nonstandard implies that the result does not conform with the standard,
I don't think it d
Andrew Pinski wrote:
I don't have the number of times this shows up or how much it helps but
it does help out on being able to vectorize this loop.
Just to be clear, when I ask for quantitative data, it is precisely
data about "how much it helps". It is always easy enough to show
cases where t
Andrew Haley wrote:
Robert Dewar writes:
> Brooks Moses wrote:
>
> > Now, if your argument is that following the LIA-1 standard will
> > prevent optimizations that could otherwise be made if one
> > followed only the C standard, that's a reasonable argume
Brooks Moses wrote:
Now, if your argument is that following the LIA-1 standard will prevent
optimizations that could otherwise be made if one followed only the C
standard, that's a reasonable argument, but it should not be couched as
if it implies that preventing the optimizations would not be