Re: Getting declaration tree node using func. name

2006-12-20 Thread Rohit Arul Raj

On 12/19/06, Ferad Zyulkyarov <[EMAIL PROTECTED]> wrote:

tree fn_decl;
tree fn_id;

fn_id = get_identifier("test_fn_call");
fn_decl = lookup_name(fn_id); /* returns you a pointer to the function
declaration tree */


Hope this is what you are looking for.

On 12/19/06, Rohit Arul Raj <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I am working with GCC 4.1.1.
> By using the function name, is it possible to get the declaration tree
> node of that function?
>
> e.g. using maybe_get_identifier("name"), I get the identifier node.
> Similarly, are there any functions or macros available to get the
> declaration tree node?
>
> Regards,
> Rohit
>


--
Ferad Zyulkyarov
Barcelona Supercomputing Center


Hi all,

This works fine without optimization.

tree fn_id, fn_decl;

fn_id = get_identifier(name);
fn_decl = lookup_name(fn_id);
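
To be safe, the result also needs a check before use (a minimal sketch
against the GCC 4.1-era internal tree API; the NULL_TREE / TREE_CODE
checks are illustrative only):

fn_id = get_identifier (name);
fn_decl = lookup_name (fn_id);

/* lookup_name can fail, or can return a node that is not a
   FUNCTION_DECL, so check before using it.  */
if (fn_decl == NULL_TREE || TREE_CODE (fn_decl) != FUNCTION_DECL)
  return;  /* no function declaration visible under this name */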

If I make a function call whose declaration is of the form:
extern void abort();

fn_decl returns NULL at all optimization levels (-O1, -O2, -O3, -Os);
if no optimization is specified, it gives a proper tree.

Can anybody suggest the best way to get the declaration tree
of functions with extern declarations?

Regards,
Rohit


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Robert Dewar

Paul Schlie wrote:


As a compromise, I'd vote that no optimizations may alter program behavior
in any way not explicitly diagnosed in each instance of their application.


Sounds reasonable, but it is impossible and impractical! And I think
anyone familiar with compiler technology and optimization technology
understands why this is the case.


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Andrew Haley
Robert Dewar writes:
 > Paul Brook wrote:
 > 
 > > As opposed to a buggy program with wilful disregard for signed overflow 
 > > semantics? ;-)
 > 
 > I know there is a smiley there, but in fact I think it is useful to
 > distinguish these two cases.

This is, I think, a very interesting comment.  I've thought about it
for a little while, and I don't understand why you think it is useful
to distinguish these two cases.

Is it simply that one error is likely to be more common than another?
Or is there some more fundamental reason?

Andrew.


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Robert Dewar

Andrew Haley wrote:


Is it simply that one error is likely to be more common than another?
Or is there some more fundamental reason?


I think it is more fundamental. Yes, of course any optimization
will change resource utilization (space, time). An optimization
may well make a program larger, which means it no longer fits
in memory, or it may in some unfortunate case slow down the
execution of some program, e.g. due to cache effects from too
much inlining, and the program misses deadlines.

But these effects are very different from optimizations which
change the results in a way permitted by the standard as a
direct result of using the optimization.

For example if we write

  a = b * c + d;

an optimizer may choose to use a fused multiply-add with
subtly different rounding effects;

or, on the x86, keeping intermediate floats in 80 bits
(certainly permissible by the standard without LIA)
can subtly change results.
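
A small illustration of the fused multiply-add point (illustrative code;
what it prints depends on the target, since FP contraction, x87 excess
precision and the rounding mode all matter; compile with -std=c99 -lm):

#include <math.h>
#include <stdio.h>

int main (void)
{
  /* b*c is exactly 1 + 2^-26 + 2^-54; a separate multiply rounded to
     double drops the 2^-54, a fused multiply-add keeps it.  */
  double b = 1.0 + 0x1p-27;
  double c = b;
  double d = -(1.0 + 0x1p-26);

  printf ("separate: %g\n", b * c + d);     /* 0 when the product is rounded to double */
  printf ("fused:    %g\n", fma (b, c, d)); /* 2^-54: the low product bit survives */
  return 0;
}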

I do think this is a useful distinction.

We can't say "don't do any optimization that changes
resource utilization", that would be absurd. We can't
even say, warn when this optimization might change
resource utilization, that would be equally absurd.

But when we have an optimization that changes the
operational semantics of the particular operation
being optimized, we can say:

This change in behavior may be unexpected, or
undesirable, even if allowed by the standard.
Let's be sure the optimization is worth it before
enabling it by default.


Re: GCC optimizes integer overflow: bug or feature? (was: avoid integer overflow in mktime.m4)

2006-12-20 Thread Marcin Dalecki


On 2006-12-20, at 00:10, Richard B. Kreckel wrote:



C89 did not refer to IEEE 754 / IEC 60559. Yet, as far as I am aware,
-ffast-math or the implied optimizations have never been turned on by GCC
unless explicitly requested. That was a wise decision.

By the same token it would be wise to refrain from turning on any
optimization that breaks programs which depend on wrapping signed
integers.


Numerical stability of incomplete floating point representations is an
entirely different problem category than some simple integer tricks. In the
first case the difficulties are inherent to the incomplete representation of
the calculation domain. In the second case it's just some peculiarities of
the underlying machine, as well as the fact that the unsigned qualifier is
not used nearly frequently enough in common code. Or in other words: Z/32
resp. Z/64 IS AN ALGEBRA, but float isn't! Thus this argument by analogy
simply isn't valid.

Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Zdenek Dvorak
Hello,

> > Paul Brook wrote:
> > >> Compiler can optimize it any way it wants,
> > >> as long as result is the same as unoptimized one.
> > > 
> > > We have an option for that. It's called -O0.
> > > 
> > > Pretty much all optimization will change the behavior of your program.
> > 
> > Now that's a bit TOO strong a statement, critical optimizations like
> > register allocation and instruction scheduling will generally not change
> > the behavior of the program (though the basic decision to put something
> > in a register will, and *surely* no one suggests avoiding this critical
> > optimization).
> 
> Actually they will with a multi-threaded program, since you can have a case
> where it works and now it is broken because one thread has sped up so much
> that it writes to a variable which had a copy on another thread's stack.
> So the argument about it being too strong is wrong because timing matters
> nowadays.  Instruction scheduling can cause the same issue, as it forces a
> write too early for another thread to act on.

actually, you do not even need (invalid) multithreaded programs to
realize that register allocation may change the behavior of a program.
If the size of the stack is bounded, register allocation may
cause or prevent a program from running out of stack, thus turning a
crashing program into a non-crashing one or vice versa.

Zdenek


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Andrew Haley
Denis Vlasenko writes:
 > On Tuesday 19 December 2006 20:05, Andrew Haley wrote:
 > > Denis Vlasenko writes:
 > >  > 
 > >  > I wrote this just a few days ago:
 > >  > 
 > >  > do {
 > >  > int32_t v1 = v << 1;
 > >  > if (v < 0) v1 ^= mask;
 > >  > v = v1;
 > >  > printf("%10u: %08x\n", c++, v);
 > >  > } while (v != 1);
 > >  > 
 > >  > I would become rather sad if this will stop compiling correctly.
 > > 
 > > I can understand the objections to do with "dusty deck" code that
 > > hasn't been looked at for aeons, but in the case of code that you
 > > wrote so recently, given that you understand the issue, why not simply
 > > use the standard idiom?
 > 
 > I want sane compiler.
 > One in which N-bit integer variables stay exactly N-bit.
 > Without "magic" N+1 bit which is there "sometimes". a*2/2:
 > If I say "multiply by 2 and _after that_ divide by 2,
 > I meant that. Compiler can optimize it any way it wants,
 > as long as result is the same as unoptimized one.

This kind of thinking was appropriate before standardization.  
But C changed.  C is no longer a kind of high-level assembly language:
it's defined by a standard, in terms of an abstract machine, and some
operations are not well-defined.  If you want your programs to do what
you expect, you'd better find out what that abstract machine does.

 > Above: v is a signed entity. I expect (v < 0) to be equal to
 > "most significant bit is set". It's not about standards.
 > It's about sanity.

You'd better find some other programming language that is defined the
way you want.

Andrew.


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Robert Dewar

Zdenek Dvorak wrote:


actually, you do not even need (invalid) multithreaded programs to
realize that register allocation may change behavior of a program.
If the size of the stack is bounded, register allocation may
cause or prevent program from running out of stack, thus turning a
crashing program to non-crashing one or vice versa.


Again, I think it is useful to distinguish functional behavior changes
from changes that come from resource utilization. Yes, optimization
will always change resource utilization. That's the whole point, so
if you include resource utilization as a change in behavior, then
you lose the useful distinction between optimizations like

  a*2 => a+a

which could change the size of an executable but which, of itself, is
functionally neutral, from optimizations which are not functionally
neutral for programs with undefined or implementation-defined semantics
(e.g. fast math).

Note that another possible effect of *any* optimization is to
change the address of data and code throughout the program, and
of course this could wake up some implementation-dependent or buggy
behavior that depended on specific addresses of data or code.
Again, it is useful to exclude these indirect effects.


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Robert Dewar

[EMAIL PROTECTED] wrote:

it should be fairly easy to indicate each and every undefined/unspecified
value/semantic assumption being applied to both explicitly declared
variables and implicit intermediate results of programmer specified
expressions and their resulting hopefully logically equivalent
replacements; as otherwise arguably the "optimization" has no basis of
application.


No, it's not fairly easy. If you really think it *is* easy, try it
in some case like the example below and propose a patch.
 
For example:

warning: line 12, int x = a << b; statement ignored as (int)b expected
    value range is greater than the sizeof(a), thereby the resulting
    value is undefined, and thereby there's no reason to compute the
    expression, as (int)x may have any value.


question from imobilien

2006-12-20 Thread Jan Eissfeld
Hi,

PR19978 reports that some overflow warnings are emitted multiple times. Like 
for example,

test.c:6: warning: integer overflow in expression
test.c:6: warning: integer overflow in expression
test.c:6: warning: integer overflow in expression

The current testsuite will match any number of those to a single
{ dg-warning }. I don't know whether this is a known limitation, a bug
in the testsuite, or whether it just needs some magic.

How could I test that exactly one warning was emitted?

thx

imobilien
http://www.immojungle.de 



Built and installed gcc on powerpc-ibm-aix5.3.0.0

2006-12-20 Thread [EMAIL PROTECTED]
blitzen:/home/jonatha/packages/gcc-3.4.5$ config.guess
powerpc-ibm-aix5.3.0.0

blitzen:/home/jonatha/packages/gcc-3.4.5$ gcc -v
Reading specs from
/home/jonatha/build/bin/../lib/gcc/powerpc-ibm-aix5.3.0.0/3.4.5/specs
Configured with: ./configure --with-as=/usr/bin/as
--with-ld=/usr/bin-ld --disable-nls
--enable-languages=c --prefix=/home/jonatha/gcc/build
--enable-threads
--enable-version-specific-runtime-libs :
(reconfigured) ./configure
--prefix=/home/jonatha/build
--with-ld=/home/jonatha/mytools/ld : (reconfigured)
./configure --prefix=/build
--with-ld=/home/jonatha/mytools/ld
--with-as=/usr/bin/as --enable-languages=c
Thread model: aix
gcc version 3.4.5

Notes:

Couldn't find ./install-sh in bfd/po during make
install of binutils-2.15.  Manually changed INSTALL
to ${HOME}/packages/binutils-2.15/install.sh -c (from
./install.sh -c) in
$HOME/packages/binutils-2.15/Makefile.  Seems to work.

Used AIX 'as' rather than GNU 'as' since I am told
GNU 'as' is known not to work on AIX5L

Had to export DESTDIR for make install in gcc objdir
to work



Re: Built and installed gcc on powerpc-ibm-aix5.3.0.0

2006-12-20 Thread David Edelsohn
> [EMAIL PROTECTED] net writes:

jonathan> Configured with: ./configure --with-as=/usr/bin/as

jonathan> Had to export DESTDIR for make install in gcc objdir
jonathan> to work

Thanks for the notification.  You will have fewer problems building
and installing GCC if you do not build it in the source directory.

David



-fwrapv enables some optimizations

2006-12-20 Thread Bruno Haible
Hi,

The gcc docs say:

  `-fwrapv'
 ... This flag enables some optimizations and disables others.

-fwrapv turns some undefined behaviour (according to C99) into well-defined
behaviour, therefore it is obvious that it can disable some optimizations.

But the other way around? Without -fwrapv the compiler can assume more
about the program being compiled (namely that signed integer overflows
don't occur), and therefore has more freedom for optimizations. All
optimizations that are possible with -fwrapv should also be performed
without -fwrapv. Anything else is a missed optimization.

One such case is at http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30267
but there may be others lurking around.

Bruno


Re: question from imobilien

2006-12-20 Thread Ian Lance Taylor
Jan Eissfeld <[EMAIL PROTECTED]> writes:

> PR19978 reports that some overflow warnings are emitted multiple times. Like 
> for example,
> 
> test.c:6: warning: integer overflow in expression
> test.c:6: warning: integer overflow in expression
> test.c:6: warning: integer overflow in expression
> 
> The current testsuite will match any number of those to a single {
> dg-warning }. I don't know whether this is a known limitation, a bug
> on the testsuite or it just needs some magic.

This is a known limitation in the test harness.
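
For the record, a hypothetical reduced test would look like this (standard
dg-warning syntax); as the harness stands, the single directive below is
satisfied whether the line produces one copy of the warning or several,
which is exactly the limitation:

/* { dg-do compile } */
int i = 0x7fffffff + 1; /* { dg-warning "integer overflow in expression" } */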

Ian


Re: -fwrapv enables some optimizations

2006-12-20 Thread Joseph S. Myers
On Wed, 20 Dec 2006, Bruno Haible wrote:

> But the other way around? Without -fwrapv the compiler can assume more
> about the program being compiled (namely that signed integer overflows
> don't occur), and therefore has more freedom for optimizations. All
> optimizations that are possible with -fwrapv should also be performed
> without -fwrapv. Anything else is a missed optimization.

Indeed.  Fixing this may require it to be possible to mark individual 
operations with their overflow semantics (which will also be needed for 
LTO to handle inlining between translation units compiled with different 
options).  The problem is that some optimizations involve transforming an 
"overflow undefined" operation into an "overflow wraps" one, which is a 
valid transformation, but can't be represented in trees at present - but 
when the transformation is also valid for the initial operation being an 
"overflow wraps" one, the optimization can be done with -fwrapv.

When individual operations can be so marked, the optimizations in question 
can then be done if the original operation is either "overflow undefined" 
or "overflow wraps".

-- 
Joseph S. Myers
[EMAIL PROTECTED]


Re: -fwrapv enables some optimizations

2006-12-20 Thread Joe Buck

On Wed, 20 Dec 2006, Bruno Haible wrote:
> > But the other way around? Without -fwrapv the compiler can assume more
> > about the program being compiled (namely that signed integer overflows
> > don't occur), and therefore has more freedom for optimizations. All
> > optimizations that are possible with -fwrapv should also be performed
> > without -fwrapv. Anything else is a missed optimization.

On Wed, Dec 20, 2006 at 03:50:23PM +, Joseph S. Myers wrote:
> Indeed.  Fixing this may require it to be possible to mark individual 
> operations with their overflow semantics (which will also be needed for 
> LTO to handling inlining between translation units compiled with different 
> options).  The problem is that some optimizations involve transforming an 
> "overflow undefined" operation into an "overflow wraps" one, which is a 
> valid transformation, but can't be represented in trees at present -

Doesn't it suffice just to change the type of the operation to unsigned?
For signed integers, overflow is undefined, but for unsigned integers,
overflow wraps.


Re: -fwrapv enables some optimizations

2006-12-20 Thread Paolo Bonzini



On Wed, Dec 20, 2006 at 03:50:23PM +, Joseph S. Myers wrote:
Indeed.  Fixing this may require it to be possible to mark individual 
operations with their overflow semantics (which will also be needed for 
LTO to handling inlining between translation units compiled with different 
options).  The problem is that some optimizations involve transforming an 
"overflow undefined" operation into an "overflow wraps" one, which is a 
valid transformation, but can't be represented in trees at present -


Doesn't it suffice just to change the type of the operation to unsigned?
For signed integers, overflow is undefined, but for unsigned integers,
overflow wraps.


You mean writing the hypothetical PLUS_WRAP_EXPR <a, b> (where a and b
are ints) as (int) ((unsigned)a + (unsigned)b)?


That might work actually.  However, I don't know if the optimizers will 
be able to see through the casts and perform all the subsequent 
optimizations appropriately.
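
In source terms the idiom would be something like the following helper
(a minimal sketch; the conversion of the unsigned sum back to int is
implementation-defined in C, though it wraps on the targets GCC supports):

static inline int
wrapping_add (int a, int b)
{
  /* Unsigned arithmetic is defined to wrap modulo 2^N, so the addition
     itself can never invoke undefined behaviour.  */
  return (int) ((unsigned int) a + (unsigned int) b);
}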


Paolo


Re: -fwrapv enables some optimizations

2006-12-20 Thread Tom Tromey
> "Paolo" == Paolo Bonzini <[EMAIL PROTECTED]> writes:

>> On Wed, Dec 20, 2006 at 03:50:23PM +, Joseph S. Myers wrote:
>> For signed integers, overflow is undefined, but for unsigned integers,
>> overflow wraps.

Paolo> You mean writing the hypothetical PLUS_WRAP_EXPR  (where a and b
Paolo> are ints) as (int) ((unsigned)a + (unsigned)b)?

Paolo> That might work actually.  However, I don't know if the optimizers
Paolo> will be able to see through the casts and perform all the subsequent
Paolo> optimizations appropriately.

I think we already have some problems with this... as I recall PR 21855
includes a problem in this area (though it is somewhat glossed over in
the PR itself).

FWIW LLVM recently moved to having unsigned operations rather than
unsigned types.  Here's a bit of info on it:

http://nondot.org/sabre/LLVMNotes/TypeSystemChanges.txt

Tom


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Matthew Woehlke

Dave Korn wrote:

On 20 December 2006 02:28, Andrew Pinski wrote:

Paul Brook wrote:

Pretty much all optimization will change the behavior of your program.


Now that's a bit TOO strong a statement, critical optimizations like
register allocation and instruction scheduling will generally not change
the behavior of the program (though the basic decision to put something
in a register will, and *surely* no one suggests avoiding this critical
optimization).


Actually they will with a multi-threaded program, since you can have a case
where it works and now it is broken because one thread has sped up so much
that it writes to a variable which had a copy on another thread's stack. So
the argument about it being too strong is wrong because timing matters
nowadays.  Instruction scheduling can cause the same issue, as it forces a
write too early for another thread to act on.


Why isn't that just a buggy program with wilful disregard for the use of
correct synchronisation techniques?


Excuse me while I beat you on the head with a number of real-life 
examples of *lock-free* algorithms. :-) Particularly lock-free queues 
(google should help you find one that applies here), whose correct 
operation is critically dependent on the order in which the loads and 
stores are performed. This is a very real, entirely legitimate example 
where the compiler thinking it knows how to do my job better than I do 
is wrong.


We (in a major, commercial application) ran into exactly this issue. 
'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens 
(it is a barrier across which neither the compiler nor CPU will reorder 
things). Failing that, no-op cross-library calls (that can't be inlined) 
seem to do the trick.
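
For comparison, the same idea spelled with an explicit "memory" clobber
(a sketch, macro names illustrative): the clobber is what tells GCC itself
not to cache or reorder memory accesses across the statement, while the
locked instruction orders the CPU on x86; the empty asm variant is a
compiler-only barrier.

#define FULL_BARRIER() \
  __asm__ __volatile__ ("lock; orl $0,(%%esp)" : : : "memory")

#define COMPILER_BARRIER() \
  __asm__ __volatile__ ("" : : : "memory")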


--
Matthew
Hi! I'm a .signature virus! Copy me into your ~/.signature, please!



RE: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Dave Korn
On 20 December 2006 16:25, Matthew Woehlke wrote:

> Dave Korn wrote:
>> On 20 December 2006 02:28, Andrew Pinski wrote:
>>>> Paul Brook wrote:
>>>>> Pretty much all optimization will change the behavior of your program.
>>>>
>>>> Now that's a bit TOO strong a statement, critical optimizations like
>>>> register allocation and instruction scheduling will generally not change
>>>> the behavior of the program (though the basic decision to put something
>>>> in a register will, and *surely* no one suggests avoiding this critical
>>>> optimization).
>>> 
>>> Actually they will with multi threaded program, since you can have a case
>>> where it works and now it is broken because one thread has speed up so
>>> much it writes to a variable which had a copy on another thread's stack.
>>> So the argument about it being too strong is wrong because timing matters
>>> now a days.  Instruction scheduling can cause the same issue as it forces
>>> a write too early for another thread to act on.
>> 
>> Why isn't that just a buggy program with wilful disregard for the use of
>> correct synchronisation techniques?
> 
> Excuse me while I beat you on the head with a number of real-life
> examples of *lock-free* algorithms. :-) 

  Thanks, I've heard of them, in fact I've implemented many, debugged many,
and currently work with several every single day of the week.

> Particularly lock-free queues
> (google should help you find one that applies here), whose correct
> operation is critically dependent on the order in which the loads and
> stores are performed. 

  No, absolutely not.  Lock-free queues work by (for example) having a single
producer and a single consumer, storing the queue in a circular buffer, and
assigning ownership of the queue's head pointer to the producer and the
(chasing) tail pointer to the consumer.  The whole point about lock-free
algorithms is that they are guaranteed to work *regardless* of the order of
operations between the two asynchronous processes that interact with the
queue, given only the ability to perform an atomic read or write.  The
ordering is critical within a single thread of execution; e.g. you must fill
in all the details of the new entry you are adding to the queue /before/ you
increment the head pointer, whereas in a locking algorithm it wouldn't matter
if you incremented the head pointer first, then filled in the blank entry you
had just moved it past, because you'd do both within the lock.
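
  To make that ordering rule concrete, a minimal single-producer /
single-consumer ring sketch (names and sizes are illustrative; it assumes
aligned word stores are atomic, and on weakly-ordered CPUs you still need a
write barrier between the two producer steps and a read barrier in the
consumer, since volatile only constrains the compiler):

#define QSIZE 64                      /* power of two */

struct spsc_queue {
  volatile unsigned head;             /* written only by the producer */
  volatile unsigned tail;             /* written only by the consumer */
  int slots[QSIZE];
};

static int enqueue (struct spsc_queue *q, int v)
{
  unsigned h = q->head;
  if (h - q->tail == QSIZE)
    return 0;                         /* full */
  q->slots[h % QSIZE] = v;            /* 1: fill in the new entry...  */
  q->head = h + 1;                    /* 2: ...then publish it        */
  return 1;
}

static int dequeue (struct spsc_queue *q, int *v)
{
  unsigned t = q->tail;
  if (q->head == t)
    return 0;                         /* empty */
  *v = q->slots[t % QSIZE];
  q->tail = t + 1;
  return 1;
}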

  If you design a lock-free algorithm that relies on two threads being
precisely synchronised, it's not really lock-free, it just has 'implicit'
locking; and if your precise synchronisation depends on the precise
cycle-timing of low-level instruction sequences, you have made a very very
fragile design that works, if it does, by good fortune.

> This is a very real, entirely legitimate example
> where the compiler thinking it knows how to do my job better than I do
> is wrong.

  Nope, this is a very real example of a bug in your algorithmic design, or of
you misleading the compiler, or relying on undefined or implementation-defined
behaviour.
 
> We (in a major, commercial application) ran into exactly this issue.
> 'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens
> (it is a barrier across which neither the compiler nor CPU will reorder
> things). Failing that, no-op cross-library calls (that can't be inlined)
> seem to do the trick.

  This simply means you have failed to correctly declare a variable volatile
that in fact /is/ likely to be spontaneously changed by a separate thread of
execution.

  And relying on a library call to act as a memory barrier is risky.  The only
way in which it works is because it takes long enough that there's enough time
for the CPU's load-store unit to have completed any posted writes, but if (for
a contrived example) you're using a cut-down embedded library and the
particular routine that you call happens to be a stub and just returns
immediately, you've got a two-instruction call-return sequence that very
seriously is not a guarantee of externally-visible memory access ordering.
Now, the compiler *does* know not to optimise moves across library calls, but
you're investing too much trust in them if you think the CPU's internal units
know or care whether you're in a libcall or your mainline code.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Seongbae Park

On 12/20/06, Dave Korn <[EMAIL PROTECTED]> wrote:
...

> We (in a major, commercial application) ran into exactly this issue.
> 'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens
> (it is a barrier across which neither the compiler nor CPU will reorder
> things). Failing that, no-op cross-library calls (that can't be inlined)
> seem to do the trick.

  This simply means you have failed to correctly declare a variable volatile
that in fact /is/ likely to be spontaneously changed by a separate thread of
execution.


The C or C++ standard doesn't define ANYTHING related to threads,
and thus anything related to threads is beyond the standard.
If you think volatile means something in an MT environment,
think again. You can deduce certain aspects (e.g. the guaranteed
appearance of a store or load), but nothing beyond that.
Add a memory model to the mix, and you're way beyond what the language
says, and you need to rely on non-standard, non-portable facilities,
if provided at all.
Even in a single-threaded environment, what exactly volatile means is not
quite clear in the standard (except for the setjmp/longjmp-related aspects).

I liked the following paper (for general users,
not for the compiler developers, mind you):

http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
--
#pragma ident "Seongbae Park, compiler, http://seongbae.blogspot.com";


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Matthew Woehlke

Dave Korn wrote:

Particularly lock-free queues whose correct
operation is critically dependent on the order in which the loads and
stores are performed. 


  No, absolutely not.  Lock-free queues work by (for example) having a single
producer and a single consumer, storing the queue in a circular buffer, and
assigning ownership of the queue's head pointer to the producer and the
(chasing) tail pointer to the consumer.
[snip]
The ordering is critical within a single thread of execution; e.g. you must
fill in all the details of the new entry you are adding to the queue
/before/ you increment the head pointer,


Exactly. Guess what the compiler did to us? :-) "Oh, no, I'm /sure/ it's 
OK if I re-order your code so that those assignments happen /after/ I 
handle this dinky little increment for you." Now our code may have been 
wrong in this instance (see below), but...


Let's be clear. Order matters. /You/ said so yourself. :-) And even if 
the programmer correctly tells the compiler what to do, what (unless the 
compiler inserts memory barriers) keeps the CPU from circumventing both?



That said, I've seen even stranger things, too. For example:

foo->bar = make_a_bar();
foo->bar->none = value;

being rendered as:

call make_a_bar
foo->bar->none = value
foo->bar = <result of make_a_bar>

So what was wrong with my C code in that instance? :-)


This is a very real, entirely legitimate example
where the compiler thinking it knows how to do my job better than I do
is wrong.


  Nope, this is a very real example of a bug in your algorithmic design, or of
you misleading the compiler, or relying on undefined or implementation-defined
behaviour.
[snip]
  This simply means you have failed to correctly declare a variable volatile
that in fact /is/ likely to be spontaneously changed by a separate thread of
execution.


/That/ is very possible. I'm talking about /OLD/ code here, i.e. code 
that was written back in K&R days, back before there /was/ a volatile 
keyword. (Although I had understood that 'volatile' was a no-op in most 
modern compilers? Does it have the semantics that loads/stores of 
volatile variables are not re-ordered with respect to each other?)


At any rate, I don't recall now if making the variable in question 
'volatile' helped or not. Maybe this is an exercise in why changing 
long-standing semantics has an insidious and hard to correct effect. 
(Does that conversation sound familiar?)



  And relying on a library call to act as a memory barrier is risky.
[snip]
Now, the compiler *does* know not to optimise moves across library calls, but
you're investing too much trust in them if you think the CPU's internal units
know or care whether you're in a libcall or your mainline code.


You're preaching to the choir. Unfortunately adding proper assembly 
modules to our build system didn't go over so well, and I am /not/ going 
to try to write inline assembler that works on six-or-more different 
compilers. :-) So I know this is just a 'cross your fingers and hope it 
works' approach on non-x86 platforms. On x86 I use the correct, 
previously-quoted inline assembly, which as mentioned acts as a barrier 
for both the compiler /and/ the CPU. As you say, all it's really 
ensuring in other cases is that the compiler remains honest, but that 
was the intent, and I know it isn't perfect.


--
Matthew
Hi! I'm a .signature virus! Copy me into your ~/.signature, please!



Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Richard B. Kreckel

Marcin Dalecki wrote:

Numerical stability of incomplete floating point representations are an
entirely different problem category then some simple integer tricks. In the
first case the difficulties are inherent to the incomplete representation of
the calculation domain. In the second case it's just some peculiarities of
the underlying machine as well as the fact that the unsigned qualifier is
not used nearly enough frequently in common code. Or in other words: Z/32
resp. 64 IS AN ALGEBRA but float isn't! Thus this argument by analogy simply
isn't valid.



Sheesh! This argument is totally confused...

1) Z/Zn and, by isomorphism, unsigned types may be an algebra. But this 
entire discussion is about signed types, not unsigned types.
2) Signed types are not an algebra, they are not even a ring, at least 
when their elements are interpreted in the canonical way as integer 
numbers. (Heck, what are they?)
3) Making behavior partially undefined certainly does not help make it an
algebra or any other well-defined mathematical structure.


Integral types are an incomplete representation of the calculation 
domain, which is the natural numbers. This corroborates the validity of 
the analogy with IEEE real arithmetic.


-richy.

--
Richard B. Kreckel




Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Andreas Schwab
Matthew Woehlke <[EMAIL PROTECTED]> writes:

> That said, I've seen even stranger things, too. For example:
>
> foo->bar = make_a_bar();
> foo->bar->none = value;
>
> being rendered as:
>
> call make_a_bar
> foo->bar->none = value
> foo->bar = 

You are not describing a C compiler.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Marcin Dalecki


On 2006-12-20, at 22:48, Richard B. Kreckel wrote:

2) Signed types are not an algebra, they are not even a ring, at least when
their elements are interpreted in the canonical way as integer numbers.
(Heck, what are they?)


You are apparently using a different definition of an algebra or ring  
than the common one.


Integral types are an incomplete representation of the calculation  
domain, which is the natural numbers.


This is an arbitrary assumption. In fact most people are simply well aware
of the fact that computers don't do infinite arithmetic. You are apparently
confusing natural numbers, which don't include negatives, with integers.
However, it's a quite common mistake to forget how "bad" floats "model"
real numbers.


This corroborates the validity of the analogy with IEEE real arithmetic.


And wrong assumptions lead to wrong conclusions.

Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread David Nicol

On 12/20/06, Marcin Dalecki <[EMAIL PROTECTED]> wrote:

You are apparently using a different definition of an algebra or ring
than the common one.


Fascinating discussion.  Pointers to canonical on-line definitions of
the terms "algebra" and "ring" as used in compiler design please?


running bprob.exp tests in a cross-testing environment

2006-12-20 Thread Ben Elliston
While testing a cross-compiler, I had to track down a handful of
failures in gcc.misc-tests/bprob.exp and g++.dg/bprob/bprob.exp.  The
test harness was reporting that the .gcda data files were not being
created after running the instrumented test case.

After some digging, I managed to work out why: the gcov runtime code
wants to create the .gcda file in the same directory that the object
file was created on the build system.  Unless the same directory
structure exists on the target, the gcov runtime code just skips writing
out the data file on exit.

I see a couple of solutions, but would like to discuss them here before
working on a patch:

1. have the bprob.exp test driver create the appropriate directory
   tree on the target (and remove it when finished); or

2. set GCOV_PREFIX when running the test case so that the home
   directory on the target is prefixed.  The test harness would
   need to also prefix the .gcda filename when fetching the data
   file from the target.

Thoughts?

Ben



Re: running bprob.exp tests in a cross-testing environment

2006-12-20 Thread Ben Elliston
On Thu, 2006-12-21 at 09:56 +1100, Ben Elliston wrote:

> After some digging, I managed to work out why: the gcov runtime code
> wants to create the .gcda file in the same directory that the object
> file was created on the build system.  Unless the same directory
> structure exists on the target, the gcov runtime code just skips writing
> out the data file on exit.

To be more precise, the gcov runtime first tries to create the required
path, but this is unlikely to succeed if it requires creating a new
directory under / (which only root can typically do).  If it cannot
create the full path before creating the data file, the gcov runtime
code will just silently fail.

Ben




Re: Profiling broken in GCC 4.1.0 for DJGPP

2006-12-20 Thread Laurynas Biveinis

2006/12/12, [EMAIL PROTECTED] <[EMAIL PROTECTED]>:


I've come across an issue with using the -pg switch for profiling on the
DJGPP DOS platform, using GCC 4.1.0.


I suggest that you send this report to [EMAIL PROTECTED], the
DJGPP port of GCC maintainers are much more likely to respond there.

--
Laurynas


Re: Profiling broken in GCC 4.1.0 for DJGPP

2006-12-20 Thread DJ Delorie

> I suggest that you send this report to [EMAIL PROTECTED], the
> DJGPP port of GCC maintainers are much more likely to respond there.

He did that first.


Re: Profiling broken in GCC 4.1.0 for DJGPP

2006-12-20 Thread Gordon . Schumacher
"Laurynas Biveinis" <[EMAIL PROTECTED]> wrote on 12/20/2006
04:05:00 PM:

# I suggest that you send this report to [EMAIL PROTECTED], the
# DJGPP port of GCC maintainers are much more likely to respond there.

I've actually already been in communication with DJ Delorie; I thought I
might get some additional information this way, though.



Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Richard B. Kreckel

Marcin Dalecki wrote:



On 2006-12-20, at 22:48, Richard B. Kreckel wrote:

2) Signed types are not an algebra, they are not even a ring, at 
least when their elements are interpreted in the canonical way as 
integer numbers. (Heck, what are they?)



You are apparently using a different definition of an algebra or ring 
than the common one.



What I was talking about was this: 
.


In the absence of a modulus (i.e. "wrapping") all the operations (the 
vector space's addition and the algebra's multiplication) run into 
problems as long as one maintains the canonical homomorphism (i.e. 
identification with integer numbers 0, 1, 5...)


Integral types are an incomplete representation of the calculation 
domain, which is the natural numbers.



This is an arbitrary assumption. In fact most people simply are well aware
of the fact that computers don't do infinite arithmetic.



But the same applies to floating point numbers. There, the situation is 
even better, because nowadays I can rely on a float or double being the 
representation defined in IEEE 754 because there is such overwhelming 
hardware support. The variety of int sizes encountered nowadays is 
greater. Case in point: During the last couple of years, I've not seen 
any nonstandard floating point storage representation. On the other 
hand, last year 16 bit ints were inflicted upon me (an embedded target), 
and on UNICOS-MAX I found the 64 bit ints were slightly irritating, too.


You are apparently confusing natural numbers, which don't include
negatives, with integers.



Right, I used the wrong term.

However it's a quite common mistake to forget how "bad" floats "model" 
real numbers.



And it's quite a common mistake to forget how "bad" finite ints "model" 
integer numbers.



This corroborates the validity of the analogy with IEEE real arithmetic.



And wrong assumptions lead to wrong conclusions.



Which assumption was wrong?

-richy.

--
Richard B. Kreckel




Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Marcin Dalecki
But the same applies to floating point numbers. There, the  
situation is even better, because nowadays I can rely on a float or  
double being the representation defined in IEEE 754 because there  
is such overwhelming hardware support.


You'd better not. Really! Please just realize, for example, the impact
of the (in)famous 80-bit internal (over)precision of a
very common IEEE 754 implementation...

volatile float b = 1.;

if (1. / 3. == b / 3.) {
   printf("HALLO!\n");
} else {
   printf("SURPRISE SURPRISE!\n");
}

or just skim through http://gcc.gnu.org/bugzilla/show_bug.cgi?id=323

However it's a quite common mistake to forget how "bad" floats  
"model" real numbers.


And it's quite a common mistake to forget how "bad" finite ints  
"model" integer numbers.


No it isn't. Most people don't think in terms of infinite arithmetic
when programming. And I maintain that the difference between finite and
infinite is actually quite a fundamental concept. However, quite a lot
of people expect the rounding of floating arithmetic to give them

well behaved results.

Marcin Dalecki




Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Gabriel Dos Reis
Paul Brook <[EMAIL PROTECTED]> writes:

| > Compiler can optimize it any way it wants,
| > as long as result is the same as unoptimized one.
| 
| We have an option for that. It's called -O0.
| 
| Pretty much all optimization will change the behavior of your program. The 
| important distinction is whether that difference is observable in valid 
| programs. The whole point of langage standards is that they define what 
| constitutes a valid program.

The problem is that what constitutes a valid program tends to differ
from what constitutes a useful program found in the wild.  The
question for GCC is how to accommodate the useful uses without
getting far away from both conformance and non-conformance.

I don't believe this particular issue of optimization based on
"undefined behaviour" can be resolved by just telling people "hey
look, the C standard says it is undefined, therefore we can optimize.
And if you're not happy, just tell the compiler not to optimize".
For not all undefined behaviours are equal, and undefined behaviour is
also a latitude given to implementors to better serve their user
base.

-- Gaby


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Gabriel Dos Reis
"Dave Korn" <[EMAIL PROTECTED]> writes:

[...]

| > We (in a major, commercial application) ran into exactly this issue.
| > 'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens
| > (it is a barrier across which neither the compiler nor CPU will reorder
| > things). Failing that, no-op cross-library calls (that can't be inlined)
| > seem to do the trick.
| 
|   This simply means you have failed to correctly declare a variable volatile
| that in fact /is/ likely to be spontaneously changed by a separate thread of
| execution.

Note, however, that declaring the variable "volatile" is no guarantee
that things will actually work as "expected".  We have had that
discussion before :-)

-- Gaby


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Gabriel Dos Reis
Andrew Haley <[EMAIL PROTECTED]> writes:

[...]

| C is no longer a kind of high-level assembly laguage:
| it's defined by a standard, in terms of an abstract machine, and some
| operations are not well-defined. 

that does not mean C is not a kind of high-level assembly language.
:-/

-- Gaby


Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Ian Lance Taylor
Matthew Woehlke <[EMAIL PROTECTED]> writes:

> That said, I've seen even stranger things, too. For example:
> 
> foo->bar = make_a_bar();
> foo->bar->none = value;
> 
> being rendered as:
> 
> call make_a_bar
> foo->bar->none = value
> foo->bar = 

That would obviously be a bug in the compiler.


> /That/ is very possible. I'm talking about /OLD/ code here, i.e. code 
> that was written back in K&R days, back before there /was/ a volatile 
> keyword. (Although I had understood that 'volatile' was a no-op in most 
> modern compilers? Does it have the semantics that loads/stores of 
> volatile variables are not re-ordered with respect to each other?)

volatile should not be a no-op in any C compiler.  volatile variables
must be accessed strictly according to the rules of the virtual
machine.  For example, if the code reads the variable twice, the
program must read the variable twice; it can't cache the value after
the first read.  volatile variables are reasonably useful for
accessing memory mapped devices, in which the memory may change in
ways which the compiler does not see.  They are not particularly
useful, by themselves, for multi-processor systems, because they
provide no guarantees about synchronization between processors.
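
A tiny illustration of the rule about reads (a sketch, names illustrative):

extern volatile int flag;

void wait_for_flag (void)
{
  /* Each iteration must re-read flag because it is volatile; without
     the qualifier the compiler could hoist the load and spin forever.
     Note this says nothing about ordering with respect to other
     memory, nor about other processors.  */
  while (flag == 0)
    ;
}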


> >   And relying on a library call to act as a memory barrier is risky.
> > [snip]
> > Now, the compiler *does* know not to optimise moves across library calls, 
> > but
> > you're investing too much trust in them if you think the CPU's internal 
> > units
> > know or care whether you're in a libcall or your mainline code.
> 
> You're preaching to the choir. Unfortunately adding proper assembly 
> modules to our build system didn't go over so well, and I am /not/ going 
> to try to write inline assembler that works on six-or-more different 
> compilers. :-) So I know this is just a 'cross your fingers and hope it 
> works' approach on non-x86 platforms. On x86 I use the correct, 
> previously-quoted inline assembly, which as mentioned acts as a barrier 
> for both the compiler /and/ the CPU. As you say, all it's really 
> ensuring in other cases is that the compiler remains honest, but that 
> was the intent, and I know it isn't perfect.

Current versions of gcc provide a set of synchronization primitives
which may help write code which works correctly on multi-processor
systems.
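
For example (a sketch using the __sync builtins that appeared in GCC 4.1;
availability of the atomic forms depends on the target):

static int counter;

void bump (void)
{
  /* Atomically add 1 to a counter shared between threads.  */
  __sync_fetch_and_add (&counter, 1);

  /* Full memory barrier, for both the compiler and the CPU.  */
  __sync_synchronize ();
}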

Ian


RE: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Dave Korn
On 20 December 2006 20:16, Seongbae Park wrote:

> On 12/20/06, Dave Korn <[EMAIL PROTECTED]> wrote:
> ...
>>> We (in a major, commercial application) ran into exactly this issue.
>>> 'asm volatile("lock orl $0,(%%esp)"::)' is your friend when this happens
>>> (it is a barrier across which neither the compiler nor CPU will reorder
>>> things). Failing that, no-op cross-library calls (that can't be inlined)
>>> seem to do the trick.
>> 
>>   This simply means you have failed to correctly declare a variable
>> volatile that in fact /is/ likely to be spontaneously changed by a
>> separate thread of execution.
> 
> The C or C++ standard doesn't define ANYTHING related to threads,
> and thus anything related to threads is beyond the standard.
> If you think volatile means something in an MT environment,
> think again. 

  No no, I fully appreciate that; what volatile means is simply that the
object may change outside the control or knowledge of the compiler, the
particular mechanism does not matter at all, and that is all that is relevant
for the purpose of my argument.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Paul Brook
On Thursday 21 December 2006 02:38, Gabriel Dos Reis wrote:
> Paul Brook <[EMAIL PROTECTED]> writes:
> | > Compiler can optimize it any way it wants,
> | > as long as result is the same as unoptimized one.
> |
> | We have an option for that. It's called -O0.
> |
> | Pretty much all optimization will change the behavior of your program.
> | The important distinction is whether that difference is observable in
> | valid programs. The whole point of langage standards is that they define
> | what constitutes a valid program.
>
> The problem is that what constitutes a valid program tends to differ
> from what constitutes a useful program found in the wild.  The
> question for GCC is how to accomodate for the useful uses without
> getting far away from both conformance and non-conformance.

I never said any different. In fact you'll notice that later in the same email 
(which you have cut) I said pretty much the same thing.

The post I was replying to wasn't suggesting that we should support specific  
cases of nonconforming code, it was saying that we shouldn't change the 
behavior of *any* code.

Paul


RE: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Dave Korn
On 20 December 2006 21:42, Matthew Woehlke wrote:

> Dave Korn wrote:
>>> Particularly lock-free queues whose correct
>>> operation is critically dependent on the order in which the loads and
>>> stores are performed.
>> 
>>   No, absolutely not.  Lock-free queues work by (for example) having a
>> single producer and a single consumer, storing the queue in a circular
>> buffer, and assigning ownership of the queue's head pointer to the
>> producer and the (chasing) tail pointer to the consumer.
>> [snip]
>> The ordering is critical within a single thread of execution; e.g. you must
>>  fill in all the details of the new entry you are adding to the queue
>> /before/ you increment the head pointer,
> 
> Exactly. Guess what the compiler did to us? :-) "Oh, no, I'm /sure/ it's
> OK if I re-order your code so that those assignments happen /after/ I
> handle this dinky little increment for you." Now our code may have been
> wrong in this instance 

  Exactly and unquestionably so.  You wrote wrong code, it worked not how you
expected it, but the compiler *did* do exactly as you told it to and there's
an end of it.  You left out 'volatile', you completely lied to and misled the
compiler!

> Let's be clear. Order matters. /You/ said so yourself. :-) And even if
> the programmer correctly tells the compiler what to do, what (unless the
> compiler inserts memory barriers) keeps the CPU from circumventing both?

  You wrote wrong code, it worked not how you expected it, but the compiler
*did* do exactly as you told it to and there's an end of it.  You left out
'volatile', you completely lied to and misled the compiler about what /it/
could assume about the behaviour of entities in the compiled universe.

> That said, I've seen even stranger things, too. For example:
> 
> foo->bar = make_a_bar();
> foo->bar->none = value;
> 
> being rendered as:
> 
> call make_a_bar
> foo->bar->none = value
> foo->bar = 
> 
> So what was wrong with my C code in that instance? :-)

  You'd have to show me the actual real code before I could tell you whether
there was a bug in your code or you hit a real bug in the compiler.  Either is
possible; the virtual machine definition in the standard AKA the 'as-if' rule
is what decides which is correct and which is wrong.

  C is no longer a glorified shorthand for assembly code.  It /used to
be/, but every development since K'n'R days has taken it further from that.
At -O0, it still /almost/ is a glorified assembler.

>>> This is a very real, entirely legitimate example
>>> where the compiler thinking it knows how to do my job better than I do
>>> is wrong.
>> 
>>   Nope, this is a very real example of a bug in your algorithmic design,
>> or of you misleading the compiler, or relying on undefined or
>> implementation-defined behaviour. [snip]
>>   This simply means you have failed to correctly declare a variable
>> volatile that in fact /is/ likely to be spontaneously changed by a
>> separate thread of execution.
> 
> /That/ is very possible. I'm talking about /OLD/ code here, i.e. code
> that was written back in K&R days, back before there /was/ a volatile
> keyword. (Although I had understood that 'volatile' was a no-op in most
> modern compilers? Does it have the semantics that loads/stores of
> volatile variables are not re-ordered with respect to each other?)

  Exactly so, that's why 'volatile' was chosen as the keyword to extend 'asm'
in order to mean "don't reorder past here because it's unpredictable".

  As I said, C is no longer a glorified assembler language, and one of the
main motivations behind that progression is the non-portability of code such
as you describe above, the fact that making assumptions about the precise
details behind a compiled version of any particular code sequence is a mistake
because they'll only turn out to be right on some platforms and with some
compiler versions and not with others.  The compiler is *not* a glorified
assembler; if you really want guarantees about what codegen you get, use a .S
file and write assembler.  

  Or use a 20-year-old version of the compiler if what you *really* want is
the exact codegen it used to do 20 years ago.  It's open source, it's free,
you can get it and use it and keep on using it for ever and it will *always*
produce exactly what you expect by the way of codegen.

  These days, however, you have to accept that your code is not valid; that it
contains implicit assumptions that were only ever going to be true for the
particular compiler version and on the particular platform where it was first
written.  And that code needs maintenance, as a matter of routine.  If you
want to change one part of a system, you must change all the interacting parts
to match.  If you want your source code to work exactly how it did twenty
years ago, the rest of your toolchain better behave exactly the same as it did
twenty years ago.

  Of course, this leaves us a big problem.  How can anything ever change,
advance, or improve, if eithe

RE: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Dave Korn
On 21 December 2006 02:50, Gabriel Dos Reis wrote:

> Andrew Haley <[EMAIL PROTECTED]> writes:
> 
> [...]
> 
>> C is no longer a kind of high-level assembly laguage:
>> it's defined by a standard, in terms of an abstract machine, and some
>> operations are not well-defined.
> 
> that does not mean C is not a kind of high-level assembly language.
> :-/
> 
> -- Gaby

  I swear, I wrote my previous two mails before I read this exchange!

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: GCC optimizes integer overflow: bug or feature?

2006-12-20 Thread Robert Dewar

Gabriel Dos Reis wrote:


I don't believe this particular issue of optimization based on
"undefined behaviour" can be resolved by just telling people "hey
look, the C standard says it is undefined, therefore we can optimize.
And if you're not happy, just tell the compiler not to optimize".
For not all undefined behaviour are equal, and undefined behaviour is
also a latitude given to implementors to better serve their base
users.


Right, and better service is a combination of doing what is expected
and generating fast code, and sometimes the two do conflict, and then
we have to do the best we can to balance the conflicting goals.

My concern is that in achieving this balance, we really need to have
data to show that we are indeed generating significantly faster code.
Compiler writers tend to want to optimize the last instruction out
in any situation, without necessarily showing it is significant to
do so.


Re: -fwrapv enables some optimizations

2006-12-20 Thread Geert Bosch


On Dec 20, 2006, at 09:38, Bruno Haible wrote:

But the other way around? Without -fwrapv the compiler can assume more
about the program being compiled (namely that signed integer overflows
don't occur), and therefore has more freedom for optimizations. All
optimizations that are possible with -fwrapv should also be performed
without -fwrapv. Anything else is a missed optimization.


This is completely wrong. Making operations undefined is a two-edged
sword. At the one hand, you can make more assumptions, but there's
also the issue that when you want to rewrite expressions, you have
to be more careful to not introduce undefined behavior when there
was none before.

The canonical example is addition of signed integers. This operation
is associative with -fwrapv, but not without.

So
  a = b + C1 + c + C2;
could be written as
  a = b + c + (C1 + C2);
where the constant addition is performed at compile time.
With signed addition overflowing, you can't do any reassociation,
because this might introduce overflows where none existed before.
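
For instance (illustrative values): with 32-bit int, b = INT_MAX, C1 = -1,
c = 1, C2 = 0 evaluates left to right as INT_MAX - 1, then INT_MAX, then
INT_MAX, never overflowing; the reassociated b + c + (C1 + C2) starts with
INT_MAX + 1, so the rewrite has introduced an overflow, hence undefined
behaviour, that the original expression did not have.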

Probably we would want to lower many expressions to unsigned
eventually, but the question of when and where to do it emphasizes
that you can only take advantage of undefined behavior if you make
sure you don't introduce any.

Sometimes I think it is far better to have a default of -fwrapv
at -O1 and possibly -Os. Sure, this would disable some powerful
optimizations, especially those involving loops, but in practice
it would be very useful to get reasonably good optimization while
minimizing the number of programs with undefined behavior.
Also, it would allow some new optimizations, so the total loss of
performance may be quite acceptable.

As -fwrapv only transforms programs with undefined behavior into
programs with implementation-defined behavior, nobody can
possibly complain about their programs suddenly doing something
different.

Also, for safety-critical program and certification, it is essential
to be able to reason about program behavior. Limiting the set of
programs with erroneous or undefined execution is essential.
If you want to prove that a program doesn't cause undefined behavior,
it is very helpful for signed integer overflow to be defined, even if
it's just implementation-defined. That would be a huge selling-point
for GCC.

  -Geert


Re: -fwrapv enables some optimizations

2006-12-20 Thread Robert Dewar

Geert Bosch wrote:


This is completely wrong. Making operations undefined is a two-edged
sword. At the one hand, you can make more assumptions, but there's
also the issue that when you want to rewrite expressions, you have
to be more careful to not introduce undefined behavior when there
was none before.


No, I think you miss the point


The canonical example is addition of signed integers. This operation
is associative with -wrapv, but not without.

So
   a = b + C1 + c + C2;
could be written as
   a = b + c + (C1 + C2);
where the constant addition is performed at compile time.
With signed addition overflowing, you can't do any reassociation,
because this might introduce overflows where none existed before.


Yes, but the overflows are harmless given that we know the
code we can generate will actually wrap, so it is just fine
to reassociate freely without -fwrapv, since the result will
be the same. Overflow is not an issue. If the final result
has overflowed, then the original program is for sure undefined.
If the final result has not overflowed, then intermediate values
may or may not have overflowed. There are two cases, either the
original canonical order caused overflow, in which case giving
the right result for the overall calculation is fine (though
not required), or it did not, in which case giving the right
result is also fine (and required)
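
Concretely (illustrative numbers, assuming 32-bit wrapping hardware): for
b = INT_MAX, C1 = 1, c = -2, C2 = 0, the canonical order wraps at b + C1
yet still ends at INT_MAX - 1, and the reassociated order b + c + (C1 + C2)
reaches the same INT_MAX - 1 without wrapping at all; either way the bits
of the final result agree with the mathematical value whenever that value
is representable.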


Sometimes I think it is far better to have a default of -fwrapv for
at -O1 and possibly -Os. Sure, this would disable some powerful
optimizations, especially those involving loops, but it would in
practise be very useful to get reasonably good optimization for programs
with minimizing the number of programs with undefined behavior.
Also, it would allow some new optimizations, so total loss of
performance may be quite acceptable.


This is not right; on a machine where in fact addition wraps,
-fwrapv can never enable optimizations that would otherwise not
be possible. I must say when I first read this claim, I had
exactly the same initial reaction as Geert, but then I thought
about it more and realized the claim is indeed correct.


Also, for safety-critical program and certification, it is essential
to be able to reason about program behavior. Limiting the set of
programs with erroneous or undefined execution is essential.


I don't see this at all; you have to prove that you don't have overflows
in any case if the semantics of your program does not expect overflows.
This is what you have to do in any Ada program. I don't see that
enabling -fwrapv makes such proofs easier or harder.


If you want to prove that a program doesn't cause undefined behavior,
it is very helpful signed integer overflow to be defined, even if
it's just implementation defined. That would be a huge selling-point
for GCC.


I don't see this


   -Geert




Reload Pass

2006-12-20 Thread Rajkishore Barik
Hi All,

Does anyone know of any document describing (in detail) the reload phase
of GCC?

I am planning to write a linear-scan reload for GCC (one that does not
take reg_renumber but takes instruction-specific register allocation and
move information). Can anyone point me to some existing code/literature
for this in GCC?


regards,
Raj