Mainline not slushy

2006-01-26 Thread Mark Mitchell
There was some discussion over the weekend about putting mainline into a
slush mode to deal with recent breakage.  I never formally instituted
the slush, but there seems to have been some self-healing activity, and a lot
of the problems are gone.

As was pointed out earlier, having multiple major projects merged at
once is asking for trouble.  Please try to allow a day or so for one
major change to prove out before committing the next.

In any event, I don't think there's any reason to institute a slush at
this point; we'll just proceed in Stage 2, with the earlier exception
for previously registered Stage 1 projects that have not yet been
committed for whatever reason.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Porting problem from GCC-4.0 to GCC 4.1

2006-01-26 Thread Richard Sandiford
"Shrirang Khishti" <[EMAIL PROTECTED]> writes:
> I have ported GCC 4.0 to a new target.  Initially I started porting
> with GCC 3.4 and then moved to GCC 4.0 without any problems.  Now I
> want to port the same code to GCC 4.1.  There are some structural
> differences between the GCC 4.0 and GCC 4.1 back ends, especially the
> addition of the .opt file, so I have removed all the macros related to
> TARGET_SWITCHES and made the appropriate entries in the .opt file.  With
> this and some other changes I built cc1.  I have also updated the
> TARGET_DEFAULT macro in the target.h file.  The problem I found is that
> if I don't specify any target-specific option, target_flags stays zero.
> It is not set as expected from TARGET_DEFAULT, and so it gives me an ICE.
> If I explicitly give a target-specific option that is on by default, then
> I don't have any problems.
> Is anything missing on my side?
> Any help will be appreciated.

It sounds like you need to use TARGET_DEFAULT_TARGET_FLAGS.
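
For reference, a minimal sketch of how a 4.1-era port typically sets this
default (the MASK_FOO / MASK_BAR names below are hypothetical placeholders
for whatever masks your .opt file defines, not real GCC macros):

  /* In the port's <cpu>.c, before the targetm initializer: start
     target_flags from these bits instead of 0 whenever no -m option
     is given on the command line.  */
  #undef TARGET_DEFAULT_TARGET_FLAGS
  #define TARGET_DEFAULT_TARGET_FLAGS (MASK_FOO | MASK_BAR)

  struct gcc_target targetm = TARGET_INITIALIZER;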

Richard


Which program can I use to see VCG dumping from GCC

2006-01-26 Thread Jie Zhang
Hi,

In this page, it says that
"If you view these files using a suitable program, you'll get output
similar to the following." However, when I use xvcg to view
test.c.01.sibling.vcg, xvcg errors:

Wait.aLine 5: attribute T_Co_hidden currently not implemented !
...aLine 406: attribute T_Co_hidden currently not implemented !
.eSegmentation fault

I'm using the latest Ubuntu Dapper.  The gcc version is "gcc (GCC) 4.0.3
20060115 (prerelease) (Ubuntu 4.0.2-7ubuntu1)". test.c is just the
example used in the above HTML page. Which other program should I use
to view the VCG dump?

Thanks,
Jie


Re: RTL alias analysis

2006-01-26 Thread Richard Guenther
On 1/25/06, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
> On Jan 22, 2006, Richard Guenther <[EMAIL PROTECTED]> wrote:
>
> > On 1/22/06, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
> >> I don't think it is any different.  GCC's exception for unions only
> >> applies if the object is accessed using the union type.  So they are
> >> indeed equivalent.  The scary thing is that I don't think they
> >> actually violate the ISO C strict aliasing rules, but they might still
> >> be mis-optimized by GCC, assuming I understand correctly what GCC
> >> does.
>
> > ISO C says that the following violates aliasing rules:
>
> > int foo(float f) { union { int i; float f; } u; u.f = f; return u.i; }
>
> Yes, but this was not what the example I quoted from Richard Sandiford
> was about.  The example only accessed a memory region using its
> effective type:
>
> > int ii; double dd; void foo (int *ip, double *dp) {
> >   *ip = 15; ii = *ip; *dp = 1.5; dd = *dp; }
> > void test (void) { union { int i; double d; } u;
> >   foo (&u.i, &u.d); }
>
> So it is perfectly valid, but if GCC reorders the read from *ip past
> the store to *dp, it turns the valid program into one that misbehaves.

*ip = 15; ii = *ip; *dp = 1.5; dd = *dp;
Here^^^
you are accessing memory of type integer as type double.  And gcc will
happily reorder the read from *ip with the store to *dp based on TBAA
unless it inlines the function and applies the "special" gcc rules about unions.
So this example is invalid, too.
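
(For contrast, a minimal sketch of the access pattern that the "special"
GCC union rules are documented to allow even with -fstrict-aliasing: the
punning read goes through the union object itself, so both accesses
visibly use the same storage.)

  /* Type-punning through the union type, along the lines of the case
     the GCC manual describes for -fstrict-aliasing.  */
  int punned_bits (float f)
  {
    union { int i; float f; } u;
    u.f = f;        /* store through one member ...                    */
    return u.i;     /* ... read back through the other, via the union. */
  }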

> Unless the undefined behavior is taking the address of both u.i and
> u.d, when only one of them is well-defined at that point.  I don't
> think taking the address of a member of a union counts as accessing
> the stored value of the object, since if the member is volatile, a
> read from memory is not expected.

Taking the address does not count as access, but dereferencing it may
violate aliasing rules.

Richard.


Re: How to reverse patch reversal in cfgcleanup.c (Was: RFA: re-instate struct_equiv code)

2006-01-26 Thread Bernd Schmidt

Joern RENNECKE wrote:
> For easier reviewing, I have attached the diff to the cfgcleanup version
> previous to the patch backout.


Thanks.  Let me see if I understood the problem - please correct me if I 
describe anything incorrectly.


The previous cross jumping code didn't care about register liveness, 
since it just checked for identical instruction streams.  The new, more 
clever code, requires that regsets are identical at the end of the 
blocks we're trying to match.  In addition, cross-jumping can modify 
blocks, requiring us to update life information (by calling a global 
update_life_info in struct_equiv_init), which is the really expensive 
part that caused the slowdown (how often did we end up updating life info?).


The new patch prevents multiple updates by introducing 
STRUCT_EQUIV_SUSPEND_UPDATE.  However, I don't see how this is safe 
given that cross jumping will modify basic blocks and change the set of 
registers live at their ends.


Is there a way to keep life info accurate when doing the cross jump (so 
we don't set any dirty flags)?  Or, possibly, change the algorithm so 
that it visits blocks in a different order - dirtying more blocks before 
doing a global life update?


> I'm not sure what the best way to keep the svn history sane is.  When/if
> the patch is approved, should I first do an
> svn merge -r108792:108791, check that in, and then apply the patch with
> the actual new stuff?


Maybe

svn diff -r108792:108791 |patch -p0
patch 

Re: Which program can I use to see VCG dumping from GCC

2006-01-26 Thread Jie Zhang
On 1/26/06, Jie Zhang <[EMAIL PROTECTED]> wrote:
> Hi,
>
> In this page, it says that
> "If you view these files using a suitable program, you'll get output
> similar to the following." However, when I use xvcg to view
> test.c.01.sibling.vcg, xvcg errors:
>
> Wait.aLine 5: attribute T_Co_hidden currently not implemented !
> ...aLine 406: attribute T_Co_hidden currently not implemented !
> .eSegmentation fault
>
> I'm using the latest Ubuntu Dapper.  The gcc version is "gcc (GCC) 4.0.3
> 20060115 (prerelease) (Ubuntu 4.0.2-7ubuntu1)". test.c is just the
> example used in the above HTML page. Which other program should I use
> to view the VCG dump?
>
Oops! It seems to be a bug in Ubuntu's xvcg. I built one from the source
package and it works well.

Jie


Re: RTL alias analysis

2006-01-26 Thread Alexandre Oliva
On Jan 26, 2006, Richard Guenther <[EMAIL PROTECTED]> wrote:

>> So it is perfectly valid, but if GCC reorders the read from *ip past
>> the store to *dp, it turns the valid program into one that misbehaves.

> *ip = 15; ii = *ip; *dp = 1.5; dd = *dp;
> Here^^^
> you are accessing memory of type integer as type double.

Yes, but that's not what the Standard defines as undefined.  If the
stored value is accessed with a type that differs from the effective
type, then you lose.  But here it's not accessing the stored type;
quite the contrary, it's overwriting it, giving the underlying memory
a new effective type.  Please read 6.5/6-7.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: RTL alias analysis

2006-01-26 Thread Michael Veksler
So, is union a very useful feature in ISO C without gcc's extension?
It seems that the only legal use of a union is to use the same type
through the whole life of the object.

Here is the rationale:

Quoting Richard Guenther <[EMAIL PROTECTED]>:
> On 1/25/06, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
> > On Jan 22, 2006, Richard Guenther <[EMAIL PROTECTED]> wrote:
> >
[...]
> >
> > > int ii; double dd; void foo (int *ip, double *dp) {
> > >   *ip = 15; ii = *ip; *dp = 1.5; dd = *dp; }
> > > void test (void) { union { int i; double d; } u;
> > >   foo (&u.i, &u.d); }
> >
> > So it is perfectly valid, but if GCC reorders the read from *ip past
> > the store to *dp, it turns the valid program into one that misbehaves.
> 
> *ip = 15; ii = *ip; *dp = 1.5; dd = *dp;
> Here^^^
> you are accessing memory of type integer as type double.  And gcc will
> happily reorder the read from *ip with the store to *dp based on TBAA
> unless it inlines the function and applies the "special" gcc rules about
> unions.
> So this example is invalid, too.

So in theory, if there is a union of two non-char types:
  union { T1 v1; T2 v2; } x;
it is illegal to access both x.v1 and x.v2 for the same variable x
anywhere in the whole program.  If there exists a run (a whole-program
run) in which x.v1 is written (even at the program's start), and later
x.v2 is written and read (even at the program's end), then the compiler
may reorder the writes at will and get different results.

Even bison/yacc are not safe, since they use an array of YYSTYPE (YYSTYPE
is normally a union) and assign different values to its entries.  The same
entry in the stack may be accessed through different members depending on
the currently active rules.  I don't think the standards committee
intended that, or did they?
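
(A hypothetical sketch of the pattern in question; the names here are
illustrative, not bison's actual generated code.  One union-typed stack
slot is written through different members by different reductions.)

  typedef union { int ival; double dval; } YYSTYPE;

  static YYSTYPE value_stack[64];

  /* Two different "rules" storing into the same slot at different times;
     under the reading above, each store gives the slot's storage a new
     effective type.  */
  void reduce_to_int (int n, int i)       { value_stack[n].ival = i; }
  void reduce_to_double (int n, double d) { value_stack[n].dval = d; }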

GCC seems to do better WRT unions than ISO C.  Even that is not perfect,
because it may break things like bison in non-obvious ways:
 A bison rule may pass a pointer or a reference to a helper function
 (and lose the link to the original union).
 If functions are inlined and bison's loop is unrolled or
 software-pipelined (SMS), it is possible to reorder memory accesses
 from two perfectly valid rules in an invalid way.

Is it the responsibility of bison to make sure this does not happen? How?
ISO C does not seem to provide such capabilities.
Does this mean that bison's output cannot be made valid (without using
some new gcc extensions)?

--
  Michael


Re: Attribute data structure rewrite?

2006-01-26 Thread Joseph S. Myers
On Thu, 26 Jan 2006, Giovanni Bajo wrote:

> Geoffrey Keating <[EMAIL PROTECTED]> wrote:
> 
> >> re this mail:
> >> http://gcc.gnu.org/ml/gcc/2004-09/msg01357.html
> >> 
> >> do you still have the code around? Are you still willing to
> >> contribute it?
> >> Maybe you could upload it to a branch just to have it around in
> >> case someone is
> >> willing to update/finish it.
> > 
> > It's on the stree-branch, I think.  Yes, I'm still willing to
> > contribute it and would be very happy to see someone else update &
> > commit it.
> 
> svn log --stop-on-copy svn://gcc.gnu.org/svn/gcc/branches/stree-branch
> 
> shows me only stree-related commits, but not anything about attributes.

It's actually on static-tree-branch.  I included as a caveat in my C 
parser 4.1 project submission that

  whichever of this and the attribute changes on static-tree-branch goes 
  in first will necessitate (straightforward) changes in the other.

in the expectation that this branch would also be submitted for 4.1, but 
in the event it wasn't.

I think these changes should be suitable to go in during Stage 2 for 4.2 
if someone wishes to update them for current mainline.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: How to reverse patch reversal in cfgcleanup.c (Was: RFA: re-instate struct_equiv code)

2006-01-26 Thread Joern RENNECKE

Bernd Schmidt wrote:

> Thanks.  Let me see if I understood the problem - please correct me
> if I describe anything incorrectly.
>
> The previous cross jumping code didn't care about register liveness,
> since it just checked for identical instruction streams.  The new,
> more clever code, requires that regsets are identical at the end of
> the blocks we're trying to match.  In addition, cross-jumping can
> modify blocks, requiring us to update life information (by calling a
> global update_life_info in struct_equiv_init), which is the really
> expensive part that caused the slowdown (how often did we end up
> updating life info?).


The code has always required that the set of successor blocks is
identical, which implies that the regsets of live registers at the end
of the blocks must be identical too.  The old code didn't require
up-to-date liveness information.  The new code does.  As a sanity check,
it verifies that the regsets are equal.
Because the new code as of December actually updated life information
incorrectly, the global updates that were done also had quite a lot of
work to do (and didn't really do it right, because of the presence of
fake edges).


> The new patch prevents multiple updates by introducing
> STRUCT_EQUIV_SUSPEND_UPDATE.  However, I don't see how this is safe
> given that cross jumping will modify basic blocks and change the set
> of registers live at their ends.
>
> Is there a way to keep life info accurate when doing the cross jump
> (so we don't set any dirty flags)?  Or, possibly, change the algorithm
> so that it visits blocks in a different order - dirtying more blocks
> before doing a global life update?


As far as I can tell, the global_live_{start,end} information is now
kept up to date, although I have to admit that I don't really understand
what was meant by the comment

 /* We may have some registers visible trought the block.  */
that is placed before setting the dirty flag on the block.
The REG_DEAD / REG_UNUSED notes may get out of date, but AFAICS they are
not needed inside the loop that does the cross-jumping, and there is a
global update afterwards.




> > I'm not sure what the best way to keep the svn history sane is.
> > When/if the patch is approved, should I first do an
> > svn merge -r108792:108791, check that in, and then apply the patch
> > with the actual new stuff?
>
> Maybe
>
> svn diff -r108792:108791 |patch -p0
> patch 

That is the easy way, but my concern was with keeping the information 
from svn annotate as sane as possible.  Presumably a merge or svn copy 
could have preserved the old
history from before the patch back-out, but since another patch to 
cfgcleanup has gone in in the meantime, an svn copy is no longer a 
realistic option.


Re: How to reverse patch reversal in cfgcleanup.c (Was: RFA: re-instate struct_equiv code)

2006-01-26 Thread Daniel Berlin
On Thu, 2006-01-26 at 11:20 +0100, Bernd Schmidt wrote:
> Joern RENNECKE wrote:
> > For easier reviewing, I have attached the diff to the cfgcleanup version 
> > previous to the patch backout.
> 
> Thanks.  Let me see if I understood the problem - please correct me if I 
> describe anything incorrectly.
> 
> The previous cross jumping code didn't care about register liveness, 
> since it just checked for identical instruction streams.  The new, more 
> clever code, requires that regsets are identical at the end of the 
> blocks we're trying to match.  In addition, cross-jumping can modify 
> blocks, requiring us to update life information (by calling a global 
> update_life_info in struct_equiv_init), which is the really expensive 
> part that caused the slowdown (how often did we end up updating life info?).

We already update life info way too much, even without struct-equiv
(Before struct equiv, this was done because flow's dce relied on
register liveness, and we called it from everywhere under the sun,
usually deep within other functions nobody realized were doing it).  I
can tell you that *without* struct equiv, we update liveness at least 19
or 20 times per compile, on average. Some are much much higher, because
each cleanup_cfg iteration causes a life update.  This is a true time
sink in the compiler.

> 
> The new patch prevents multiple updates by introducing 
> STRUCT_EQUIV_SUSPEND_UPDATE.  However, I don't see how this is safe 
> given that cross jumping will modify basic blocks and change the set of 
> registers live at their ends.
> 
> Is there a way to keep life info accurate when doing the cross jump (so 
> we don't set any dirty flags)?  

Hard, but probably possible.  It's also probably expensive.

> Or, possibly, change the algorithm so 
> that it visits blocks in a different order - dirtying more blocks before 
> doing a global life update?

If Joern has found a way to avoid updating liveness until he is done
with the struct-equiv stuff (even if this means avoiding checking
certain blocks because the liveness may be out of date), IMHO that is
the best solution.

No algorithm that has to iterate, *and update global liveness on each
iteration*, will ever really be fast enough that you want it on all the
time.  IMHO.

We already do it with cleanup_cfg (where we do it because we run flow's
dce in the middle of cfg cleanups on each iteration), and it is one of
the slowest parts of the backend we've got right now.

> > I'm not sure what the best way to keep the svn history sane is.  When/if 
> > the patch is approved, should I first do an
> > svn merge -r108792:108791, check that in, and then apply the patch with 
> > the actual new stuff?
> 
> Maybe
> 
> svn diff -r108792:108791 |patch -p0
> patch  svn commit
> 

change the first command to svn merge -r108792:108791 and you've got it
right (the difference being that merge in reverse will properly
delete/readd files, and patch -p0 will not)
> 
> Bernd



Re: RTL alias analysis

2006-01-26 Thread Gabriel Dos Reis
Michael Veksler <[EMAIL PROTECTED]> writes:

| So, is union is a very useful feature in ISO C, without
| gcc's extension? It seems that the only legal use of union
| is to use the same type through the whole life of the object.
| 
| Here is the rationale:
| 
| Quoting Richard Guenther <[EMAIL PROTECTED]>:
| > On 1/25/06, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
| > > On Jan 22, 2006, Richard Guenther <[EMAIL PROTECTED]> wrote:
| > >
| [...]
| > >
| > > > int ii; double dd; void foo (int *ip, double *dp) {
| > > >   *ip = 15; ii = *ip; *dp = 1.5; dd = *dp; }
| > > > void test (void) { union { int i; double d; } u;
| > > >   foo (&u.i, &u.d); }
| > >
| > > So it is perfectly valid, but if GCC reorders the read from *ip past
| > > the store to *dp, it turns the valid program into one that misbehaves.
| > 
| > *ip = 15; ii = *ip; *dp = 1.5; dd = *dp;
| > Here^^^
| > you are accessing memory of type integer as type double.  And gcc will
| > happily reorder the read from *ip with the store to *dp based on TBAA
| > unless it inlines the function and applies the "special" gcc rules about
| > unions.
| > So this example is invalid, too.
| 
| So in theory, if there is a union of two non-char types:
|   union { T1 v1; T2 v2; } x;
| it is illegal to access both x.v1 and x.v2 for the same variable x
| anywhere in the whole program.

I don't see anything in the ISO C standard that implies that.

This 

x.v1 = 384;
x.v2 = 94.08;
int v = x.v2;
x.v1 = v;

is a valid fragment.
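
(For concreteness, a self-contained version of the fragment, assuming as
in the earlier examples that T1 is int and T2 is double; the function
name is just illustrative.)

  union { int v1; double v2; } x;

  void fragment (void)
  {
    x.v1 = 384;     /* the effective type of x's storage becomes int  */
    x.v2 = 94.08;   /* ... and now double                             */
    int v = x.v2;   /* reads the member most recently written: fine   */
    x.v1 = v;       /* overwrites again; nothing is read with the
                       wrong type anywhere in the sequence.           */
  }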

-- Gaby


Re: Attribute data structure rewrite?

2006-01-26 Thread Geoffrey Keating


On 25/01/2006, at 11:52 PM, Giovanni Bajo wrote:


svn log --stop-on-copy svn://gcc.gnu.org/svn/gcc/branches/stree-branch


I got my branches confused; it's on static-tree-branch.  Revision 88377.



Re: RTL alias analysis

2006-01-26 Thread Alexandre Oliva
On Jan 26, 2006, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:

> I don't see anything in the ISO C standard that implies that.

> This 

> x.v1 = 384;
> x.v2 = 94.08;
> int v = x.v2;
> x.v1 = v;

> is valid fragment.

But can you see anything in it that makes it undefined?

Failing that, regular assignment and access rules apply, and so it is
valid.

-- 
Alexandre Oliva http://www.lsd.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: Problem with gfortran or did I mess up GMP installation?

2006-01-26 Thread Vincent Lefevre
Hi,

On 2006-01-25 13:10:50 -0600, Aleksandar Milivojevic wrote:
> Compile GMP 4.1.4:
> 
>  $ ../configure ABI=32 --prefix=/gcc-test --enable-mpfr

You shouldn't use the MPFR version distributed with GMP; it is very
old and buggy. It is much better to compile GMP without MPFR support
then compile MPFR 2.2.0 separately: http://www.mpfr.org/mpfr-current/

-- 
Vincent Lefèvre <[EMAIL PROTECTED]> - Web: 
100% accessible validated (X)HTML - Blog: 
Work: CR INRIA - computer arithmetic / SPACES project at LORIA


Re: RTL alias analysis

2006-01-26 Thread Gabriel Dos Reis
Alexandre Oliva <[EMAIL PROTECTED]> writes:

| On Jan 26, 2006, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:
| 
| > I don't see anything in the ISO C standard that implies that.
| 
| > This 
| 
| > x.v1 = 384;
| > x.v2 = 94.08;
| > int v = x.v2;
| > x.v1 = v;
| 
| > is valid fragment.
| 
| But can you see anything in it that makes it undefined?

Excessive snippage is a perilous exercise.  In this specific case,
your leaving out the context misled you.  I was specifically replying
to Michael Veksler's assertion:

  # So, is union is a very useful feature in ISO C, without
  # gcc's extension? It seems that the only legal use of union
  # is to use the same type through the whole life of the object.
  # 
  # Here is the rationale:
  # 
  # Quoting Richard Guenther <[EMAIL PROTECTED]>:
  # > On 1/25/06, Alexandre Oliva <[EMAIL PROTECTED]> wrote:
  # > > On Jan 22, 2006, Richard Guenther <[EMAIL PROTECTED]> wrote:
  # > >
  # [...]
  # > >
  # > > > int ii; double dd; void foo (int *ip, double *dp) {
  # > > >   *ip = 15; ii = *ip; *dp = 1.5; dd = *dp; }
  # > > > void test (void) { union { int i; double d; } u;
  # > > >   foo (&u.i, &u.d); }
  # > >
  # > > So it is perfectly valid, but if GCC reorders the read from *ip past
  # > > the store to *dp, it turns the valid program into one that misbehaves.
  # > 
  # > *ip = 15; ii = *ip; *dp = 1.5; dd = *dp;
  # > Here^^^
  # > you are accessing memory of type integer as type double.  And gcc will
  # > happily reorder the read from *ip with the store to *dp based on TBAA
  # > unless it inlines the function and applies the "special" gcc rules about
  # > unions.
  # > So this example is invalid, too.
  # 
  # So in theory, if there is a union of two non-char types:
  #   union { T1 v1; T2 v2; } x;
  # it is illegal to access both x.v1 and x.v2 for the same variable x
  # anywhere in the whole program.

which I happen not to find any support for in the C standard.

| Failing that, regular assignment and access rules apply, and so it is
| valid.

I could not agree more; and in fact, that was my point.

-- Gaby


Re: x86-64 linux, gomp ICE in trunk

2006-01-26 Thread tbp
On 1/25/06, Diego Novillo <[EMAIL PROTECTED]> wrote:
> You'll need to do what this message suggests.  http://gcc.gnu.org/bugzilla/
Sorry for the lag.

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25983


Re: Problem with gfortran or did I mess up GMP installation?

2006-01-26 Thread Aleksandar Milivojevic

Quoting Vincent Lefevre <[EMAIL PROTECTED]>:


Hi,

On 2006-01-25 13:10:50 -0600, Aleksandar Milivojevic wrote:

Compile GMP 4.1.4:

 $ ../configure ABI=32 --prefix=/gcc-test --enable-mpfr


You shouldn't use the MPFR version distributed with GMP; it is very
old and buggy. It is much better to compile GMP without MPFR support
then compile MPFR 2.2.0 separately: http://www.mpfr.org/mpfr-current/


Probably a good idea.  While on the subject, but not directly related 
to gcc as such, is there a way to compile gdb to use already installed 
bfd and opcodes libraries (from binutils package) instead of insisting 
on its own copy?


gcc-4.0-20060126 is now available

2006-01-26 Thread gccadmin
Snapshot gcc-4.0-20060126 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.0-20060126/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.0 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_0-branch 
revision 110282

You'll find:

gcc-4.0-20060126.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.0-20060126.tar.bz2 C front end and core compiler

gcc-ada-4.0-20060126.tar.bz2  Ada front end and runtime

gcc-fortran-4.0-20060126.tar.bz2  Fortran front end and runtime

gcc-g++-4.0-20060126.tar.bz2  C++ front end and runtime

gcc-java-4.0-20060126.tar.bz2 Java front end and runtime

gcc-objc-4.0-20060126.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.0-20060126.tar.bz2  The GCC testsuite

Diffs from 4.0-20060119 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.0
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Declaration of a guard function for use on define_bypass

2006-01-26 Thread Peter Steinmetz

I'm using store_data_bypass_p from recog.c as the guard for a define_bypass
within a machine description.  I'm seeing the following warning/error that
I'd like to clean up.

cc1: warnings being treated as errors
insn-automata.c: In function 'internal_insn_latency':
insn-automata.c:53265: warning: implicit declaration of function
'store_data_bypass_p'

Anybody know what needs to be done to get recog.h included into the code
created by genautomata (either directly, or indirectly) to eliminate the
implicit declaration?

Thanks!

Pete



Re: Declaration of a guard function for use on define_bypass

2006-01-26 Thread Andreas Tobler

Peter Steinmetz wrote:

I'm using store_data_bypass_p from recog.c as the guard for a define_bypass
within a machine description.  I'm seeing the following warning/error that
I'd like to clean up.

cc1: warnings being treated as errors
insn-automata.c: In function 'internal_insn_latency':
insn-automata.c:53265: warning: implicit declaration of function
'store_data_bypass_p'

Anybody know what needs to be done to get recog.h included into the code
created by genautomata (either directly, or indirectly) to eliminate the
implicit declaration?


Should be fixed.

http://gcc.gnu.org/viewcvs?root=gcc&view=rev&rev=110274

Andreas



Corrupted Profile Information

2006-01-26 Thread djp

 I'm working on a series of profile-driven optimizations with gcc-3.4.3.

 I need profile information available for the PRE phase (implemented in
gcc with gcse.c and lcm.c).

 However, gcc-3.4.3 does not provide profile information that early in the
compile, so I moved the call to `rest_of_handle_branch_prob()' (which in
turn calls branch_prob(), which annotates the CFG with profile
information from the gcda files) to above `rest_of_handle_gcse()'.

 For the most part, I seem to get good information. However, on certain
SPEC CPU 2000 benchmarks, the compiler stops with `counter mismatch'
errors and `counter contains illegal value' (typically negative value)
errors.

 I really need correct profile information before PRE. By moving
rest_of_handle_branch_prob() just before rest_of_handle_gcse(), have I
violated some critical assumption that is causing the profile
information to be occasionally corrupted?

David P.


Reconsidering gcjx

2006-01-26 Thread Tom Tromey
Now that the GPL v3 looks as though it may be EPL-compatible, the time
has come to reconsider using the Eclipse java compiler ("ecj") as our
primary gcj front end.  This has both political and technical
ramifications; I discuss them below.

Steering committee members, please read through if you would.  I think
this requires some resolution at the SC/FSF level.

First, a brief note on gcjx.  I had intended gcjx to serve not only as
a cleanly written replacement for the current gcj, but also as a model
for how GCC front ends should be written in the future; in particular
I think writing it as a library and separating out the tree-generating
code from the bulk of the compiler remain good ideas.  I enjoyed, and
continue to enjoy, the writing of gcjx.  However, in this case I think
that pleasure must give way to the greater needs of efficiency and
cross-community cooperation.


Motivation.

The motivation for this investigation is simple: sharing code is
preferable to working in isolation.  In particular this change would
let us offload much of the front end maintenance onto a different
group.

Ecj has a good front end (much better than the current gcj) and decent
bytecode generation.  It is fully 1.5-compliant and, apparently, is
tested against the TCK by the upstream maintainers (we gcj developers
don't have TCK access).  It also has some improvements for 1.6 (stack
maps).  Upstream is very active.

gcjx by comparison is unfinished and really has just a single
full-time developer, me.


Technical approach.

Historically we've wanted to have a 'native' java-source-code-reading
compiler, that is, one which parses java sources and converts them
directly to trees.  From what I can remember this was based on 3
things:

* In the past the compiler handled loops built with LOOP_EXPR better
  than it handled loops built "by hand" out of GOTO_EXPRs.  My
  understanding is that this has changed since tree-ssa.  The issue
  here was that we made no attempt to rebuild a LOOP_EXPR from java
  bytecode.

* The .java front end could do a "constant array" optimization.  This
  optimization has not worked for quite some time (there's a PR).  In
  any case we could implement this for bytecode if it matters.

* The .java front end could more efficiently handle class literals.
  With the new 1.5 'ldc' bytecode extension, this is no longer a
  problem.

In other words, as far as I can remember, our old reasons for wanting
this are obsolete.

I think our technical approach should be to have ecj emit class files,
which would then be compiled by jc1.  In particular I think we could
change ecj to emit a single .jar file.  This has a few benefits: it
would give -save-temps meaning for gcj, it would let us more easily
drop ecj into the existing specs mechanism, and it would require very
few changes to the upstream compiler.

An alternative approach would be to directly link ecj to the gcc back
end.  However, this looks like significantly more work, requiring much
more hacking on the internals of the upstream compiler.  I suspect
that this won't be worth the effort.

In my preferred approach we would simply delete a portion of the
existing gcj and turn jc1 into a purely bytecode-based compiler.  Then
we would proceed to augment it with all the bits needed for proper 1.5
support.

ecj is written in java.  This will complicate the bootstrap process.
However, the situation will not be quite as severe as the Ada
situation, in that it ought to be possible to bootstrap gcj using any
java runtime, including mini ones such as JamVM -- at least, assuming
that the suggested implementation route is taken.


Politics.

I don't know whether the FSF or the GCC SC would let us import ecj,
even assuming it is actually GPL compatible.  SC members, please
discuss.

We don't know how upstream would react.  I think this is a fairly
minor risk.

It is unclear to me whether we must even rely on GPL v3 if we went
with the separate-ecj route.  Any comments here?  In the
exec-via-specs approach we're invoking ecj as a separate executable,
much the same way we exec 'as' or 'ld'.  Comments on this from
license-oriented folks would be appreciated.


Summary.

I think this would be the most efficient way to achieve 1.5 language
compatibility for gcj, and it would also make future language changes
less expensive.  Given the scope of the entire gcj project, especially
when the scarcity of resources devoted to it is taken into account,
this is significant enough to warrant the change.

Tom


Re: Reconsidering gcjx

2006-01-26 Thread Per Bothner

> Technical approach.
>
> Historically we've wanted to have a 'native' java-source-code-reading
> compiler, that is, one which parses java sources and converts them
> directly to trees.  From what I can remember this was based on 3
> things:


A couple of other factors:

* Compile time.  It is at least potentially faster to compile directly
  to trees.  However, this is negated in many cases since you want to
  generate both bytecode and native code.  (This includes libgcj.)
  So generating bytecode first and then generating native code from
  the bytecode is actually faster.

* Debugging.  Historically Java debugging information is pretty limited.
  Even with the latest specifications there is (for example) no support
  for column numbers.  However, the classfile format is extensible, and
  so if needed we can define extra "attribute" sections.

* The .class file format is quite inefficient.  For example there is
  no sharing of "symbols" between classes, so there is a lot of
  duplication.  However, this is a problem we share with everybody
  else, and it could be solved at the bytecode level, co-operating
  with other parties, ideally as a Java Specification Request.


> An alternative approach would be to directly link ecj to the gcc back
> end.  However, this looks like significantly more work, requiring much
> more hacking on the internals of the upstream compiler.  I suspect
> that this won't be worth the effort.


I think you're right.  It could be a project for somebody to tackle
later.


> ecj is written in java.  This will complicate the bootstrap process.
> However, the situation will not be quite as severe as the Ada
> situation, in that it ought to be possible to bootstrap gcj using any
> java runtime, including mini ones such as JamVM -- at least, assuming
> that the suggested implementation route is taken.


I don't think a "mini java runtime" would be useful.  We could offer two
bootstrap solutions:
(1) An existing (installed) Java run-time, which would be an older
version of gcj.
(2) A bytecode version of ecj.  This is only useful if we also make
available a bytecode version of libgcj, I think.
--
--Per Bothner
[EMAIL PROTECTED]   http://per.bothner.com/


Re: Problem with gfortran or did I mess up GMP installation?

2006-01-26 Thread Aleksandar Milivojevic

Vincent Lefevre wrote:

You shouldn't use the MPFR version distributed with GMP; it is very
old and buggy. It is much better to compile GMP without MPFR support
then compile MPFR 2.2.0 separately: http://www.mpfr.org/mpfr-current/


Heh...  If MPFR 2.2.0 (fully patched) is configured with the
--enable-thread-safe option, GCC's configure script complains that MPFR
is non-functional.  Looking at config.log (which I already managed to
remove, sorry), the linker couldn't resolve some symbol with "__" and
"tls" in its name (I don't remember the exact name) from libmpfr.so.
Probably some library is missing.  It seems that when MPFR is compiled
without that option, GCC compiles (well, it's compiling as I type this,
but I guess it should be OK).


Re: Future possible stack based optimization

2006-01-26 Thread Frediano Ziglio
On Wed, 25/01/2006 at 22:29 +0100, Marcel Cox wrote:
> >   I saw that stack instructions on Intel platform are not used that
> > much. I think this is a pity cause stack operations are small (size
> > optimization) and usually fast (from Pentium two consecutive push/pop
> > are executed together -> speed optimization). Consider this small
> > piece of code
> 
> 
> whether push/pop instructions or mov instructions are faster depends on
> the type of processor used. GCC is well aware of this. If you specify
> the desired processor with -mtune then GCC will use whatever is best
> for that processor. For example if you optimize for old Pentium
> processors, use -mtune=pentium and you will see that the compiler uses
> push/pop instructions even when not using -Os

Marcel,
  I tried many options with several gcc versions but I can confirm that
gcc does not use push in the way I suggest.  Perhaps a smaller example
will help:

extern int foo1(int *a);

void foo2()
{
int x = 2;
foo1(&x);
}

should become something like

foo2:
# here is the optimization I suggested,
# allocation and set with a single instruction
pushl   $2
# I don't understand why gcc compiles
# "movl %esp, %eax; pushl %eax" here
pushl   %esp
call    foo1
# this can be subl $4, %esp or similar depending on
# options you suggested
popl    %ecx
ret

Is anybody working in this direction?

freddy77




Re: issue with references to weak symbols in PIEs

2006-01-26 Thread H. J. Lu
FYI, it is a linker bug:

http://sourceware.org/bugzilla/show_bug.cgi?id=2218

I posted a patch for it.


H.J.