How to control the offset for stack operation?

2007-04-16 Thread Mohamed Shafi

hello all,

Depending on the machine mode, the compiler automatically generates
the offset required for stack operations, i.e. for a machine whose
word size is 32, the offset for a char type is 1, for an int type the
offset is 2, and so on.

Is there a way to control this? I mean, say for long long the offset
is 4 if long long is mapped to TImode, and I want to generate the
offset such that it is 2.

Is there a way to do this in gcc?

Regards,
Shafi


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread François-Xavier Coudert

You want more bugs fixed, it would seem a better way would be to build
a better sense of community (Have bugfix-only days, etc) and encourage
it through good behavior, not through negative reinforcement.


I do agree with that in a general way, but I think there should also
be a real effort made by the various maintainers to make sure people
indeed fix the few PRs they created. Maintainers should be able to
say, "please think of fixing this PR before submitting a patch for
that feature". That doesn't introduce administrative overhead, because
maintainers should keep track of the various PRs and patches in their
area. I think it already works for some areas of the compiler, but
doesn't work well for the "most common" areas.

A few examples of that (maybe I'm always quoting the same examples,
but those are the ones I know that impact my own work on GCC):
 -- how can bootstrap stay broken (with default configure options) on
i386-linux for 3 weeks?
 -- how could i386-netbsd bootstrap be broken for months (PR30058),
and i386-mingw still be broken after 6 months (PR30589), when the
cause of failure is well known?

These are not rhetorical "how"s, or finger-pointing. I think these are
cases of failure we should analyze to understand what in our
development model allows them to happen.

FX


Re: How to control the offset for stack operation?

2007-04-16 Thread J.C. Pizarro

2007/4/16, Mohamed Shafi <[EMAIL PROTECTED]>:

hello all,

Depending on the machine mode the compiler will generate automatically
the offset required for the stack operation i.e for a machine with
word size is 32, for char type the offset is 1, for int type the
offset is 2 and so on..

Is there a way to control this ? i mean say for long long the offset
is 4 if long long is mapped to TI mode and i want the generate the
offset such that it is 2.

Is there a way to do this in gcc ?

Regards,
Shafi



For an x86 machine, the stack offset is always a multiple of 4 bytes.

long long is NOT 4 bytes; it is 8 bytes!

Sincerely J.C. Pizarro :)


[wwwdocs] PATCH for Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Gerald Pfeifer
Installed.

Index: index.html
===
RCS file: /cvs/gcc/wwwdocs/htdocs/index.html,v
retrieving revision 1.607
diff -u -3 -p -r1.607 index.html
--- index.html  23 Mar 2007 08:31:00 -  1.607
+++ index.html  16 Apr 2007 08:51:28 -
@@ -128,7 +128,7 @@ mission statement.
   GCC 4.2.0 (changes)
 
   Status: Stage 3;
-  <a href="http://gcc.gnu.org/ml/gcc/2007-03/msg00865.html">2007-03-22</a>
+  <a href="http://gcc.gnu.org/ml/gcc/2007-04/msg00509.html">2007-04-15</a>
   (regression fixes & docs only).
   
   

Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread J.C. Pizarro

2007/4/16, François-Xavier Coudert <[EMAIL PROTECTED]> wrote:

> You want more bugs fixed, it would seem a better way would be to build
> a better sense of community (Have bugfix-only days, etc) and encourage
> it through good behavior, not through negative reinforcement.

I do agree with that in a general way, but I think there should also
be a real effort done by the various maintainers to make sure people
indeed fix the few PRs they created. Maintainers should be able to
say, "please think of fixing this PR before submitting a patch for
that feature". That doesn't introduce administrative overhead, because
maintainers should keep track of the various PRs and patches of their
area. I think it works already for some areas of the compiler, but
doesn't work fine for the "most common" areas.

A few examples of that (maybe I'm always quoting the same examples,
but those are the ones I know that impact my own work on GCC):
  -- how can bootstrap stay broken (with default configure options) on
i386-linux for 3 weeks?
  -- how could i386-netbsd bootstrap be broken for months (PR30058),
and i386-mingw still be broken after 6 months (PR30589), when the
cause of failure is well known?

These are not rethorical "How", or finger-pointing. I think these are
cases of failure we should analyze to understand what in our
development model allows them to happen.

FX



The "mea culpa" is to permit for long time to modify "configure" instead of
"configure.ac" or "configure.in" that is used by "autoconf" and/or "automake".

Another "mea culpa" is don't update the autoconf/automake versions when
the GCC''s scripts are using very obsolete/deprecated
autoconf/automake versions.

Currently, "autoconf" is less used because of bad practices of GCC.

I propose to have the following:

* several versions of autoconf/automake in /opt that are depended from the
current GCC's scripts. And to set PATH to corresponding /opt/autoXXX/bin:$PATH.

* to do diff bettween configure and the configure generated by autoconf/automake
with configure.ac

* with these diffs, to do modifications to configure.ac

* to repeat it for verifying of the scripts with recent versions of
autoconf/automake

Sincerely J.C. Pizarro


Re: How to control the offset for stack operation?

2007-04-16 Thread Mohamed Shafi

On 4/16/07, J.C. Pizarro <[EMAIL PROTECTED]> wrote:

2007/4/16, Mohamed Shafi <[EMAIL PROTECTED]>:
> hello all,
>
> Depending on the machine mode the compiler will generate automatically
> the offset required for the stack operation i.e for a machine with
> word size is 32, for char type the offset is 1, for int type the
> offset is 2 and so on..
>
> Is there a way to control this ? i mean say for long long the offset
> is 4 if long long is mapped to TI mode and i want the generate the
> offset such that it is 2.
>
> Is there a way to do this in gcc ?
>
> Regards,
> Shafi
>

For a x86 machine, the stack's offset always is multiple of 4 bytes.

long long is NOT 4 bytes, is 8 bytes!


  I was not talking about the size of long long but about the offset, i.e.
4x32, required for the stack operation.
I want gcc to generate the code such that the offset is 2 (64
bytes) and not 4 (128 bytes).

Is there a way to do this?



Sincerely J.C. Pizarro :)



Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread François-Xavier Coudert

The "mea culpa" is to permit for long time to modify "configure" instead of
"configure.ac" or "configure.in" that is used by "autoconf" and/or "automake".

[...]


I'm sorry, but I don't understand at all what you propose, what your
proposal is supposed to fix or how that is related to the mail you're
answering to.

FX


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Andrew Pinski

On 4/16/07, J.C. Pizarro <[EMAIL PROTECTED]> wrote:

The "mea culpa" is to permit for long time to modify "configure" instead of
"configure.ac" or "configure.in" that is used by "autoconf" and/or "automake".

Another "mea culpa" is don't update the autoconf/automake versions when
the GCC''s scripts are using very obsolete/deprecated
autoconf/automake versions.


What world are you living in?  Do you even look at the source?
Even though http://gcc.gnu.org/install/prerequisites.html has not been
updated, the toplevel actually uses autoconf 2.59 already and has
since 2007-02-09.  And how can you say 2.59 is obsolete when 90-99% of
the distros ship with that version?  Plus automake 1.9.6 is actually
the latest version of 1.9.x automake.

libtool, on the other hand, is an older version, but that is in the
process of being fixed; don't you read the mailing lists?



Currently, "autoconf" is less used because of bad practices of GCC.


Huh? What do you mean by that?
I don't know anyone who touches just configure and doesn't use autoconf.
Yes, at one point we had an issue with the toplevel needing an old
version of autoconf, but that day has passed, two months ago now.

Also, usually what happened is that someone would regenerate the
toplevel configure with the incorrect version of autoconf and then
someone would notice that and just regenerate it.  Not a big issue.
The big issues are not with the configure scripts at all.  They have to
do with people sometimes abusing their power of maintainership, or at
least that is how I see it.

Configure scripts are not even related to what FX is talking about.
You should look into the bug reports before saying something about the
configure scripts.  One of the problems FX is talking about is the
fallout from the C99 extern inline patch, which, as I mentioned when
the patch was posted, would break targets left and right.  The other
problem FX is talking about is the recent fallout from enabling dfp
for x86-linux-gnu, which was obviously not tested for all x86-linux-gnu
targets anyway :).

-- Pinski


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread J.C. Pizarro

2007/4/16, François-Xavier Coudert <[EMAIL PROTECTED]> wrote:

> The "mea culpa" is to permit for long time to modify "configure" instead of
> "configure.ac" or "configure.in" that is used by "autoconf" and/or "automake".
>
> [...]

I'm sorry, but I don't understand at all what you propose, what your
proposal is supposed to fix or how that is related to the mail you're
answering to.

FX



The GCC 4.3 snapshot uses Autoconf 2.59 and Automake 1.9.6,
so why does "generated by ... aclocal 1.9.5" appear when it uses 1.9.6?

libdecnumber/aclocal.m4:# generated automatically by aclocal 1.9.5 -*-
Autoconf -*-

I say that the generated scripts must be updated, automatically and
recursively, before tarballing and distributing them, and the GCC site is
doing this task wrongly.

The correct task is:
1) to update the generated configure scripts of the tarball before
distributing it;
2) or to remove the non-updated configure scripts.

Sincerely J.C. Pizarro


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread François-Xavier Coudert

libdecnumber/aclocal.m4:# generated automatically by aclocal 1.9.5 -*-
Autoconf -*-


That's a problem with the last regeneration of this file. I'm CCing M.
Meissner, H. J. Lu and M. Cornea, since they appear to have last
changed this file, although there's no ChangeLog entry for it in their
commit.

PS: it appears that it was updated two days ago by bonzini, and
the new version has been generated with aclocal 1.9.6.


1) To update the generated configure scripts of the tarball before
than distributing it.


It could be done, but there's the risk that an automated process like
that might introduce problems. I'd be more in favour of a nightly
tester that checks the "Generated by" headers to see if anything has an
unexpected version number.


2) Or to remove the non-updated configure scripts.


That's an annoyance, because it would require the autotools to build
the GCC source, which is inconvenient.

FX


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread J.C. Pizarro

2007/4/16, Andrew Pinski <[EMAIL PROTECTED]> wrote:

On 4/16/07, J.C. Pizarro <[EMAIL PROTECTED]> wrote:
> The "mea culpa" is to permit for long time to modify "configure" instead of
> "configure.ac" or "configure.in" that is used by "autoconf" and/or "automake".
>
> Another "mea culpa" is don't update the autoconf/automake versions when
> the GCC''s scripts are using very obsolete/deprecated
> autoconf/automake versions.

What world are you living in?  Do you even look at the source?
Even though http://gcc.gnu.org/install/prerequisites.html has not been
updated, the toplevel actually uses autoconf 2.59 already and has
since 2007-02-09.  And how can you say 2.59 is obsolete when 90-99% of
the distros ship with that version?  Plus automake 1.9.6 is actually
the latest version of 1.9.x automake.


Since only 2007-02-09: that is the problem, too little time after such a drastic
modification. This drastic modification could have lost arguments or flags, or
changed the behaviour incorrectly between before and after.
Because of this, there has been no time for releasing or freezing after it.


libtool on the other hand is the older version but that is in the
progress of being fixed, don't you read the mailing lists?


> Currently, "autoconf" is less used because of bad practices of GCC.

Huh? What do you mean by that?
I don't know anyone who touches just configure and not use autoconf.
Yes at one point we had an issue with the toplevel needing an old
version of autoconf but that day has past for 2 months now.


For example, http://gcc.gnu.org/ml/gcc/2007-04/msg00525.html


...

-- Pinski



J.C. Pizarro :)


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread J.C. Pizarro

2007/4/16, François-Xavier Coudert <[EMAIL PROTECTED]> wrote:

> 1) To update the generated configure scripts of the tarball before
> than distributing it.

It could be done, but there's the risk that an automated process like
that might introduce problems. I'd be more in favour of a nightly
tester that check the "Generated by" headers to see if anything has an
unexpected version number.


# Hypothetical check (added here for illustration): regenerate configure
# from configure.ac and compare it with the shipped one.
autoconf -o configure.new configure.ac && cmp -s configure configure.new
if [ $? -eq 0 ]; then
  echo "OK. The configure script is up to date."
else
  echo "Move the old configure scripts XXX to non-updated_XXX"
fi

Is it complicated? I believe not.


> 2) Or to remove the non-updated configure scripts.

That's a annoyance, because it would require the autotools to build
the GCC source, which is inconvenient.

FX



So the GCC scripts use autotools, but the site doesn't run autotools itself,
because that is said to be inconvenient. What???

Do you use autotools or not? Yes or no? Or yes-and-no?

J.C. Pizarro


RE: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Dave Korn
On 16 April 2007 10:56, J.C. Pizarro wrote:


> The GCC scripts use autotools but the site don't use autotools because
> it says which is inconvenient. What???

Why don't you ever go and actually *find something out* about what
you're talking about before you spout nonsense all over the list?  This is not
a remedial class for people who can't be bothered to read the docs.

  Yes, gcc uses autoconf.  But the end-users who just want to compile gcc from
a tarball do not have to have autoconf installed, because we supply all the
generated files for them in the tarball.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: How to control the offset for stack operation?

2007-04-16 Thread J.C. Pizarro

2007/4/16, Mohamed Shafi <[EMAIL PROTECTED]> wrote:

> > Depending on the machine mode the compiler will generate automatically
> > the offset required for the stack operation i.e for a machine with
> > word size is 32, for char type the offset is 1, for int type the
> > offset is 2 and so on..

   I was not talking about the size of long long but the offset i.e
4x32, required for stack operation.
I want gcc to generate the code such that the offset is 2 (64
bytes)and not 4 (128 bytes)



Offset in bytes? Offset in 32-bit words?
Please define "offset"; it is confusing.

J.C. Pizarro


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread J.C. Pizarro

2007/4/16, Dave Korn <[EMAIL PROTECTED]> wrote:

On 16 April 2007 10:56, J.C. Pizarro wrote:


> The GCC scripts use autotools but the site don't use autotools because
> it says which is inconvenient. What???

Why don't you ever go and actually *find something out* about what
you're talking about before you spout nonsense all over the list?  This is not
a remedial class for people who can't be bothered to read the docs.

  Yes, gcc uses autoconf.  But the end-users who just want to compile gcc from
a tarball do not have to have autoconf installed, because we supply all the
generated files for them in the tarball.


cheers,
  DaveK
--
Can't think of a witty .sigline today




I follow:

the end-users who just want to compile gcc from a tarball do not
have to have autoconf installed, because we supply all the generated files
for them in the tarball. <- Well,

what is the matter if the generated files aren't updated?
Users will then report many broken situations, like bootstrap not
working, or worse.

J.C. Pizarro


generated files vs bootstrapping [Was: GCC 4.2.0 Status Report]

2007-04-16 Thread Andrew Pinski

On 4/16/07, J.C. Pizarro <[EMAIL PROTECTED]> wrote:

what is the matter if the generated files aren't updated?
The users will say many times broken situations like bootstrap doesn't
work or else.


99.9% of bootstrap failures are not related to generated files, full stop.
It sounds like you are mixing two different ideas together and thinking
they are the same problem, when in reality they are two different
issues.  I dare you to go through the bug reports and find out how
many bootstrap failures were due to a generated file being generated
with the wrong version of a program.  And usually when those kinds of
bootstrap problems happen, they are fixed in the next day or two, while
other bootstrap problems take longer to fix.  So you are complaining
about a minor issue in the whole scheme of things.   So minor that it
could be fixed by anyone who has write access in less than an hour
(note I am not going to do it right now because it is 3am and I really
should be asleep and I don't have the correct versions of the programs
on this computer right now).  All someone has to do is figure out what
the correct version is, as documented, and then use that across all the
files.  I would do it if I had time, but I really don't, and it is such a
minor issue that I wonder why I am even writing this email now.  Oh
right, because this has now become off-topic.

-- Pinski


RE: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Dave Korn
On 16 April 2007 11:17, J.C. Pizarro wrote:

> I follow,

  No, not very well.

> The end-users who just want to compile gcc from a tarball do not
> have to have autoconf installed, because we supply all the generated files
> for them in the tarball. <- Well,
> 
> what is the matter if the generated files aren't updated?

  This has never happened as far as I know.  Can you point to a single release
that was ever sent out with out-of-date generated files?

> The users will say many times broken situations like bootstrap doesn't
> work or else.

  I haven't seen that happening either.  Releases get tested before they are
released.  Major failures get spotted.  Occasionally, there might be a bug
that causes a problem building on one of the less-used (and hence
less-well-tested) platforms, but this is caused by an actual bug in the
configury, and not by the generated files being out of date w.r.t the source
files from which they are generated; regenerating them would only do the same
thing again.

  If you have a counter-example of where this has /actually/ happened, I would
be interested to see it.


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: How to control the offset for stack operation?

2007-04-16 Thread Mohamed Shafi

On 4/16/07, J.C. Pizarro <[EMAIL PROTECTED]> wrote:

2007/4/16, Mohamed Shafi <[EMAIL PROTECTED]> wrote:
> > > Depending on the machine mode the compiler will generate automatically
> > > the offset required for the stack operation i.e for a machine with
> > > word size is 32, for char type the offset is 1, for int type the
> > > offset is 2 and so on..
>
>I was not talking about the size of long long but the offset i.e
> 4x32, required for stack operation.
> I want gcc to generate the code such that the offset is 2 (64
> bytes)and not 4 (128 bytes)
>

Offset in bytes? Offset in 32-bit words?
Please, define offset? You confuse.


Offset in 32-bit words.



J.C. Pizarro



Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread J.C. Pizarro

2007/4/16, Dave Korn <[EMAIL PROTECTED]> wrote:

On 16 April 2007 11:17, J.C. Pizarro wrote:

> I follow,

  No, not very well.

> The end-users who just want to compile gcc from a tarball do not
> have to have autoconf installed, because we supply all the generated files
> for them in the tarball. <- Well,
>
> what is the matter if the generated files aren't updated?

  This has never happened as far as I know.  Can you point to a single release
that was ever sent out with out-of-date generated files?

> The users will say many times broken situations like bootstrap doesn't
> work or else.

  I haven't seen that happening either.  Releases get tested before they are
released.  Major failures get spotted.  Occasionally, there might be a bug
that causes a problem building on one of the less-used (and hence
less-well-tested) platforms, but this is caused by an actual bug in the
configury, and not by the generated files being out of date w.r.t the source
files from which they are generated; regenerating them would only do the same
thing again.

  If you have a counter-example of where this has /actually/ happened, I would
be interested to see it.


cheers,
  DaveK
--
Can't think of a witty .sigline today




$ ./configure 
...

checking for i686-pc-linux-gnu-ld...
/usr/lib/gcc/i486-slackware-linux/3.4.6/../../../../i486-slackware-linux/bin/ld
# <-- i don't like this
...

$ grep "\-ld" configure   appears COMPILER_LD_FOR_TARGET

$ gcc --print-prog-name=ld
/usr/lib/gcc/i486-slackware-linux/3.4.6/../../../../i486-slackware-linux/bin/ld

This absolute path had broke me many times little time ago.

J.C. Pizarro


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Richard Kenner
> Also, beyond that, I would strongly suspect that these PRs haven't been 
> fixed in large part because they're difficult to track down, and 
> possibly if we knew what commit had introduced them, we'd be a good bit 
> farther along in fixing them, even without having the help of whoever 
> introduced them.

That's my feeling as well.  If we knew which checkin caused a P1
regression, there would be considerable "peer pressure" for the person
who checked in that patch to fix it.  I'm not clear that any "rule"
would be stronger.  I think the point is that we no longer *know*
which checkin caused it in most cases or that it happened so long ago
that that information is no longer technically relevant.


EH references

2007-04-16 Thread Paulo J. Matos

Hello all,

Is there any reference (paper, guide, whatever) on how gcc handles
exceptions in intermediate code? Is it based on a known (published)
method? Is it intuitive and explained somewhere?

I've looked at the internal docs but it is not really explicit how it
works. I'm having a hard time understanding how RESX_PTR, FILTER_EXPR,
EH_FILTER_EXPR work together in gimple and how all of this is then
made to work in assembler.

References would be extremely valuable.

Cheers,

--
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


Difference in DWARF Info generated by GCC 3.4.6 and GCC 4.1.1

2007-04-16 Thread Rohit Arul Raj

Hello all,

I ran a sample program with gcc 3.4.6 and gcc 4.1.1 compiler. I need
some clarifications regarding the DWARFinfo generated by these 2
compilers.

Sample Program:

#include 

int fun(const char*, ...);

/* Variadic function */
int fun(const char *raj,...)
{
 return 9;
}

int main()
{
 fun("Hello world",3,2);
 return 0;

}

# Readelf O/P for 3.4.6 ##

<1>: Abbrev Number: 6 (DW_TAG_subprogram)
DW_AT_sibling : <10e> 
DW_AT_external: 1   
DW_AT_name: fun 
DW_AT_decl_file   : 7   
DW_AT_decl_line   : 7   
DW_AT_prototyped  : 1   
DW_AT_type: <4c>  
DW_AT_low_pc  : 0   
DW_AT_high_pc : 0x14
DW_AT_frame_base  : 1 byte block: 5e(DW_OP_reg14)

<2>: Abbrev Number: 7 (DW_TAG_formal_parameter)
DW_AT_name: raj 
DW_AT_decl_file   : 7   
DW_AT_decl_line   : 6   
DW_AT_type:   
DW_AT_location: 2 byte block: 91 4  (DW_OP_fbreg: 4)

# Readelf O/P for 4.1.1 ##

<1><103>: Abbrev Number: 6 (DW_TAG_subprogram)
DW_AT_sibling : <12e> 
DW_AT_external: 1   
DW_AT_name: fun 
DW_AT_decl_file   : 10  
DW_AT_decl_line   : 7   
DW_AT_prototyped  : 1   
DW_AT_type: <53>  
DW_AT_low_pc  : 0   
DW_AT_high_pc : 0x14
DW_AT_frame_base  : 1 byte block: 5f(DW_OP_reg15)

<2><11e>: Abbrev Number: 7 (DW_TAG_formal_parameter)
DW_AT_name: raj 
DW_AT_decl_file   : 10  
DW_AT_decl_line   : 6   
DW_AT_type:   
DW_AT_location: 2 byte block: 91 0  (DW_OP_fbreg: 0)

###

1. In the DIE for fun, with 3.4.6 the frame base is given in terms of
the Frame Pointer (DW_OP_reg14), whereas in 4.1.1 it is given in terms
of the Stack Pointer (DW_OP_reg15).

(For my backend, reg 14 is the Frame Pointer and reg 15 is the Stack Pointer.)

Is this the expected behavior?

2. For the variable const char *raj, the DIE for 3.4.6 has the
location given as (fbreg + 4 [offset]), whereas for 4.1.1 the
location is given as (fbreg + 0).

Is there any specific reason for this behavior in GCC 4.1.1?

Regards,
Rohit


[cygwin] Can't boostrap current gcc trunk with libjava: ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error: "_EXFUN" redefined

2007-04-16 Thread Christian Joensson

Windows XP Pro/SP2 cygwin Pentium M processor 2.13GHz system with packages:

binutils  20060817-1  2.17.50 20060817
bison     2.3-1       2.3
cygwin    1.5.24-2    (with Dave Korn's stdio.h patch in newlib cvs)
dejagnu   20021217-2  1.4.2.x
expect    20030128-1  5.26
gcc       3.4.4-3
gcc-ada   3.4.4-3
gcc-g++   3.4.4-3
gmp       4.2.1-1
make      3.81-1
mpfr      2.2.1-1
tcltk     20060202-1  8.4
w32api    3.8-1

LAST_UPDATED: Mon Apr 16 08:51:08 UTC 2007 (revision 123860)

configured by ../gcc/configure, generated by GNU Autoconf 2.59,
 with options \" '--disable-nls' '--without-included-gettext'
'--enable-version-specific-runtime-libs' '--without-x'
'--enable-libgcj' '--with-system-zlib' '--enable-threads=posix'
'--enable-languages=c,ada,c++,fortran,java,objc,obj-c++,treelang'\"


/usr/local/src/trunk/objdir/./gcc/xgcc -shared-libgcc
-B/usr/local/src/trunk/objdir/./gcc -nostdinc++
-L/usr/local/src/trunk/objdir/i686-pc-cygwin/libstdc++-v3/src
-L/usr/local/src/trunk/objdir/i686-pc-cygwin/libstdc++-v3/src/.libs
-B/usr/local/i686-pc-cygwin/bin/ -B/usr/local/i686-pc-cygwin/lib/
-isystem /usr/local/i686-pc-cygwin/include -isystem
/usr/local/i686-pc-cygwin/sys-include -DHAVE_CONFIG_H -I.
-I../../../gcc/libjava -I./include -I./gcj -I../../../gcc/libjava
-Iinclude -I../../../gcc/libjava/include
-I../../../gcc/libjava/classpath/include -Iclasspath/include
-I../../../gcc/libjava/classpath/native/fdlibm
-I../../../gcc/libjava/../boehm-gc/include -I../boehm-gc/include
-I../../../gcc/libjava/libltdl -I../../../gcc/libjava/libltdl
-I../../../gcc/libjava/.././libjava/../gcc
-I../../../gcc/libjava/../libffi/include -I../libffi/include -fno-rtti
-fnon-call-exceptions -fdollars-in-identifiers -Wswitch-enum
-D_FILE_OFFSET_BITS=64 -ffloat-store -fomit-frame-pointer -Wextra
-Wall -D_GNU_SOURCE -DPREFIX=\"/usr/local\"
-DTOOLEXECLIBDIR=\"/usr/local/lib/gcc/i686-pc-cygwin/4.3.0\"
-DJAVA_HOME=\"/usr/local\"
-DBOOT_CLASS_PATH=\"/usr/local/share/java/libgcj-4.3.0.jar\"
-DJAVA_EXT_DIRS=\"/usr/local/share/java/ext\"
-DGCJ_ENDORSED_DIRS=\"/usr/local/share/java/gcj-endorsed\"
-DGCJ_VERSIONED_LIBDIR=\"/usr/local/lib/gcj-4.3.0\"
-DPATH_SEPARATOR=\":\" -DECJ_JAR_FILE=\"\"
-DLIBGCJ_DEFAULT_DATABASE=\"/usr/local/lib/gcj-4.3.0/classmap.db\"
-DLIBGCJ_DEFAULT_DATABASE_PATH_TAIL=\"gcj-4.3.0/classmap.db\" -g -O2
-MT java/lang/natVMDouble.lo -MD -MP -MF
java/lang/.deps/natVMDouble.Tpo -c
../../../gcc/libjava/java/lang/natVMDouble.cc -o
java/lang/natVMDouble.o
In file included from ../../../gcc/libjava/classpath/native/fdlibm/fdlibm.h:29,
from ../../../gcc/libjava/java/lang/natVMDouble.cc:27:
../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error:
"_EXFUN" redefined
In file included from /usr/include/stdlib.h:10,
from ../../../gcc/libjava/java/lang/natVMDouble.cc:14:
/usr/include/_ansi.h:36:1: error: this is the location of the previous
definition
make[3]: *** [java/lang/natVMDouble.lo] Error 1
make[3]: Leaving directory `/usr/local/src/trunk/objdir/i686-pc-cygwin/libjava'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/usr/local/src/trunk/objdir/i686-pc-cygwin/libjava'
make[1]: *** [all-target-libjava] Error 2
make[1]: Leaving directory `/usr/local/src/trunk/objdir'
make: *** [all] Error 2

anyone else see this?

--
Cheers,

/ChJ


Duplicate assembler function names in cgraph

2007-04-16 Thread Paulo J. Matos

Hello all,

I'm doing in my IPA pass:
for(node = cgraph_nodes; node; node = node->next) {
   reg_cgraph_node(IDENTIFIER_POINTER(DECL_ASSEMBLER_NAME(node->decl)));
}

to get all the function names in the cgraph. I'm adding them to a list
and I'm assuming that two nodes do not have the same
DECL_ASSEMBLER_NAME but I'm wrong. In a C++ file I'm getting two
functions with name _ZN4listIiE6appendEPS0_, DECL_NAME = append.
Why is this? The code is at
http://pastebin.ca/442691

Is there a way to traverse the cgraph without going through the
same node twice? Or should I just ignore the node if the function name is
already registered?

Cheers,
--
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


Re: Questions/Comments regarding my SoC application

2007-04-16 Thread Paolo Bonzini



Hi!

Initially I meant to optimize GCC, that includes runtime and memory 
usage, of course.


Sure.  I meant that we have testcases that are good to test your work 
on.  Profile GCC running them and fix the hotspots: this may show 
quadratic algorithms, and the like.


For example, see the patch and thread at 
http://permalink.gmane.org/gmane.comp.gcc.patches/137697


I hope this clarifies my earlier message!

Paolo


Builtin functions?

2007-04-16 Thread Paulo J. Matos

Hello all,

I'm going through the bodies of all user-defined functions, treating
as user-defined any function for which:
DECL_BUILT_IN(node) == 0.

Problem is that for  a function (derived from a C++ file) whose output
from my pass is (output is self-explanatory, I think):
Registering cgraph node:
_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
[operator<<]... SUCCESSFUL
Declared on 
/home/pmatos/gsvt-bin/lib/gcc/x86_64-unknown-linux-gnu/4.1.1/../../../../include/c++/4.1.1/bits/ostream.tcc,
line 735
Decl Node Code is function_decl
Registering output void ...SUCCESSFUL
Arguments: __out : reference_type, __s : pointer_type

Well, this is definitely builtin but DECL_BUILT_IN == 0, which means
that when I do FOR_EACH_BB_FN, I'm getting a segmentation fault.

I wonder where my wrong assumption is. Any suggestions?

--
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


Re: Duplicate assembler function names in cgraph

2007-04-16 Thread Jan Hubicka
Hi,
> Hello all,
> 
> I'm doing in my IPA pass:
> for(node = cgraph_nodes; node; node = node->next) {
>reg_cgraph_node(IDENTIFIER_POINTER(DECL_ASSEMBLER_NAME(node->decl)));
> }
> 
> to get all the function names in the cgraph. I'm adding them to a list
> and I'm assuming that two nodes do not have the same
> DECL_ASSEMBLER_NAME but I'm wrong. In a C++ file I'm getting two
> functions with name _ZN4listIiE6appendEPS0_, DECL_NAME = append.
> Why is this? The code is at
> http://pastebin.ca/442691

The callgraph is currently trying to avoid use of DECL_ASSEMBLER_NAME; the
motivation is that for C++ the DECL_ASSEMBLER_NAMEs are very long and
expensive, and thus it is not a good idea to compute them for symbols not
output to the final assembly (DECL_ASSEMBLER_NAME triggers lazy construction
of the names).  So if you don't have a good reason for using the names,
you should not do it.

The cgraph relies on the frontend guaranteeing that there are no duplicate
FUNCTION_DECLs representing the same function (with the same assembler name);
that seems to be broken in your testcase.  Would it be possible to have a
small testcase reproducing the problem?

Honza
> 
> Is there a way to transverse the cgraph but never going through the
> same twice? Or should I just ignore the node if the function name is
> already registered?
> 
> Cheers,
> -- 
> Paulo Jorge Matos - pocm at soton.ac.uk
> http://www.personal.soton.ac.uk/pocm
> PhD Student @ ECS
> University of Southampton, UK


Re: Builtin functions?

2007-04-16 Thread Daniel Jacobowitz
On Mon, Apr 16, 2007 at 05:04:05PM +0100, Paulo J. Matos wrote:
> _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
> [operator<<]... SUCCESSFUL

> Well, this is definitely builtin but DECL_BUILT_IN == 0, which means
> that when I do FOR_EACH_BB_FN, I'm getting a segmentation fault.
> 
> I wonder where my wrong assumption is. Any suggestions?

What do you mean, it's built in?  It comes from a source file, so
almost by definition it isn't.

-- 
Daniel Jacobowitz
CodeSourcery


Re: Builtin functions?

2007-04-16 Thread Daniel Berlin

On 4/16/07, Paulo J. Matos <[EMAIL PROTECTED]> wrote:

Hello all,

I'm going through the bodies of all user-defined functions. I'm using
as user-defined function as one that:
DECL_BUILT_IN(node) == 0.




Problem is that for  a function (derived from a C++ file) whose output
from my pass is (output is self-explanatory, I think):
Registering cgraph node:
_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
[operator<<]... SUCCESSFUL
Declared on 
/home/pmatos/gsvt-bin/lib/gcc/x86_64-unknown-linux-gnu/4.1.1/../../../../include/c++/4.1.1/bits/ostream.tcc,
line 735
Decl Node Code is function_decl
Registering output void ...SUCCESSFUL
Arguments: __out : reference_type, __s : pointer_type

Well, this is definitely builtin but DECL_BUILT_IN == 0, which means
that when I do FOR_EACH_BB_FN, I'm getting a segmentation fault


First, it's not built in, because it's defined in a source file.
Builtin functions are those defined by the compiler.

Second, we should make FOR_EACH_BB_FN never crash on empty tree functions.
It seems really rude to do otherwise.
Just because we don't have a body for a function doesn't mean we
should crash.  Users shouldn't have to be checking random things like
DECL_SAVED_TREE to determine if FOR_EACH_BB_FN will work (this is not
to say that they should be able to pass any random crap to it, but it
should be detecting if the function has a body)


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Steven Bosscher

On 4/16/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:

29841  [4.2/4.3 regression] ICE with scheduling and __builtin_trap


Honza, PING!


31360  [4.2/4.3 Regression] rtl loop invariant is broken


Zdenek, PING!


The broader question of why there are so many (124) P3 or higher
regressions against 4.2.0 points to a more fundamental problem.


Quick bug breakdown:

46 c++ bugs:
   13 of these are assigned

33 missed-optimizations:
   7 of these are assigned

So that's 79 of 124 bugs, almost two thirds of all bugs.

Only 6 of the 124 bugs are reported against a compiler older than gcc
4.0, so most of these regressions are fairly recent.

Only 29 of the 124 bugs are assigned to a developer, but some bugs have
been assigned since forever to their assignee without any sign of progress
towards a fix, ever. So in reality, even fewer than 29 bugs have an
active assignee.  That's less than one fourth of all "serious
regressions" being taken seriously.

Then again, all things considered, it seems to me that having 33
missed optimizations as regressions only makes things look terribly
bad, while in reality the situation is really not so bad at all. The
usual discussion about bikeshed regressions vs. real significant
progress: even with 33 missed optimizations, GCC 4.2 still produces
measurably better scores on the usual benchmarks.

So maybe the fundamental problem is that we just have bugs that look
more serious than they really are. Certainly some of the missed
optimizations are pretty severe, but the majority of these reports are
just silly.



Despite the fact that virtually all of the bugs open against 4.2.0 are
also open against 4.1 or 4.3 -- or both! -- there seems to be little
interest in fixing them.


Interest in fixing regression typically peaks in the days after the
Release Manager posts a bug overview / release status. We haven't had
very many updates for this release... ;-)



Some have suggested that I try to solve this by closing GCC 4.3
development until 4.2.0 is done.  I've considered that, but I don't
think it's a good idea.  In practice, this whole software freedom thing
means that people can go off and do things on their own anyhow; people
who are more motivated to add features than fix bugs are likely just to
keep doing that, and wait for mainline to reopen.


I think that, as the Release Manager, you should just near-spam the
list to death with weekly updates, and keep pushing people to fix
bugs. I think most of the time, people don't fix their bugs simply
because they're too involved with the fun stuff to even think about
fixing bugs. That's how it works for me, at least.

I think the release manager should try to get people to do the hard
work of identifying the cause of bugs, which is usually not really
hard at all.  For example:

* Janis has more than once been asked to reghunt a regression, and
she's always been very helpful and quick to respond, in my experience.

* Very few people know how to use Janis' scripts, so to encourage
people to use them, the release manager could write a wiki page with a
HOWTO for these scripts (or ask someone to do it).  Regression hunting
should only be easier now, with SVN's atomic commits. But the easier
and more accessible you make it for people to use the available tools,
the harder it gets for people to justify ignoring their bugs to "the
rest of us".

* Maintainers of certain areas of the compiler may not be sufficiently
aware of some bug in their part of the compiler. For example, only one
of the three preprocessor bugs is assigned to a preprocessor
maintainer, even though in all three preprocessor bugs a maintainer is
in the CC. It's just that the bugs have been inactive for some time.
(And in this particular case of preprocessor bugs, it's not even been
established whether PR30805 is a bug at all, but it is marked as a P2
"regression" anyway)

In summary, I just strongly encourage a more active release manager...

As of course you understand, this is intended as constructive criticism.

Gr.
Steven


Re: How to control the offset for stack operation?

2007-04-16 Thread Ian Lance Taylor
"Mohamed Shafi" <[EMAIL PROTECTED]> writes:

> Depending on the machine mode the compiler will generate automatically
> the offset required for the stack operation i.e for a machine with
> word size is 32, for char type the offset is 1, for int type the
> offset is 2 and so on..
> 
> Is there a way to control this ? i mean say for long long the offset
> is 4 if long long is mapped to TI mode and i want the generate the
> offset such that it is 2.
> 
> Is there a way to do this in gcc ?

I assume you mean that when loading a 4-byte value the stack offset is
automatically multiplied by 4.  Don't think about types at this level;
all that matters is the size of the reference, which in gcc is
described as the mode.

Use a special code in the insn pattern (e.g., %L0), and adjust the
offset in the target specific PRINT_OPERAND function.
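
For concreteness, a minimal sketch of what such a handler might look like in
a port's print_operand-style function; the 'L' code, the divisor of 4, and
the assumption that the operand is a CONST_INT byte offset are illustrative,
not taken from any particular backend.

/* Sketch only: an output-modifier handler that prints a byte offset
   scaled down to 32-bit words.  Used from an insn template as %L0.  */
void
print_operand (FILE *file, rtx x, int code)
{
  switch (code)
    {
    case 'L':
      /* Assumption: X is a CONST_INT holding a byte offset.  */
      gcc_assert (GET_CODE (x) == CONST_INT);
      fprintf (file, HOST_WIDE_INT_PRINT_DEC, INTVAL (x) / 4);
      return;
    default:
      /* ... the port's usual operand printing goes here ...  */
      break;
    }
}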

Ian


Re: Builtin functions?

2007-04-16 Thread Paulo J. Matos

On 4/16/07, Daniel Jacobowitz <[EMAIL PROTECTED]> wrote:

On Mon, Apr 16, 2007 at 05:04:05PM +0100, Paulo J. Matos wrote:
> _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
> [operator<<]... SUCCESSFUL

> Well, this is definitely builtin but DECL_BUILT_IN == 0, which means
> that when I do FOR_EACH_BB_FN, I'm getting a segmentation fault.
>
> I wonder where my wrong assumption is. Any suggestions?

What do you mean, it's built in?  It comes from a source file, so
almost by definition it isn't.



Ok, sorry, I didn't know that. I thought "built in" in this context meant
anything not in the programmer's own source files. :)


--
Daniel Jacobowitz
CodeSourcery




--
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


Re: Builtin functions?

2007-04-16 Thread Paulo J. Matos

On 4/16/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:


First, it's not built in, because it's defined in a source file.
Builtin functions are those defined by the compiler.

Second, we should make FOR_EACH_BB_FN never crash on empty tree functions.
It seems really rude to do otherwise.
Just because we don't have a body for a function doesn't mean we
should crash.  Users shouldn't have to be checking random things like
DECL_SAVED_TREE to determine if FOR_EACH_BB_FN will work (this is not
to say that they should be able to pass any random crap to it, but it
should be detecting if the function has a body)



Is there a way to check whether a function was defined by the
user, i.e., whether it comes from the user's source file?

Cheers,
--
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


Re: Builtin functions?

2007-04-16 Thread Jan Hubicka
> On 4/16/07, Paulo J. Matos <[EMAIL PROTECTED]> wrote:
> >Hello all,
> >
> >I'm going through the bodies of all user-defined functions. I'm using
> >as user-defined function as one that:
> >DECL_BUILT_IN(node) == 0.
> 
> >
> >Problem is that for  a function (derived from a C++ file) whose output
> >from my pass is (output is self-explanatory, I think):
> >Registering cgraph node:
> >_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
> >[operator<<]... SUCCESSFUL
> >Declared on 
> >/home/pmatos/gsvt-bin/lib/gcc/x86_64-unknown-linux-gnu/4.1.1/../../../../include/c++/4.1.1/bits/ostream.tcc,
> >line 735
> >Decl Node Code is function_decl
> >Registering output void ...SUCCESSFUL
> >Arguments: __out : reference_type, __s : pointer_type
> >
> >Well, this is definitely builtin but DECL_BUILT_IN == 0, which means
> >that when I do FOR_EACH_BB_FN, I'm getting a segmentation fault
> 
> First, it's not built in, because it's defined in a source file.
> Builtin functions are those defined by the compiler.
> 
> Second, we should make FOR_EACH_BB_FN never crash on empty tree functions.
> It seems really rude to do otherwise.

Well, it works on empty functions, but why would you ever want to walk
the body of a function that is not there?
cgraph_function_body_availability should be checked by IPA passes
to see which bodies are there and what can or cannot change by
linking...

Honza
> Just because we don't have a body for a function doesn't mean we
> should crash.  Users shouldn't have to be checking random things like
> DECL_SAVED_TREE to determine if FOR_EACH_BB_FN will work (this is not
> to say that they should be able to pass any random crap to it, but it
> should be detecting if the function has a body)


Re: Builtin functions?

2007-04-16 Thread Jan Hubicka
> On 4/16/07, Daniel Berlin <[EMAIL PROTECTED]> wrote:
> >
> >First, it's not built in, because it's defined in a source file.
> >Builtin functions are those defined by the compiler.
> >
> >Second, we should make FOR_EACH_BB_FN never crash on empty tree functions.
> >It seems really rude to do otherwise.
> >Just because we don't have a body for a function doesn't mean we
> >should crash.  Users shouldn't have to be checking random things like
> >DECL_SAVED_TREE to determine if FOR_EACH_BB_FN will work (this is not
> >to say that they should be able to pass any random crap to it, but it
> >should be detecting if the function has a body)
> >
> 
> Is there a way to check if the function was or not defined by the
> user, i.e., it comes from the users source file?

cgraph_function_body_availability is your friend ;)
Honza
> 
> Cheers,
> -- 
> Paulo Jorge Matos - pocm at soton.ac.uk
> http://www.personal.soton.ac.uk/pocm
> PhD Student @ ECS
> University of Southampton, UK


Re: Duplicate assembler function names in cgraph

2007-04-16 Thread Paulo J. Matos

On 4/16/07, Jan Hubicka <[EMAIL PROTECTED]> wrote:

Hi,
> Hello all,
>
> I'm doing in my IPA pass:
> for(node = cgraph_nodes; node; node = node->next) {
>reg_cgraph_node(IDENTIFIER_POINTER(DECL_ASSEMBLER_NAME(node->decl)));
> }
>
> to get all the function names in the cgraph. I'm adding them to a list
> and I'm assuming that two nodes do not have the same
> DECL_ASSEMBLER_NAME but I'm wrong. In a C++ file I'm getting two
> functions with name _ZN4listIiE6appendEPS0_, DECL_NAME = append.
> Why is this? The code is at
> http://pastebin.ca/442691

Callgraph is currently trying to avoid use of DECL_ASSEMBLER_NAME, the
motivation is that for C++, the DECL_ASSEMBLER_NAMEs are very long and
expensive and thus it is not good idea to compute them for symbols not
output to final assembly (DECL_ASSEMBLER_NAME triggers lazy construction
of the names).  So if you don't have good reason for using the names,
you should not do it.


My only reason to use DECL_ASSEMBLER_NAME is, when I'm traversing
cgraph_nodes, to have an ID for the nodes I've already 'analyzed'.



Cgraph rely on frontend that there are no duplicate FUNCTION_DECLs
representing the same function (with same assembler node), that seems to
be broken in your testcase.  Would be possible to have a small tewstcase
reproducing the problem?



Sure; however, I'm developing against 4.1.1, but you might still have
the error on current head (I know 4.1.1 is quite old). What do you
mean by a test case? Do you want a short version of my IPA pass that
shows the issue?

Cheers,

Paulo Matos


Honza
>
> Is there a way to transverse the cgraph but never going through the
> same twice? Or should I just ignore the node if the function name is
> already registered?
>
> Cheers,
> --
> Paulo Jorge Matos - pocm at soton.ac.uk
> http://www.personal.soton.ac.uk/pocm
> PhD Student @ ECS
> University of Southampton, UK




--
Paulo Jorge Matos - pocm at soton.ac.uk
http://www.personal.soton.ac.uk/pocm
PhD Student @ ECS
University of Southampton, UK


RE: Builtin functions?

2007-04-16 Thread Dave Korn
On 16 April 2007 17:31, Daniel Jacobowitz wrote:

> On Mon, Apr 16, 2007 at 05:04:05PM +0100, Paulo J. Matos wrote:
>> _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
>> [operator<<]... SUCCESSFUL
> 
>> Well, this is definitely builtin but DECL_BUILT_IN == 0, which means
>> that when I do FOR_EACH_BB_FN, I'm getting a segmentation fault.
>> 
>> I wonder where my wrong assumption is. Any suggestions?
> 
> What do you mean, it's built in?  It comes from a source file, so
> almost by definition it isn't.


  Perhaps Paulo wants to know if the definition originated in a system header
file?
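
If that is the question, one way to approximate it is to look at where the
declaration came from. A minimal sketch, assuming the DECL_BUILT_IN and
DECL_IN_SYSTEM_HEADER accessors from tree.h; the helper name and the exact
policy are made up for illustration:

/* Sketch only: treat a FUNCTION_DECL as "user-defined" when it is neither
   a compiler builtin nor declared in a system header.  */
static bool
user_defined_function_p (tree fndecl)
{
  if (DECL_BUILT_IN (fndecl))
    return false;   /* provided by the compiler itself */
  if (DECL_IN_SYSTEM_HEADER (fndecl))
    return false;   /* e.g. templates instantiated from libstdc++ headers */
  return true;
}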


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Duplicate assembler function names in cgraph

2007-04-16 Thread Jan Hubicka
> On 4/16/07, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> >Hi,
> >> Hello all,
> >>
> >> I'm doing in my IPA pass:
> >> for(node = cgraph_nodes; node; node = node->next) {
> >>reg_cgraph_node(IDENTIFIER_POINTER(DECL_ASSEMBLER_NAME(node->decl)));
> >> }
> >>
> >> to get all the function names in the cgraph. I'm adding them to a list
> >> and I'm assuming that two nodes do not have the same
> >> DECL_ASSEMBLER_NAME but I'm wrong. In a C++ file I'm getting two
> >> functions with name _ZN4listIiE6appendEPS0_, DECL_NAME = append.
> >> Why is this? The code is at
> >> http://pastebin.ca/442691
> >
> >Callgraph is currently trying to avoid use of DECL_ASSEMBLER_NAME, the
> >motivation is that for C++, the DECL_ASSEMBLER_NAMEs are very long and
> >expensive and thus it is not good idea to compute them for symbols not
> >output to final assembly (DECL_ASSEMBLER_NAME triggers lazy construction
> >of the names).  So if you don't have good reason for using the names,
> >you should not do it.
> 
> My only reason to use DECL_ASSEMBLER_NAME is, when I'm transversing
> cgraph_nodes, to have an ID for the nodes I've already 'analyzed'.

Why don't you use something like cgraph->uid?
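
For example, a sketch of that suggestion, marking nodes you have already
analyzed by their uid in a bitmap instead of keying on assembler names;
the helper analyze_node is a hypothetical stand-in for the pass's own work:

/* Sketch only: walk the callgraph once, using node->uid as the identifier.
   analyze_node is hypothetical; cgraph_nodes, node->uid, BITMAP_ALLOC,
   bitmap_bit_p and bitmap_set_bit are existing GCC interfaces.  */
static void
walk_cgraph_once (void)
{
  bitmap visited = BITMAP_ALLOC (NULL);
  struct cgraph_node *node;

  for (node = cgraph_nodes; node; node = node->next)
    if (!bitmap_bit_p (visited, node->uid))
      {
        bitmap_set_bit (visited, node->uid);
        analyze_node (node);   /* hypothetical: whatever the pass does */
      }

  BITMAP_FREE (visited);
}

Since uids are small integers, the bitmap keeps the bookkeeping cheap even
for large translation units.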
> 
> >
> >Cgraph rely on frontend that there are no duplicate FUNCTION_DECLs
> >representing the same function (with same assembler node), that seems to
> >be broken in your testcase.  Would be possible to have a small tewstcase
> >reproducing the problem?
> >
> 
> Sure, however, I'm developing over 4.1.1, still you might still have
> the error on current head, (I know 4.1.1 is quite old). What do you
> mean by a test case? Do you want a short version of my IPA pass which
> shows up the issue?
Either that, or if you can just minimize the testcase (perhaps with
delta) so it can be inspected by hand, that is probably easiest for me ;)

Honza


Re: EH references

2007-04-16 Thread Ian Lance Taylor
"Paulo J. Matos" <[EMAIL PROTECTED]> writes:

> Is that any reference (paper, guide, whatever,) on how gcc is handling
> exceptions in intermediate code? Is it based on a known (published)
> method? Is it intuitive and explained somewhere?

I doubt it.  But if you pull together some information, it would be
good to document it on a wiki page or something like that.

> I've looked at internal docs but it is not really explicit how it
> works. I'm having a hard time understanding how RESX_PTR, FILTER_EXPR,
> EH_FILTER_EXPR work together in gimple and how all of this is then
> made to work in assembler.

All those codes disappear early in the middle-end, in the lower_eh
pass.  They are replaced with exception regions, and a hash table
mapping statements to exception regions.  An exception region
basically means a bit of code which is executed when an exception
occurs.

At the assembler level the mapping from statements to exception
regions is represented out-of-line in .eh_frame sections, along with
information about where registers and the return address are saved on
the stack.  The exception unwinder reads that data in order to unwind
the stack and restore registers.  The interaction is described, rather
tersely, at http://codesourcery.com/cxx-abi/ (ignore the comments
which suggest that it is only used on Itanium).

Ian


Potential bug with g++ and OpenMP

2007-04-16 Thread Theodore Papadopoulo

The piece of code attached to this mail does not compile with 4.3.0 20070113
(sorry, this is rather old, but that's what I had available). The
architecture (although not relevant IMHO) is i686-pc-linux-gnu.

[ Even though this is not relevant here, a similar error happens with
the redhat version gcc-4.1.1 (although the message is slightly
different and more confusing). ]

I get the following error message:

-> g++ -fopenmp Test.C

Test.C: In constructor ‘R::R()’:
Test.C:18: error: invalid use of incomplete type ‘struct R::R()::B’
Test.C:9: error: declaration of ‘struct R::R()::B’

I really do not see why R::B is considered an incomplete type at
this point (this seems related to the use of "typename M::E"; remove the
template on M and everything works fine).

Before cluttering the bug database, can someone confirm:
1) that indeed it looks like a bug (after all, I have been wrong
before... and I'm only starting with OpenMP),
2) that it still affects mainline.

I'll try to re-test with an up-to-date compiler tonight (svn update in
progress, but that will take some time).

Thanks
// { dg-do compile }
// { dg-options "-fopenmp" }
//
// Copyright (C) 2007 Free Software Foundation, Inc.
// Contributed by Theodore.Papadopoulo 16 Apr 2007 <[EMAIL PROTECTED]>

#include 

template <typename T> struct A { A() {} };

struct M { typedef double E; };

template <typename M>
struct R {
    R() {
        typedef A<typename M::E> B;
        B b;
#pragma omp parallel for firstprivate(b) schedule(guided)
        for (int t=0;t<10;++t);
    }
};

int
main() {
    R<M> r;
    return 0;
}


Re: Difference in DWARF Info generated by GCC 3.4.6 and GCC 4.1.1

2007-04-16 Thread Ian Lance Taylor
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes:

> 1. In DIE for fun, with 3.4.6, the frame base is taken in terms of
> Frame Pointer (DW_OP_reg14), where is in 4.1.1, it is taken in terms
> of Stack Pointer (DW_OP_reg15).
> 
> (For my backend, reg-no 14 is Frame Pointer and reg-no 15 is Stack Pointer)
> 
> Is this the expected behavior?

It's correct if it matches the generated code.  It is possible that
gcc 4.1.1 was able to eliminate the frame pointer in a case where gcc
3.4.6 was not.

> 2. For the variable, const char *raj, the DIE for 3.4.6 has the
> location mentioned as (fbreg  + 4 [offset] ) whereas for 4.1.1,
> location is mentioned as (fbreg + 0).
> 
> Any specific reason for this behavior in GCC 4.1.1

Again, it is presumably reflecting the generated code.

Ian


Re: Builtin functions?

2007-04-16 Thread Daniel Berlin

On 4/16/07, Jan Hubicka <[EMAIL PROTECTED]> wrote:

> On 4/16/07, Paulo J. Matos <[EMAIL PROTECTED]> wrote:
> >Hello all,
> >
> >I'm going through the bodies of all user-defined functions. I'm using
> >as user-defined function as one that:
> >DECL_BUILT_IN(node) == 0.
>
> >
> >Problem is that for  a function (derived from a C++ file) whose output
> >from my pass is (output is self-explanatory, I think):
> >Registering cgraph node:
> >_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
> >[operator<<]... SUCCESSFUL
> >Declared on
> 
>/home/pmatos/gsvt-bin/lib/gcc/x86_64-unknown-linux-gnu/4.1.1/../../../../include/c++/4.1.1/bits/ostream.tcc,
> >line 735
> >Decl Node Code is function_decl
> >Registering output void ...SUCCESSFUL
> >Arguments: __out : reference_type, __s : pointer_type
> >
> >Well, this is definitely builtin but DECL_BUILT_IN == 0, which means
> >that when I do FOR_EACH_BB_FN, I'm getting a segmentation fault
>
> First, it's not built in, because it's defined in a source file.
> Builtin functions are those defined by the compiler.
>
> Second, we should make FOR_EACH_BB_FN never crash on empty tree functions.
> It seems really rude to do otherwise.

Well, it works on empty functions,


That seems inconsistent with what he just said :)

but why would you ever want to walk
body of function that is not there?


If you just want to scan every function you have around, the obvious
way to do it is

For each function
 FOR_EACH_BB_FN (function).

This is probably slightly slower than

For each function
 if cgraph_function_body_availability != NOT_AVAILABLE
   FOR_EACH_BB_FN (function)

But about 20x more intuitive.



cgraph_function_body_availability should be checked by IPA passes
to see what bodies are there and what can or can not change by
linking...

Again, this only matters if you care :)


Re: Builtin functions?

2007-04-16 Thread 'Daniel Jacobowitz'
On Mon, Apr 16, 2007 at 05:51:17PM +0100, Dave Korn wrote:
>   Perhaps Paulo wants to know if the definition originated in a system header
> file?

Yes, this is more likely to be useful.

-- 
Daniel Jacobowitz
CodeSourcery


Re: Potential bug with g++ and OpenMP

2007-04-16 Thread Ismail Dönmez
On Monday 16 April 2007 20:02:33 Theodore Papadopoulo wrote:
> The piece of code attached to this mail does not compile with 4.3.0
> 20070113 (sorry this is rather old, but that's what I had available). The
> architecture (although not relevant IMHO)
> is i686-pc-linux-gnu.
>
> [ Even though this is not relevant here, a similar error happens with
> the redhat version gcc-4.1.1 (although the message is slightly
> different and more confusing). ]
>
> I get the following error message:
>
> -> g++ -fopenmp Test.C
>
> Test.C: In constructor ‘R::R()’:
> Test.C:18: error: invalid use of incomplete type ‘struct R::R()::B’
> Test.C:9: error: declaration of ‘struct R::R()::B’

FWIW it gives the same error with 

gcc version 4.2.0 20070317 (prerelease) and 
gcc version 4.3.0 20070414 (experimental)

Regards,
ismail

-- 
Life is a game, and if you aren't in it to win,
what the heck are you still doing here?

-- Linus Torvalds (talking about open source development)




Re: Builtin functions?

2007-04-16 Thread Jan Hubicka
> 
> If you just want to scan every function you have around, the obvious
> way to do it is
> 
> For each function
>  FOR_EACH_BB_FN (function).
> 
> This is probably slightly slower than
> 
> For each function
>  if cgraph_function_body_availability != NOT_AVAILABLE
>FOR_EACH_BB_FN (function)
> 
> But about 20x more intuitive.

Well, what about

For each available function
  FOR_EACH_BB_FN (function).

At least that was my plan.  You are probably going to do other things to
the functions than just walking the CFG (looking into variables/SSA
names etc.) and that way you won't get a crash either.
But I don't have strong feelings here; both alternatives seem OK to me.
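
In code, that plan might look roughly like the sketch below; the function
name and the per-basic-block body are placeholders, while cgraph_nodes,
cgraph_function_body_availability, DECL_STRUCT_FUNCTION and FOR_EACH_BB_FN
are the interfaces discussed in this thread.

/* Sketch only: "for each available function, walk its CFG".  */
static void
walk_available_bodies (void)
{
  struct cgraph_node *node;

  for (node = cgraph_nodes; node; node = node->next)
    {
      struct function *fn;
      basic_block bb;

      if (cgraph_function_body_availability (node) == AVAIL_NOT_AVAILABLE)
        continue;   /* no body we can look at */

      fn = DECL_STRUCT_FUNCTION (node->decl);
      if (!fn)
        continue;

      FOR_EACH_BB_FN (bb, fn)
        {
          /* ... per-basic-block analysis goes here ...  */
        }
    }
}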

Honza
> 
> 
> >cgraph_function_body_availability should be checked by IPA passes
> >to see what bodies are there and what can or can not change by
> >linking...
> Again, this only matters if you care :)


Re: EH references

2007-04-16 Thread Joe Buck
On Mon, Apr 16, 2007 at 12:40:17PM +0100, Paulo J. Matos wrote:
> Hello all,
> 
> Is that any reference (paper, guide, whatever,) on how gcc is handling
> exceptions in intermediate code? Is it based on a known (published)
> method? Is it intuitive and explained somewhere?

See

http://www.codesourcery.com/cxx-abi/abi-eh.html

Despite the fact that this document is called "Itanium C++ ABI", g++ uses
this approach on most platforms, including x86 (there is another
implementation supported by GCC, "SJLJ exception handling", based on
setjmp/longjmp).


Re: EH references

2007-04-16 Thread Joe Buck
On Mon, Apr 16, 2007 at 10:25:34AM -0700, Joe Buck wrote:
> See
> 
> http://www.codesourcery.com/cxx-abi/abi-eh.html
> 
> Despite the fact that this document is called "Itanium C++ ABI", g++ uses
> this approach on most platforms, including x86 (there is another
> implementation supported by GCC, "SJLJ exception handling", based on
> setjmp/longjmp.

I take that back: though much of this document is generally applicable,
there's ia64-specific stuff in the section I pointed to.  Nevertheless,
it should give you some insight into how exceptions are implemented
that applies more generally.


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Janis Johnson
On Mon, Apr 16, 2007 at 06:36:07PM +0200, Steven Bosscher wrote:
> * Very few people know how to use Janis' scripts, so to encourage
> people to use them, the release manager could write a wiki page with a
> HOWTO for these scripts (or ask someone to do it).  Regression hunting
> should only be easier now, with SVN's atomic commits. But the easier
> and more accessible you make it for people to use the available tools,
> the harder it gets for people to justify ignoring their bugs to "the
> rest of us".

The RM can encourage me to do this; I've already been meaning to for a
long time now.

My reghunt scripts have grown into a system that works well for me, but
I'd like to clean them up and document them so that others can use them.
What I've got now is very different from what I used with CVS.

I'd like at least two volunteers to help me with this cleanup and
documentation effort by using my current scripts on regressions for
open PRs and finding the places that are specific to my environment.
I can either put what I've got now into contrib/reghunt, or send a
tarball to the mailing list for people to use and check things in
after they're generally usable.

One silly thing holding me back is not quite knowing what needs
copyrights and license notices and what doesn't.  Some scripts are
large and slightly clever, others are short and obvious.

Janis


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Mark Mitchell
Janis Johnson wrote:
> On Mon, Apr 16, 2007 at 06:36:07PM +0200, Steven Bosscher wrote:
>> * Very few people know how to use Janis' scripts, so to encourage
>> people to use them, the release manager could write a wiki page with a
>> HOWTO for these scripts (or ask someone to do it).  Regression hunting
>> should only be easier now, with SVN's atomic commits. But the easier
>> and more accessible you make it for people to use the available tools,
>> the harder it gets for people to justify ignoring their bugs to "the
>> rest of us".
> 
> The RM can encourage me to do this; I've already been meaning to for a
> long time now.

You may certainly consider yourself encouraged. :-)

> One silly thing holding me back is not quite knowing what needs
> copyrights and license notices and what doesn't.  Some scripts are
> large and slightly clever, others are short and obvious.

For safety's sake, we should probably get assignments on them.  I'm not
sure how hard it is to get IBM to bless contributing the scripts.  If
it's difficult, but IBM doesn't mind them being made public, perhaps we
could just put them somewhere on gcc.gnu.org, outside of the official
subversion tree.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [cygwin] Can't bootstrap current gcc trunk with libjava: ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error: "_EXFUN" redefined

2007-04-16 Thread Tom Tromey
> "ChJ" == Christian Joensson <[EMAIL PROTECTED]> writes:

ChJ> In file included from 
../../../gcc/libjava/classpath/native/fdlibm/fdlibm.h:29,
ChJ>  from ../../../gcc/libjava/java/lang/natVMDouble.cc:27:
ChJ> ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error:
ChJ> "_EXFUN" redefined
ChJ> In file included from /usr/include/stdlib.h:10,
ChJ>  from ../../../gcc/libjava/java/lang/natVMDouble.cc:14:
ChJ> /usr/include/_ansi.h:36:1: error: this is the location of the previous
ChJ> definition

ChJ> anyone else see this?

That is new to me, but then I don't build on Cygwin.
Where does /usr/include/_ansi.h come from?

Anyway, try adding a "#undef _EXFUN" in the appropriate place in
mprec.h.  If that works for you, send it to me and I will check it in.

Tom


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Janis Johnson
On Mon, Apr 16, 2007 at 10:58:13AM -0700, Mark Mitchell wrote:
> Janis Johnson wrote:
> > On Mon, Apr 16, 2007 at 06:36:07PM +0200, Steven Bosscher wrote:
> >> * Very few people know how to use Janis' scripts, so to encourage
> >> people to use them, the release manager could write a wiki page with a
> >> HOWTO for these scripts (or ask someone to do it).  Regression hunting
> >> should only be easier now, with SVN's atomic commits. But the easier
> >> and more accessible you make it for people to use the available tools,
> >> the harder it gets for people to justify ignoring their bugs to "the
> >> rest of us".
> > 
> > The RM can encourage me to do this; I've already been meaning to for a
> > long time now.
> 
> You may certainly consider yourself encouraged. :-)

Gosh, thanks!

> > One silly thing holding me back is not quite knowing what needs
> > copyrights and license notices and what doesn't.  Some scripts are
> > large and slightly clever, others are short and obvious.
> 
> For safety sake, we should probably get assignments on them.  I'm not
> sure how hard it is to get IBM to bless contributing the scripts.  If
> it's difficult, but IBM doesn't mind them being made public, perhaps we
> could just put them somewhere on gcc.gnu.org, outside of the official
> subversion tree.

I have IBM permission to contribute them to GCC.  An earlier version for
CVS is in contrib/reghunt with formal FSF copyright and GPL statements.
I've sent later versions to gcc-patches as a way to get them to
particular people who wanted to try them out.  My inclination is to put
full copyright/license statements on the bigger ones and just "Copyright
FSF " on the small ones.

Janis


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Mark Mitchell
Janis Johnson wrote:

>>> The RM can encourage me to do this; I've already been meaning to for a
>>> long time now.
>> You may certainly consider yourself encouraged. :-)
> 
> Gosh, thanks!

:-)

> I have IBM permission to contribute them to GCC.  An earlier version for
> CVS is in contrib/reghunt with formal FSF copyright and GPL statements.
> I've sent later versions to gcc-patches as a way to get them to
> particular people who wanted to try them out.  My inclination is to put
> full copyright/license statements on the bigger ones and just "Copyright
> FSF " on the small ones.

I guess I'd tend to be conservative and put the full GPL notice on all
of them.  If it doesn't apply because the file is too small, whoever
wants to use it in some non-GPL way can assert that fact if they want.
Is there some reason that putting the GPL on them is bad/wrong?

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


RE: [cygwin] Can't bootstrap current gcc trunk with libjava: ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error: "_EXFUN" redefined

2007-04-16 Thread Dave Korn
On 16 April 2007 18:49, [EMAIL PROTECTED] wrote:

>> "ChJ" == Christian Joensson <[EMAIL PROTECTED]> writes:
> 
> ChJ> In file included from
> ../../../gcc/libjava/classpath/native/fdlibm/fdlibm.h:29,
> ChJ>  from ../../../gcc/libjava/java/lang/natVMDouble.cc:27:
> ChJ> ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error:
> ChJ> "_EXFUN" redefined
> ChJ> In file included from /usr/include/stdlib.h:10,
> ChJ>  from ../../../gcc/libjava/java/lang/natVMDouble.cc:14:
> ChJ> /usr/include/_ansi.h:36:1: error: this is the location of the previous
> ChJ> definition
> 
> ChJ> anyone else see this?
> 
> That is new to me, but then I don't build on Cygwin.
> Where does /usr/include/_ansi.h come from?

  Cygwin uses newlib.

> Anyway, try adding a "#undef _EXFUN" in the appropriate place in
> mprec.h.  If that works for you, send it to me and I will check it in.

  Pretty much bound to succeed.

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Tom Tromey
> "Janis" == Janis Johnson <[EMAIL PROTECTED]> writes:

>> * Very few people know how to use Janis' scripts, so to encourage
>> people to use them, the release manager could write a wiki page with a
>> HOWTO for these scripts (or ask someone to do it).  Regression hunting
>> should only be easier now, with SVN's atomic commits. But the easier
>> and more accessible you make it for people to use the available tools,
>> the harder it gets for people to justify ignoring their bugs to "the
>> rest of us".

Janis> The RM can encourage me to do this; I've already been meaning to for a
Janis> long time now.

Janis> My reghunt scripts have grown into a system that works well for me, but
Janis> I'd like to clean them up and document them so that others can use them.
Janis> What I've got now is very different from what I used with CVS.

I wonder whether there is a role for the gcc compile farm in this?
For instance perhaps "someone" could keep a set of builds there and
provide folks with a simple way to regression-test ... like a shell
script that takes a .i file, ssh's to the farm, and does a reghunt... ?

I think this would only be worthwhile if the farm has enough disk, and
if regression hunting is a fairly common activity.

Tom


Re: EH references

2007-04-16 Thread Tom Tromey
> "Ian" == Ian Lance Taylor <[EMAIL PROTECTED]> writes:

>> "Paulo J. Matos" <[EMAIL PROTECTED]> writes:
>> Is there any reference (paper, guide, whatever) on how gcc is handling
>> exceptions in intermediate code? Is it based on a known (published)
>> method? Is it intuitive and explained somewhere?

Ian> I doubt it.  But if you pull together some information, it would be
Ian> good to document it on a wiki page or something like that.

I think somewhere in the source, probably tree.def, would be
preferable.

Long term I'd like us to go a step further and use documentation
comments in the source, and extract those into the manual.

Tom


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Tom Tromey
> "Steven" == Steven Bosscher <[EMAIL PROTECTED]> writes:

Steven> * Maintainers of certain areas of the compiler may not be
Steven> sufficiently aware of some bug in their part of the
Steven> compiler. For example, only one of the three preprocessor bugs
Steven> is assigned to a preprocessor maintainer, even though in all
Steven> three preprocessor bugs a maintainer is in the CC.

Thanks for pointing this out.  I wasn't paying attention to this
since, until very recently, I was not a preprocessor maintainer :-)

Steven> It's just that the bugs have been inactive for some time.
Steven> (And in this particular case of preprocessor bugs, it's not
Steven> even been established whether PR30805 is a bug at all, but it
Steven> is marked as a P2 "regression" anyway)

I'll submit the 30468 patch for 4.2 and (I suppose) 4.1.  It is small
and safe.

30786 is ICE-on-invalid.  30805 is ICE-on-unspecified.
I don't like ICEs but these don't seem like release-blockers to me.

30363 is a change in -traditional-cpp.  There's a patch, yay, but I
have not read it yet.

Tom


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Steven Bosscher

On 4/16/07, Janis Johnson <[EMAIL PROTECTED]> wrote:

I'd like at least two volunteers to help me with this cleanup and
documentation effort by using my current scripts on regressions for
open PRs and finding the places that are specific to my environment.


Since I brought this up, I guess I'm on the hook ;-)

Gr.
Steven


Re: EH references

2007-04-16 Thread Joseph S. Myers
On Mon, 16 Apr 2007, Tom Tromey wrote:

> Long term I'd like us to go a step further and use documentation
> comments in the source, and extract those into the manual.

This will need FSF approval first for copying text between GPL code and 
GFDL manuals, and FSF instructions on what wording to put in the license 
text on the source files to allow that.

-- 
Joseph S. Myers
[EMAIL PROTECTED]


Re: Questions/Comments regarding my SoC application

2007-04-16 Thread Dennis Weyland

Hi!

It's a pity that the application process finished some days ago... I
was very motivated and have really good skills regarding efficient
algorithms, complexity theory and compiler construction, but without
being accepted I would not have enough time to work on GCC, since I
have to earn money otherwise. Perhaps with a more detailed application
my chances would have been better. Now I am only curious which projects
have been accepted for GCC...


Dennis

Paolo Bonzini wrote:



Hi!

Initially I meant to optimize GCC; that includes runtime and memory
usage, of course.


Sure.  I meant that we have testcases that are good to test your work 
on.  Profile GCC running them and fix the hotspots: this may show 
quadratic algorithms, and the like.


For example, see the patch and thread at 
http://permalink.gmane.org/gmane.comp.gcc.patches/137697


I hope this clarifies my earlier message!

Paolo




Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Paolo Bonzini



30786 is ICE-on-invalid.  30805 is ICE-on-unspecified.
I don't like ICEs but these don't seem like release-blockers to me.


Anyway I attached prototype patches for these.  I don't have resources 
to test them for three weeks, so if anybody can beat me to it...


Paolo


Re: [cygwin] Can't bootstrap current gcc trunk with libjava: ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error: "_EXFUN" redefined

2007-04-16 Thread Charles Wilson
Tom Tromey wrote:
> That is new to me, but then I don't build on Cygwin.
> Where does /usr/include/_ansi.h come from?
> 
> Anyway, try adding a "#undef _EXFUN" in the appropriate place in
> mprec.h.  If that works for you, send it to me and I will check it in.

Are you sure forcibly redefining _EXFUN (#undef + #define) is the right
thing?  Cygwin's (actually newlib's) /usr/include/_ansi.h does this:

#ifdef __CYGWIN__
#define _EXFUN(name, proto) __cdecl name proto
#define _EXPARM(name, proto) (* __cdecl name) proto
#else
#define _EXFUN(name, proto) name proto
#define _EXPARM(name, proto) (* name) proto
#endif

because _EXFUN symbols in the (newlib-originated, cygwin1.dll-supplied)
runtime are, in fact, __cdecl and not stdcall or fastcall.  So if the
user used -mrtd to switch the compiler's default calling convention,
symbols imported from cygwin1.dll and declared via the newlib headers
will still use the correct cdecl convention.

Unless, of course, somebody redefines _EXFUN...

Perhaps mprec.h should do something like the above (e.g. special case
for __CYGWIN__, retaining the __cdecl) rather than forcibly changing the
definition of the _EXFUN macro.  Redefining something to an identical
definition is not an error, right?  Next question: is it truly okay to
force EXFUN's in libjava to __cdecl?

Or, should libjava avoid the reserved name '_EXFUN' for its
macro, and use some other macro for this purpose?

--
Chuck
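
One way to express that special case, as a sketch: the __CYGWIN__ branch
simply mirrors newlib's _ansi.h so that any redefinition is
token-for-token identical, and the #ifndef guard is an assumption about
how mprec.h could be arranged rather than a tested patch:

/* Hypothetical guard for libjava's mprec.h: only define _EXFUN if the
   system headers have not already done so, and keep __cdecl on Cygwin
   so that -mrtd does not change the calling convention of functions
   imported from cygwin1.dll.  */
#ifndef _EXFUN
# ifdef __CYGWIN__
#  define _EXFUN(name, proto) __cdecl name proto
# else
#  define _EXFUN(name, proto) name proto
# endif
#endif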


RE: [cygwin] Can't bootstrap current gcc trunk with libjava: ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error: "_EXFUN" redefined

2007-04-16 Thread Dave Korn
On 16 April 2007 20:49, Charles Wilson wrote:


> Or, should libjava avoid the reserved name '_EXFUN' for its
> macro, and use some other macro for this purpose?


  The definition of _EXFUN in mprec.h is unconditionally:

#define _EXFUN(name, proto) name proto

  This looks like some slightly muddle-headed approach to make libjava
backwardly-compatible with K'n'R, which is ... less than necessary AFAICS.  I
don't know whether that mprec.h is an entirely internal header, or whether
it's an external library header that real-world apps might be #including and
using the _EXFUN definition from, but I would have guessed we can make do
without it altogether.

  How about we whip up a patch to just turn all the _EXFUN definitions into
real honest-to-goodness ansi prototypes?  Is it actually serving any real
purpose?


cheers,
  DaveK
-- 
Can't think of a witty .sigline today
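
To illustrate what dropping the macro would mean, here is a hypothetical
declaration in the style mprec.h uses (example_fn is made up for
illustration, not taken from the header):

/* Before: the macro only pastes the name and the parenthesized
   prototype together, so this ...  */
double _EXFUN(example_fn, (double x));

/* ... is exactly equivalent to writing the plain ANSI prototype
   directly, which is what removing _EXFUN would leave behind:  */
double example_fn (double x);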



CompileFarm and reghunt Was: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Laurent GUERBY
We're a bit "short" on the current CompileFarm machines,
we have 5x16GB + 4x32GB (and as shown below it tends to
be used, I have to ping users from time to time to get GB
back :).

There is enough cpu power in the farm to build and check a version for
each commit (all languages including Ada) on up to two branches (I sent
a message a while ago about that) with a latency of about 8 hours IIRC.

We might be able to store only part of the compiler, or, if this
proves really useful, I could just add a storage unit to the
farm with cheap & large current-generation disks (the machines are
unfortunately SCSI based).

As announced a few weeks ago, all official releases are already
installed on the CompileFarm (/n/b01/guerby/release/X.Y.Z/bin with X.Y.Z
in 3.4.6, 4.0.0-4, 4.1.0-2).

People interested should follow the procedure to get an account:
http://gcc.gnu.org/wiki/CompileFarm
<<
How to Get Involved ?
If you are a GCC developer and want access to the compileFarm for GCC
development and testing, or if you are a free software developer wishing
to set up automated testing of a piece of free software with the current
GCC development version (preferably with a test suite), please send 

 1. your ssh public key (HOME/.ssh/authorized_keys format) as an
attachment and not inline in the email, and

 2. your preferred UNIX login

to laurent at guerby dot net.
>>

Laurent

$ df -k
/dev/sda1 33554132  27412248   4437392  87% /
gcc07:/home   33554144  20245472  11604192  64% /n/07
gcc01:/mnt/disk01-2   35001536  15768736  17454816  48% /n/b01
gcc09:/home   33554144  14700320  17149344  47% /n/09
gcc01:/home   16753440  15679360223040  99% /n/01
gcc02:/home   16753440  10980064   4922336  70% /n/02
gcc03:/home   16753440   1750912  14151488  12% /n/03
gcc05:/home   16753440  13589376   2313024  86% /n/05
gcc06:/home   16753440   9586272   6316128  61% /n/06

On Mon, 2007-04-16 at 12:00 -0600, Tom Tromey wrote:
> I wonder whether there is a role for the gcc compile farm in this?
> For instance perhaps "someone" could keep a set of builds there and
> provide folks with a simple way to regression-test ... like a shell
> script that takes a .i file, ssh's to the farm, and does a reghunt... ?
> 
> I think this would only be worthwhile if the farm has enough disk, and
> if regression hunting is a fairly common activity.
> 
> Tom




Re: Questions/Comments regarding my SoC application

2007-04-16 Thread Ian Lance Taylor
Dennis Weyland <[EMAIL PROTECTED]> writes:

> It's a pity that the application process finished some days ago... I
> was very motivated and have really good skills regarding efficient
> algorithms, complexity theory and compiler construction, but without
> being accepted I would not have enough time to work on GCC, since I
> have to earn money otherwise. Perhaps with a more detailed application
> my chances would have been better. Now I am only curious which projects
> have been accepted for GCC...

The list of accepted projects can be found here:
http://code.google.com/soc/gcc/about.html

Unfortunately Google only provides a limited number of slots for each
project.  The gcc project received 22 applications, of which Google
approved 8.

Thanks for your interest, and I encourage you to apply again next
year.

Ian


gcc-4.1-20070416 is now available

2007-04-16 Thread gccadmin
Snapshot gcc-4.1-20070416 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.1-20070416/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.1 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_1-branch 
revision 123895

You'll find:

gcc-4.1-20070416.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.1-20070416.tar.bz2 C front end and core compiler

gcc-ada-4.1-20070416.tar.bz2  Ada front end and runtime

gcc-fortran-4.1-20070416.tar.bz2  Fortran front end and runtime

gcc-g++-4.1-20070416.tar.bz2  C++ front end and runtime

gcc-java-4.1-20070416.tar.bz2 Java front end and runtime

gcc-objc-4.1-20070416.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.1-20070416.tar.bz2  The GCC testsuite

Diffs from 4.1-20070409 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.1
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: Questions/Comments regarding my SoC application

2007-04-16 Thread Dennis Weyland

Hi!

That means GOOGLE did not approve my application? I thought it was GCC.
How can I find out why they did not approve me? As far as I know, the
mentors can select which projects they want and which they don't, not
Google itself. But it seems that I was wrong.

Well, I already completed one SoC successfully, so it is not that
bad. Perhaps I will apply again next year, and if I do, I will apply for GCC.

Dennis

Ian Lance Taylor wrote:

Dennis Weyland <[EMAIL PROTECTED]> writes:

  

It's a pity that the application process finished some days ago... I
was very motivated and have really good skills regarding efficient
algorithms, complexity theory and compiler construction, but without
being accepted I would not have enough time to work on GCC, since I
have to earn money otherwise. Perhaps with a more detailed application
my chances would have been better. Now I am only curious which projects
have been accepted for GCC...



The list of accepted projects can be found here:
http://code.google.com/soc/gcc/about.html

Unfortunately Google only provides a limited number of slots for each
project.  The gcc project received 22 applications, of which Google
approved 8.

Thanks for your interest, and I encourage you to apply again next
year.

Ian
  





Re: [cygwin] Can't bootstrap current gcc trunk with libjava: ../../../gcc/libjava/classpath/native/fdlibm/mprec.h:297:1: error: "_EXFUN" redefined

2007-04-16 Thread Tom Tromey
> "Dave" == Dave Korn <[EMAIL PROTECTED]> writes:

Dave>   The definition of _EXFUN in mprec.h is unconditionally:
Dave> #define _EXFUN(name, proto) name proto

libjava, and subsequently Classpath, imported an old version of this
code, which was then hacked over randomly.

Dave> How about we whip up a patch to just turn all the _EXFUN
Dave> definitions into real honest-to-goodness ansi prototypes?  Is it
Dave> actually serving any real purpose?

I think just the hypothetical scenario of making it simpler to import
a newer version of fdlibm and/or mprec.  I think we may have imported
a new mprec at one point, but I don't recall us ever importing a newer
fdlibm.

Tom


Re: Questions/Comments regarding my SoC application

2007-04-16 Thread Ian Lance Taylor
[ I sent this reply before I realized that Dennis also sent the note
  to [EMAIL PROTECTED]  Dennis, sorry for the repeat. ]

Dennis Weyland <[EMAIL PROTECTED]> writes:

> That means GOOGLE did not approve my application? I thought it was
> GCC. How can I find out why they did not approve me? As far as I
> know, the mentors can select which projects they want and which they
> don't, not Google itself. But it seems that I was wrong.

The mentors for each project rank the applications.  Google sets the
number of applications which will be granted to each project.  The gcc
mentors (of whom I am one) ranked the 22 applications in order.  Then
Google decided to approve 8 applications for gcc.  So the top 8 were
approved.  Your application was not one of the top 8.

Ian


REG_NO_CONFLICT vs lower-subreg

2007-04-16 Thread Bernd Schmidt
I've been converting the Blackfin port to take advantage of the new 
lower-subreg pass, which fortunately involves little more than deleting 
a few patterns.


One problem is that without an anddi3 expander, we generate poor initial 
RTL.  optabs knows it can do the operation piecewise, so it could 
generate the following sequence for an operation involving one memory 
operand:


 reg C = (mem:SI symbol+offset)
 reg A = reg B + reg C;
 reg Z = (mem:SI symbol+offset+4)
 reg X = reg Y + reg Z;

It then passes this to emit_no_conflict_block, which is supposed to wrap 
it in REG_NO_CONFLICT notes to help the register allocator.  This 
function pulls out any insns not contributing to the final result, and 
emits them first, leading to:


 reg C = (mem:SI symbol+offset)
 reg Z = (mem:SI symbol+offset+4)
 reg A = reg B + reg C;
 reg X = reg Y + reg Z;

which has higher register pressure (at least in the usual cases where C 
and Z are dead after their use) and causes more spills than the former 
sequence on certain testcases.  The sad thing is that all the 
REG_NO_CONFLICT notes later get removed by lower-subreg as they serve no 
purpose, but the suboptimal insn ordering survives until register 
allocation.


It would be nice to eliminate REG_NO_CONFLICT altogether, but a quick 
experiment with the i386 port showed that this idea is a non-starter for 
now (i386 still has insns operating on DImode, hence in some functions 
not all DImode registers get lowered, and lack of no_conflict blocks 
causes poor register allocation when that happens).


I suppose we could add a target macro to let individual ports turn off 
REG_NO_CONFLICT generation?  Any other ideas?



Bernd
--
This footer brought to you by insane German lawmakers.
Analog Devices GmbH  Wilhelm-Wagenfeld-Str. 6  80807 Muenchen
Registergericht Muenchen HRB 40368
Geschaeftsfuehrer Thomas Wessel, Vincent Roche, Joseph E. McDonough


Re: REG_NO_CONFLICT vs lower-subreg

2007-04-16 Thread Paolo Bonzini


I suppose we could add a target macro to let individual ports turn off 
REG_NO_CONFLICT generation?  Any other ideas?


A pass to reorder insns so that live ranges are shortened and register 
pressure is relieved.


Could be something like

for each bb
  for each insn
    for each active insn
      if it has operands that are defined in the current insn
        remove it from active insn list
        emit it before current insn
    if this insn has no operand that dies in this insn
      remove it from insn stream
      add it to active insn list
    for each active insn
      if it has operands that die after the current insn
        remove it from active insn list
        emit it after current insn
  emit all active insns

Paolo


Re: Questions/Comments regarding my SoC application

2007-04-16 Thread Dennis Weyland

I was a little bit disappointed that the first reply on this newsgroup
took so long. I just wanted to know which problems can be tackled and
completed in the SoC timeframe...
And I wonder why I only got 2 responses in the last two weeks, in
contrast with today's conversation with more than 2 responses in one day.

Ian Lance Taylor wrote:

Dennis Weyland <[EMAIL PROTECTED]> writes:

  

That means GOOGLE did not approve my application? I thought it was
GCC. How can I find out why they did not approve me? As far as I
know, the mentors can select which projects they want and which they
don't, not Google itself. But it seems that I was wrong.



The mentors for each project rank the applications.  Google sets the
number of applications which will be granted to each project.  The gcc
mentors (of whom I am one) ranked the 22 applications in order.  Then
Google decided to approve 8 applications for gcc.  So the top 8 were
approved.  Your application was not one of the top 8.

  

Well, I already completed one SoC successfully, so it is not that
bad. Perhaps I will apply again next year, and if I do, I will apply for GCC.



Glad to hear it.

It's unfortunate that you didn't have net access at a critical time to
respond to the comment on your application.  More details on your
plans would have been helpful.

Ian
  





Re: REG_NO_CONFLICT vs lower-subreg

2007-04-16 Thread Ian Lance Taylor
Bernd Schmidt <[EMAIL PROTECTED]> writes:

> It would be nice to eliminate REG_NO_CONFLICT altogether, but a quick
> experiment with the i386 port showed that this idea is a non-starter
> for now (i386 still has insns operating on DImode, hence in some
> functions not all DImode registers get lowered, and lack of
> no_conflict blocks causes poor register allocation when that happens).
> 
> I suppose we could add a target macro to let individual ports turn off
> REG_NO_CONFLICT generation?  Any other ideas?

I have done the work needed to eliminate REG_NO_CONFLICT, but I've
stalled on it, mostly due to other work, partly because the patch
could usefully take advantage of the pending dataflow changes (instead
of using REG_SUBREG_DEAD notes).

For reference I've appended my current patch.  This includes changes
to the i386 backend as well as the generic changes to track subreg
liveness.  This patch does not itself eliminate REG_NO_CONFLICT, but
after implementing this patch REG_NO_CONFLICT blocks should become
useless.

Ian

Index: gcc/reload.c
===
--- gcc/reload.c	(revision 123904)
+++ gcc/reload.c	(working copy)
@@ -1513,71 +1513,118 @@ push_reload (rtx in, rtx out, rtx *inloc
   if (rld[i].reg_rtx == 0 && in != 0 && hard_regs_live_known)
 {
   rtx note;
-  int regno;
   enum machine_mode rel_mode = inmode;
 
   if (out && GET_MODE_SIZE (outmode) > GET_MODE_SIZE (inmode))
 	rel_mode = outmode;
 
   for (note = REG_NOTES (this_insn); note; note = XEXP (note, 1))
-	if (REG_NOTE_KIND (note) == REG_DEAD
-	&& REG_P (XEXP (note, 0))
-	&& (regno = REGNO (XEXP (note, 0))) < FIRST_PSEUDO_REGISTER
-	&& reg_mentioned_p (XEXP (note, 0), in)
-	/* Check that we don't use a hardreg for an uninitialized
-	   pseudo.  See also find_dummy_reload().  */
-	&& (ORIGINAL_REGNO (XEXP (note, 0)) < FIRST_PSEUDO_REGISTER
-		|| ! bitmap_bit_p (ENTRY_BLOCK_PTR->il.rtl->global_live_at_end,
-   ORIGINAL_REGNO (XEXP (note, 0
-	&& ! refers_to_regno_for_reload_p (regno,
-	   (regno
-		+ hard_regno_nregs[regno]
-  [rel_mode]),
-	   PATTERN (this_insn), inloc)
+	{
+	  int regno, byte;
+	  enum machine_mode reg_mode;
+	  bool found;
+
+	  if (REG_NOTE_KIND (note) != REG_DEAD
+	  || !REG_P (XEXP (note, 0)))
+	continue;
+
+	  regno = REGNO (XEXP (note, 0));
+	  if (!HARD_REGISTER_NUM_P (regno))
+	continue;
+
+	  reg_mode = GET_MODE (XEXP (note, 0));
+	  if (GET_MODE_SIZE (rel_mode) > GET_MODE_SIZE (reg_mode))
+	continue;
+
+	  /* Check that we don't use a hardreg for an uninitialized
+	 pseudo.  See also find_dummy_reload.  */
+	  if (!HARD_REGISTER_NUM_P (ORIGINAL_REGNO (XEXP (note, 0)))
+	  && bitmap_bit_p (ENTRY_BLOCK_PTR->il.rtl->global_live_at_end,
+			   ORIGINAL_REGNO (XEXP (note, 0
+	continue;
+
+	  found = false;
+
+	  /* Check each REL_MODE sized chunk of the register being
+	 killed.  */
+	  for (byte = 0;
+	   byte < GET_MODE_SIZE (reg_mode);
+	   byte += GET_MODE_SIZE (rel_mode))
+	{
+	  int subregno;
+	  unsigned int nregs, offs;
+
+	  if (byte == 0)
+		subregno = regno;
+	  else
+		{
+		  /* subreg_offset_representable_p should probably
+		 handle this case, but as of this writing it gets
+		 an assertion failure.  */
+		  if (GET_MODE_SIZE (reg_mode) % GET_MODE_SIZE (rel_mode) != 0)
+		break;
+
+		  if (!subreg_offset_representable_p (regno, reg_mode, byte,
+		  rel_mode))
+		continue;
+		  subregno = subreg_regno_offset (regno, reg_mode, byte,
+		  rel_mode);
+		}
+
+	  if (!HARD_REGNO_MODE_OK (subregno, inmode)
+		  || !HARD_REGNO_MODE_OK (subregno, outmode))
+		continue;
+
+	  nregs = hard_regno_nregs[subregno][rel_mode];
+	  if (!refers_to_regno_for_reload_p (subregno, subregno + nregs,
+		 in, NULL)
+		  || refers_to_regno_for_reload_p (subregno, subregno + nregs,
+		   PATTERN (this_insn), inloc))
+		continue;
+
 	/* If this is also an output reload, IN cannot be used as
 	   the reload register if it is set in this insn unless IN
 	   is also OUT.  */
-	&& (out == 0 || in == out
-		|| ! hard_reg_set_here_p (regno,
-	  (regno
-	   + hard_regno_nregs[regno]
-			 [rel_mode]),
+	  if (out != NULL_RTX
+		  && in != out
+		  && hard_reg_set_here_p (subregno, subregno + nregs,
 	  PATTERN (this_insn)))
-	/* ??? Why is this code so different from the previous?
-	   Is there any simple coherent way to describe the two together?
-	   What's going on here.  */
-	&& (in != out
-		|| (GET_CODE (in) == SUBREG
-		&& (((GET_MODE_SIZE (GET_MODE (in)) + (UNITS_PER_WORD - 1))
-			 / UNITS_PER_WORD)
-			== ((GET_MODE_SIZE (GET_MODE (SUBREG_REG (in)))
-			 + (UNITS_PER_WORD - 1)) / UNITS_PER_WORD
-	/* Make sure the operand fits in the reg that dies.  */
-	&& (GET_MODE_SIZE (rel_mode)
-		<= GET_MODE_SIZE (

Re: REG_NO_CONFLICT vs lower-subreg

2007-04-16 Thread Andrew Pinski

On 4/16/07, Paolo Bonzini <[EMAIL PROTECTED]> wrote:


> I suppose we could add a target macro to let individual ports turn off
> REG_NO_CONFLICT generation?  Any other ideas?

A pass to reorder insns so that live ranges are shortened and register
pressure is relieved.


I think Daniel Berlin had looked into this before and it only helped
x86 (small register set) and hurt all others.  I cannot find the
reference where he mentioned this right now.

-- Pinski


Re: REG_NO_CONFLICT vs lower-subreg

2007-04-16 Thread Ian Lance Taylor
Paolo Bonzini <[EMAIL PROTECTED]> writes:

> > I suppose we could add a target macro to let individual ports turn
> > off REG_NO_CONFLICT generation?  Any other ideas?
> 
> A pass to reorder insns so that live ranges are shortened and register
> pressure is relieved.

I think you could do this with the scheduler, with appropriate target
hooks.  In fact, it sounds like the anti-scheduler.

Ian


Re: Questions/Comments regarding my SoC application

2007-04-16 Thread Ian Lance Taylor
Dennis Weyland <[EMAIL PROTECTED]> writes:

> I was a little bit disappointed that the first reply on this newsgroup
> took so long. I just wanted to know which problems can be tackled and
> completed in the SoC timeframe...
> And I wonder why I only got 2 responses in the last two weeks, in
> contrast with today's conversation with more than 2 responses in one day.

Our SoC project page links to a list of ideas on the GCC wiki, at
http://gcc.gnu.org/wiki/SummerOfCode

I'm sorry we didn't respond in a more useful fashion to your earlier
message.  As it turned out, we got a long list of very strong
applications for Summer of Code.  It would have been nice if we could
have accepted more.

Ian


Re: REG_NO_CONFLICT vs lower-subreg

2007-04-16 Thread Paolo Bonzini

Ian Lance Taylor wrote:

Paolo Bonzini <[EMAIL PROTECTED]> writes:


I suppose we could add a target macro to let individual ports turn
off REG_NO_CONFLICT generation?  Any other ideas?

A pass to reorder insns so that live ranges are shortened and register
pressure is relieved.


I think you could do this with the scheduler, with appropriate target
hooks.  In fact, it sounds like the anti-scheduler.


Hum, searches for "antischeduler" did not bring any result.  Sounds like 
Cygnus old timers already have a patch for it??? ;-)


It would be curious indeed on the x86 to "antischedule" on the first 
scheduling pass (which is now disabled) and to "schedule" properly on 
the second.


Paolo


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Maxim Kuvyrkov

Steven Bosscher wrote:

On 4/16/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:

29841  [4.2/4.3 regression] ICE with scheduling and __builtin_trap


Honza, PING!


There is a patch for this PR29841 in
http://gcc.gnu.org/ml/gcc-patches/2007-02/msg01134.html .  The problem 
is that I don't really know which maintainer to ask to review it :(


--
Maxim


Re: GCC 4.2.0 Status Report (2007-04-15)

2007-04-16 Thread Steven Bosscher

On 4/17/07, Maxim Kuvyrkov <[EMAIL PROTECTED]> wrote:

There is a patch for this PR29841 in
http://gcc.gnu.org/ml/gcc-patches/2007-02/msg01134.html .  The problem
is that I don't really know which maintainer ask to review it :(


I think this patch needs re-testing (because of my cfglayout changes).
BARRIERs are never inside a basic block, so the patch looks obviously
correct to me. I think you should commit it as such if it passes
bootstrap/testing (preferably on two or three targets, and with
checking enabled of course).

Gr.
Steven