GCC 4.0.2 Status Report

2005-09-07 Thread Mark Mitchell

Simply put, it's time for another GCC 4.0.x release.

There are 48 critical bugs open against 4.0.1 and nearly 200 
regressions.  I've not done a complete triage, so I can't say how many 
of these might be incorrectly targeted.  Fewer than 20 are wrong-code, 
which is still more than I'd like, but not completely unacceptable.


I don't see anything that makes me think that the current 4.0.x branch 
would be markedly worse than 4.0.1, and, clearly, a lot of bugs have 
been fixed since 4.0.1.  Therefore, I intend to make a 4.0.2 RC1 this 
weekend, unless subsequent triaging, or a heads-up from someone, forces 
me to conclude that something about 4.0.2 would make it unacceptable to 
current 4.0.1 users.


There's no special freeze for the 4.0 branch at this point; we'll leave 
it in regression-fixes only mode.  The branch will freeze when I create 
the first release candidate.


--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


RE: Extending functionality of -frepo

2005-09-07 Thread Noe Aljaz ITICMN
> Normal compiles instantiate items as determined by the 
> database.  It is a known waste of compile time to not so 
> instantiate such things, as we know they will be 
> instantiated.  So, the entire concept doesn't make much sense 
> to me, unless you are only interested in the speedup 
> from those units that don't use any definitions.  Now, if we 
> know that no definition is needed in a file, having it 
> disappear the definitions is enticing, but after you 
> benchmark it against pch, I suspect you'll wonder why you bothered.

Maybe... I think the 'big_header', which is required by template
definitions, has a big impact here. And I find it hard to believe that
excluding a big chunk of code from compilation results in no speed-up.
Even when using precompiled headers, .pch files can get pretty big, and
they must still be loaded, which takes time. Furthermore, the compiler must
keep all otherwise unneeded declarations/definitions in memory,
which (I suppose) also takes some time. But (potential) compilation
speed-up is just a side effect of this proposal. I apologize for
overstating these speed benefits.

The main point of this idea was to come up with a simple way to separate
compilation of the code generated by template instantiations and their
usage as much as possible (without actually implementing export).

> > 1. Code in foo.cpp normally compiles without including big_header.h 
> > (from bar.tpp). This makes it impossible for foo.cpp to become 
> > inadvertently dependent on the code from big_header.h.
> 
> One can already reap this benefit by the #ifdef/-D during 
> compilation, so I don't see any benefit.

True. I've achieved (almost) the same effect by using sed on .rpo files
(to add -D) and implementing proper dependencies in the makefile. But
such an approach is 'messy' and not practical for larger projects.

> > 3. Definitions for the bar class template and the big_header are only 
> > compiled by collect2 during template instantiations. If bar is 
> > commonly used in the code, compilation times would be significantly 
> > reduced.
> 
> Reduced when compared to pch?  With pch, all the common 
> things are instantiated once, and only once for the entire 
> project build.  I suspect compilation time would not be reduced.

I agree. PCH can be a big help here. But keep in mind that the size of
the pch file also matters. And including big_header (which you did not
want to include in the first place) increases the pch file size.



The main question, I guess, is: how difficult is it to implement this
-frepo2 functionality?

If it is relatively simple and a patch is made, we can test all we want
and then decide whether it is worth having it in the compiler or not.

If it is difficult to implement, I believe it is not worth discussing
further, since the benefits of having this feature would probably not
outweigh the implementation effort.

Currently, despite my lack of compiler-writing experience, I still
believe the implementation should be trivial. Can some gcc guru confirm
or refute this?


GCC 4.1 Status Report (2005-09-06)

2005-09-07 Thread Mark Mitchell
Since August 21st, when I sent my last status report, we've reduced the 
number of bugs targeted at 4.1 from 271 to 250; about a bug a day.  77 
of these bugs are wrong-code, ice-on-valid-code, or rejects-valid, down 
from 91.  So, that suggests that the net progress is mostly coming from 
fixing the critical bugs, which is good.  24 open bugs are of the 
most-nasty wrong-code category.


So, my tentative conclusion is that the number of really nasty bugs 
isn't very bad, but that there's a lot of overall bugginess streaming 
in, probably just due to the large volume of changes.


I still think we need to drive the numbers lower before we branch.  I 
know everyone's eager to start on 4.2, but I bet we can fix a lot of 
these bugs with relatively small amounts of effort, if we focus on those 
problems.


--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Richard Sandiford
Further to Eric's good advice:

Do you already have access to a mips64 version of glibc?  (You said in
your message that you'd tried a native build, but I wasn't sure whether
that was using a 32-bit OS or a 64-bit OS)

It's tricky to build a toolchain and glibc in tandem (i.e. when neither
is available beforehand), and like Eric says, crosstool is probably the
best thing to use for that.  (Although to be honest, I don't use crosstool
myself, so I'm not 100% sure whether it supports mips64 or not.)

On the other hand, if you already have access to a 64-bit glibc, you
should be able to build the toolchain in the same way as any other
version of gcc.  So if you do have access to a 64-bit glibc: which
version(s) of gcc are you trying to build, and what errors do you get?

Richard


Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Jonathan Day
I don't have a mips64 version of glibc, I'm having to
build the entire toolchain from the ground up. (Yuck.)
I'm trying to build the entire toolchain as 64-bit
native, which is adding to my problems.

(Crosstool, for example, only supports 32-bit MIPS -
and even then the build matrix is a pretty shade of
red for the most part.)

I tried building GCC 4.0.1, glibc 2.3.5 and binutils
2.16.1 as per the Linux-MIPS guide. GCC built to the
bootstrap version, which I then installed. The
compiler could not then be used to compile anything,
though - crt1.o and crti.o were missing in action.

I repeated the procedure using the versions of GCC,
glibc and binutils in CVS. I get the same error.
Again, if someone has these files for a recent
toolchain for the SB1-Linux-Gnu target, I'd truly
appreciate a copy as that may be sufficient to make
some real progress.

Now, although I do not have 64-bit tools for MIPS, I
do have 32-bit tools. So, as a backup plan I tried to
build a cross-compiling 64-bit toolchain for the MIPS
box - my plan being to then compile a "native" 64-bit
toolchain with the cross-compiler, so I could then
kick the 32-bit tools off the system entirely.

Binutils is protesting this plan, complaining that
some of the object files are in the wrong format.
(I'll post actual error messages when I go into work
in about 6 hours.)

The bad news is that this might be a valid complaint.
One of the reasons I want to eliminate the old
toolchain completely is that a number of programs
(such as OpenMPI) have complained that the assembler
is producing buggy code. I'm hoping it is good enough
to compile a replacement, but it might not be.

If it isn't, then I will definitely need the
cross-compiler hosted on the Intel Linux box -OR-
known good MIPS64 binaries.

Now we get to the other problem. Binaries. Broadcom's
binaries are 32-bit and compiled against a Red Hat 7.2
distribution. The distribution might not matter - it
depends on what is assumed from that environment and
what external libraries were used. The 32-bit part is
a bigger problem, unless I can use it to generate a
proper 64-bit toolchain. It looks like it is also only
a bootstrap compiler, as they say on their SiByte
webpage that it won't compile Linux applications and
is only good for firmware development.

There's also a Debian MIPS distribution. Also 32-bit.
Probably a little more complete than a firmware
compiler and definitely one of the better options.

MIPS.com has a toolchain. For Red Hat 7.1 and glibc
2.2.3. It would take a few rounds to bring it up to
date. The fact that neither they nor Broadcom have
anything more recent disturbs me a bit as the thought
that springs to my mind (given the headaches I've had
so far) is that maybe they haven't been able to.

After giving the crosstools another go, and seeing if
I can get the Debian packages installed, I'm going to
see if OpenEmbedded offers any possibilities. Gentoo
would be nicer, as that doesn't need a pre-existing
system, but Gentoo doesn't do MIPS64 distributions.

Jonathan Day





Re: var_args for rs6000 backend

2005-09-07 Thread Yao Qi qi

From: Ian Lance Taylor 
To: "Yao qi" <[EMAIL PROTECTED]>
CC: gcc@gcc.gnu.org
Subject: Re: var_args for rs6000 backend
Date: 06 Sep 2005 11:05:38 -0700

"Yao qi" <[EMAIL PROTECTED]> writes:


These are partially documented in gcc/doc/tm.texi.  Unfortunately the
documentation is not particularly good.

These functions are all oriented around C stdarg.h functions.  In
brief, setup_incoming_varargs is called for a function which takes a
variable number of arguments, and does any required preparation
statements.  build_builtin_va_list builds the va_list data type.
va_start is called to implement the va_start macro.  gimplify_va_arg
is called to implement the va_arg macro.


Thanks for your detailed explanation, from which I got a general picture
of these routines.  I think setup_incoming_varargs is the starting point,
so I will start my work from this function.  I divided
setup_incoming_varargs() into *three* logical parts, on the condition
that the default ABI is ABI_V4.  If I have made any mistakes here, feel
free to tell me.


1. Deal with all the named arguments.
2. Calculate the number of GPRs needed and generate RTX for copying GPRs
   to MEM.
3. Generate RTX for copying FPRs to MEM.

I met some problems when I went further, so I want to describe the
symptoms here before I continue with my question.

New data types were added into GCC, as well as new modes.  Argument
passing and return values work well, but when I pass arguments of these
new variable-length types, and return a certain argument from the
parameter list, the value is incorrect on PowerPC but correct on x86.
So I think it has something to do with the PowerPC backend.

Now, the most difficult thing for me is locating the error.  I do not
know which part of the PowerPC backend caused this problem.  I dumped
the test case in RTL form with the option -dr, and in assembly form, but
neither is informative enough to expose the error.


This "bug" may be relative to argument passing, stack alloction, new modes 
and register allocation, but I do not know how to get enought information 
about these fields.  Could anybody give me some suggestions about 
information collection for them?


I am lack of experience in gcc development, any comments about it are highly 
appreicate.

> I do not know what the preconditions are if I want to do it.  Do I
> need to know the PowerPC architecture and its ABI?

You do indeed need to know the PowerPC ABIs to understand what these
routines are doing and why.

Got it, thanks for your advice here.


Ian



Best Regards

Yao Qi
Beijing Institute of Technology





Re: GCC 4.0.2 Status Report

2005-09-07 Thread Paolo Bonzini


There's no special freeze for the 4.0 branch at this point; we'll leave 
it in regression-fixes only mode.  The branch will freeze when I create 
the first release candidate.


Some of your C++ fixes have been quite invasive.  Maybe it's too much 
haste to spin the rc before the bugs can be detected?


(I think it would have been good to apply them to 4.0 a week or two 
after they've been in mainline, btw).


Paolo


[4.2 projects] vectorization enhancements

2005-09-07 Thread Dorit Naishlos




Planned vectorization enhancements for 4.2:

1. Recognize reduction patterns (Dorit).
  Some computations have specialized target support and can be
vectorized more efficiently if the computation idiom is recognized and
vectorized as a whole.  This is especially true for idioms that involve
multiple types, since multiple types require packing/unpacking of vector
elements unless the entire pattern is recognized.  Examples of such
patterns are summation into a result wider than the arguments ("widening
sum"), dot product, sum of absolute differences, and more.  This project
will include (1) a pattern recognition engine, to be used for patterns
that the vectorizer can benefit from; (2) functions to recognize
reduction patterns; (3) extending the current reduction support to
handle reduction patterns; and (4) more patterns that are not related to
reduction, e.g. saturation.

* Delivery Date: Stage 1 of 4.2. Most of the above already implemented, and
most of that is already in autovect-branch.
* Benefits: More loops vectorized.

2. Vectorize interleaved data (Ira).
  Currently the vectorizer supports only computations with stride 1
(consecutive data elements). Some important computations access data with
stride other than 1 - for example complex data with the real and imaginary
parts interleaved - the stride in this case is 2. We want to extend the
vectorizer to support these computations. For that we will also need to
introduce new tree-codes/optabs.

* Delivery Date: Stage 2 of 4.2.
* Benefits: More loops vectorized.

3. Vectorize in the presence of multiple data types (Dorit).
  Currently the vectorizer supports loops that operate on a single data
type. In particular, the vectorizer doesn't support type casts, which in
vectorized form require packing/unpacking of data elements between vectors.
We want to extend the vectorizer to handle type conversions. This will
require introducing some of the new tree-codes/optabs we discussed last
year.

* Delivery Date: Stage 2 of 4.2
* Benefits: More loops vectorized.


Not sure when the rest of the items will be ready, and if they'll make it
for 4.2, but it's high on our todo list:

4. Vectorization of induction (Dorit).
  The vectorizer currently doesn't support vectorization of induction,
e.g. a[i] = i. We want to extend the vectorizer to handle such
computations. We already have some of the required steps implemented as
part of the reduction support.

* Delivery Date: unknown
* Benefits: More loops vectorized.

5. Versioning for aliasing (Dorit/Ira)
  It is often difficult/impossible to prove that two data-references in
the loop don't overlap (e.g. when they are accessed using pointers). It is
still possible to vectorize such loops using runtime dependence checks,
much like the runtime alignment checks that were recently committed to
mainline. I.e., use loop versioning and guard the vectorized version with a
runtime aliasing test.

* Delivery Date: unknown
* Benefits: More loops vectorized.

6. Cost model (Dorit/Ira).
  We currently vectorize whenever we can.  This can often hurt
performance, for example if the loop is very short, because of the
overheads involved in vectorization (e.g. alignment handling, loop
peeling, and epilog code for reduction).  We also need the cost model to
decide how to vectorize; for example, there are different ways we can
handle alignment (versioning, peeling, misaligned vector accesses).  We
want to estimate the costs involved in vectorization and decide, based
on that, whether to vectorize or not, and how.

* Delivery Date: unknown
* Benefits: Improved performance when vectorizing.

7. Misaligned stores (Dorit/Ira).
  We currently don't handle misaligned stores. Instead we peel the loop
to force the alignment of the store. This works only for one misaligned
store; if there's more than one misaligned store and we can't prove that
all the stores in the loop have the same misalignment, we can't vectorize
the loop. We want to add the capability to vectorize misaligned stores.

* Delivery Date: unknown
* Benefits: More loops vectorized.

Personnel

* Dorit Nuzman
* Ira Rosen

Dependencies

None.

Modifications Required

All modifications are local to the vectorizer pass, except for adding
new tree-codes and optabs for the new patterns and misaligned stores.

dorit



Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Paul Koning
> "Jonathan" == Jonathan Day <[EMAIL PROTECTED]> writes:

 Jonathan> Hi, I am trying to compile a toolchain for a Broadcom SB1
 Jonathan> processor in big-endian mode with a host Operating System
 Jonathan> of Linux. (The SB1 is a MIPS64, but there is also a
 Jonathan> specific SB1 target.) So far, I'm running into error after
 Jonathan> error when attempting to build either directly on the board
 Jonathan> or attempting to build a cross-compiler (with a host of a
 Jonathan> Pentium 4 also running Linux). I've run through the various
 Jonathan> FAQs, HOWTOs and automated scripts, but they either don't
 Jonathan> do the combination I want or they won't work with the
 Jonathan> combination I want.

 Jonathan> A query on the MIPS mailing list produced a depressing
 Jonathan> result - one reply, saying they weren't sure it could even
 Jonathan> be done.

That's a bizarre answer.

 Jonathan> The various Linux distros for MIPS architectures were not a
 Jonathan> whole lot better - the most recent were 32-bit and even
 Jonathan> those weren't that recent. The newest MIPS64 binary of GCC
 Jonathan> I could find was back in the 2.95 era.

 Jonathan> My question is simple enough - has anyone built a toolchain
 Jonathan> for a MIPS64-Linux-GNU target? I noticed someone posting
 Jonathan> recently that they'd conquered this problem for MIPS64
 Jonathan> under the IRIX OS, so I'm guessing it must be possible
 Jonathan> under Linux.

I haven't built that one.

I have built mips64-elf and mips32-netbsdelf from source, for 3.3.3,
3.4.0, and 4.0.0 version sources.  No problems -- though admittedly I
was working from a crosstools build script created and debugged by one
of my colleagues.

   paul



Re: Language Changes in Bug-fix Releases?

2005-09-07 Thread Paul Koning
> "Mike" == Mike Stump <[EMAIL PROTECTED]> writes:

 Mike> On Sep 6, 2005, at 6:16 PM, Gabriel Dos Reis wrote:
 >> wrong-code generation that was fixed.

 Mike> Customers validate their app and are `happy' with the code
 Mike> generation, so this appears to not be a real issue.  Failure
 Mike> to compile their app to them feels slightly more real.

The problem with "wrong code" bugs is that they are hard to find in
application testing.  It may be that customers are happy because their
testing has failed to uncover the bug, rather than that the bug hasn't
hit their application.

If they are happy with Vx.y.z, one obvious answer is "so don't
upgrade".  That's the normal embedded systems answer -- just because
your vendor is shipping version N+5 is no reason to move from version
N. 

   paul



Re: GCC 4.1 Status Report (2005-09-06)

2005-09-07 Thread DJ Delorie

> Since August 21st, when I sent my last status report, we've reduced the 
> number of bugs targeted at 4.1 from 271 to 250; about a bug a day.

On the gcc home page, we have a (now obsolete) link to the latest
status.  We also have a link to the definition of "stage 3".  Could we
add a direct link to the appropriate bugzilla query that tells us what
still needs to be fixed?  Perhaps off the words "bug fixes"?

We're also missing the status link for the 4.0.1 series.


Re: GCC 4.0.2 Status Report

2005-09-07 Thread Mark Mitchell

Paolo Bonzini wrote:


There's no special freeze for the 4.0 branch at this point; we'll 
leave it in regression-fixes only mode.  The branch will freeze when I 
create the first release candidate.



Some of your C++ fixes have been quite invasive.  Maybe it's too much 
haste to spin the rc before the bugs can be detected?


I think that's an overstatement.

The only one that worries me very much is the static data member change, 
and I'm much less worried now that we've dealt with a couple of problems.


The other one which I flagged when posting it is the 
statement-expression change, but that one is actually much more 
localized.  It's possible something's wrong with it, but, fundamentally, 
it's much simpler, and more solid, than what was there before.


We've got a fair amount of time between now and any actual release.  It 
will have been a week since the static data member patch before we do 
RC1, and then, well, we'll see what the pre-release testing shows.


(I think it would have been good to apply them to 4.0 a week or two 
after they've been in mainline, btw).


Thanks for the feedback.  I think your comments reflect some 
C++-centricity, as more dangerous patches go into the release branches 
routinely.  Doing as you suggest adds rather a lot of overhead and, to 
the extent that the patches might cause problems, applying them promptly 
gets them exposed more immediately for testing.  Even on release 
branches we expect to have variations in quality; that's why we freeze 
in the period immediately before the release.


--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


unexpected link behaviour g++-4.0.1 on hppa2.0w-hp-hpux11.00

2005-09-07 Thread Rainer Emrich

I got an unexpected link behaviour linking statically.

linking dynamically:

g++ -Wall -g0 -O3 -o batch_mesh_1 batch_mesh_1.o
-L/raid/tecosim/it/devel/install-test/hp/lib -ltecosim -lteclic

everything works fine and ldd batch_mesh_1 gives:
 =>
/usr/lib/libc.2 =>  /usr/lib/libc.2
/usr/lib/libdld.2 =>/usr/lib/libdld.2
/usr/lib/libc.2 =>  /usr/lib/libc.2
/usr/lib/libm.2 =>  /usr/lib/libm.2


linking statically:

g++ -Wall -g0 -O3 -static -o batch_mesh_1 batch_mesh_1.o
-L/raid/tecosim/it/devel/install-test/hp/lib -ltecosim -lteclic
/usr/ccs/bin/ld: Unsatisfied symbols:
   pthread_once (first referenced in
/appl/shared/gcc/HP-UX/hppa2.0w-hp-hpux11.00/gcc-4.0.1-test/lib/libstdc++.a(locale_init.o))
(code)
   pthread_key_create (first referenced in
/appl/shared/gcc/HP-UX/hppa2.0w-hp-hpux11.00/gcc-4.0.1-test/lib/libstdc++.a(eh_globals.o))
(code)
   pthread_setspecific (first referenced in
/appl/shared/gcc/HP-UX/hppa2.0w-hp-hpux11.00/gcc-4.0.1-test/lib/libstdc++.a(eh_globals.o))
(code)
   pthread_mutex_unlock (first referenced in
/appl/shared/gcc/HP-UX/hppa2.0w-hp-hpux11.00/gcc-4.0.1-test/lib/libstdc++.a(locale_init.o))
(code)
   pthread_getspecific (first referenced in
/appl/shared/gcc/HP-UX/hppa2.0w-hp-hpux11.00/gcc-4.0.1-test/lib/libstdc++.a(eh_globals.o))
(code)
   pthread_mutex_lock (first referenced in
/appl/shared/gcc/HP-UX/hppa2.0w-hp-hpux11.00/gcc-4.0.1-test/lib/libstdc++.a(locale_init.o))
(code)

Here I have to add -lpthread manually, then the link succeeds:
g++ -Wall -g0 -O3 -static -o batch_mesh_1 batch_mesh_1.o
-L/raid/tecosim/it/devel/install-test/hp/lib -ltecosim -lteclic -lpthread

and ldd batch_mesh_1 gives:
 =>
/usr/lib/libdld.2 =>/usr/lib/libdld.2
/usr/lib/libc.2 =>  /usr/lib/libc.2
/usr/lib/libdld.2 =>/usr/lib/libdld.2


Compiler version: 4.0.1
Platform: hppa2.0w-hp-hpux11.00
configure flags:
--prefix=/SCRATCH/gcc-build/HP-UX/hppa2.0w-hp-hpux11.00/install
--with-gnu-as
--with-as=/SCRATCH/gcc-build/HP-UX/hppa2.0w-hp-hpux11.00/install/bin/as
--with-ld=/usr/ccs/bin/ld --enable-threads=posix --disable-shared
--disable-nls --with-gmp=/appl/shared/gnu/HP-UX/hppa2.0w-hp-hpux11.00
--with-mpfr=/appl/shared/gnu/HP-UX/hppa2.0w-hp-hpux11.00
--enable-languages=c,c++,f95,java,objc

binutils:
binutils-2.16.1

System:
HP-UX c3600-1 B.11.00 A 9000/785 unknown unknown HP-UX

Any comments?

Rainer

--
Rainer Emrich
TECOSIM GmbH
Im Eichsfeld 3
65428 Rüsselsheim

Phone: +49(0)6142/8272 12
Mobile: +49(0)163/56 949 20
Fax.:   +49(0)6142/8272 49
Web: www.tecosim.com


Re: existing functionality questions

2005-09-07 Thread Michael Tegtmeyer
> M-x grep access cp/*.[ch] will show you the existing methods of access
> control.  lookup_member would be a useful routine to set a breakpoint
> on and watch how it does it as well.

Thanks for the reply.  This is a static analysis pass, so am I wrong in 
thinking that most of the functionality provided in cp/ is unnecessary? 
For example, the front end already took care of whether an access to a 
member variable was legal; what about other member variables?

> If that doesn't answer your question, I predict there exists another
> question which you'd love to ask us, but haven't; try asking that one
> instead (I call this the real question).


Actually, that was the real question, nothing more. I need to be able to 
determine what member fields of an object passed to a function are visible 
to that function during an optimization pass. Is there existing 
functionality somewhere to do that?


Thanks again for the help,
Mike


Re: var_args for rs6000 backend

2005-09-07 Thread Ian Lance Taylor
"Yao Qi qi" <[EMAIL PROTECTED]> writes:

> New data types were added into GCC as well as new modes.

It might help if you give a brief overview of what you are trying to
do (maybe you already have, and I forgot).  Also, I assume you are
working with mainline gcc.

> Argument
> passing and
> return value works well, but when I pass  arguments of these new types
> of variable length,
> and return a certain arguments in parameters list, the value is
> incorrect on PowerPC, but
> works well on x86.  So I think it has something to do with PowerPC backend.

Are you saying that you have new types which have variable length?
Those are certainly trickier to handle.  Standard C and C++ do not
pass any variable length types as arguments, so it's not surprising
that it will take some work.

PowerPC argument passing is significantly different from x86 argument
passing because on the x86 all arguments are passed on the stack
(modulo the -mregparm option which you are probably not using).  On
the PowerPC arguments are mostly passed in registers, overflowing to
the stack, and you need to worry about three different types of
registers: general registers, floating point registers, and vector
registers.

> Now, the most difficult thing for me is error location.  I do not
> which part of PowerPC backend caused this problem.  I dumped the test
> case in RTL form with option -dr and in assembly form, but both of
> them are not informative enough to expose the error.
> 
> This "bug" may be relative to argument passing, stack alloction, new
> modes and register allocation, but I do not know how to get enought
> information about these fields.  Could anybody give me some
> suggestions about information collection for them?

There aren't any particularly helpful debugging options here.  You
should know that this code often interacts with the function prologue
code, which is not inserted until the flow2 pass, so to see those
instructions in RTL you need -dw.

I often find it necessary to add debugging prints to these functions
to show where parameters are found in registers and/or on the stack,
and what kind of thing va_arg returns.  These prints most conveniently
take the form of
  if (dump_file)
    fprintf (dump_file, ...);
to appear in the relevant file dumped by -da.

Ian


Re: GCC 4.0.2 Status Report

2005-09-07 Thread Andrew Pinski


On Sep 7, 2005, at 11:21 AM, Mark Mitchell wrote:


Paolo Bonzini wrote:
There's no special freeze for the 4.0 branch at this point; we'll 
leave it in regression-fixes only mode.  The branch will freeze when 
I create the first release candidate.
Some of your C++ fixes have been quite invasive.  Maybe it's too much 
haste to spin the rc before the bugs can be detected?


I think that's an overstatement.

The only one that worries me very much is the static data member 
change, and I'm much less worried now that we've dealt with a couple 
of problems.


But there are still issues; PR 23691 is one of them, which breaks boost.


-- Pinski



Re: existing functionality questions

2005-09-07 Thread Daniel Berlin

> Actually, that was the real question, nothing more. I need to be able to 
> determine what member fields of an object passed to a function are visible 
> to that function during an optimization pass. Is there existing 
> functionality somewhere to do that?

All of them, assuming you have a pointer to an object (since you can
cast it however you like, unfortunately. We've discussed this before).

If you just have a regular object passed by value, the fields accessible
are those in TYPE_FIELDS of the type of the object, and those fields
reachable through types in the TYPE_BINFOS (i don't remember whether we
represent access control in binfos)

--Dan



RE: var_args for rs6000 backend

2005-09-07 Thread Meissner, Michael
There was also a PowerPC NT ABI at one point, but since Windows NT on
PowerPC was stillborn, it was removed.

My point was that if you are working on the ABI functions, you need to make
sure that the other ABIs (AIX, Darwin) don't get broken by any changes
you make (presumably you will make sure that you don't break the ABI you
are working on).  There are some subtle differences by the way between
the System V (aka Linux) and eABI as well (stack alignment, number of
registers for small data area), but most of those don't show in the ABI
functions you are looking at.

-Original Message-
From: Yao Qi qi [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 06, 2005 11:14 PM
To: Meissner, Michael
Cc: gcc@gcc.gnu.org
Subject: RE: var_args for rs6000 backend


>From: "Meissner, Michael" <[EMAIL PROTECTED]>
>To: "Yao qi" <[EMAIL PROTECTED]>
>CC: gcc@gcc.gnu.org
>Subject: RE: var_args for rs6000 backend
>Date: Tue, 6 Sep 2005 14:13:56 -0400
>
>And note Yao qi, that there are different ABIs on the rs6000, each of
>which has different conventions (ie, you will need to study the AIX ABI
>as well as the System V/eabi ABIs, and possibly other ABIs that are now
>used).

First, thanks for your suggestions.

Yes, I found there are at least *three* ABIs in
gcc/config/rs6000/rs6000.c,

205 /* ABI enumeration available for subtarget to use.  */
206 enum rs6000_abi rs6000_current_abi;

And in gcc/config/rs6000/rs6000.h, I found the definition:

   1223 /* Enumeration to give which calling sequence to use.  */
   1224 enum rs6000_abi {
   1225   ABI_NONE,
   1226   ABI_AIX,  /* IBM's AIX */
   1227   ABI_V4,   /* System V.4/eabi */
   1228   ABI_DARWIN /* Apple's Darwin (OS X kernel) */
   1229 };

I just have to concentrate on ABI_V4 if I work on gcc development on 
powerpc-linux, am I right?
I have traced cc1 and found DEFAULT_ABI in setup_incoming_varargs() is 
ABI_V4.

Best Regards

Yao Qi
Bejing Institute of Technology






Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Ian Lance Taylor
Jonathan Day <[EMAIL PROTECTED]> writes:

> My question is simple enough - has anyone built a
> toolchain for a MIPS64-Linux-GNU target?

Yes, I did, last year.

But I did it through a tedious iterative process--build the binutils,
build the compiler until it fails building libgcc, install parts of
it, build glibc enough to install the header files (with the kernel
header files there too), build the rest of the compiler, build the
rest of glibc.  Various things broke along the way and needed
patching.  I think it took me about a week, interspersed with other
things.

It's an interesting exercise for people who want to really learn how
all these tools work together.  If your main interest is in actually
getting something done, it's bound to be rather frustrating.

Sorry I can't be more helpful, except to say that it is indeed
possible.

Ian


Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Andrew Haley
Ian Lance Taylor writes:
 > Jonathan Day <[EMAIL PROTECTED]> writes:
 > 
 > > My question is simple enough - has anyone built a
 > > toolchain for a MIPS64-Linux-GNU target?
 > 
 > Yes, I did, last year.
 > 
 > But I did it through a tedious iterative process--build the binutils,
 > build the compiler until it fails building libgcc, install parts of
 > it, build glibc enough to install the header files (with the kernel
 > header files there too), build the rest of the compiler, build the
 > rest of glibc.  Various things broke along the way and needed
 > patching.  I think it took me about a week, interspersed with other
 > things.
 > 
 > It's an interesting exercise for people who want to really learn how
 > all these tools work together.  If your main interest is in actually
 > getting something done, it's bound to be rather frustrating.
 > 
 > Sorry I can't be more helpful, except to say that it is indeed
 > possible.

I wonder if Angela et al.'s magic script that automates all this stuff
is still around somewhere.  Maybe it'll still work, or maybe someone
has an up-to-date copy.

Andrew.



Re: existing functionality questions

2005-09-07 Thread Michael Tegtmeyer

> If you just have a regular object passed by value, the fields accessible
> are those in TYPE_FIELDS of the type of the object, and those fields
> reachable through types in the TYPE_BINFOS (i don't remember whether we
> represent access control in binfos)


Ah, I guess I am not actually wording this correctly:) I am able to get to 
all of the fields without any problem. I guess what I mean by visible is 
with regard to public vs private and context. In my example:


class A {
  public:
int pub_var;

    void foo(/* implicit this */) {...}

  private:
int private_var;
};


void bar(A *a) {...}


In a call to foo(), the implicit this pointer has type A with fields 
pub_var and private_var (both accessible with TYPE_FIELDS). The same is 
true for bar. In each case, TREE_PRIVATE is true on private_var. foo() can 
obviously access private_var because foo() is a member function, whereas 
bar() cannot. This is the visibility that I'm referring to. Currently, 
I'm just comparing the DECL_CONTEXT of the argument and the context of the 
function, but it breaks when it comes to inheritance.


Is there a better way of going about this?

Thanks again,
Mike


Re: existing functionality questions

2005-09-07 Thread Daniel Berlin
On Wed, 2005-09-07 at 13:25 -0400, Michael Tegtmeyer wrote:
> > If you just have a regular object passed by value, the fields accessible
> > are those in TYPE_FIELDS of the type of the object, and those fields
> > reachable through types in the TYPE_BINFOS (i don't remember whether we
> > represent access control in binfos)
> 
> Ah, I guess I am not actually wording this correctly:) I am able to get to 
> all of the fields without any problem. I guess what I mean by visible is 
> with regard to public vs private and context. In my example:
> 
> class A {
>public:
>  int pub_var;
> 
>  void foo(/* implicit this */) {...}
> 
>private:
>  int private_var;
> };
> 
> 
> void bar(A *a) {...}
> 
> 
> In a call to foo(), the implicit this pointer has type A with fields 
> pub_var and private_var (both accessible with TYPE_FIELDS). The same is 
> true for bar. In each case, TREE_PRIVATE is true on private_var. foo() can 
> obviously access private_var because foo() is a member function, whereas 
> bar() cannot. This is the visibility that I'm referring to. Currently, 
> I'm just comparing the DECL_CONTEXT of the argument and the context of the 
> function, but it breaks when it comes to inheritance.
> 
> Is there a better way of going about this?

The middle end does not know language rules about access control, being
language independent, so no.  We only keep the info around for debug
info generation purposes.
If you want to try to emulate language-specific access control rules in
the middle-end IR for some reason, then you are going to have to do it
the hard way.

> 
> Thanks again,
> Mike



Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread David Daney

Andrew Haley wrote:

Ian Lance Taylor writes:
 > Jonathan Day <[EMAIL PROTECTED]> writes:
 > 
 > > My question is simple enough - has anyone built a
 > > toolchain for a MIPS64-Linux-GNU target?
 > 
 > Yes, I did, last year.
 > 
 > But I did it through a tedious iterative process--build the binutils,
 > build the compiler until it fails building libgcc, install parts of
 > it, build glibc enough to install the header files (with the kernel
 > header files there too), build the rest of the compiler, build the
 > rest of glibc.  Various things broke along the way and needed
 > patching.  I think it took me about a week, interspersed with other
 > things.
 > 
 > It's an interesting exercise for people who want to really learn how
 > all these tools work together.  If your main interest is in actually
 > getting something done, it's bound to be rather frustrating.
 > 
 > Sorry I can't be more helpful, except to say that it is indeed
 > possible.

I wonder if Angela et al.'s magic script that automates all this stuff
is still around somewhere.  Maybe it'll still work, or maybe someone
has an up-to-date copy.


Dan Kegel's crosstool does it for many different platform/tool version 
combinations (or so I have heard).  I think the problem is that it (and 
other solutions like it) have ad hoc hacks/patches for each combination 
to make it work, and perhaps that mips64-linux-gnu is not well supported.


I did similar with mipsel-linux-gnu using headers lifted (and hacked) 
from glibc on i686-pc-linux-gnu as a starting point.


There is a definite chicken-and-egg problem here.  But once you have a 
working toolchain you never suffer from the problem again.  The result 
is that there is no motivation to solve it once you know enough to fix it.


David Daney


Re: Status of --enable-mapped-location

2005-09-07 Thread Per Bothner

Andrew Pinski wrote:
> Does anyone know of the status of --enable-mapped-location?  I tried to
> do a bootstrap and test and I got a lot of failures due to getting the
> wrong line number and file for the error message when dealing with macros.


I took a look.  The status doesn't seem to have changed much
- for good or ill.

There is a new set of failures from builtin-stringop-chk-1.c,
which is a new testcase.  These are the kind you mention.
I believe the issue is that --enable-mapped-location gives
us more precise error locations based on an individual token
rather than at the statement level.  If the token associated
with the error comes from a macro expansion text, then we
will see the location in the macro definition.

I don't think the implemented behavior is *wrong*, but it
probably isn't the most *useful*.

Ideally it would be nice to show the "macro nesting" just
like we do for "include nesting".  I.e. add "In macro expanded at"
lines.  That shouldn't be very difficult with the line-map
framework, though it would require more line_map entries in
the line_table.  For that reason, and because it might be
"too much information", perhaps it shouldn't be the default.

The default should probably be that positions from macro expansion
text should be replaced by the position of the macro call.  I don't
know how easily that is available, or if we need a new data structure.
Of course tokens that come from the application site (i.e. macro
parameters) should get the location of the application site, as I
believe the code currently does.  Finally, we have to handle macros
that call macros.

Other errors are pch-related, due to the pch machinery not saving and
restoring the line-table, as discussed in this thread:
http://gcc.gnu.org/ml/gcc/2005-03/msg01318.html

I get a bunch of failures in the libstdc++ tests, for example:
FAIL: 23_containers/map/operators/1_neg.cc  (test for errors, line 209)
The problem is the line number of the "candidates" is garbage.
This is nasty because it appears to be optimization-related: I tried
recompiling without optimization and the problem went away.
Trying to debug an optimized cc1plus I found that diagnostic.location
is trashed when print_z_candidates calls gettext here:
2444  str = _("candidates are:");
Perhaps there is an aliasing bug somewhere?

I really would welcome someone else looking at one or more
of these, as I'm really behind on other (paying) projects.
--
--Per Bothner
[EMAIL PROTECTED]   http://per.bothner.com/


Re: existing functionality questions

2005-09-07 Thread Mike Stump

On Sep 7, 2005, at 8:58 AM, Michael Tegtmeyer wrote:
> Actually, that was the real question, nothing more. I need to be
> able to determine what member fields of an object passed to a
> function are visible to that function during an optimization pass.


Ah, now we get to the start of the real question.


> Is there existing functionality somewhere to do that?


No.

All of them is certainly safe; other answers require digging and
thinking.  A few points to ponder might include: Do you want to know
about fields that are accessed indirectly through implicit/explicit
calls?  Do you want to know what fields are accessed by the compiler
without the control of the user?  Do you want to know what fields are
accessed directly in the body, but indirectly through a pointer to
member?  Do you want to know what fields are accessed directly
through a regular pointer to non-member?




Re: existing functionality questions

2005-09-07 Thread Michael Tegtmeyer
> All of them is certainly safe; other answers require digging and thinking.  A
> few points to ponder might include: Do you want to know about fields that are
> accessed indirectly through implicit/explicit calls?  Do you want to know what
> fields are accessed by the compiler without the control of the user?  Do you
> want to know what fields are accessed directly in the body, but indirectly
> through a pointer to member?  Do you want to know what fields are accessed
> directly through a regular pointer to non-member?



This doesn't need to be that sophisticated. Perhaps oddly, I do not need 
to know what members were *actually* accessed, just what *could have been* 
accessed. So in this case, I do not even need to know what is going on at 
the call site (which obviously simplifies things). Since you pointed me to 
the front end, do you know if there is a simple way of walking the 
inheritance type hierarchy to determine this? Right now I'm just walking 
it through the ..._CONTEXTs, but it seems clumsy, and I'm not sure if it is 
because it really is clumsy or because I'm still learning my way around 
the internals and I don't know what I'm doing.


Mike


Re: Extending functionality of -frepo

2005-09-07 Thread Mike Stump

On Sep 7, 2005, at 12:36 AM, Noe Aljaz ITICMN wrote:

> Maybe... I think the 'big_header', which is required by template
> definitions, has a big impact here. And I find it hard to believe that
> excluding a big chunk of code from compilation results in no speed-up.
> Even when using precompiled headers, .pch files can get pretty big, and
> they must still be loaded,


They do?  Odd, on my platform, we only mmap them.  If one doesn't  
touch them (and nothing else near them), they aren't loaded for me.



> which takes time.


mmap is fairly quick (as compared to compilation speed).

> The main question I guess is: How difficult is it to implement this
> -frepo2 functionality?


Trivial enough, if you want to try it, assuming you want to put the  
#ifndef NO_TEMPLATE_INSTANTIATION into your code manually.


1 line to add an implicit -DNO_TEMPLATE_INSTANTIATION, 5 lines to add  
option to turn it off, 2 lines to add that option to the repo options  
list.


The code to add the template bits that don't have definitions, only  
declarations, would be the most work, but even that should mostly be  
copying already existing code.  About 30-50 lines, I'd guess.


> If it is relatively simple and a patch is made, we can test all we want
> and then decide whether it is worth having it in the compiler or not.


Feel free to do this if you want.  I think we should be able to  
provide enough hints and pointers to allow you to complete the code.


If you want to get started:

  if (warn_deprecated)
cpp_define (pfile, "__DEPRECATED");

c.opt:
Wdeprecated
C++ ObjC++ Var(warn_deprecated) Init(1)
Warn about deprecated compiler features

For adding the flag during collect2, see COLLECT_GCC_OPTIONS in  
collect2.c.


Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Kai Ruottu

David Daney wrote:


Ian Lance Taylor writes:
 > Jonathan Day <[EMAIL PROTECTED]> writes:
 > > My question is simple enough - has anyone built a
 > > toolchain for a MIPS64-Linux-GNU target?
 >
 > Yes, I did, last year.
 >
 > But I did it through a tedious iterative process--build the binutils,
 > build the compiler until it fails building libgcc, install parts of
 > it, build glibc enough to install the header files (with the kernel
 > header files there too), build the rest of the compiler, build the
 > rest of glibc.  Various things broke along the way and needed
 > patching.  I think it took me about a week, interspersed with other
 > things.


 My 'from scratch' method was quite the same a year or two ago. So
when trying to update last July I already had some 'mips64-linux-gnu'
headers and libraries and could use them when updating the GCCs...

> Dan Kegel's crosstool does it for many different platform/tool version
> combinations (or so I have heard).  I think the problem is that it (and
> other solutions like it) have ad hoc hacks/patches for each combination
> to make it work, and perhaps that mips64-linux-gnu is not well supported.


 How one gets the first toolchain made shouldn't have the importance
many people think it has... My opinion (clashing badly with Dan's) is
that the first build has no importance at all, if one knows the basics
for Linux, for compiling and for other newbie-level things, one easily
succeeds to get the first toolchain. What Ian and I did, is mostly based
on 'trivial' understanding like :

 - the headers for 'mips-linux-gnu' can be similar to or even identical
   with those for 'mips64-linux-gnu', so if one has the previous, they
   are at least a very good starting point for getting the right
   headers.

 - searching with 'mips64' in the glibc's 'sysdeps' tree quite easily
   reveals the few places where there could be some different headers
   for 'mips64'

when trying to collect the required "target headers" for the GCC's 'libgcc'
compile.  Using '--disable-shared-libgcc' etc. options in the first GCC
build, enables one to succeed in the 'make all-gcc' with the bare target
headers, and lets one continue with the glibc build...

 My estimate for a 'from scratch' build would be 4 hours if one already
has the required glibc version made for some other Linux, but of course
the 'mips' one in the 'mips64' case would be the best. And if one has
the required newbie-level know-how about GCC, compiling and the Linux
glibc components (crt*.o, libc.so.6, libc.so, ld.so.1 etc.). If one
hasn't, one can cause quite the same situation as those Greenpeace
people who put gasoline into the tank in a diesel car and were then
angry when they got no help in solving the mess they had caused with
their total ignorance... This happened in Finnish Lapland and even the
children here know that using gasoline instead of diesel can be very
dangerous for the engine, or doing vice versa. My experience collected
from the crossgcc list tells that many cross-GCC builders will come to
the roads just as ignorant as the Greenpeacers. They hate cars so why
they would like to learn anything about them?  But why the GCC builders
hate GCC, Linux, glibc etc. and therefore don't want to learn anything
about them, has been an eternal mystery for me...

 Building from absolute scratch can be a challenge for many, but some
can think: "When At Last You Do Succeed, Never Try Again" and never any
more try that... And start to think if there even is any reason for that
during the first toolchain build.  People who build native-GCCs, never
(or very seldom) start from scratch, the target C library is already
installed and one only builds the new binutils and the new GCC when
wanting to produce a "self-made toolchain"...  The same idea works also
with cross-GCCs...

 If one hasn't the target C library, one can always borrow that or
something... A minimal 'glibc-2.3.5' for 'mips64-linux-gnu' probably
takes 1 - 2 Mbytes as a '.tar.gz' package so anyone who has a direct
net connection and has glibc-2.3.5 made for 'mips64-linux-gnu', can
email pack the base stuff and email it... Including myself. One only
needs to have the right 'lazy' attitude and ask someone to send...

> I did similar with mipsel-linux-gnu using headers lifted (and hacked)
> from glibc on i686-pc-linux-gnu as a starting point.
>
> There is a definite chicken-and-egg problem here.  But once you have a
> working toolchain you never suffer from the problem again.  The result
> is that there is no motivation to solve it once you know enough to fix it.


 David seems to have the same "When At Last You Do Succeed, Never Try
Again" attitude which I have...

 Okeydokey, I haven't any clue what the 'mips64-linux-gnu' target SHOULD
be... But I know what will be the result when one builds the toolchain
using the current defaults in GNU binutils, GCC and glibc-2.3.5. Let's
start with binutils, the 'ld -V' may show :

GNU ld version 2.16.91.0.1 20050622
  Supported emulat

Re: Language Changes in Bug-fix Releases?

2005-09-07 Thread Richard B. Kreckel
On 7 Sep 2005, Gabriel Dos Reis wrote:
> Mike Stump <[EMAIL PROTECTED]> writes:
> | I'll echo the generalized request that we try and avoid tightenings
> | on other than x.y.0 releases.
>
> I hear you.  In this specific case, it is worth reminding people that
> the issue is not just an accept-invalid that was turned into
> reject-invalid, but wrong-code generation (in the sense that
> wrong-code was being generated for a *valid* program) that was fixed.

I'm unable to find which wrong-code generation PR was fixed by reading
this thread.  That applies to any of the two examples I posted.

Anyway, as I mentioned: If this broken code was a collateral damage of a
really serious bug, then it would be foolish to complain.  It's just that
I'm having difficulties imagining how accepting a friend declaration as a
forward declaration (which by the way worked since at least GCC 2.7.x) can
make your code accidentally fire that ballistic rocket.  (If it really
can, then you're having a truck load of other problems besides code
quality.)

Saludos
  -richy.
-- 
Richard B. Kreckel




Re: Question regarding compiling a toolchain for a Broadcom SB1

2005-09-07 Thread Kai Ruottu

Kai Ruottu wrote:


 How one gets the first toolchain made shouldn't have the importance
many people think it has... My opinion (clashing badly with Dan's) is
that the first build has no importance at all, if one knows the basics
for Linux, for compiling and for other newbie-level things, one easily
succeeds to get the first toolchain. What Ian and I did, is mostly based
on 'trivial' understanding like :


and so on...

 Please believe me, I saw my thoughts once again flying wildly, and my
aim was to trim the message to be a little shorter. But somehow I
clicked "Send" instead of putting it into Drafts.


Re: Possible bug in tree-vrp.c:vrp_visit_phi_node

2005-09-07 Thread Jeffrey A Law
On Mon, 2005-09-05 at 17:39 -0400, Richard Kenner wrote:
> Suppose lhs_vr is [64,64] and vr_result is ~[0,0].  It would seem
> that the code near the end of this function will malfunction.
> 
> Shouldn't the test be that both lhs_vr *and* vr_result are VR_RANGE?
> 
> This is causing an ACATS failure in c45651a and possibly others.
I don't necessarily see how it would cause an incorrect code
malfunction.  I did twiddle this code locally to allow tree-vrp.c
to better track anti-ranges, particularly singleton anti-ranges
such as ~[0,0].

Basically in the cases I was seeing, LHS_VR was a range and
VR_RESULT was a singleton anti-range.  We would ultimately create
a result range which looked like [-INF,+INF], which we could
interpret as VARYING.

Are you getting something different?

Jeff




Re: existing functionality questions

2005-09-07 Thread Mike Stump

On Sep 7, 2005, at 12:19 PM, Michael Tegtmeyer wrote:

> This doesn't need to be that sophisticated.


So, the answer can be wrong and code generation won't be wrong?  I  
don't know what you mean by *could have been* accessed.  I don't even  
know what you mean by member.


> So in this case, I do not even need to know what is going on at the
> call site (which obviously simplifies things). Since you pointed me
> to the front end, do you know if there is a simple way of walking
> the inheritance type hierarchy to determine this?


I don't know what `this' refers to, so hard to answer.


> Right now I'm just walking it through the ..._CONTEXTs


TYPE_CONTEXT, DECL_CONTEXT, DECL_CLASS_CONTEXT, DECL_FIELD_CONTEXT or  
DECL_FRIEND_CONTEXT or DECL_FCONTEXT?  :-)



> but it seems clumsy


No, not too clumsy, just wrong if you need accurate answers and the  
definition of member includes MI base class members.


You can check out dbxout.c:

tree binfo = TYPE_BINFO (type);
if (BINFO_N_BASE_BINFOS (binfo))
  for (i = 0; BINFO_BASE_ITERATE (binfo, i, child); i++)

for hints on how to walk the class hierarchy in the backend.

#define TREE_PRIVATE(NODE) ((NODE)->common.private_flag)
#define TREE_PROTECTED(NODE) ((NODE)->common.protected_flag)

are how fields are marked as public, private and protected, though,  
if you need right answers, you will have to know C++ and its various  
rules, such as you can get at private members, as long as they are  
also found non-privately.


Introduction of GCC improvement work for Itanium via Gelato Federation

2005-09-07 Thread mksmith
GCC community,

As some of you may know, a group met this past January in Geneva to
discuss ways of improving GCC performance for Itanium. The group
identified three optimizations that should help significantly: 
  - Rotating Registers (including Swing Modulo Scheduling)
  - Superblock Scheduling 
  - Memory Disambiguation 

Since then we have been holding conference calls and held an update
session at the San Jose Gelato meeting in May.
 - Bob Kidd: Superblock Optimization
 - Mark Davis: Modulo Scheduling for GCC
 - Arutyun Avetisyan: Update, Improving GCC Instruction Scheduling

Bob Kidd is in the process of testing a patch that moves the
superblock formation pass to the Tree-SSA level. The memory
disambiguation and rotating register/SMS support is stalled pending
alias analysis improvements. Additional information about the work can
be found at the Gelato GCC on Itanium Workgroup Wiki: gcc.gelato.org 

In addition, some of us are working to update the list of GCC projects.
See notes from the August 25 call.

To raise awareness of this work, we will begin cross posting notes
from our periodic calls to the gcc list. We would like everyone to be
fully aware of what we are trying to accomplish. We ask for your
patience and understanding as we join your community. Please note that
several of us are from a university environment. 

Although many of us are involved, or have been involved, with other
compiler projects, the focus of the Gelato GCC Improvement Group is to
work *with* the GCC community *and* the GCC community *process* to
improve GCC for Itanium. 

Some of the other projects which individuals are currently, or have
been, involved with include OpenIMPACT, ORC, icc, LLVM, and a new
initiative to produce a hybrid compiler based on the GCC front-end and
the ORC back-end. (HP will present on this HP-led project in the
general session at the next Gelato meeting.)

Lastly I would like to invite anyone who is interested to attend our
next Gelato meeting in Brazil which will have several sessions set
aside to discuss the GCC on Itanium improvement work. 
   www.gelato.org/community/events/oct2005/index.php
   www.gelato.org/community/events/oct2005/agenda.php
After an update on each area, we will discuss how we can contribute to
the aliasing work. Dan Berlin recently posted the improved-aliasing
branch TODO list which will help focus the discussion.

Mark



Re: GCC 4.0.2 Status Report

2005-09-07 Thread Mark Mitchell

Andrew Pinski wrote:


> But there are still issues.
> PR 23691 is one of them, which breaks Boost.


I wasn't aware that there were still issues.  Please assign me to PRs 
that represent things I've broken; I'll fix them, or at least explicitly 
unassign myself if I feel unfairly blamed.


In any case, PR 23691 and PR 23771 are duplicates, and I'm testing a 
patch that will fix both of them.


It's true that the presence of this additional problem is a bit 
depressing, but I do think that we're closing in.  The original fix was 
both important and conceptually correct; we just need to sort out the 
details now.


--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304