Help : Instruction Scheduling

2005-02-26 Thread Sachin Sonawane
Hi,
 I am studying the behaviour of the GCC instruction scheduler for the 
Pentium processor and another RISC processor called ABACUS. In the 
generated assembly code for the Pentium, I would like to see the no-ops. 
Since the Pentium stalls the hardware pipeline as needed, the compiler 
does not insert no-ops.
 But I want to see the assembly code with no-ops. How do I go about it? 
Which construct in the .md file (or any other file) do I need to set for this?
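For reference, an illustrative fragment (not a complete answer): the skeleton of a nop insn pattern in a target's .md file looks roughly like the one in i386.md below. Note that whether the scheduler actually emits nops into the insn stream additionally depends on the target's scheduling description (e.g. its DFA hooks), not on the pattern alone.

```
;; Minimal "nop" insn pattern, modulo target-specific attributes.
;; The scheduler can only emit explicit nops if the target defines
;; such a pattern.
(define_insn "nop"
  [(const_int 0)]
  ""
  "nop")
```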

Thanks in advance...
--
Regards,
---
Sachin Vijaykumar Sonawane | Hostel-12, R.No.-A107       | [EMAIL PROTECTED]
M.Tech.-CSE-IITB           | Mobile-9819506594           | [EMAIL PROTECTED]
Roll.No.-03305039          | www.cse.iitb.ac.in/sachinvs | [EMAIL PROTECTED]
---



Re: GNU INTERCAL front-end for GCC?

2005-02-26 Thread Toon Moene
Sam Lauber wrote:
(2) -> Some of us would like 

DO .1 <- #0
to be translated into 

movl $0, v1
I've no idea for what target that would be valid assembler, but for the 
VAX it would be:

movzwl  #0, r1
.1 is a 16-bit variable, and nowhere in the INTERCAL documentation have I 
found evidence that assigning to a 16-bit variable means: Set the lower 
16 bits of this 32-bit variable to .

BTW, success !
--
Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands
Maintainer, GNU Fortran 77: http://gcc.gnu.org/onlinedocs/g77_news.html
A maintainer of GNU Fortran 95: http://gcc.gnu.org/fortran/


which trapping operation integer types are required in libgcc2 ?

2005-02-26 Thread Paul Schlie
Sorry if this should be obvious, but:

- which integer target types are required to be supported by libgcc2's
  trapping arithmetic implementation? (i.e. are all supported integer
  types required to have trapping arithmetic counterparts?)

- under what circumstances are they used in lieu of their non-trapping
  counterparts? (for example, are they required by C ?)

Thanks, any insight appreciated.




about gcc -XLinker -M

2005-02-26 Thread gan_xiao_jun
Hi,

gcc -Xlinker -M test.c 2>test.map
outputs some useful information about locating
functions in libraries and ...
A detailed analysis of it would be very useful.

Where can I find an introductory document about it?

Thanks in advance.
gan





Re: GNU INTERCAL front-end for GCC?

2005-02-26 Thread Sam Lauber
> > (2) -> Some of us would like DO .1 <- #0
> >
> > to be translated into movl $0, v1
v1 is the name of a variable.  Needed because the manual 
says that each variable namespace (meshes, spots, tails, 
what-have-yous, hybrids) has 65535 variables!!!  I don't 
know of any machine that has 65535+65535+65535+65535 regs.  
We have to use asm variables.  
> I've no idea for what target that would be valid assembler, but for 
x86.  
> the VAX it would be:
> 
>   movzwl  #0, r1
> 
> .1 is a 16-bit variable, and nowhere in the INTERCAL documentation 
> have I found evidence that assigning to a 16-bit variable means: Set 
> the lower 16 bits of this 32-bit variable to .
> 
> BTW, success !


Re: GNU INTERCAL front-end for GCC?

2005-02-26 Thread Paul Brook
On Saturday 26 February 2005 17:54, Sam Lauber wrote:
> > > (2) -> Some of us would like DO .1 <- #0
> > >
> > > to be translated into movl $0, v1
>
> v1 is the name of a variable.  Needed because the manual
> says that each variable namespace (meshes, spots, tails,
> what-have-yous, hybrids) has 65535 variables!!!  I don't
> know of any machine that has 65535+65535+65535+65535 regs.
> We have to use asm variables.

Um, I think you're missing the whole point of writing a gcc frontend. A gcc 
frontend generates GENERIC, it shouldn't need to know or care about assembly. 
If you already have a whatever->C compiler it should be relatively simple[1] 
to turn it into a GCC frontend.

Paul

[1] Obviously interfacing with gcc isn't trivial, but it's a sight easier than 
generating assembly for N+1 targets.


IA64 record alignment rules, and modes?

2005-02-26 Thread Gary Funck

On the IA64, the following record,

typedef struct sptr_struct
  {
long unsigned int phase: 48;
short unsigned int thread: 16;
void *addr;
  } sptr_t;

is assigned BLKmode rather than TImode, and I was wondering whether
this is a requirement of the IA64 ABI, or a coincidental result of
various target configuration definitions?

The final determination of the mode assigned to this struct is
made in compute_record_mode().  The logic first tentatively assigns
a TImode (128 bits) as expected, in the second branch of this if
statement (GCC version 3.3.2):

  /* If we only have one real field; use its mode.  This only applies to
 RECORD_TYPE.  This does not apply to unions.  */
  if (TREE_CODE (type) == RECORD_TYPE && mode != VOIDmode)
TYPE_MODE (type) = mode;
  else
TYPE_MODE (type) = mode_for_size_tree (TYPE_SIZE (type), MODE_INT, 1);


and then reverses that decision in the subsequent if statement:

  /* If structure's known alignment is less than what the scalar
 mode would need, and it matters, then stick with BLKmode.  */
  if (TYPE_MODE (type) != BLKmode
  && STRICT_ALIGNMENT
  && ! (TYPE_ALIGN (type) >= BIGGEST_ALIGNMENT
        || TYPE_ALIGN (type) >= GET_MODE_ALIGNMENT (TYPE_MODE (type))))
{
  /* If this is the only reason this type is BLKmode, then
 don't force containing types to be BLKmode.  */
  TYPE_NO_FORCE_BLK (type) = 1;
  TYPE_MODE (type) = BLKmode;
}

primarily because STRICT_ALIGNMENT is asserted, and BIGGEST_ALIGNMENT is 128 in 
config/ia64/ia64.h:
  
#define STRICT_ALIGNMENT 1

/* Optional x86 80-bit float, quad-precision 128-bit float, and quad-word
   128 bit integers all require 128 bit alignment.  */
#define BIGGEST_ALIGNMENT 128

And this configuration parameter in config/ia64/ia64.h may also have led
to the decision to force 64 bit alignment for this structure (this is asserted
on most targets):

/* Define this if you wish to imitate the way many other C compilers handle
   alignment of bitfields and the structures that contain them.
   The behavior is that the type written for a bit-field (`int', `short', or
   other integer type) imposes an alignment for the entire structure, as if the
   structure really did contain an ordinary field of that type.  In addition,
   the bit-field is placed within the structure so that it would fit within such
   a field, not crossing a boundary for it.  */
#define PCC_BITFIELD_TYPE_MATTERS 1




Question: If we assume that TImode would have been a more efficient mode
to represent the record type above, would it not have been acceptable for
the compiler to promote the alignment of this type to 128, given that there
are no apparent restrictions otherwise?  Or are there other C conventions
at work that dictate otherwise?  Is there a configuration tweak that
would have led to using TImode rather than BLKmode?







gcc-3.4.4-20050211: maybe a dangerous behaviour

2005-02-26 Thread Denis Zaitsev
Consider the following example:


enum w {
    // c = -1,
    a,
    b
};

int whattodo(char option)
{
    static struct todo {
        enum w what;
        char option;
    } todos[] = {
        {a, 'a'},
        {b, 'b'},
        {-1}
    };
    struct todo *p = todos;

    do
        if (option && !option)
            break;
    while ((++p)->what >= 0);
    return p->what;
}


Compiling with -O[>0] and -Wall for x86, we get this code for
whattodo:


whattodo:
.L2:
jmp .L2


a) Formally, the code is correct, as p->what can never be < 0
according to its type.

b) GCC _silently_ accepts the {-1} initialization for that type, even
with -Wall.

Uncommenting the c = -1 member of the enum, or explicitly casting p->what
to int, solves the problem, of course.  But maybe some warning would be
appropriate in such a situation?  It took me some time to
recognize what led me to that cool .L2: jmp .L2 from seemingly
harmless C code...  Or maybe I don't know about some helpful compiler
option?


gcc leaking?

2005-02-26 Thread Stefan Strasser
are there any allocation schemes besides garbage collection in gcc which 
preserve some memory for reuse and could cause memory leaks if not 
cleaned up, or are these bugs? (which don't matter in the normal 
compilation process, of course)

I'm using gcc as a library and experiencing memory leaks. I need a 
shared address space with gcc, so invoking gcc as a separate process is 
not an option. The leaks add up, because I need to reload the gcc shared 
library each time, since it's not safe to run gcc twice in one process.

I lose about 500 KB per compilation. Is there anything besides garbage 
collection I can free before unloading?
(gc pages are released).

Thanks,
--
Stefan Strasser


SVN plans update

2005-02-26 Thread Daniel Berlin
In the next week, I'll be posting a test repo with all tags except
snapshots and the 3 tags with rtag -F issues.

Assuming nobody has anything but speed issues and niggling stuff at that
point, I will begin to convert our post-commit hooks.

As for speed issues and sorting issues, Subversion 1.2 will include both
server and client updates that will help here, and should speed up
checkouts, updates, etc.  The bulk of the speedup comes from server side
changes, however.  The sorting work is not done yet, but it is on the todo
list of a developer.

More cancellation points have been added for those who have complained
about ctrl-c not working when they want it to.

I can't upgrade the server process on toolchain.org to the development
version of subversion, as it is a real live machine used by real live
people :)

I can make dberlin.org available for anyone who really wants to argue
over timings or whatever, but the network connection can't support real
testing (384kbps up).

However, blaming the ChangeLog now takes ~33 seconds over my local
network, with most of the time spent doing the diffs between versions (it
currently actually diffs all the plaintexts involved, though it sends them
to the client as deltas).

Blaming smaller files takes a shorter amount of time.
Blaming combine.c, rtl.h, rtl.c, tree.c, etc., takes a minute or less.

I will do some significant merges (i.e. create a branch from trunk as of a
certain revision, merge in updates from mainline, then commit) to get a
feel for how long regular merge commits will take (taggings are
relatively instantaneous; it took less than 2 seconds to create and
commit a tag of a given revision).
I'm told to expect it to take about as long as CVS does, or shorter, but
I plan to verify this anyway.

I've also got viewcvs running on dberlin.org and it seems to work
perfectly fine (it goes just as slow/fast as viewcvs on cvs, afaict :P)

Hopefully when sourceware gets new hardware, I'll be able to
move/maintain the test repo there.




Re: GCC 4.1 Projects

2005-02-26 Thread Nathanael Nerode
The libada-gnattools-branch suffers severely from having to be maintained
in parallel with mainline (since it's a rearrangement of existing code).
Another two months of waiting will necessitate many hours of totally
unnecessary work on my part.

The longer the existing portion remains on a branch, the less work I can
do on additional improvements.  (The additional improvements are somewhat
less painful to maintain on a branch.)

Although you have listed it as "stage 2", I wish to commit the finished
portion as soon as possible during stage 1.  I have maintainership authority
to do so.  This will not interfere in any way with *any* of the projects
approved for stage 1, since it is in a disjoint section of code.  Accordingly,
I plan to do so unless I am told not to.

-- 
This space intentionally left blank.


Web page still claims mainline frozen

2005-02-26 Thread Nathanael Nerode
This is clearly false.

Could someone fix it?
-- 
This space intentionally left blank.


Mainline java is broken

2005-02-26 Thread Steven Bosscher
Mainline java is broken:
./.libs/libgcj0_convenience.a(Logger.o)(.text+0x620): In function 
`java::util::logging::Logger::getName()':
/abuild/gcc-test/gcc/libjava/java/util/logging/Logger.java:510: multiple 
definition of `java::util::logging::Logger::getName()'

Gr.
Steven



Extension compatibility policy

2005-02-26 Thread Mike Hearn
Hello,

I have just finished fixing up a piece of code dating from around 2001
which was quite badly broken by the incompatible change of __FUNCTION__ to
no longer operate as a preprocessor constant.

Unfortunately this codebase is riddled with constructs like

   fatal_error(__FUNCTION__": foo");

This is not done in a macro. This sort of thing appears many times
throughout the source.

I understand removing it simplified GCC. That is good. Unfortunately by
saving work for yourselves you made much more work for many other
people. I see from Google that Andrew Morton simply used old compilers
when faced with this problem before.

As recent releases have broken more and more code, I would like to
understand what GCC's policy on source compatibility is. Sometimes the
code was buggy, in this particular case GCC simply decided to pull an
extension it has offered for years. Is it documented anywhere? Are there
any more planned breakages? How do you make the cost:benefit judgement
call? Are there any guidelines you follow?

Typically a warning was emitted for a release or two before but this
achieves little: old unmaintained code, or code too large/fragile to fix
up, will not be fixed anytime soon. In other cases people like me must
spend our spare time doing boring mechanical work for no obvious reason,
so it is usually left until it actually starts causing compiles to fail.
This is quite depressing.

In cases where breaking sources lets you achieve greater performance or
efficiency, please do make the change, but offer a switch to disable it and
let the old code still compile. This way, it seems, everybody can be
happy.

thanks -mike



Re: Mainline java is broken

2005-02-26 Thread Andrew Pinski
On Feb 26, 2005, at 2:45 PM, Steven Bosscher wrote:
Mainline java is broken:
./.libs/libgcj0_convenience.a(Logger.o)(.text+0x620): In function 
`java::util::logging::Logger::getName()':
/abuild/gcc-test/gcc/libjava/java/util/logging/Logger.java:510: 
multiple definition of `java::util::logging::Logger::getName()'
I think this is a libtool bug (or related to libtool).
-- Pinski


Re: Inlining and estimate_num_insns

2005-02-26 Thread Jan Hubicka
> On Thu, 24 Feb 2005 20:05:37 +0100, Richard Guenther
> <[EMAIL PROTECTED]> wrote:
> > Jan Hubicka wrote:
> > 
> > >>Also, for the simple function
> > >>
> > >>double foo1(double x)
> > >>{
> > >>return x;
> > >>}
> > >>
> > >>we return 4 as a cost, because we have
> > >>
> > >>   double tmp = x;
> > >>   return tmp;
> > >>
> > >>and count the move cost (MODIFY_EXPR) twice.  We could fix this
> > >>by not walking (i.e. ignoring) RETURN_EXPR.
> > >
> > >
> > > That would work, yes.  I was also thinking about ignoring MODIFY_EXPR
> > > for var = var as those likely gets propagated later.
> > 
> > This looks like a good idea.  In fact going even further and ignoring
> > all assigns to DECL_IGNORED_P allows us to have the same size estimates
> > for all functions down the inlining chain for
> 
> Note that this behavior also more closely matches the counting of gcc 3.4
> that has a cost of zero for
>   inline int foo(void) { return 0; }
> and a cost of one for
>   int bar(void) { return foo(); }
> while with the patch we have zero for foo and zero for bar.
> 
> For
>   inline void foo(double *x) { *x = 1.0; }
>   double y; void bar(void) { foo(&y); }
> 3.4 has 3 and 5 after inlining, with the patch we get 2 and 2.
> 
> For
>   inline double foo(double x) { return x*x; }
>   inline double foo1(double x) { return foo(x); }
>   double foo2(double x) { return foo1(x); }
> 3.4 has 1, 2 and 3, with the patch we get 1, 1 and 1.
> 
> For a random collection of C files out of scimark2 we get
>              3.4               4.0                4.0 patched
>   SOR        54, 10            125, 26            63, 14
>   FFT        44, 11, 200, 59   65, 10, 406, 111   51, 10, 243, 71
> 
> so apart from a constant factor 4.0 patched goes back to 3.4
> behavior (at least it doesn't show weird numbers).  Given that
> we didn't change inlining limits between 3.4 and 4.0 that
> looks better anyway.  And of course the testcases above show
> we are better in removing abstraction penalty.

This really looks like a good compromise until we are able to optimize
code before inlining better (this is what I am shooting for in
tree-profiling, but we are not quite there yet), and even after that
perhaps assignments to temporaries should be considered free, since we
count the cost of the operation later anyway.  The patch is OK for mainline
and, assuming that there will be no negative hops in SPEC scores on
Diego's and Andreas's testers, I would like to see it in 4.0 too since
this counts as a code quality regression, but this is Mark's call I guess.

Mark?  I would say that there is little risk in this patch
correctness-wise; it might have a negative effect on compilation times
since we re-start inlining more like we did in the old days.

Thanks!
Honza
> 
> Richard.


Re: Inlining and estimate_num_insns

2005-02-26 Thread Steven Bosscher
On Saturday 26 February 2005 23:03, Jan Hubicka wrote:
> Mark?  I would say that there is little risk in this patch
> correctness-wise; it might have a negative effect on compilation times
> since we re-start inlining more like we did in the old days.

Can we see some timings with and without this patch?

Gr.
Steven



Re: gcc leaking?

2005-02-26 Thread Tommy Vercetti
Stefan Strasser wrote:
are there any allocation schemes besides garbage collection in gcc 
which preserve some memory for reuse which could cause memory leaks if 
not cleaned up, or are these bugs? (which don#t matter in the normal 
compilation process of course)

I'm using gcc as a library and experiencing memory leaks. I need a 
shared address space with gcc so invoking gcc is not an option. the 
leaks add up, because I need to reload gcc shared library since it's 
not safe to call gcc twice.

I loose about 500kb a compilation. is there anything besides garbage 
collection I can free before unloading?
(gc pages are released).

I don't know what the "refurbish rate" of the gc is, but I would say that 
any garbage collector is pretty much a cause of a solid memory leak 
(unless it frees memory when it is not used anymore, but I doubt they do).

--
GJ


Decimal Floating-Point

2005-02-26 Thread David Starner
The Wiki only mentions the C front-end. Is this going to require any
back-end changes? Is there going to be any work done to make this work
well with Ada (which already has decimal floating point), to make
decimal floating-point values be passable between C and Ada functions?


Re: Decimal Floating-Point

2005-02-26 Thread Robert Dewar
David Starner wrote:
The Wiki only mentions the C front-end. Is this going to require any
back-end changes? Is there going to be any work done to make this work
well with Ada (which already has decimal floating point), to make
decimal floating-point values be passable between C and Ada functions?
Ada does not have decimal floating-point, you are probably mixing
this up with decimal fixed-point. I do not believe there is any
intent or work on making these data formats compatible between Ada
and C.


Re: Inlining and estimate_num_insns

2005-02-26 Thread Richard Guenther
Steven Bosscher wrote:
On Saturday 26 February 2005 23:03, Jan Hubicka wrote:
Mark?  I would say that there is little risk in this patch
correctness-wise; it might have a negative effect on compilation times
since we re-start inlining more like we did in the old days.

Can we see some timings with and without this patch?
I'll do some tests with checking disabled once the builds are completed.
Bootstrap times with checking enabled were not affected by the patch, and
neither was PR8361.  The tramp3d testcase showed a significant increase in
compile time, but the 350% performance regression compared to 3.4 is
fixed - in fact, we are now comparing apples to apples wrt compile times
of 3.4 and 4.0 for this kind of code (comparisons with leafify show
roughly the same compile time performance regression from 3.4 to 4.0).
I'll post some more exact numbers for a checking-disabled compiler later.
Richard.


Re: Inlining and estimate_num_insns

2005-02-26 Thread Richard Guenther
Jan Hubicka wrote:
On Thu, 24 Feb 2005 20:05:37 +0100, Richard Guenther
<[EMAIL PROTECTED]> wrote:
Jan Hubicka wrote:

Also, for the simple function
double foo1(double x)
{
  return x;
}
we return 4 as a cost, because we have
 double tmp = x;
 return tmp;
and count the move cost (MODIFY_EXPR) twice.  We could fix this
by not walking (i.e. ignoring) RETURN_EXPR.

That would work, yes.  I was also thinking about ignoring MODIFY_EXPR
for var = var as those likely gets propagated later.
This looks like a good idea.  In fact going even further and ignoring
all assigns to DECL_IGNORED_P allows us to have the same size estimates
for all functions down the inlining chain for
Note that this behavior also more closely matches the counting of gcc 3.4
that has a cost of zero for
 inline int foo(void) { return 0; }
and a cost of one for
 int bar(void) { return foo(); }
while with the patch we have zero for foo and zero for bar.
For
 inline void foo(double *x) { *x = 1.0; }
 double y; void bar(void) { foo(&y); }
3.4 has 3 and 5 after inlining, with the patch we get 2 and 2.
For
 inline double foo(double x) { return x*x; }
 inline double foo1(double x) { return foo(x); }
 double foo2(double x) { return foo1(x); }
3.4 has 1, 2 and 3, with the patch we get 1, 1 and 1.
For a random collection of C files out of scimark2 we get
             3.4               4.0                4.0 patched
  SOR        54, 10            125, 26            63, 14
  FFT        44, 11, 200, 59   65, 10, 406, 111   51, 10, 243, 71
so apart from a constant factor 4.0 patched goes back to 3.4
behavior (at least it doesn't show weird numbers).  Given that
we didn't change inlining limits between 3.4 and 4.0 that
looks better anyway.  And of course the testcases above show
we are better in removing abstraction penalty.

This really looks like a good compromise until we are able to optimize
code before inlining better (this is what I am shooting for in
tree-profiling, but we are not quite there yet), and even after that
perhaps assignments to temporaries should be considered free, since we
count the cost of the operation later anyway.  The patch is OK for mainline
You mean the patch at 
http://gcc.gnu.org/ml/gcc-patches/2005-02/msg01571.html
I guess?

and assuming that there will be no negative hops in SPEC scores on
Diego's and Andreas's testers, I would like to see it in 4.0 too since
this counts as a code quality regression, but this is Mark's call I guess.
Mark?  I would say that there is little risk in this patch
correctness-wise; it might have a negative effect on compilation times
since we re-start inlining more like we did in the old days.
I'll apply the patch to mainline tomorrow and post some compile time
numbers as Steven suggested.
Richard.


gcc-4.0-20050226 is now available

2005-02-26 Thread gccadmin
Snapshot gcc-4.0-20050226 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.0-20050226/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.0 CVS branch
with the following options: -rgcc-ss-4_0-20050226 

You'll find:

gcc-4.0-20050226.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.0-20050226.tar.bz2 C front end and core compiler

gcc-ada-4.0-20050226.tar.bz2  Ada front end and runtime

gcc-fortran-4.0-20050226.tar.bz2  Fortran front end and runtime

gcc-g++-4.0-20050226.tar.bz2  C++ front end and runtime

gcc-java-4.0-20050226.tar.bz2 Java front end and runtime

gcc-objc-4.0-20050226.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.0-20050226.tar.bz2The GCC testsuite

Diffs from 4.0-20050220 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.0
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: gcc leaking?

2005-02-26 Thread Stefan Strasser
Tommy Vercetti schrieb:
I don't know what the "refurbish rate" of the gc is, but I would say that 
any garbage collector is pretty much a cause of a solid memory leak 
(unless it frees memory when it is not used anymore, but I doubt they do).


The gcc gc does free memory when it has not been used in the last 2 
collections. On a normal termination there are still gc roots, so there 
are still pages allocated, but I've done a collection with no roots and 
the GC says 0k allocated, and there's still a leak.

It must come from another part of gcc. There is a pool allocator, but it 
is not used at all (at least when compiling C++).

Would it help to do leak checking on the libiberty alloc functions, or is 
that done regularly anyway?

--
Stefan Strasser


Re: gcc leaking?

2005-02-26 Thread Daniel Jacobowitz
On Sun, Feb 27, 2005 at 12:58:24AM +0100, Stefan Strasser wrote:
> Tommy Vercetti schrieb:
> >
> >I don't know what the "refurbish rate" of the gc is, but I would say 
> >that any garbage collector is pretty much a cause of a solid memory leak 
> >(unless it frees memory when it is not used anymore, but I doubt they do).
> >
> 
> 
> The gcc gc does free memory when it has not been used in the last 2 
> collections. On a normal termination there are still gc roots, so there 
> are still pages allocated, but I've done a collection with no roots and 
> the GC says 0k allocated, and there's still a leak.
> 
> It must come from another part of gcc. There is a pool allocator, but it 
> is not used at all (at least when compiling C++).
> 
> Would it help to do leak checking on the libiberty alloc functions, or is 
> that done regularly anyway?

GCC is not designed as a library, so I expect many parts of the
compiler allocate data that does not need to be saved to PCH files
and will live the length of the compilation with xmalloc.  I'm sure a
good leak checker could tell you where they are coming from.

-- 
Daniel Jacobowitz
CodeSourcery, LLC


Re: Inlining and estimate_num_insns

2005-02-26 Thread Richard Guenther
Steven Bosscher wrote:
On Saturday 26 February 2005 23:03, Jan Hubicka wrote:
Mark?  I would say that there is little risk in this patch
correctness-wise; it might have a negative effect on compilation times
since we re-start inlining more like we did in the old days.

Can we see some timings with and without this patch?
Bootstrap times with checking disabled are the same with and without the 
patch.  To be fair I'll try to pick out tests that likely show sensitive
behavior on inlining decisions.

tramp3d-v3 -O3 compile- and run-times without checking are
               compile  size     run
 3.4           1m7s     4421288  51s
 4.0           1m2s     4531197  1m57s
 4.0 patched   2m39s    5391554  27s
 3.4 leafify   1m36s    4539503  34s
 4.0 leafify   1m52s    4844784  15s
The testcase is unusual, as it runs into our default unit-growth inlining
limit.  Also -O3 is not exactly the best option for runtime speed, but
it will surely show the most effects on compile time wrt inlining due to
-finline-functions.
PR8361 -O3 compile times are
 3.4  25s
 4.0  20s
 4.0 patched  31s
ublas4 -O3 from ccfun compile times are
 3.4  14s
 4.0  16s
 4.0 patched  19s
spirit -O3 from ccfun compile times are
 3.4  24s
 4.0  20s
 4.0 patched  26s
In the end we surely want to watch CiSBE and SPEC testers.
Richard.


Re: Inlining and estimate_num_insns

2005-02-26 Thread Steven Bosscher
On Feb 27, 2005 02:04 AM, Richard Guenther <[EMAIL PROTECTED]> wrote:
> In the end we surely want to watch CiSBE and SPEC testers.

Maybe so, but your timings already show this is pretty unacceptable.

Gr.
Steven




RE:about gcc -XLinker -M

2005-02-26 Thread gan_xiao_jun
for example,

I want to locate which lib is linked in for fprintf:
v===test.c===v
#include <stdio.h>
main()
{
    doit();
}
doit()
{
    fprintf(stderr, "==");
}
^^^
I run
gcc -Xlinker -M test.c 2>map
v===map===v
...
.plt            0x0804826c      0x30
 *(.plt)
 .plt           0x0804826c      0x30  /usr/lib/crt1.o
                0x0804827c            fprintf@@GLIBC_2.0
                0x0804828c            __libc_start_main@@GLIBC_2.0
...
^^ I removed some blanks for better display here ^^
rpm -qf /usr/lib/crt1.o
tells me it belongs to glibc.
But
nm /usr/lib/crt1.o | grep fprint
can't find it.

I also find
vvv
Archive member included because of file (symbol)

/usr/lib/libc_nonshared.a(elf-init.oS)
                              /usr/lib/crt1.o (__libc_csu_init)
^^^
in the map file. What information does this indicate?

Thanks for any comment
gan







Re: Mainline java is broken

2005-02-26 Thread Andreas Jaeger
Andrew Pinski <[EMAIL PROTECTED]> writes:

> On Feb 26, 2005, at 2:45 PM, Steven Bosscher wrote:
>
>> Mainline java is broken:
>> ./.libs/libgcj0_convenience.a(Logger.o)(.text+0x620): In function
>> `java::util::logging::Logger::getName()':
>> /abuild/gcc-test/gcc/libjava/java/util/logging/Logger.java:510:
>> multiple definition of `java::util::logging::Logger::getName()'
>
> I think this is a libtool bug (or relate to libtool).

This was broken by "[PATCH] Don't use weak linkage for symbols in
COMDAT groups".

Andreas
-- 
 Andreas Jaeger, [EMAIL PROTECTED], http://www.suse.de/~aj
  SUSE Linux Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




Re: Extension compatibility policy

2005-02-26 Thread Eric Botcazou
> I understand removing it simplified GCC. That is good. Unfortunately by
> saving work for yourselves you made much more work for many other
> people. I see from Google that Andrew Morton simply used old compilers
> when faced with this problem before.

That's indeed unfortunate and has already been complained about several times.

> As recent releases have broken more and more code, I would like to
> understand what GCCs policy on source compatibility is. Sometimes the
> code was buggy, in this particular case GCC simply decided to pull an
> extension it has offered for years. Is it documented anywhere? Are there
> any more planned breakages? How do you make the cost:benefit judgement
> call? Are there any guidelines you follow?

Generally speaking, this occurs as follows: a patch happens to break an 
extension because GCC has (had?) so many extensions that it is nearly 
impossible to foresee all the side-effects a patch will have on them.  Then 
somebody notices the breakage and complains about it, and sometimes even 
writes a patch to undo the breakage (typically an Apple employee, because 
Apple is legitimately concerned about backwards compatibility).  Then the 
patch is knocked down by the language lawyers who are floating around 
because, see, if the extension was broken, it probably deserved it as it was 
under-specified and, consequently, cannot be anything other than an 
abomination.

Admittedly a bit of a caricature, but not by much.

> Typically a warning was emitted for a release or two before but this
> achieves little: old unmaintained code, or code too large/fragile to fix
> up, will not be fixed anytime soon. In other cases people like me must
> spend our spare time doing boring mechanical work for no obvious reason,
> so it is usually left until it actually starts causing compiles to fail.
> This is quite depressing.

I agree, GCC should not be known for randomly breaking compatibility with 
itself, in addition to its other weaknesses.

> In cases where breaking sources lets you achieve greater performance or
> efficiency, please do make the change but offer a switch to disable it and
> let the old code still compile. This way we it seems everybody can be
> happy.

My impression is that this has nothing to do with performance and efficiency, 
unfortunately.

-- 
Eric Botcazou