Re: libiberty requirements and ISO C90

2005-05-16 Thread Gabriel Dos Reis
Ian Lance Taylor  writes:

[...]

| I think the more conservative approach would be to simply add strerror
| to AC_CHECK_DECLS, include  in xstrerror.c (protected by
| HAVE_STRING_H), and protect the strerror declaration with
| #if !HAVE_DECL_STRERROR

Thanks, that indeed makes sense.  Patch pending.
-- Gaby


Re: Is there a way to generate a cross reference listing for a c/c++ program using gcc?

2005-05-16 Thread Carlo Corridore
Or possibly by using the Source-Navigator software from sourcenav.sourceforge.net.
Daniel Berlin wrote:
> On Fri, 2005-05-13 at 14:49 -0500, Paul Albrecht wrote:
>> Eric writes:
>>
>>> -Wl,-Map=mapfile.map,--cref
>>
>> I'm not looking for a cross-reference from a symbol to its memory location
>> in the linked file, rather a cross-reference from a symbol definition in a
>> program source file to its line number references in all the program source
>> files.
>
> It's probably easiest to snarf that from the debug information produced
> in the executable with -g.





Re: Auto-vectorization with gcj

2005-05-16 Thread Andrew Haley
Andrew Pinski writes:
 > 
 > On May 15, 2005, at 10:33 AM, Andrew Pinski wrote:
 > > The multiple exit comes from bounds checking (which VRP still does
 > > not remove because we don't pull out a load of the length).
 > >
 > > If we add -fno-bounds-checks, we get:
 > > Test.java:7: note: not vectorized: too many BBs in loop.
 > > Test.java:11: note: not vectorized: too many BBs in loop.
 > > Test.java:6: note: vectorized 0 loops in function.
 > >
 > > And this is because we have a label in the loop (and the tree CFG
 > > does not remove user labels, maybe setting DECL_ARTIFICIAL on the
 > > labels will fix this fully).
 > 
 > After working around that problem,

A patch to do that is pre-approved.

Thanks,
Andrew.


Default expansion of builtin_longjmp

2005-05-16 Thread Øyvind Harboe
The default expansion of builtin_longjmp will use nonlocal_goto if it is
defined by the backend.

Looking at the builtin_longjmp expansion code in builtins.c, I find that
it does not match the documentation for 'nonlocal_goto' in GCC
internals: "the first argument is to be loaded into the frame pointer"

However, the expand_goto() in stmt.c does match the GCC internal
documentation.

Reading the nonlocal_goto expansions in i960.md, sparc.md and xtensa.md
didn't help to clear up the confusion.

i960.md seems to treat the first argument as the frame pointer, whereas
sparc.md and xtensa.md seem to treat the last argument as the frame
pointer.


Snippet from builtins.c:

---
if (HAVE_nonlocal_goto)
  /* We have to pass a value to the nonlocal_goto pattern that will
     get copied into the static_chain pointer, but it does not matter
     what that value is, because builtin_setjmp does not use it.  */
  emit_insn (gen_nonlocal_goto (value, lab, stack, fp));
else
---





-- 
Øyvind Harboe
http://www.zylin.com



Re: [lkcl@lkcl.net: has gcc been reworked so that code/templates can be "outsourced" e.g. to perl yet?]

2005-05-16 Thread Luke Kenneth Casson Leighton
On Sun, May 15, 2005 at 02:43:57PM -0700, Mike Stump wrote:

> See google("OpenMP") for what I mean by OpenMP.

 ah _ha_ *grin*.

 this is _very_ significant for the parallel processor project
 i have been asked about.

 


Re: some question about gc

2005-05-16 Thread zouq
i am sorry for that.

> "zouq" <[EMAIL PROTECTED]> writes:
>
> Please don't start a new thread by replying to a message on an
> existing thread.  Just send a new message, instead.  Otherwise your
> message goes in the wrong place for people who use threaded e-mail
> readers.

yes, as you have suggested, i have already read the gcc-int material about
garbage collection, and i still can't get the information i want.
i want to know by what rules the following constructs are generated:
gt_ggc_cache_rtab, gt_ggc_deletable_rtab
i read some of gengtype.c, and i still don't understand it. :(

and now i am thinking that why use garbage collection in gcc,
is it because of its high efficiency?








Re: [lkcl@lkcl.net: has gcc been reworked so that code/templates can be "outsourced" e.g. to perl yet?]

2005-05-16 Thread Luke Kenneth Casson Leighton
On Sun, May 15, 2005 at 02:43:57PM -0700, Mike Stump wrote:
> On Sunday, May 15, 2005, at 01:01  PM, Luke Kenneth Casson Leighton 
> wrote:
> >unfortunately, integration of aspex's proprietary tool-chain - written
> >in modula-2 - is extremely unlikely to ever be integrated into gcc.
> 
> Right.  But the ideas could be.  The ideas in some respects are more 
> important than the code.

 okay.

 the key architectural things about the ASP processor are as follows:

 * per-APE (qty 4096) 128 bits of content-addressable memory registers
 * per-APE (qty 4096) 256 bits of memory-registers
 * a 2-bit pipelined ALU
 * every APE is connected to its neighbour (left and right).
 * the APE string can be subdivided into 16-long segments at
   ARBITRARY boundaries, and also cyclically looped back
 

 the key thing about the instruction set is as follows:

 * you can "tag" certain APEs such that only the "tagged" APEs will
   execute the next instruction.

   see below for a bool example involving valarray.

 * an 8-bit, 16-bit or 32-bit "compare", in the CAM memory, in one
   instruction cycle.
   
   this is _highly_ significant for data recognition: it's the
   one part of the ASP that _doesn't_ go at "bit-level" speed.

   obviously, the compare needs to be on an 8-bit, 16-bit or 32-bit
   boundary in the 128-bits of CAM.

   you _can't_ do 8-bit, 16-bit or 32-bit compares in the 256
   bits of memory-registers.  it's not CAM - it's ordinary memory
   cells.

 * you can shuffle bits left and right down the APE neighbour
   communications bus.  if the "cyclic loop" is enabled, bits
   dropping out of the end of a segment come back in at the other end,
   wherever that end has been programmatically set.

   it's equivalent to the valarray "shift" and "cshift" functions,
   but not quite - because you can "break" the string into arbitrary
   lengths.

   ... quite an interesting data logistics problem, there :)

 * an instruction that checks, down the length of a string of APEs,
   where a particular bit in a register is set, and where that bit
   ends, with the results ending up in _two_ "tag" registers

   bit 5 in the APEs: 0 0 1 1 1 1 1 0 0 1 1 0 0 0 0 1 1 1 1  results in:
   tag 1 register 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0
   tag 2 register 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 1

   this is typically used to implement the "carry" of a parallelised
   add instruction.

   each APE is utilised to perform an 8-bit add, over 8 instruction
   cycles, and then the 9th instruction is one of these "carry"
   instructions, and the 10th instruction the tag2 register is shuffled
   right one APE, and then added as a "carry" bit.

 * an instruction that checks whether all bits are set in a particular
   APE:

   bit 5 in all APEs: 1 1 1 1 1 1 1 1 1 1 0 1 1 1
   tag 1 register   : 0 0 0 0 0 0 0 0 0 0 0 0 0 0

   bit 5 in all APEs: 1 1 1 1 1 1 1 1 1 1 1 1 1 1
   tag 1 register   : 1 1 1 1 1 1 1 1 1 1 1 1 1 1

   this can also, iirc, be "qualified" by tagging hm, not sure...
   it's been a long time, but if it _does_ then it would go like this:

   bit 5 in all APEs: 1 1 1 0 1 0 1 1 1 1 0 1 1 1
   tag 1 register   : 1 1 0 0 1 0 1 1 1 1 1 1 1 1
   tag 2 register   : 0 0 0 0 0 0 0 0 0 0 0 0 0 0

   bit 5 in all APEs: 1 1 1 0 1 0 1 1 1 1 1 1 1 1
   tag 1 register   : 1 1 0 0 1 0 1 1 1 1 1 1 1 1
   tag 2 register   : 1 1 0 0 1 0 1 1 1 1 1 1 1 1

   i.e. the tag1 register tells you "we don't give a stuff".

 * likewise an instruction that checks whether all bits are clear.


 from this, it should be _very_ obvious that valarray is
 _highly_ suited to hardware acceleration by an ASP:

 valarray<bool> b(20);
 valarray<int>  x(20);
 valarray<int>  y(20);

 . obtain x data...
 . obtain y data...

 b = (x != 19);
 x[b] += y[b];

 i.e. _only_ in those elements where x is not equal to 19,
 add the corresponding array element of y.

 this is _exactly_ the sort of thing where APE "tagging" allows an
 instruction to be conditionally performed - in parallel.


 the only thing that definitely _doesn't_ exist conceptually
 in valarray is that "carry-equivalent" function.

 it's just not something that people other than Aspex have ever
 thought about, because of course nobody in their right minds does
 bit-level programming any more :)

 l.



Re: [lkcl@lkcl.net: has gcc been reworked so that code/templates can be "outsourced" e.g. to perl yet?]

2005-05-16 Thread Luke Kenneth Casson Leighton
On Sun, May 15, 2005 at 08:16:04PM -0700, Mike Stump wrote:

> On Sunday, May 15, 2005, at 04:11  PM, Luke Kenneth Casson Leighton 
> wrote:
> > *click* - so you  you... ooo :)
> >
> > holy cow.
> >
> > you looked at valarray,
> 
> No, not really, I'm not a library guy.  I know of almost nothing of the 
> space, the applications or the tricks people play, but...

 ack.

> >and went "how could this be automatically speeded up by gcc, if gcc 
> >had access to a hardware vector processing unit"?
> >
> > i'm... genuinely impressed.
> 
> I'm sorry, wasn't meant to be impressive.  

 :)

 how to put it best - i'm impressed by the audacity and ambitious
 goals, then :)


> What would have been 
> impressive, is if I read up on ASP and coded up some complex algorithm 
> using all the latest tips and tricks of templates, and had you try it 
> and and you discovered that indeed it was trivial enough to write, 
> exactly matched what, as an author, you would have expected, best 
> case...  I think that is possible, but alas, I'd just leave it as an 
> exercise for the reader.

 well, given as i mentioned in my previous message that bit-level
 programming just _isn't_ something that sane people do, i would be
 _very_ impressed to find any support for all of the features
 of an ASP (in particular, that hardware "carry" instruction).


> > can you _imagine_ the number of different tags you'd need to say
> > "i want this register to be 1-bit wide, spread across 16 processors 
> >each,
> >  i want _this_ register array to be 4-bits wide, spread across 32 
> >processors.."
> 
> bitregister<1,16> i;
> 
> bitregister<4, 32> j;
> 
> I can imagine...  Seems trivial to me...

 it's the data interleave and also the fact that, due to the size of the
 arrays, you would need to do _arrays_ of arrays or some other such
 trick...  and yet _still_ have it parallelised:

 bitregister<4, 32> j[16];

 or, in valarray-like terminology:

 valarray<int> j[10];

 for (i = 0; i < 10; i++)
j[i].set_size(16 * i); /* emulate the ability to cut an ASP into
   arbitrary length strings at 16-APE
   boundaries */

 and then _still_ be able to have _this_ parallelised:

 for (i = 0; i < 10; i++)
j[i]++;

 and there be only one instruction :)


 ... btw, for something like a parallel processing unit containing 64
 cells, where you declare an array j[8][8], or if you have only 16
 cells and you have an array j[4][4] ... is _that_ taken care of in the
 design of OpenMP?


> > ... it just goes _nuts_.
> 
> I don't see the use of the above nuts.  Coding up the library to 
> support it, would be, well, fun  but for you (someone that knows 
> ASP) and someone that knows how to make C++ do tricks (expression 
> templates and template metaprograms at least) for them, it should be 
> trivial enough.
 
 *cackle*

 i _did_ start to create a template-based library which emulated
 the behaviour of the ASP (working from the original code i'd
 written in python).

 python's "map" and "reduce" functions were extremely useful
 in this respect, and certain ASP operations can be done with
 "reduce" with an appropriate lambda function.

 

> >well, the approach taken by aspex _makes_ it portable, already
> >[because it's a macro pre-processing step, turning inline-asp
> >instructions into c-code].
> 
> Vendor lock in by a vendor that can go out of business isn't what we 
> call portable.  Portable means that someone versed in it, can use it, 
> and that code can run on sse3, mmx, altivec, ASP, normal hardware or a 
> Cell processor, BlueGene, virtually unmodified.  

 *sigh*...

> For example, OpenMP 
> would seem to be portable (not being an expert in that field, I'd let 
> people correct me).  BLAS, boost and Blitz++  are yet other ways...  
> http://ggt.sourceforge.net/html/main.html is a new one I've not heard 
> of...  but google has.

 
> Do you know what Blitz++ is and does?  And how?
 
  looks like blitz++ and ggtl are both a bit high-level
  (and as such, end up being perfect targets for the OpenMP
   optimisation process).

> >[just not the ASP, because of their proprietary assembler-based 
> >toolchain]
> 
> No, even ASP, one just needs to understand the output of their 
> compiler, and then code it up, though, admittedly, one might not get 
> the speed, if the interface (valarray) is wrong.  

 it's the interleaving of data and coding that makes it so difficult
 to program.

 remember, you're talking _gigabytes_ per second, here.  IIRC
 it's a 64-bit bus running at 250MHz on the VASP-F architecture,
 doing tera bit-ops / sec (which obviously come down rapidly as you
 use that to do 8-bit adds, 8-bit MACs etc.)
 
 god only knows what they're doing with the VASP-G architecture which,
 last time i heard, was going to have SIXTEEN times the number of
 processing elements.


 _but_ if you _could_ do bitregister<1, 32> x and then change that to
 bitregister<2, 16>, bitregister<4, 8> and TEST CO

Re: some question about gc

2005-05-16 Thread Nix
[Disclaimer: the transition to GC happened around the time I started
 paying attention to GCC, so my knowledge of the pre-GC situation
 may be inaccurate.]

On 16 May 2005, zouq suggested tentatively:
> and now i am thinking that why use garbage collection in gcc,
> is it because of its high efficiency?

The opposite. Back in days of yore (GCC 1.x and 2.x), GCC managed
virtually everything using obstacks, i.e., `stacks of objects' with the
usual stack property that you can only add and remove from the end of
the stack; you could also free a whole stack at once.

For some objects, this made sense: you could build up a bunch of RTL
expressions on an obstack with ease, say. But if you wanted to *change*
that sequence in ways that added, removed, or reordered elements, things
got hairier. (And, of course, that's basically what many of the RTL
optimizers do.)

The contortions needed to ensure correct ownership of every obstack and
to arrange that GCC never needed to add or remove anything in the middle
of the obstacks grew rather extreme, and a considerable number of bugs
were tripped by freeing an obstack and then referencing some pointer
that turned out to point into the middle of it, and so on.

Around the long development cycle that led to GCC 3.0, this finally grew
overwhelming, and a GC was implemented. Mike Stump (I think it was) and
others have collected evidence indicating that the GC was actually
responsible for a substantial overall compiler *slowdown*, not because
it was slow at collecting garbage, but because related objects now got
allocated far apart in memory, so they tended not to share cachelines
anymore. As a consequence, some time-critical parts of GCC have actually
been moving back towards using obstacks again, and new GC algorithms
(the zone allocator in particular) have been worked on which might fix
this.


But, on balance, GC has probably been a boon, I feel: the code is much
more comprehensible now[1], and changing it is not nearly so
hair-raising[1].  Some performance loss is worth it if the consequence
is a working compiler, I'd say.


[1] except for reload

-- 
`End users are just test loads for verifying that the system works, kind of
 like resistors in an electrical circuit.' - Kaz Kylheku in c.o.l.d.s


Re: GCC 3.4.4 RC2

2005-05-16 Thread Etienne Lorrain
> GCC 3.4.4 RC2 is now available here:
> ftp://gcc.gnu.org/pub/gcc/prerelease-3.4.4-20050512
> There are just a few changes from RC1 to fix critical problems people
> experienced with RC1.

  Works for me, thanks.
  Etienne.




Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Peter Barada

>We're not talking about 5% speedup; if the linker starts thrashing because
>of insufficient memory you pay far more than that.  And certainly anyone
>with an older computer who is dissatisfied with its performance, but
>doesn't have a lot of money, should look into getting more memory before
>anything else.  Still, the GNU project shouldn't be telling people in the
>third world with cast-off machines that they are out of luck; to many of
>them, 256M is more than they have.

Also don't forget us embedded people that are *desperately* trying to
do native compilations using an NFSroot with limited main memory and
don't have a disk in the hardware design to swap to.

-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Richard Earnshaw
On Mon, 2005-05-16 at 13:19, Robert Dewar wrote:

> > 
> > Also don't forget us embedded people that are *desperately* trying to
> > do native compilations using an NFSroot with limited main memory and
> > don't have a disk in the hardware design to swap to.
> 
> Why would you work in such a crippled environment?
> 

One reason is that many people write open-source packages that can't be
cross compiled (there are far too many autoconf scripts around that try
to build executable test programs).
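The anti-pattern being described is a configure check that must execute a test binary, which cannot work when host and target differ. A sketch in autoconf terms (hedged: check names and argument order per the Autoconf manual), alongside the cross-safe alternative for the byte-order case:

```m4
# Problematic: AC_RUN_IFELSE builds AND RUNS a test program, so it can
# only take its fourth (fallback) branch when cross-compiling.
AC_RUN_IFELSE([AC_LANG_PROGRAM([], [return 0])],
              [feature_works=yes],
              [feature_works=no],
              [feature_works=guessing])   # cross-compilation fallback

# Cross-safe: AC_C_BIGENDIAN determines byte order at compile time,
# without running anything on the build machine.
AC_C_BIGENDIAN
```

Packages that omit the fourth argument (or hard-fail in it) are the ones that refuse to cross-compile.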

Robert, please stop trying to shoot the messenger.  The problems are
real, and users often cannot 'fix' these problems themselves.  Just like
they can't 'fix' the compiler bloat themselves.

R.


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Richard Earnshaw
On Mon, 2005-05-16 at 13:44, Robert Dewar wrote:
> Richard Earnshaw wrote:
> 
> > Robert, please stop trying to shoot the messenger.  The problems are
> > real, and users often cannot 'fix' these problems themselves.  Just like
> > they can't 'fix' the compiler bloat themselves.
> 
> Right, but again, if developers make bad decisions about
> development environments, it is not clear that gcc should
> be trying to bail them out.

I didn't say it was the developers of the package, I said it was the
folks trying to *build* the package.  These are very often different
people in the open-source community.  The folks building the package are
restricted in whether or not they can do cross compilation by whether or
not the *developers* have built that facility into their package. 
Sadly, 90%[1] of the time they have not.


>  Personally, I would rather have
> a gcc generating better code and using more memory, than
> the other way round. Of course, there are limits, and the
> places that gcc uses unreasonable amounts of memory should
> be fixed. But making it a goal to compile in less than 256 megs
> seems dubious in days when any decently configured notebook
should have at least a gig of memory (mine has 2 gigs).

I find this laughable.  Most desktop machines around here have between
512M and 1G.  And if I were lucky enough to own a laptop with 2G of
memory I'd want it to be used to avoid having to spin up the disk on a
regular basis, not as squander-resource for compiler developers who were
being too lazy to think through the consequences of their decisions.  
Anyway, with all that extra memory I'd rather be running multiple
parallel compilations than one single one that consumed all the RAM,
especially if I had a multi-core processor.

R.

[1] Random number, I've not measured it.  But I'd be very surprised if
it were less than this.


Re: Backporting to 4_0 the latest friend bits

2005-05-16 Thread Kriang Lerdsuwanakij
Mark Mitchell wrote:
> OK.  Do you happen to have access to any other testsuites, beyond the
> GCC testsuite?  If so, it would be great to validate the behavior of
> the compiler on the 4.0 branch with and without your patch to make
> sure that we're not doing any harm.

I am sorry, I don't have any.
--Kriang


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Paul Koning
> "Robert" == Robert Dewar <[EMAIL PROTECTED]> writes:

 Robert> Peter Barada wrote:
 >>> We're not talking about 5% speedup; if the linker starts
 >>> thrashing because of insufficient memory you pay far more than
 >>> that.  And certainly anyone with an older computer who is
 >>> dissatisfied with its performance, but doesn't have a lot of
 >>> money, should look into getting more memory before anything else.
 >>> Still, the GNU project shouldn't be telling people in the third
 >>> world with cast-off machines that they are out of luck; to many
 >>> of them, 256M is more than they have.

 Robert> Sure it would be nice if GCC could operate well on obsolete
 Robert> machines but you can't expect the mainline development to pay
 Robert> too much attention to extreme cases like this. After all, you
 Robert> can buy from Dell today a 2.4GHz machine with a 17" monitor,
 Robert> DVD drive, and 256Meg memory for $299 complete. 

Certainly.  But GCC doesn't build well at all on such a machine.  It
seems to me that the expectation right now is at least a gig or two of
RAM.  My machine roughly fits your description except for having 512
meg of RAM, and builds on it are ok only if you avoid Java.

 Robert> Perhaps a separate project dedicated to old tiny machines is
 Robert> the way to go.

I.e., all non-GHz platforms should be on the obsolete list?

  paul



Re: some question about gc

2005-05-16 Thread Ian Lance Taylor
"zouq" <[EMAIL PROTECTED]> writes:

> yes, as you have suggested, i have already read the gcc-int material about
> garbage collection, and i still can't get the information i want.
> i want to know by what rules the following constructs are generated:
> gt_ggc_cache_rtab, gt_ggc_deletable_rtab
> i read some of gengtype.c, and i still don't understand it. :(

Can you ask a more specific question?  What do you really want to
know?

The cache_rtab symbols wind up in files specific to particular
languages.  The effect is that gt_ggc_cache_rtab is defined once in a
specific backend (e.g., cc1, cc1plus, etc.).  It lists GC roots which
use "if_marked" in the GTY description--see the docs.

The deletable_rtab symbols are similar, but list GC roots which use
"deletable".

> and now i am thinking that why use garbage collection in gcc,
> is it because of its high efficiency?

Ha ha, no.  See Nix's reply.

Ian


Re: GCC 3.4.4 RC2

2005-05-16 Thread Mark Mitchell
Etienne Lorrain wrote:
>> GCC 3.4.4 RC2 is now available here:
>> ftp://gcc.gnu.org/pub/gcc/prerelease-3.4.4-20050512
>> There are just a few changes from RC1 to fix critical problems people
>> experienced with RC1.
>
>   Works for me, thanks.
Good; thanks for confirming.
--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Peter Barada

>> Also don't forget us embedded people that are *desperately* trying to
>> do native compilations using an NFSroot with limited main memory and
>> don't have a disk in the hardware design to swap to.
>
>Why would you work in such a crippled environment?

Agh!

Believe me, I do as much work as I can on a 3+GHz, 2GB DDR x86 box, but then
I'm literally screwed by the plethora of Linux packages that just
can't cross-build because their configure scripts think they can build and run
test programs to figure out things like byte ordering, etc.  Take perl, zlib,
and openssh as examples.  Also, there are so many interdependencies
between packages that we have to build a pile of libraries and support
stuff that is never used on the target, just so we can get a package
that we do need to configure/build (like sed and perl).

Until package maintainers take cross-compilation *seriously*, I have
no choice but to do native compilation of a large hunk of the packages
on eval boards that can literally takes *DAYS* to build.

We embedded linux developers have been harping on this for the past
couple of years, but no one really takes our problem seriously.
Instead we keep getting the "get faster hardware" as the patent
cure-all to execution speed problems, but in my case, there is no
other hardware I can use.

-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Monday 16 May 2005 16:53, Scott Robert Ladd wrote:
> The problem is, a bloated GCC has no consequences for the majority of
> GCC developers -- their employers have other (and valid) concerns. It's
> less a matter of laziness than it is of not caring outside one's own
> backyard.

And to second your point in an awkward way: I don't see this as a
problem.  If all those people who think this is a problem would
also fund GCC development (with hard cash or with developers), who
knows, probably things would look different.

But AFAICT even the developers who work on embedded targets focus
on code quality and new features, instead of on the compile time
and memory footprint issues that you would expect their group of
users to complain about.

Gr.
Steven




Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Peter Barada

>But AFAICT even the developers who work on embedded targets focus
>on code quality and new features, instead of on the compile time
>and memory footprint issues that you would expect their group of
>users to complain about.

I think that most of us embedded developers are trying to keep up with
where GCC is going.  Personally, I spend most of my time in
gcc/config/m68k instead of the optimizers, since it's the target
description that I know, not the optimizers.

Also, the mainline developers for x86 don't have the constraints that
we have, so it's a case of "out of sight, out of mind", and a batch of
them have those glitzy workstations that they build native code for,
instead of the hardware us embedded developers have.

Since I don't have any choice but to build natively on what to GCC
developers is "crippled hardware" (only 263 BogoMips), it takes
somewhere around 20 times as long to build the packages, and a "minor" 3%
slowdown means it takes a *lot* longer to go through a build cycle.
This also means that I can't track snapshots, since they show up
quicker than the raw compute time needed to just build everything,
while hoping that the build doesn't blow its brains out due to a "minor"
increase in memory consumption.

-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Richard Earnshaw
On Mon, 2005-05-16 at 16:17, Steven Bosscher wrote:
> On Monday 16 May 2005 16:53, Scott Robert Ladd wrote:
> > The problem is, a bloated GCC has no consequences for the majority of
> > GCC developers -- their employers have other (and valid) concerns. It's
> > less a matter of laziness than it is of not caring outside one's own
> > backyard.
> 
> And to second your point in an awkward way: I don't see this as a
> problem.  If all those people who think this is a problem would
> also fund GCC development (with hard cash or with developers), who
> knows, probably things would look different.

if only it were that simple[1].  However, even if the money does get
spent it's unlikely to help because there are too many developers that
just DON'T CARE about (or worse, seem to be openly hostile to) making
the compiler more efficient.

No company is going to spend money on fixing this until we adjust our
(collective) attitude and take this seriously.  If one person can
continue to undo the good work of a dozen others with one lousy commit
we'll never get anywhere here.

R.

[1] Spending money is never simple ;-)


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Daniel Berlin
On Mon, 2005-05-16 at 16:53 +0100, Richard Earnshaw wrote:
> On Mon, 2005-05-16 at 16:17, Steven Bosscher wrote:
> > On Monday 16 May 2005 16:53, Scott Robert Ladd wrote:
> > > The problem is, a bloated GCC has no consequences for the majority of
> > > GCC developers -- their employers have other (and valid) concerns. It's
> > > less a matter of laziness than it is of not caring outside one's own
> > > backyard.
> > 
> > And to second your point in an awkward way: I don't see this as a
> > problem.  If all those people who think this is a problem would
> > also fund GCC development (with hard cash or with developers), who
> > knows, probably things would look different.
> 
> if only it were that simple[1].  However, even if the money does get
> spent it's unlikely to help because there are too many developers that
> just DON'T CARE about (or worse, seem to be openly hostile to) making
> the compiler more efficient.

They don't care because nobody pays them to care (i.e., you've got it
backwards), and they have other, higher-priority spare-time projects that
they like to work on.

If you want to change the priorities of paid developers, you will have
to do so by affecting the work they are paid to do, not by trying to
convince them that speeding up the compiler is better than whatever
hobby projects they enjoy working on.  This is because speeding up the
compiler is almost never an enjoyable hobby project :).




Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Monday 16 May 2005 17:53, Richard Earnshaw wrote:
> On Mon, 2005-05-16 at 16:17, Steven Bosscher wrote:
> > On Monday 16 May 2005 16:53, Scott Robert Ladd wrote:
> > > The problem is, a bloated GCC has no consequences for the majority of
> > > GCC developers -- their employers have other (and valid) concerns. It's
> > > less a matter of laziness than it is of not caring outside one's own
> > > backyard.
> >
> > And to second your point in an awkward way: I don't see this as a
> > problem.  If all those people who think this is a problem would
> > also fund GCC development (with hard cash or with developers), who
> > knows, probably things would look different.
>
> if only it were that simple[1].  However, even if the money does get
> spent it's unlikely to help because there are too many developers that
> just DON'T CARE about (or worse, seem to be openly hostile to) making
> the compiler more efficient.

I've not seen anyone who is hostile to the idea of making the compiler
more efficient.  But it is difficult to expect people to care about a
thing that is not a problem for them.  I think it _would_ help if there
were people working specifically on reducing e.g. the memory footprint.
It is not like we don't know where the problems are, there is just not
a soul who cares enough to fix these problems.  Most souls start caring
when there is a monetary compensation in it for them ;-)

> No company is going to spend money on fixing this until we adjust our
> (collective) attitude and take this seriously.

Agreed.

A lot of work has already been done on speeding up the compiler, but
we are not really going anywhere if we add 80+ tree passes and remove
nothing.  There is also a SUSE bot that posts to gcc-regressions if
the memory footprint has increased, but people are completely ignoring
it.

>  If one person can
> continue to undo the good work of a dozen others with one lousy commit
> we'll never get anywhere here.

I don't think people are deliberately sabotaging efforts to make GCC
faster/smaller/etc.  And there are rules for what is considered a
regression, and for what to do with patches that cause them.

Gr.
Steven




Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Richard Earnshaw
On Mon, 2005-05-16 at 17:03, Daniel Berlin wrote:

> > if only it were that simple[1].  However, even if the money does get
> > spent it's unlikely to help because there are too many developers that
> > just DON'T CARE about (or worse, seem to be openly hostile to) making
> > the compiler more efficient.
> 
> They don't care because nobody pays them to care (IE you've got it
> backwards), and they have other higher priority spare time projects that
> they like to work on.
> 

It shouldn't be necessary to pay every developer to care.  We need to
buy into the fact that if some of the developer community cares enough
to pay for the work to be done, then doing things that undo that work
are going to be unpopular.  That is, we should treat increases in memory
usage/slow-downs in the compiler as regressions in the same way as we
treat worse code as regressions.  That's the only way we'll ever get
serious about this.  Unless and until we can accept this then nobody is
going to put money into it, because it'll just be wasted money.

> If you want to change the priorities of paid developers, you will have
> to do so by affecting the work they are paid to do, not by trying to
> convince them that speeding up the compiler is better than whatever
> hobby projects they enjoy working on.  This is because speeding up the
> compiler is almost never an enjoyable hobby project :).

I'm fully aware of this fact.  It doesn't change things though.  If we
are serious about engineering a good compiler, then we need to be just
as serious about these issues.

R.


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Paolo Bonzini
> Also there are so many interdependencies
> between packages that we have to build a pile of libraries and support
> stuff that is never used on the target just so we can get a package
> that we do need to configure/build (like sed and perl).

Please give me as much information as possible on sed.  AFAIK, 
configuring with --disable-nls should be enough to skip libiconv, 
libintl, etc. and cross-build.

Paolo


Daniel Berlin and Sebastian Pop maintainerships

2005-05-16 Thread David Edelsohn
I am pleased to announce that the GCC Steering Committee has
appointed Daniel Berlin as maintainer of

tree-chrec.*
tree-data-ref.*
tree-scalar-evolution.*
tree-ssa-sink.*
lambda*

and Sebastian Pop as maintainer of

tree-chrec.*
tree-data-ref.*
tree-scalar-evolution.*

Daniel and Sebastian, please update your listings in the
MAINTAINERS file.

Happy hacking!
David



Re: GCC 3.4.4 RC2

2005-05-16 Thread Janis Johnson
On Sun, May 15, 2005 at 08:59:48AM -0700, Mark Mitchell wrote:
> Joseph S. Myers wrote:
> 
> >It also looks like this patch has been backported to 3.4 branch but not to 
> >4.0 branch?  Because 4.0 branch builds are still creating 
> >libstdc++-abi.sum, while 3.4 branch builds no longer do, the ABI tests 
> >having been subsumed in the main libstdc++.sum for mainline and 3.4 
> >branch.
> 
> Yes, I asked Janis to test each branch separately, because the patches 
> were separate.  She has confirmed that the 4.0 version of the patch 
> works OK.  So, that patch will go on 4.0 today, along with the 
> additional patch Andreas found.

I hadn't noticed originally but on powerpc64-linux with 3.4.4 RC2 and
with the 3.4 branch, the results for libstdc++-v3 show only one run of
the tests for "unix", not two for "unix/-m32" and "unix/-m64", and the
results are actually for check-abi.  The leftover temp files in the
build directory show that the library tests were actually run, just not
reported.  I can't tell if they were run for both -m32 and -m64.

On the 4.0 branch, check-abi is not being run (or not reported?) but
the libstdc++ tests are being run and reported for -m32 and -m64 as
expected.

I'm very sorry I didn't notice this earlier.

Janis


Re: Need help creating a small test case for g++ 4.0.0 bug

2005-05-16 Thread Janis Johnson
On Sat, May 14, 2005 at 12:16:54PM +1000, Paul C. Leopardi wrote:
> Hi all,
> I originally posted these messages to gcc-help, but had no reply, so I am 
> re-posting links to them here. 
> 
> I think I have found a bug in g++ 4.0.0, but need help in reporting it. 
> Maintainers like their bug reports to include short test cases, but I don't 
> know how to generate a short test case involving inlining. I discovered the 
> original problem by compiling GluCat ( http://glucat.sf.net ) and the 
> preprocessor output from a short GluCat test program contains over 66 000 
> lines of libstdc++, uBLAS and Glucat code.
> 
> Can anyone help, or should I just file a bug report using the huge test case?

The information in http://gcc.gnu.org/bugs/minimize.html might help.

Janis


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Peter Barada

>> Also there are so many interdependencies
>> between packages that we have to build a pile of libraries and support
>> stuff that is never used on the target just so we can get a package
>> that we do need to configure/build(like sed and perl).
>
>Please give me as much information as possible on sed.  AFAIK, 
>configuring with --disable-nls should be enough to skip libiconv, 
>libintl, etc. and cross-build.

I don't think sed has a problem cross-building; it's just all the junk
that each package uses in its configure which, if it *has* to be
built natively, compounds the problem.

-- 
Peter Barada
[EMAIL PROTECTED]


Bootstrap broken

2005-05-16 Thread David Edelsohn
The awk patch appears to have broken bootstrap:

In file included from /usr/include/sys/localedef.h:44,
 from /usr/gnu/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.2.3/include/stdlib.h:447,
 from /farm/dje/src/src/gcc/system.h:208,
 from options.c:5:
/usr/gnu/lib/gcc-lib/powerpc-ibm-aix5.1.0.0/3.2.3/include/locale.h:91: parse error before "const"
options.c:904: parse error before numeric constant
options.c:3393: invalid lvalue in unary `&'
options.c:3393: initializer element is not constant
options.c:3393: (near initialization for `cl_options[489].flag_var')
options.c:3393: warning: missing initializer
options.c:3393: warning: (near initialization for `cl_options[489].flag_var')
options.c:3393: initializer element is not constant
options.c:3393: (near initialization for `cl_options[489]')
options.c:3398: initializer element is not constant
...


I notice that options.h no longer is included, but including that
file does not fix the problem.

David


Re: Bootstrap broken

2005-05-16 Thread Richard Sandiford
David Edelsohn <[EMAIL PROTECTED]> writes:
>   I notice that options.h no longer is included, but including that
> file does not fix the problem.

Argh!  Sorry for the breakage.  Can you send me the preprocessed
options.c file?

Richard


Re: Bootstrap broken

2005-05-16 Thread Richard Sandiford
Richard Sandiford <[EMAIL PROTECTED]> writes:
> David Edelsohn <[EMAIL PROTECTED]> writes:
>>  I notice that options.h no longer is included, but including that
>> file does not fix the problem.
>
> Argh!  Sorry for the breakage.  Can you send me the preprocessed
> options.c file?

BTW, that was kind of a random quote from the message rather than
a direct reply to the options.h thing.  To answer that: options.h
is now included indirectly via tm.h.

Richard


Re: Bootstrap broken

2005-05-16 Thread David Edelsohn
Sigh.  This appears to be a combination of issues related to the
new headers being included.

First, src/gcc/intl.h includes:

#ifndef HAVE_SETLOCALE
# define setlocale(category, locale) (locale)
#endif

and auto-host.h in the build directory includes:

/* Define to 1 if you have the `setlocale' function. */
#ifndef USED_FOR_TARGET
#define HAVE_SETLOCALE 1
#endif

but the define is triggering, causing

struct lconv *localeconv(void);
char   *setlocale(int, const char *);

to become

struct lconv *localeconv(void);
char   *(int, const char *);

The second problem is including tm.h means that options.c sees

#undef  TARGET_ALTIVEC_VRSAVE
#define TARGET_ALTIVEC_VRSAVE 0

conflicting with options.c

/* Set by -mvrsave.
   Generate VRSAVE instructions when generating AltiVec code  */
int TARGET_ALTIVEC_VRSAVE;

which it did not see before.  Other uses of the variable protect it with a
test that it is not a constant, but options.c does not.

I can remove the TARGET_ALTIVEC_VRSAVE definition, but the
setlocale() problem is more fundamental.

David


Re: GCC 3.4.4 RC2

2005-05-16 Thread Mark Mitchell
Janis Johnson wrote:
I hadn't noticed originally but on powerpc64-linux with 3.4.4 RC2 and
with the 3.4 branch, the results for libstdc++-v3 show only one run of
the tests for "unix", not two for "unix/-m32" and "unix/-m64", and the
results are actually for check-abi.  The leftover temp files in the
build directory show that the library tests were actually run, just not
reported.  I can't tell if they were run for both -m32 and -m64.
On the 4.0 branch, check-abi is not being run (or not reported?) but
the libstdc++ tests are being run and reported for -m32 and -m64 as
expected.
I'm very sorry I didn't notice this earlier.
Not to worry; already fixed!  On 3.4, we just had a merge botch, which 
Andreas fixed.  On 4.0, the behavior you're seeing is as intended; the 
ABI test is now included in "make check".

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Georg Bauhaus
Peter Barada wrote:

> Until package maintainers take cross-compilation *seriously*,

Or cross-programming in general,
or until GNU programmers write software in a way such that if
the GNU platform changes, translation of configuration tools is
still possible by design.
I've just given up running the GCC 3.4 testsuite on MacOS X 10.2,
after spending a few days getting from 3.1-incl-Ada -> 3.3-incl-Ada
(src 1495) -> system-3.3 + 3.3-incl-Ada -> 3.4.
I've got a compiler, after injection of a number of shoehorns
into Make-lang.in and into the system.
The configuration trouble appears not to be caused by the GCC
sources proper, but mostly by the assumption-based configuration
programs used and *their* associated version hel^H^H^H dependence
graph etc. (one workaround in another thread).
Yes, MacOS X 10.2 is old, not GNU, etc.; still, it seems to me that
the quality of the configuration programs could be improved by
just slightly relaxing the at-least-complete-GNU-x86 (or equivalent)
assumption, if only the attitude towards configuration changes.
This can't hurt even within GNU.  End of rant.
Georg


Re: Is there a way to generate a cross reference listing for a c/c++ program using gcc?

2005-05-16 Thread Paul Albrecht
William Beebe writes:

>
>... If you want what LXR provides (and yes, I looked it up) then get
Doxygen
>

Not only does Doxygen not meet my requirements, but it also doesn't make
much sense to me for each cross-reference tool to implement its own source
code parser and symbol database when the symbol information could
(optionally) be output by the compiler and subsequently processed into a
database file by a utility program.  But I guess we'll just have to agree
to disagree.

>
>... why don't you write that cross-reference output feature?
>

Is anyone else planning on or interested in adding a symbol cross-reference
option to gcc for the "C" programming language? Are there any objections to
adding a symbol cross-reference option to gcc for the "C" programming
language?





Re: Bootstrap broken

2005-05-16 Thread Richard Sandiford
As David describes here:

   http://gcc.gnu.org/ml/gcc/2005-05/msg00807.html

having options.c include config.h, system.h, coretypes.h and tm.h
was causing a bootstrap failure on AIX.  The first of the two problems
was caused by options.c including <intl.h> before anything else.

The patch below changes the include order so that "intl.h"
is included after config.h and system.h, and is included using
quotes rather than angle brackets:

#include "config.h"
#include "system.h"
#include "coretypes.h"
#include "tm.h"
#include "opts.h"
#include "intl.h"

This matches the convention used elsewhere in gcc (such as in toplev.c).

Tested by sanity-checking on i686-pc-linux-gnu.  David confirmed
that the new include order fixes the first problem.  Applied to
mainline as obvious.

Sorry once again for the breakage.

Richard


* optc-gen.awk: Include intl.h after the externally-provided files.

Index: optc-gen.awk
===================================================================
RCS file: /cvs/gcc/gcc/gcc/optc-gen.awk,v
retrieving revision 2.7
diff -u -p -F^\([(a-zA-Z0-9_]\|#define\) -r2.7 optc-gen.awk
--- optc-gen.awk	16 May 2005 12:30:04 -0000	2.7
+++ optc-gen.awk	16 May 2005 18:13:00 -0000
@@ -56,11 +56,11 @@ BEGIN {
 END {
 print "/* This file is auto-generated by opts.sh.  */"
 print ""
-print "#include <intl.h>"
 n_headers = split(header_name, headers, " ")
 for (i = 1; i <= n_headers; i++)
print "#include " quote headers[i] quote
 print "#include " quote "opts.h" quote
+print "#include " quote "intl.h" quote
 print ""
 
 for (i = 0; i < n_opts; i++) {


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Russ Allbery
Peter Barada <[EMAIL PROTECTED]> writes:

> Until package maintainers take cross-compilation *seriously*, I have no
> choice but to do native compilation of a large hunk of the packages on
> eval boards that can literally takes *DAYS* to build.

And package maintainers will never take cross-compilation seriously even
if they really want to because they, for the most part, can't test it.
Very few people who are not cross-compiling for specific reasons have any
sort of cross-compilation setup available or even know how to start with
one, and it's the sad fact in software development that anything that
isn't regularly tested breaks.

Most free software packages are doing well if they even have a basic test
suite for core features, let alone something perceived as obscure like
cross-compilation.

To really make cross-compilation work on a widespread basis would require
a huge amount of effort in setting up automated test environments where
package maintainers could try it out, along with a lot of help in
debugging problems and providing patches.  It seems unlikely to me that
it's going to happen outside the handful of packages that are regularly
used in cross-build environments and receive active regular testing by
people who are part of the development team (like gcc).

-- 
Russ Allbery ([EMAIL PROTECTED]) 


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Karel Gardas
On Mon, 16 May 2005, Steven Bosscher wrote:

> Just for the record, attached is gcctest's history of the overall
> memory requirement at -O[0123] for combine.i, insn-attrtab.i, and
> generate.ii (aka PR8361).  Honza's bot has been sending these
> reports since September 2004, so that's where I started.

Is it possible to also add -Os to your tested option set?  IMHO this
option is quite necessary for the embedded developers who seem to be
complaining in this thread.

Thanks,
Karel
--
Karel Gardas  [EMAIL PROTECTED]
ObjectSecurity Ltd.   http://www.objectsecurity.com


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Robert Dewar <[EMAIL PROTECTED]> wrote:

> After all, you can buy from Dell today a 2.4GHz machine with a 17"
> monitor, DVD drive, and 256Meg memory for $299 complete. Sure, some
> people cannot even afford that, but it is not clear that the gcc
> project can regard this as a major user segment that should be taken
> into account.

Just step back for a second and consider that the most common
computation platform these days is cell phones.  Also consider that a
number of cell phone manufacturers are adopting, or considering
adopting, GNU/Linux.  Consider that at least some of them are going to
enable users to download programs into the cell phones and run them.
Also consider that not all cell phones are identical.

Now wouldn't it be nice to be able to download some useful program in
source form and build it locally on your cell phone, while on the
road?  Sure, few people might be able to accomplish that without a
nice building wizard front-end, but that's doable.  Would we want GCC
to be tool that prevents this vision from coming true?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread DJ Delorie

> No company is going to spend money on fixing this until we adjust
> our (collective) attitude and take this seriously.

We could call ulimit() to force everyone to have less available RAM.
Connect it with one of the maintainer flags, like enable-checking or
something, so it doesn't penalize distributors.


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Andrew Pinski
On May 16, 2005, at 2:46 PM, DJ Delorie wrote:

>> No company is going to spend money on fixing this until we adjust
>> our (collective) attitude and take this seriously.
>
> We could call ulimit() to force everyone to have less available RAM.
> Connect it with one of the maintainer flags, like enable-checking or
> something, so it doesn't penalize distributors.

We already do that when checking is enabled; well, the GC heuristics
are tuned such that it does not change, which is why
--enable-checking=release is always faster than without it.
-- Pinski


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Russ Allbery <[EMAIL PROTECTED]> wrote:

> And package maintainers will never take cross-compilation seriously even
> if they really want to because they, for the most part, can't test it.

configure --build=i686-pc-linux-gnu \
--host=i686-somethingelse-linux-gnu 

should be enough to exercise most of the cross-compilation issues, if
you're using a sufficiently recent version of autoconf, but I believe
you already knew that.

The most serious problem regarding cross compilation is that it's
regarded as hard, so many people would rather not even bother to try
to figure it out.  So it indeed becomes a hard problem, because then
you have to fix a lot of stuff in order to get it to work.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Russ Allbery
Alexandre Oliva <[EMAIL PROTECTED]> writes:
> On May 16, 2005, Russ Allbery <[EMAIL PROTECTED]> wrote:

>> And package maintainers will never take cross-compilation seriously
>> even if they really want to because they, for the most part, can't test
>> it.

> configure --build=i686-pc-linux-gnu \
> --host=i686-somethingelse-linux-gnu 

> should be enough to exercise most of the cross-compilation issues, if
> you're using a sufficiently recent version of autoconf, but I believe
> you already knew that.

What, you mean my lovingly hacked upon Autoconf 2.13 doesn't work?  But I
can't possibly upgrade; I rewrote all of the option handling in a macro!

Seriously, though, I think the above only tests things out to the degree
that Autoconf would already be warning about no default specified for
cross-compiling, yes?  Wouldn't you have to at least cross-compile from a
system with one endianness and int size to a system with a different
endianness and int size and then try to run the resulting binaries to
really see if the package would cross-compile?

A scary number of packages, even ones that use Autoconf, bypass Autoconf
completely when checking certain things or roll their own broken macros to
do so.

> The most serious problem regarding cross compilation is that it's
> regarded as hard, so many people would rather not even bother to try to
> figure it out.  So it indeed becomes a hard problem, because then you
> have to fix a lot of stuff in order to get it to work.

It's not just that it's perceived as hard.  It's that it's perceived as
hard *and* obscure.  Speaking as the maintainer of a package that I'm
pretty sure could be cross-compiled with some work but that I'm also
pretty sure likely wouldn't work just out of the box, I have never once
gotten a single bug report, request, or report of anyone cross-compiling
INN.  Given that, it's hard to care except in some abstract cleanliness
sense (and I already got rid of all of the Autoconf warnings as best as I
could figure out, in the abstract caring department).

-- 
Russ Allbery ([EMAIL PROTECTED]) 


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread DJ Delorie

> We already do that for when checking is enabled, well the GC heuristics
> are tuned such that it does not change which is why
> --enable-checking=release is always faster than without it.

Right, but it doesn't call ulimit(), so other sources of memory
leakage wouldn't be affected.  I'm thinking if the gcc driver set a
per-process limit of, say, 128M, developers would learn to care about
working set performance.


Re: GCC 3.4.4 RC2

2005-05-16 Thread Georg Bauhaus
Mark Mitchell wrote:

> GCC 3.4.4 RC2 is now available here:
>
> Please download, build, and test.

On Mac OS X 10.2, the results are slightly discomforting,
even though I do get a compiler with
--enable-languages=c,ada,f77,c++,objc.
The gcc summary has

# of unexpected failures        1080

(I couldn't get any further because gnatmake, run from the ACATS dir,
can't find macrosub.adb, even after some tweaking.)
In case anyone is interested in this "old" configuration, which
in one place might be of interest for mainline Ada configurations:
I needed a GCC 3.3 for bootstrapping with
--enable-languages=c,ada,f77,c++,objc.  Apple's 3.3 dmg doesn't
come with Ada, so I managed to build GCC from sources numbered 1495
(the macada.org sources), using --with-suffix=1495.
Then, to get a bootstrapped 3.4 compiler, the following changes
were necessary to pass $CC (which is not "gcc", per --with-suffix)
on to gnatmake in gcc/ada/Make-lang.in.  It looks like this is a
relative of PR/13035.  The Make-lang.in still looks similar in
mainline, so maybe this is of general interest.
--- gcc/ada/Make-lang.in.orig   Wed Sep  1 22:07:42 2004
+++ gcc/ada/Make-lang.inMon May 16 19:41:23 2005
@@ -446,7 +446,7 @@
ada/doctools/xgnatugn$(build_exeext): ada/xgnatugn.adb
-$(MKDIR) ada/doctools
$(CP) $^ ada/doctools
-   cd ada/doctools && gnatmake -q xgnatugn
+   cd ada/doctools && gnatmake -q --GCC=$(CC) xgnatugn -largs --GCC=$(CC)
# Note that gnat_ugn_unw.texi does not depend on xgnatugn 
# being built so we can distribute a pregenerated gnat_ugn_unw.info
@@ -988,27 +988,37 @@
ada/treeprs.ads : ada/treeprs.adt ada/sinfo.ads ada/xtreeprs.adb
	-$(MKDIR) ada/bldtools
	$(CP) $^ ada/bldtools
-	(cd ada/bldtools; gnatmake -q xtreeprs ; ./xtreeprs ../treeprs.ads )
+	(cd ada/bldtools; \
+	 gnatmake -q --GCC=$(CC) xtreeprs -largs -v --GCC=$(CC) ; \
+	 ./xtreeprs ../treeprs.ads )

ada/einfo.h : ada/einfo.ads ada/einfo.adb ada/xeinfo.adb
-$(MKDIR) ada/bldtools
$(CP) $^ ada/bldtools
-   (cd ada/bldtools; gnatmake -q xeinfo ; ./xeinfo ../einfo.h )
+   (cd ada/bldtools; \
+gnatmake -q --GCC=$(CC) xeinfo -largs -v --GCC=$(CC); \
+./xeinfo ../einfo.h )
ada/sinfo.h : ada/sinfo.ads ada/xsinfo.adb
-$(MKDIR) ada/bldtools
$(CP) $^ ada/bldtools
-   (cd ada/bldtools; gnatmake -q xsinfo ; ./xsinfo ../sinfo.h )
+   (cd ada/bldtools; \
+gnatmake -q --GCC=$(CC) xsinfo -largs -v --GCC=$(CC); \
+./xsinfo ../sinfo.h )
ada/nmake.adb : ada/sinfo.ads ada/nmake.adt ada/xnmake.adb
-$(MKDIR) ada/bldtools
$(CP) $^ ada/bldtools
-   (cd ada/bldtools; gnatmake -q xnmake ; ./xnmake -b ../nmake.adb )
+   (cd ada/bldtools; \
+gnatmake -q --GCC=$(CC) xnmake -largs -v --GCC=$(CC) ; \
+./xnmake -b ../nmake.adb )
ada/nmake.ads :  ada/sinfo.ads ada/nmake.adt ada/xnmake.adb ada/nmake.adb
-$(MKDIR) ada/bldtools
$(CP) $^ ada/bldtools
-   (cd ada/bldtools; gnatmake -q xnmake ; ./xnmake -s ../nmake.ads )
+   (cd ada/bldtools; \
+gnatmake -q --GCC=$(CC) xnmake -largs -v --GCC=$(CC); \
+./xnmake -s ../nmake.ads )
update-sources : ada/treeprs.ads ada/einfo.h ada/sinfo.h ada/nmake.adb \
ada/nmake.ads


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Hugh Sasse
On Mon, 16 May 2005, Alexandre Oliva wrote:

> On May 16, 2005, Robert Dewar <[EMAIL PROTECTED]> wrote:
>> After all, you can buy from Dell today a 2.4GHz machine with a 17"
>> monitor, DVD drive, and 256Meg memory for $299 complete. Sure, some
>> people cannot even afford that, but it is not clear that the gcc
>> project can regard this as a major user segment that should be taken
>> into account.
>
> Just step back for a second and consider that the most common
> computation platform these days is cell phones.  Also consider that a
> number of cell phone manufacturers are adopting, or considering
> adopting, GNU/Linux.  Consider that at least some of them are going to
> [...]
Is it pertinent to remind people of the wider spread of Free
Software, such as Bangladesh (Brave GNU World, issue 56) and Africa
(various issues of Brave GNU World Eg 53,43) where people have
considerably more difficulties keeping up with Moore's Law?
Apologies if you didn't need reminding. :-)
I approve of the spirit of the suggestion about ulimit.
Hugh


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Florian Weimer
* Russ Allbery:

> Seriously, though, I think the above only tests things out to the degree
> that Autoconf would already be warning about no default specified for
> cross-compiling, yes?  Wouldn't you have to at least cross-compile from a
> system with one endianness and int size to a system with a different
> endianness and int size and then try to run the resulting binaries to
> really see if the package would cross-compile?

Is this really necessary?  I would think that a LD_PRELOADed DSO which
prevents execution of freshly compiled binaries would be sufficient to
catch the most obvious errors.

If configure is broken, you can still bypass it and manually write a
config.h.  Even I can remember the days when this was a rather common
task, even when you were not cross-compiling.

> It's not just that it's perceived as hard.  It's that it's perceived as
> hard *and* obscure.

Well, it's hard to keep something working which you cannot test
reliably.  I think it would be pretty straightforward to support some
form of cross-compiling for the software I currently maintain
(especially if I go ahead and write that GCC patch for exporting
structure layout and compile-time constants), but there's no point in
doing so if it's not requested by users, I cannot test it as part of
the release procedures, and anybody who needs a binary can typically
cross-compile it without much trouble anyway ("vi config.h ; gcc
*/*.o").


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Mike Stump
On May 16, 2005, at 12:25 PM, Hugh Sasse wrote:

> Is it pertinent to remind people of the wider spread of Free
> Software, such as Bangladesh (Brave GNU World, issue 56) and Africa
> (various issues of Brave GNU World Eg 53,43) where people have
> considerably more difficulties keeping up with Moore's Law?

?  I'd predict they are exponential too...  They just lag by, oh,
5-40 years.



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Joseph S. Myers
On Mon, 16 May 2005, Steven Bosscher wrote:

> Just for the record, attached is gcctest's history of the overall
> memory requirement at -O[0123] for combine.i, insn-attrtab.i, and
> generate.ii (aka PR8361).  Honza's bot has been sending these
> reports since September 2004, so that's where I started.

When memory consumption regresses, I think it's necessary to *file bugs in 
Bugzilla* reporting the issue, and quite likely to track down the 
responsible patch and add the responsible individual to the CC list of the 
PR.

Bots are useful, but they don't narrow things down to an individual 
patch and don't lead to the problem's existence being tracked beyond the 
immediate discussion.  Automatic regression testers are more effective 
when backed up by the people running them examining all the reported 
regressions and reporting them (less those which are already fixed or 
reported or are just noise from problems with the tester or tests which 
randomly pass or fail) to Bugzilla.

That's what I do with all testsuite regressions for C or C++ appearing on 
i686-linux, ia64-hpux, hppa2.0w-hpux or hppa64-hpux.  When a bug is 
reported this way there is a significant chance that there will be 
productive discussion involving the people responsible for causing or 
exposing the bug and leading to it being fixed.  (This doesn't always 
happen, especially for regressions only showing up on a subset of targets; 
so the reporter or someone else who cares about the problem may need to 
fix the problem someone else caused in the end.  Bugs 20605 and 21050 are 
examples of testsuite regressions affecting at least one secondary release 
platform which have the responsible patch identified but little attention 
shown.)

There has been discussion of regression testers automatically reporting 
failures to Bugzilla, and even a "regression" component existing for that 
purpose (presumably with the idea that people will refile those bugs into 
more specific components after analysis) - but there are only two open 
bugs in that component, both apparently manually reported.  From 
experience I think having people look at the regressions before reporting 
them in order at least to identify the distinct bugs involved and whether 
any are already known is desirable; at least it would avoid floods of 
automatic duplicate bugs for essentially the same issue.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: GCC 3.4.4 RC2

2005-05-16 Thread Mark Mitchell
Georg Bauhaus wrote:

> On Mac OS X 10.2, the results are slightly discomforting,
> even though I do get a compiler with
> --enable-languages=c,ada,f77,c++,objc.
> The gcc summary has
> # of unexpected failures        1080
First, I would suggest disabling Ada, in order to get further.
As for the GCC failures, 1080 is certainly enough to say that the 
compiler is not working very well.  It may be the case that GCC 3.4.4 
requires newer versions of Apple's "cctools" package than you have 
installed -- and that the newer cctools cannot be installed on your 
version of the OS.  If that's the case, there may be no very good solution.

We'll not be able to say for sure unless you post additional information 
about the failures.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Karel Gardas
On Mon, 16 May 2005, DJ Delorie wrote:

>> We already do that for when checking is enabled, well the GC heuristics
>> are tuned such that it does not change which is why
>> --enable-checking=release is always faster than without it.
>
> Right, but it doesn't call ulimit(), so other sources of memory
> leakage wouldn't be affected.  I'm thinking if the gcc driver set a
> per-process limit of, say, 128M, developers would learn to care about
> working set performance.
I like the idea, but will it really work?  While compiling MICO I hardly
ever see memory usage below 128MB on 512MB/1GB RAM boxes, perhaps more on
512MB due to the memory usage heuristic(s) -- so I assume setting a hard
ulimit of 128MB will just make the build process crash instead of slowing
down and swapping, which is what one would get using mem=128m as a Linux
boot param.  Or am I completely mistaken?

Thanks,
Karel
--
Karel Gardas  [EMAIL PROTECTED]
ObjectSecurity Ltd.   http://www.objectsecurity.com


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Monday 16 May 2005 20:26, Karel Gardas wrote:
> On Mon, 16 May 2005, Steven Bosscher wrote:
> > Just for the record, attached is gcctest's history of the overall
> > memory requirement at -O[0123] for combine.i, insn-attrtab.i, and
> > generate.ii (aka PR8361).  Honza's bot has been sending these
> > reports since Septemper 2004, so that's where I started.
>
> Is it possible to also add -Os to your tested option set? IMHO this option
> is quite necessary for embedded developers who seems to complain in this
> thread.

Not interesting.  -Os is basically -O2 with some passes disabled.

Gr.
Steven


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread DJ Delorie

> so I assume setting hard ulimit to 128MB will just result in build
> process crashing instead of slowdown and swapping,

We would limit physical ram, not virtual ram.  If you do a "man
setrlimit", I'm talking about RLIMIT_RSS.  The result would be slowing
down and swapping, not crashing.
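Programmatically, the limit DJ describes corresponds to a setrlimit() call. A minimal sketch (assuming a system that defines RLIMIT_RSS, as Linux and the BSDs do; the function names are mine, not from any GCC source):

```c
#include <assert.h>
#include <sys/resource.h>

/* Lower the soft resident-set-size limit for the current process.
   Returns 0 on success.  Lowering the soft limit below the hard
   limit needs no privileges.  Whether the kernel actually enforces
   RLIMIT_RSS is another matter entirely.  */
static int
set_rss_limit (unsigned long bytes)
{
  struct rlimit rl;
  if (getrlimit (RLIMIT_RSS, &rl) != 0)
    return -1;
  rl.rlim_cur = bytes;
  return setrlimit (RLIMIT_RSS, &rl);
}

/* Read back the current soft RSS limit.  */
static unsigned long
get_rss_soft (void)
{
  struct rlimit rl;
  getrlimit (RLIMIT_RSS, &rl);
  return (unsigned long) rl.rlim_cur;
}
```

A driver wanting DJ's 128M working-set experiment could call set_rss_limit (128UL * 1024 * 1024) before exec'ing cc1.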


updating /testsuite/gcc.misc-tests

2005-05-16 Thread Nicholas K Rivers
Hello,

I'm new to GCC and hoping to get involved in its development. I'm working on
moving tests out of testsuite/gcc.misc-tests and putting them into the more
general frameworks--a project listed on
http://gcc.gnu.org/projects/beginner.html. Is there someone who would
receive my updates and review them?

Thanks,

Nicholas



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Karel Gardas
On Mon, 16 May 2005, DJ Delorie wrote:

so I assume setting hard ulimit to 128MB will just result in build
process crashing instead of slowdown and swapping,
We would limit physical ram, not virtual ram.  If you do a "man
setrlimit", I'm talking about RLIMIT_RSS.  The result would be slowing
down and swapping, not crashing.
But will this really work? For example FreeBSD's manpage says:
``RLIMIT_RSS  The maximum size (in bytes) to which a process's resident
  set size may grow.  This imposes a limit on the amount of
  physical memory to be given to a process; if memory is
  tight, the system will prefer to take memory from
  processes that are exceeding their declared resident
  set size.''
What I have trouble understanding is the last sentence of this paragraph 
in the light of your claim that it will result in swapping, especially 
when we consider developers' machines with 512MB/1GB RAM, i.e. machines 
where memory is not "tight".

Thanks,
Karel
--
Karel Gardas  [EMAIL PROTECTED]
ObjectSecurity Ltd.   http://www.objectsecurity.com


Re: updating /testsuite/gcc.misc-tests

2005-05-16 Thread Mike Stump
On May 16, 2005, at 12:21 PM, Nicholas K Rivers wrote:
I'm new to GCC and hoping to get involved in its development. I'm  
working on
moving tests out of testsuite/gcc.misc-tests and putting them into  
the more
general frameworks--a project listed on
http://gcc.gnu.org/projects/beginner.html. Is there someone who would
receive my updates and review them?
In general, I am against test migration.  We want the testcase names
to be stable, forever, to quote my favorite.., uhm, to quote Sponge
Bob.



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Peter Barada

>What I have trouble understanding is the last sentence of this paragraph 
>in the light of your claim that it will result in swapping, especially 
>when we consider developers' machines with 512MB/1GB RAM, i.e. machines 
>where memory is not "tight".

Sure, and this is the point.  Pick a number for the RSS and stick to
it.  Crank it up if you don't want to be bothered, but this
"yardstick" can be used to measure trends in the compiler's footprint
by measuring the number of page swaps.  It might also be a way to
measure/improve the locality of reference since poor locality will
cause thrashing if the RSS is set low enough.  Of course if the RSS is
set too low then *any* pattern of page access will cause thrashing.

-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread DJ Delorie

> What I have trouble understanding is the last sentence of this
> paragraph in the light of your claim that it will result in
> swapping especially when we consider developers' machines with
> 512MB/1GB RAM, i.e. machines where memory is not "tight".

Sigh, Linux works the same way.  Processes can exceed their HARD
ulimit if there happens to be memory available, making RLIMIT_RSS
basically useless.

Grrr.


Re: [gnu.org #232556] GNU Mirror: SWITCHmirror replaces Swiss SunSITE

2005-05-16 Thread Gerald Pfeifer
On Thu, 5 May 2005, John Sullivan wrote:
> I've updated the mirror list on gnu.org. I'm passing this on to you so
> you can consider it for the GCC mirrors list as per his request.

Thanks.

Please note that for http://gcc.gnu.org/mirrors.html we only add mirrors
which specifically mirror the ftp area from gcc.gnu.org, whereas we refer
to the GNU mirror site list for those sites that mirror ftp.gnu.org (and
thus GCC releases).

Gerald
-- 
Gerald (Jerry) Pfeifer   [EMAIL PROTECTED]   http://www.pfeifer.com/gerald/


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Ralf Corsepius
On Mon, 2005-05-16 at 10:42 -0400, Peter Barada wrote:

> Until package maintainers take cross-compilation *seriously*, I have
> no choice but to do native compilation of a large hunk of the packages
> on eval boards that can literally take *DAYS* to build.

The most amazing fact to me is: Not even GCC seems to take cross-
compilation seriously :(

Ralf




Re: updating /testsuite/gcc.misc-tests

2005-05-16 Thread Nicholas K Rivers
Mike Stump wrote:

> On May 16, 2005, at 12:21 PM, Nicholas K Rivers wrote:
>> I'm new to GCC and hoping to get involved in its development. I'm
>> working on
>> moving tests out of testsuite/gcc.misc-tests and putting them into
>> the more
>> general frameworks--a project listed on
>> http://gcc.gnu.org/projects/beginner.html. Is there someone who would
>> receive my updates and review them?
> 
> In general, I am against test migration.  We want the testcase names
> to be stable, forever, to quote my favorite.., uhm, to quote Sponge
> Bob.

I see. Well maybe there's a better task I could do. But shouldn't this
project be taken off http://gcc.gnu.org/projects/beginner.html if people
don't think it's worth doing?



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Joe Buck
On Mon, May 16, 2005 at 10:15:29PM +0200, Steven Bosscher wrote:
> On Monday 16 May 2005 20:26, Karel Gardas wrote:
> > On Mon, 16 May 2005, Steven Bosscher wrote:
> > > Just for the record, attached is gcctest's history of the overall
> > > memory requirement at -O[0123] for combine.i, insn-attrtab.i, and
> > > generate.ii (aka PR8361).  Honza's bot has been sending these
> > > reports since September 2004, so that's where I started.
> >
> > Is it possible to also add -Os to your tested option set? IMHO this option
> > is quite necessary for embedded developers who seem to complain in this
> > thread.
> 
> Not interesting.  -Os is basically -O2 with some passes disabled.

That's not quite correct.  For example, the rule -Os uses for considering
inlining is completely different (inlining is allowed when the size of
generated code does not increase).



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Monday 16 May 2005 23:43, Ralf Corsepius wrote:
> On Mon, 2005-05-16 at 10:42 -0400, Peter Barada wrote:
> > Until package maintainers take cross-compilation *seriously*, I have
> > no choice but to do native compilation of a large hunk of the packages
> > on eval boards that can literally take *DAYS* to build.
>
> The most amazing fact to me is: Not even GCC seems to take cross-
> compilation seriously :(

BS.  Even the large distro builders do cross compilations a lot.

I am getting pretty sick of this.  Can we now start discussing
what GCC does do well, or otherwise, for further complaints
remove me from the CC: please.

I can't say all is good about GCC.  There are always ways to do
things better.  But, as Dewar already pointed out, GCC just can
not be perfect for everyone's needs.  I, for one, am very happy
that we are finally pulling GCC out of the 80s, into the 21st
century.  The compile time and memory consumption problems are
obviously there, but just complaining is not going to fix them.

Yet complaining is all some people do.  It only demotivates me
more to work on these issues that you care about, and I in all
honesty don't give a s*** about.

It is IMVHO ridiculous that the list where GCC is bashed the most
is the GCC list itself.

Gr.
Steven



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Tuesday 17 May 2005 00:07, Joe Buck wrote:
> On Mon, May 16, 2005 at 10:15:29PM +0200, Steven Bosscher wrote:
> > On Monday 16 May 2005 20:26, Karel Gardas wrote:
> > > On Mon, 16 May 2005, Steven Bosscher wrote:
> > > > Just for the record, attached is gcctest's history of the overall
> > > > memory requirement at -O[0123] for combine.i, insn-attrtab.i, and
> > > > generate.ii (aka PR8361).  Honza's bot has been sending these
> > > > reports since Septemper 2004, so that's where I started.
> > >
> > > Is it possible to also add -Os to your tested option set? IMHO this
> > > option is quite necessary for embedded developers who seem to complain
> > > in this thread.
> >
> > Not interesting.  -Os is basically -O2 with some passes disabled.
>
> That's not quite correct.  For example, the rule -Os uses for considering
> inlining is completely different (inlining is allowed when the size of
> generated code does not increase).

...meaning that the memory footprint at -Os is going to be smaller 
than the -O2 footprint for all but a few strange cases.

Gr.
Steven


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Joe Buck
On Mon, May 16, 2005 at 05:35:43PM -0400, DJ Delorie wrote:
> 
> > What I have trouble understanding is the last sentence of this
> > paragraph in the light of your claim that it will result in
> > swapping especially when we consider developers' machines with
> > 512MB/1GB RAM, i.e. machines where memory is not "tight".
> 
> Sigh, Linux works the same way.  Processes can exceed their HARD
> ulimit if there happens to be memory available, making RLIMIT_RSS
> basically useless.

The Linux kernel has a boot-time option that you can use to lie to your
kernel about how much physical memory you have.  If the number you give
is too high, you will eventually crash, but if it is too low, you can
test how well your system would behave if it had less memory.  It was
originally there because the BIOS call that Linux used to use could only
report a value up to 64m, so if you had more you had to let the kernel
know.  That problem was fixed long ago.

So you can say "mem=128m" or the like.

See

http://www.faqs.org/docs/Linux-HOWTO/BootPrompt-HOWTO.html



Re: updating /testsuite/gcc.misc-tests

2005-05-16 Thread Zack Weinberg
Nicholas K Rivers <[EMAIL PROTECTED]> writes:

> Mike Stump wrote:
>> 
>> In general, I am against bug migration.  We want the testcase names
>> to be stable, for ever, to quote my favorite.., uhm, to quote sponge
>> bob.
>
> I see. Well maybe there's a better task I could do.

No, the instability in test names is a minor price to pay for having
less custom Tcl cruft.

You want to talk to Janis Johnson <[EMAIL PROTECTED]>, she's the
testsuite maintainer these days.

zw


[wwwdocs] Re: Wiki on home page

2005-05-16 Thread Gerald Pfeifer
On Fri, 13 May 2005, dmdlf wrote:
> I note that you have a wiki from 27 January 2005.
> It is made known by a line in the News/Announcements, which will disappear at
> some time, and anyway is not always the first reading for people.
> I suggest that, when the wiki is mature enough from your perspective, you
> insert it in the left column under Documentation and maybe also in your
> Welcome.

Done thusly.

Add a link to our Wiki to the navigation bar.

Gerald

Index: style.mhtml
===
RCS file: /cvs/gcc/wwwdocs/htdocs/style.mhtml,v
retrieving revision 1.75
retrieving revision 1.78
diff -u -3 -p -r1.75 -r1.78
--- style.mhtml 3 May 2005 22:34:07 -   1.75
+++ style.mhtml 16 May 2005 22:23:44 -  1.78
@@ -194,6 +194,7 @@
   · <a href="http://gcc.gnu.org/install/test.html">Testing
   Manual</a>
   FAQ
+  <a href="http://gcc.gnu.org/wiki">Wiki</a>
   Further Readings
   
   


Re: updating /testsuite/gcc.misc-tests

2005-05-16 Thread Janis Johnson
On Mon, May 16, 2005 at 03:18:28PM -0700, Zack Weinberg wrote:
> 
> No, the instability in test names is a minor price to pay for having
> less custom Tcl cruft.
> 
> You want to talk to Janis Johnson <[EMAIL PROTECTED]>, she's the
> testsuite maintainer these days.

Yes, feel free to send questions to this list or, if you prefer, to
contact me directly about getting started.

Janis Johnson
IBM Linux Technology Center


Re: gcc.dg/compat/struct-layout-1.exp does not supported installed-compiler testing

2005-05-16 Thread Mark Mitchell
Ian Lance Taylor wrote:
1. Remove the use of config.h and HAVE_*_H.
2. Modify the generator not to depend on libiberty headers, including
hashtab.h, by substituting a simple dictionary object.
3. Adjust struct-layout-1.exp accordingly.
 
This is what I would recommend anyhow.
Done with the attached patch.  Tested on x86_64-unknown-linux-gnu by 
comparing the generated files with and without the patch, as well as by 
running the testsuite.  The time taken to run the struct layout tests 
(including their generation) was not measurably different before and 
after the change.  Applied to 4.0 and mainline.

I cribbed a bit from libiberty, but didn't take all of hashtab.c, as 
that just seemed excessive.

Please report any problems to me, of course.
(There's still a POSIX-ism in the generator, in that it tries to write 
to "/dev/null".  On Windows systems, I bet this will often work, but 
create a real file with that name.  It would be better, and avoid 
portability problems, to guard the calls to fwrite, etc., with "if 
(file)" rather than spew to "/dev/null", but that's for another day.)
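The "if (file)" guard Mark describes is a one-liner; a hedged sketch (the emit() helper name is mine, not the generator's):

```c
#include <assert.h>
#include <stdio.h>

/* Write S to F, or silently discard it when no output file was
   requested.  Passing NULL instead of opening "/dev/null" avoids the
   POSIX-ism and skips the write entirely.  Returns a non-negative
   value on success (including the discard case).  */
static int
emit (FILE *f, const char *s)
{
  if (f == NULL)
    return 0;
  return fputs (s, f);
}
```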

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304
2005-05-16  Mark Mitchell  <[EMAIL PROTECTED]>

* gcc.dg/compat/generate-random.c (config.h): Do not include.
(limits.h): Include unconditionally.
(stdlib.h): Likewise.
* gcc.dg/compat/generate-random_r.c (config.h): Do not include.
(limits.h): Include unconditionally.
(stdlib.h): Likewise.
* gcc.dg/compat/struct-layout-1.exp: Do not link with libiberty.
* gcc.dg/compat/struct-layout-1_generate.c (config.h): Do not include.
(limits.h): Include unconditionally.
(stdlib.h): Likewise. 
(hashtab.h): Do not include.
(getopt.h): Likewise.
(stddef.h): Include.
(hashval_t): Define.
(struct entry): Add "next" field.
(HASH_SIZE): New macro.
(hash_table): New variable.
(switchfiles): Do not use xmalloc.
(mix): New macro.
(iterative_hash): New function.
(hasht): Remove.
(e_exists): New function.
(e_insert): Likewise.
(output): Use, instead of libiberty hashtable functions.
(main): Do not use getopt.  Do not call htab_create.

Index: gcc.dg/compat/generate-random.c
===
RCS file: /cvs/gcc/gcc/gcc/testsuite/gcc.dg/compat/generate-random.c,v
retrieving revision 1.2
diff -c -5 -p -r1.2 generate-random.c
*** gcc.dg/compat/generate-random.c 6 Nov 2004 19:28:43 -   1.2
--- gcc.dg/compat/generate-random.c 16 May 2005 22:45:12 -
***
*** 49,66 
 HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 SUCH DAMAGE.*/
  
- #include "config.h"
- #ifdef HAVE_LIMITS_H
  #include <limits.h>
- #endif
  #include "libiberty.h"
- #ifdef HAVE_STDLIB_H
  #include <stdlib.h>
- #endif
  #include "generate-random.h"
  
  
  /* An improved random number generation package.  In addition to the standard
 rand()/srand() like interface, this package also has a special state info
--- 49,61 
Index: gcc.dg/compat/generate-random_r.c
===
RCS file: /cvs/gcc/gcc/gcc/testsuite/gcc.dg/compat/generate-random_r.c,v
retrieving revision 1.1
diff -c -5 -p -r1.1 generate-random_r.c
*** gcc.dg/compat/generate-random_r.c   23 Jul 2004 22:36:46 -  1.1
--- gcc.dg/compat/generate-random_r.c   16 May 2005 22:45:12 -
***
*** 50,67 
   *@(#)random.c5.5 (Berkeley) 7/6/88
   * It was reworked for the GNU C Library by Roland McGrath.
   * Rewritten to be reentrant by Ulrich Drepper, 1995
   */
  
- #include "config.h"
- #ifdef HAVE_LIMITS_H
  #include <limits.h>
- #endif
  #include "libiberty.h"
- #ifdef HAVE_STDLIB_H
  #include <stdlib.h>
- #endif
  #include "generate-random.h"
  
  
  /* An improved random number generation package.  In addition to the standard
 rand()/srand() like interface, this package also has a special state info
--- 50,62 
Index: gcc.dg/compat/struct-layout-1.exp
===
RCS file: /cvs/gcc/gcc/gcc/testsuite/gcc.dg/compat/struct-layout-1.exp,v
retrieving revision 1.2
diff -c -5 -p -r1.2 struct-layout-1.exp
*** gcc.dg/compat/struct-layout-1.exp   4 Aug 2004 01:43:30 -   1.2
--- gcc.dg/compat/struct-layout-1.exp   16 May 2005 22:45:12 -
*** set tstobjdir "$tmpdir/gcc.dg-struct-lay
*** 100,113 
  set generator "$tmpdir/gcc.dg-struct-layout-1_generate"
  
  set generator_src "$srcdir/$subdir/struct-layout-1_generate.c"
  set generator_src "$generator_src $srcdir/$subdir/generate-random.c"
  set generator_src "$generator_src $srcdir/$subdir/generate-random_r.c"
! set

GCC 3.4.4

2005-05-16 Thread Mark Mitchell
I've very nearly ready to release GCC 3.4.4.  If you have objections or 
high-priority fixes that you think will be required for this release, 
please speak up within the next 24 hours.

Thanks,
--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread DJ Delorie

> So you can say "mem=128m" or the like.

Yes, but that doesn't help when I want to test one application on a
system that's been otherwise up and running for months, and is busy
doing other things.  The RSS limit is *supposed* to do just what we
want, but nobody seems to implement it correctly any more.


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Ralf Corsepius
On Tue, 2005-05-17 at 00:10 +0200, Steven Bosscher wrote:
> On Monday 16 May 2005 23:43, Ralf Corsepius wrote:
> > On Mon, 2005-05-16 at 10:42 -0400, Peter Barada wrote:
> > > Until package maintainers take cross-compilation *seriously*, I have
> > > no choice but to do native compilation of a large hunk of the packages
> > > on eval boards that can literally take *DAYS* to build.
> >
> > The most amazing fact to me is: Not even GCC seems to take cross-
> > compilation seriously :(
> 
> > BS.  Even the large distro builders do cross compilations a lot.

So I suppose you have these general crossbuilding PRs fixed in your
sources:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21143

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21247

Another one I haven't filed yet, is GCC-4.x not correctly propagating
CFLAGS/CFLAGS_FOR_{BUILD|HOST|TARGET} to newlib in one-tree builds (I am
still investigating).

All these to me are strong indications that GCC-4.x has been poorly
tested in cross compilation.

Ralf




Re: gcc.dg/compat/struct-layout-1.exp does not supported installed-compiler testing

2005-05-16 Thread DJ Delorie

> (There's still a POSIX-ism in the generator, in that it tries to
> write to "/dev/null".  On Windows systems, I bet this will often
> work, but create a real file with that name.  It would be better,
> and avoid portability problems, to guard the calls to fwrite, etc.,
> with "if (file)" rather than spew to "/dev/null", but that's for
> another day.)

Both Cygwin and DJGPP know about /dev/null just fine.  I don't know
about MinGW though.

But it's a lot faster if you don't do the write at all ;-)


Re: some question about gc

2005-05-16 Thread zouq
1.
in the gt-c-decl.h,
three functions about lang_decl,
gt_pch_nx_lang_decl(),gt_ggc_mx_lang_decl, gt_pch_g_9lang_decl(),
what are the differences between the three functions?

2.
i can find the prefixes in the gengtype.c,

what are they setting for?

static const struct write_types_data ggc_wtd =
{
  "ggc_m", NULL, "ggc_mark", "ggc_test_and_set_mark", NULL,
  "GC marker procedures.  "
};

static const struct write_types_data pch_wtd =
{
  "pch_n", "pch_p", "gt_pch_note_object", "gt_pch_note_object",
  "gt_pch_note_reorder",
  "PCH type-walking procedures.  "
};





Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Tuesday 17 May 2005 02:53, Ralf Corsepius wrote:
> On Tue, 2005-05-17 at 00:10 +0200, Steven Bosscher wrote:
> > On Monday 16 May 2005 23:43, Ralf Corsepius wrote:
> > > On Mon, 2005-05-16 at 10:42 -0400, Peter Barada wrote:
> > > > Until package maintainers take cross-compilation *seriously*, I have
> > > > no choice but to do native compilation of a large hunk of the
> > > > packages on eval boards that can literally take *DAYS* to build.
> > >
> > > The most amazing fact to me is: Not even GCC seems to take cross-
> > > compilation seriously :(
> >
> > > BS.  Even the large distro builders do cross compilations a lot.
>
> So I suppose you have these general crossbuilding PRs fixed in your
> sources:
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21143

No, I just don't build gfortran as a cross.  There are many reasons
why this is a bad idea anyway.

Oh, and how helpful of you to post that patch to gcc-patches@ too...
NOT!

> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21247

I don't build Ada cross either, but AdaCore does, so you could ask
them to help you with this problem.

> Another one I haven't filed yet, is GCC-4.x not correctly propagating
> CFLAGS/CFLAGS_FOR_{BUILD|HOST|TARGET} to newlib in one-tree builds (I am
> still investigating).

I don't build with newlib either.

> All these to me are strong indications that GCC-4.x has been poorly
> tested in cross compilation.

No, just in the configurations you are using.

And since you're not posting the patches you attach to the bugzilla
PRs you open, you're not exactly helping to make things better either.

Gr.
Steven


Re: gcc.dg/compat/struct-layout-1.exp does not supported installed-compiler testing

2005-05-16 Thread Mark Mitchell
DJ Delorie wrote:
(There's still a POSIX-ism in the generator, in that it tries to
write to "/dev/null".  On Windows systems, I bet this will often
work, but create a real file with that name.  It would be better,
and avoid portability problems, to guard the calls to fwrite, etc.,
with "if (file)" rather than spew to "/dev/null", but that's for
another day.)

Both Cygwin and DJGPP know about /dev/null just fine.  I don't know
about MinGW though.
It doesn't.  MinGW is just MSVCRT.
But it's a lot faster if you don't do the write at all ;-)
Yes; that's why I said "better" in addition to "avoid portability 
problems". :-)

I kinda think any program opening /dev/null itself is slightly confused; 
/dev/null is a convenience for users to use with programs that insist on 
writing to a file, not for programs that might or might not themselves 
want to write to things.  Of course, this particular program is not 
performance-critical, so it's not like anyone has, or should have, tried 
hard to make it go maximally fast. :-)

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Tuesday 17 May 2005 02:59, Steven Bosscher wrote:
> On Tuesday 17 May 2005 02:53, Ralf Corsepius wrote:
> > On Tue, 2005-05-17 at 00:10 +0200, Steven Bosscher wrote:
> > > On Monday 16 May 2005 23:43, Ralf Corsepius wrote:
> > > > On Mon, 2005-05-16 at 10:42 -0400, Peter Barada wrote:
> > > > > Until package maintainers take cross-compilation *seriously*, I
> > > > > have no choice but to do native compilation of a large hunk of the
> > > > > packages on eval boards that can literally take *DAYS* to build.
> > > >
> > > > The most amazing fact to me is: Not even GCC seems to take cross-
> > > > compilation seriously :(
> > >
> > > BS.  Even the large distro builders do cross compilations a lot.
> >
> > So I suppose you have these general crossbuilding PRs fixed in your
> > sources:
> >
> > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=21143
>
> No, I just don't build gfortran as a cross.  There are many reasons
> why this is a bad idea anyway.
>
> Oh, and how helpful of you to post that patch to gcc-patches@ too...
> NOT!

Ah, I see you did post it to gcc-patches@, but not to fortran@, which
is a requirement for gfortran patches -- and the reason why nobody
has noticed the patch.

http://gcc.gnu.org/ml/gcc-patches/2005-04/msg02287.html

The patch is OK too.

Gr.
Steven



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Joe Buck
On Tue, May 17, 2005 at 03:11:03AM +0200, Steven Bosscher wrote:
> On Tuesday 17 May 2005 02:59, Steven Bosscher wrote:
> > Oh, and how helpful of you to post that patch to gcc-patches@ too...
> > NOT!
> 
> Ah, I see you did post it to gcc-patches@, but not to fortran@, which
> is a requirement for gfortran patches -- and the reason why nobody
> has noticed the patch.
> 
> http://gcc.gnu.org/ml/gcc-patches/2005-04/msg02287.html
> 
> The patch is OK too.

Steven, please try to be politer to someone who is trying to help.
This kind of tone will only discourage contributors.


Default value for libiconv in target-supports.exp?

2005-05-16 Thread Mark Mitchell
In the past, if libiconv wasn't set in site.exp, 
target_supports.exp:check_iconv_available would crash.  So, I changed it 
to default to "-liconv".

On GNU/Linux, that's not a very good default, since iconv is in libc. 
The same seems to hold on Solaris and HP-UX.

Does anyone have opinions about what the default should be?
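For context, the check only needs code like the following to link; whether that takes an extra -liconv depends on the platform (a sketch assuming the standard iconv(3) API — in libc on GNU/Linux, Solaris, and HP-UX, a separate library on AIX or with GNU libiconv; the function name is mine):

```c
#include <assert.h>
#include <iconv.h>
#include <string.h>

/* Convert SRC from ASCII to UTF-8 into DST (capacity DSTLEN, not
   counting the terminating NUL).  Returns 0 on success.  */
static int
ascii_to_utf8 (const char *src, char *dst, size_t dstlen)
{
  iconv_t cd = iconv_open ("UTF-8", "ASCII");
  char *in, *out;
  size_t inleft, outleft, r;

  if (cd == (iconv_t) -1)
    return -1;
  in = (char *) src;
  inleft = strlen (src);
  out = dst;
  outleft = dstlen;
  r = iconv (cd, &in, &inleft, &out, &outleft);
  iconv_close (cd);
  if (r == (size_t) -1)
    return -1;
  *out = '\0';
  return 0;
}
```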
--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Steven Bosscher
On Tuesday 17 May 2005 03:16, Joe Buck wrote:
> On Tue, May 17, 2005 at 03:11:03AM +0200, Steven Bosscher wrote:
> > On Tuesday 17 May 2005 02:59, Steven Bosscher wrote:
> > > Oh, and how helpful of you to post that patch to gcc-patches@ too...
> > > NOT!
> >
> > Ah, I see you did post it to gcc-patches@, but not to fortran@, which
> > is a requirement for gfortran patches -- and the reason why nobody
> > has noticed the patch.
> >
> > http://gcc.gnu.org/ml/gcc-patches/2005-04/msg02287.html
> >
> > The patch is OK too.
>
> Steven, please try to be politer to someone who is trying to help.

How is it helpful to not follow the rules when posting patches and
make exaggerated claims that something does not work?  All this
complaining makes me not want to contribute to GCC at all any more. 

> This kind of tone will only discourage contributors.

My tone was no different than Ralf's toward me.

This is the second time you think it necessary to "correct" me on a
public mailing list.  Don't do that.

Gr.
Steven



Re: Default value for libiconv in target-supports.exp?

2005-05-16 Thread David Edelsohn
> Mark Mitchell writes:

Mark> In the past, if libiconv wasn't set in site.exp, 
Mark> target_supports.exp:check_iconv_available would crash.  So, I changed it 
Mark> to default to "-liconv".

Mark> On GNU/Linux, that's not a very good default, since iconv is in libc. 
Mark> The same seems to hold on Solaris and HP-UX.

Mark> Does anyone have opinions about what the default should be?

It is separate on AIX.  Is there any way that the testsuite can
pick up the value from the Makefile?

David


GCC Porting advice needed

2005-05-16 Thread Jonathan Bastien-Filiatrault
Hi list,

We are currently making a port for the TMS320C54x, I have some questions
on how to implement certain operations. Most arithmetic operations on
this machine are affected by flags which control how the operations do
sign-extension and overflow. These are called the "SXM" and "OVM" flags
respectively. These flags are stored in two registers along with carry
and other flags. I am somewhat confused as to which CCmodes I need to
use these flags. I also need to know how to test these flags in insns to
be able to do a "if_then_else" in rtl. One other possibility would be to
use "unspec" patterns, but I do not know how well GCC handles these.
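For the if_then_else part, a conditional branch in a machine description conventionally tests a hard condition-code register along these lines (a sketch only — the pattern name, register number, and assembler syntax below are placeholders, not real C54x values; distinct CCmodes, e.g. a hypothetical CC_OVMmode, would keep SXM/OVM-dependent comparisons from being combined with incompatible ones):

```lisp
;; Hypothetical conditional branch: jump to operand 0 when the last
;; comparison left "equal" in the fixed CC register CC_REGNUM.
(define_insn "*branch_eq"
  [(set (pc)
        (if_then_else (eq (reg:CC CC_REGNUM) (const_int 0))
                      (label_ref (match_operand 0 "" ""))
                      (pc)))]
  ""
  "bc\\t%l0,eq")
```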

Any advice would be appreciated. It would also be great if an
experienced GCC developer could give us a hand in the undertaking of
this project, we are currently running partially blind in some areas.

Happy Hacking,
Jonathan


signature.asc
Description: OpenPGP digital signature


Re: Default value for libiconv in target-supports.exp?

2005-05-16 Thread Mark Mitchell
David Edelsohn wrote:
Mark Mitchell writes:

Mark> In the past, if libiconv wasn't set in site.exp, 
Mark> target_supports.exp:check_iconv_available would crash.  So, I changed it 
Mark> to default to "-liconv".

Mark> On GNU/Linux, that's not a very good default, since iconv is in libc. 
Mark> The same seems to hold on Solaris and HP-UX.

Mark> Does anyone have opinions about what the default should be?
It is separate on AIX.  Is there any way that the testsuite can
pick up the value from the Makefile?
It does that already for in-tree testing, but for installed compiler 
testing we have no Makefiles...

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: updating /testsuite/gcc.misc-tests

2005-05-16 Thread Nicholas K Rivers
> Yes, feel free to send questions to this list or, if you prefer, to
> contact me directly about getting started.
> 
> Janis Johnson
> IBM Linux Technology Center

Alright, to start with, I'd like to get a few clarifications. 

Should I go through all of the test cases in testsuite/gcc.misc-tests or
just important ones?

Also, do all of these tests need to be kept around? To make this discussion
more concrete, let's consider the two test cases sieve.c and matrix1.c,
both added in 1995:

Sieve.c uses the sieve of Eratosthenes to count all of the primes between 3
and 2*8190+3. It does this 100 times and then returns 0 on completion. 

Matrix1.c declares three 100x100 two dimensional arrays: a, b, and c. It
loops through a and b setting all of their elements to one; uses a loop to
perform matrix multiplication setting c=a*b; and then verifies the result
printing an error if it is wrong and returning zero in either case.
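For reference, the sieve kernel is only a few lines; a self-contained sketch of the same idea (not the testsuite's exact source, which iterates 100 times over the 2*8190+3 bound):

```c
#include <assert.h>
#include <string.h>

/* Count the primes <= N by the sieve of Eratosthenes.  The scratch
   array is sized for the testsuite's bound of 2*8190+3 = 16383.  */
static int
count_primes (int n)
{
  static char composite[16384];
  int count = 0, i, j;

  memset (composite, 0, sizeof composite);
  for (i = 2; i <= n; i++)
    if (!composite[i])
      {
        count++;
        for (j = 2 * i; j <= n; j += i)
          composite[j] = 1;
      }
  return count;
}
```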

The expect scripts for these two test cases basically just compile and run
the programs making sure that the programs compile without any errors,
return zero upon running and don't produce any output. 

Are test cases like these still important or can they be scrapped? And if
they should be saved where should they go?



Re: some question about gc

2005-05-16 Thread Ian Lance Taylor
"zouq" <[EMAIL PROTECTED]> writes:

> in the gt-c-decl.h,
> three functions about lang_decl,
> gt_pch_nx_lang_decl(),gt_ggc_mx_lang_decl, gt_pch_g_9lang_decl(),
> what are the differences between the three functions?

The _nx_ functions fill in the pchw field of ggc_root_tab.  This is used
when saving the data out to a PCH file.  The _mx_ functions fill in
the cb field of ggc_root_tab.  This is used when marking from a root
during garbage collection.  The 9lang functions appear in language
specific header files.

> 2.
> i can find the prefixes in the gengtype.c,
> 
> what are they setting for?
> 
> static const struct write_types_data ggc_wtd =
> {
>   "ggc_m", NULL, "ggc_mark", "ggc_test_and_set_mark", NULL,
>   "GC marker procedures.  "
> };
>   
> static const struct write_types_data pch_wtd =
> {
>   "pch_n", "pch_p", "gt_pch_note_object", "gt_pch_note_object",
>   "gt_pch_note_reorder",
>   "PCH type-walking procedures.  "
> };

PCH means Precompiled Header.  See the docs.

Ian


Re: GCC 3.4.4 RC2

2005-05-16 Thread John David Anglin
> Please download, build, and test.

I've now completed testing on the PA and don't see any major issues.

The only easily fixable issue that showed up in testing was the failure
of 26_numerics/complex/pow.cc under hpux 10.20.  This fails because of
a corner case in the 10.20 math library.  The problem was fixed in the
4.0.0 release.

There are a number of minor testsuite issues, some fixed in 4.0.0.
The failure of badalloc1.C under hpux 11.11 is a testsuite problem
that could be fixed by updating to the 4.1.0 version.

Dave
-- 
J. David Anglin  [EMAIL PROTECTED]
National Research Council of Canada  (613) 990-0752 (FAX: 952-6602)


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Russ Allbery <[EMAIL PROTECTED]> wrote:

> Alexandre Oliva <[EMAIL PROTECTED]> writes:
>> On May 16, 2005, Russ Allbery <[EMAIL PROTECTED]> wrote:

>>> And package maintainers will never take cross-compilation seriously
>>> even if they really want to because they, for the most part, can't test
>>> it.

>> configure --build=i686-pc-linux-gnu \
>> --host=i686-somethingelse-linux-gnu 

>> should be enough to exercise most of the cross-compilation issues, if
>> you're using a sufficiently recent version of autoconf, but I believe
>> you already knew that.

> What, you mean my lovingly hacked upon Autoconf 2.13 doesn't work?

No, just that it doesn't have the code that just compares build with
host to decide whether to enter cross-compilation mode.  Unless you
back-ported that from autoconf 2.5x, that is.

> Seriously, though, I think the above only tests things out to the degree
> that Autoconf would already be warning about no default specified for
> cross-compiling, yes?

I believe so, yes.  A configure script written with no regard to
cross-compilation may still fail to fail in catastrophic ways if
tested with native-cross.

> Wouldn't you have to at least cross-compile from a
> system with one endianness and int size to a system with a different
> endianness and int size and then try to run the resulting binaries to
> really see if the package would cross-compile?

Different endianness is indeed a harsh test on a package's
cross-compilation suitability.  Simple reliance on size of certain
types can already get you enough breakage.  Cross-building to x86 on
an x86_64 system may already catch a number of these.

> A scary number of packages, even ones that use Autoconf, bypass Autoconf
> completely when checking certain things or roll their own broken macros to
> do so.

+1

> I have never once gotten a single bug report, request, or report of
> anyone cross-compiling INN.  Given that, it's hard to care except in
> some abstract cleanliness sense

But see, you do care, and you're aware of the issues, so it just
works.  Unfortunately not all maintainers have as much knowledge or
even awareness about the subject as you do.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Florian Weimer <[EMAIL PROTECTED]> wrote:

> Is this really necessary?  I would think that a LD_PRELOADed DSO which
> prevents execution of freshly compiled binaries would be sufficient to
> catch the most obvious errors.

This would break legitimate tests on the build environment, that use
e.g. CC_FOR_BUILD.

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 3.4.4 RC2

2005-05-16 Thread Alexandre Oliva
On May 16, 2005, Georg Bauhaus <[EMAIL PROTECTED]> wrote:

> - cd ada/doctools && gnatmake -q xgnatugn
> + cd ada/doctools && gnatmake -q --GCC=$(CC) xgnatugn -largs --GCC=$(CC)

Don't you need quotes around $(CC), for the general case in which it's
not as simple as `gcc', but rather something like `ccache distcc gcc'
or just `gcc -many -Options'?

-- 
Alexandre Oliva http://www.ic.unicamp.br/~oliva/
Red Hat Compiler Engineer   [EMAIL PROTECTED], gcc.gnu.org}
Free Software Evangelist  [EMAIL PROTECTED], gnu.org}


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Peter Barada

>BS.  Even the large disto builders do cross compilations a lot.

Yeah, I know.  I did consulting for a 'large disto builder'.  Do you
have a clue how long it takes to build the base packages for a PXA255
board (including X11, which won't even run on the board but is required
due to package dependencies)?  Think in *days*, and that was
more than a year ago.  Even then we were all concerned about the trend
in compilation speed.  Speak of what you *personally* know.

>I am getting pretty sick of this.  Can we now start discussing
>what GCC does do well, or otherwise, for further complaints
>remove me from the CC: please.

I've pulled you from the CC list, but I'm passing it on to the GCC
list in hopes that someone there cares more than you.  The RSS bloat
problem is *not* going to go away, and *wishing* it away won't work.

>I can't say all is good about GCC.  There are always ways to do
>things better.  But, as Dewar already pointed out, GCC just can
>not be perfect for everyone's needs.  I, for one, am very happy
>that we are finally pulling GCC out of the 80s, into the 21st
>century.  The compile time and memory consumption problems are
>obviously there, but just complaining is not going to fix them.

No, gcc is not perfect for all things, but the trend in resource
consumption is getting pretty serious.  As others have pointed out
before, no one complains about a resource problem until it gets large
enough to be inconvenient, if not just impossible, to live with.

You don't complain to your car dealer when your car runs fine, but if
it craps out on the way to work, you'll be complaining pretty damn
loudly, especially if it's nearly brand new.

I develop GCC for ColdFire, and I have been contributing back changes
to GCC in the hopes that it will be a world-class compiler that I can
use for my work.  Unfortunately due to circumstances that have
*nothing* to do with GCC, I have no choice but to build packages using
a GCC that runs natively in a Linux environment on my ColdFire V4e
embedded board, where the resource constraints are *extremely* severe,
and possibly an extra MB of RSS usage by GCC version-to-version will
be the difference between success and failure.

I have great faith in OSS and FSF code, and I don't want to demean the
valued contributions that people have made to it, but please
understand that Linux systems are built using GCC, whether it's for a
workstation or an embedded Linux device, and as such GCC *should* consider
the problems that both encounter and not just favor the workstation end.

-- 
Peter Barada
[EMAIL PROTECTED]


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Michael Veksler

Peter Barada wrote on 17/05/2005 07:12:41:

>
> >BS.  Even the large disto builders do cross compilations a lot.
>
> Yeah, I know.  I did consulting for a 'large disto builder'.  Do you
> have a clue how long it takes to build the base packages for a PXA255
> board(including X11 that won't even run on the board but is required
> due to package dependecies)?  Can you think in *days*, and that was
> more than a year ago.  Even then we were all concerned about the trend
> in compliation speed.  Speak of what you *personally* know.

If things are as bad as you say, then IMVHO you may write a small
utility for PXA255 that will impersonate a native gcc compiler. This
utility will RPC (or ssh, etc) a cross compiler on a GHz machine, and will
then pull the results back to your PXA255 (via NFS, RPC, SSH, etc).
Maybe you can even take distcc and hack it to give you what you
need. This may cut your times by a couple of orders of magnitude.

Of course, this will not help someone in Bangladesh with a Pentium.

  Michael



Re: GCC 4.1: Buildable on GHz machines only?

2005-05-16 Thread Karel Gardas
Folks,
you all are great, brave men hacking on one of the most mission-critical 
pieces of free software ever.  I can see that some of you are getting more 
and more frustrated, as this thread is turning into a flame war.

As a long time GCC user, I would like to ask you to calm down a bit if 
this is possible, please!

Thanks,
Karel
--
Karel Gardas  [EMAIL PROTECTED]
ObjectSecurity Ltd.   http://www.objectsecurity.com