Re: generation of gt-*.h files

2006-10-17 Thread Mike Stump

On Oct 17, 2006, at 7:07 AM, Basile STARYNKEVITCH wrote:

mv gcc/gt-tree-ssa-operands.h /tmp


/usr/src/Lang/gcc/gcc/tree-ssa-operands.c:2571:34: error: gt-tree-ssa-operands.h: No such file or directory


Patient: Doctor, it hurts when I hit myself in the head.
Doctor: Don't do that.


Why is the gt-tree-ssa-operands.h not rebuilt if removed?


The Makefile contains the answer to your question.  Hint, see the s-*  
rules and the rules with gtype in them.


More generally, I have trouble adding a pass basile-analys.c (which does
nothing useful yet) which includes its header basile-analys.h, which
contains structures with GTY(()), so I need to have gt-basile-analys.h
generated from basile-analys.h.


Check to ensure that it is in GTFILES in Makefile.in.
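
For reference, a minimal sketch of the wiring involved (the struct and its
fields here are hypothetical; only the file names come from the message).
Once the file is listed in GTFILES in gcc/Makefile.in, gengtype produces
gt-basile-analys.h, which the pass includes at the bottom of its .c file:

/* basile-analys.h: GC-managed structures carry GTY markers.  */
struct basile_data GTY(())
{
  tree decl;      /* a tree the pass holds across collections */
  int counter;    /* hypothetical pass-local data */
};

/* At the end of basile-analys.c, after all GTY(()) roots:  */
#include "gt-basile-analys.h"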


Re: Expressions simplification statistic

2006-10-17 Thread Mike Stump

On Oct 17, 2006, at 8:05 AM, Dino Puller wrote:

I'm looking for a statistic on how many expression simplifications
may be possible on source code


One way would be:

  http://www.cs.fit.edu/~mmahoney/compression/text.html

though, this assumes a particular definition of simplification.  For  
other definitions, gcc doesn't just tell you an answer, you'd have to  
define what you think a simplification is and then instrument gcc to  
provide that answer.  This would be lots of hard work at best.


Re: Help with openGL

2006-10-17 Thread Mike Stump

On Oct 17, 2006, at 1:32 PM, gbiaki wrote:

I have just downloaded the openGL libs through cygwin/setup.exe


Wrong list.  Try the cygwin list instead.



Sent from the gcc - Dev mailing list archive at Nabble.com.


:-(


Re: How can I put gcc on NCR MP-RAS with no ncr compiler?

2006-10-17 Thread Mike Stump

On Oct 17, 2006, at 3:12 PM, Saul Krasny wrote:
I need to put up a cc development environment on an NCR SVR4 MP-RAS
unix.  The machines I have don't have any c compiler except the
hidden one the os build uses.


How can I go about this?


Build a cross compiler.  See the web site on how to build one.  This  
assumes that the software you're interested in building has been  
ported.  If not, first, port the software, then build it.  Once you  
have a cross compiler, then just build up gcc, install it, copy the  
binaries over, and presto, you then have a native build system on your  
box.


Is there a place to download a prebuilt version. (What else will I  
need? for example make)


Nope, not that we know of.  You'd need to venture someplace that has  
software for your box to find binaries.  Google might be able to find  
a binary for you, ask him.


Re: gcc/testsuite incorrect checking of compiler's retval in dg-compile

2006-10-18 Thread Mike Stump

On Oct 18, 2006, at 2:22 PM, Bernhard Fischer wrote:

I need to check for a non-0 return value in dg-compile testcases in
gcc.


I'd not worry about it in general.  The exit status should be properly  
checked for every other compile line and it should be ok.


Re: gcc trunk

2006-10-27 Thread Mike Stump

On Oct 26, 2006, at 7:10 PM, Murali Vemulapati wrote:

what is the release number for gcc trunk (mainline)?


$ cat gcc/BASE-VER
will always show you the correct information, presently it says:
4.3.0


Re: regeneration of files

2006-10-27 Thread Mike Stump

On Oct 26, 2006, at 6:40 PM, Mike Stump wrote:
The ones that were of particular interest were the libgfortran  
ones, Jack was trying to build on a G4 and had hopes they might fix  
his build.


Jack confirms that a regeneration of libgfortran fixed his build.  He  
also reports that boehm-gc has the same problem.


Re: build failure, GMP not available

2006-10-30 Thread Mike Stump

On Oct 30, 2006, at 1:55 PM, Kaveh R. GHAZI wrote:

Copies of the correct sources were put in:
ftp://gcc.gnu.org/pub/gcc/infrastructure/


mrs $ bunzip2 ?  :-(  I just installed the broken one and didn't worry about it.   
I'm sure it'll come back to bite me.  I wish the mpfr people could be  
swayed to put out a `fixed' release, say 2.2.1.  Requiring the  
patching of a download is, uhm, so old-skool.


Re: build failure, GMP not available

2006-10-30 Thread Mike Stump

On Oct 30, 2006, at 4:55 PM, Mike Stump wrote:

3 out of 4 hunks FAILED -- saving rejects to file tests/texp2.c.rej

?


I'm informed that --dry-run is broken...  Very odd, so unfortunate.


Re: regeneration of files

2006-10-31 Thread Mike Stump

On Oct 29, 2006, at 9:49 PM, Tom Tromey wrote:

Mike> libjava/configure

I updated (svn trunk) and re-ran autoconf here, and didn't see any
change.


Hum, I don't know off-hand if your autoconf 2.59 isn't quite 2.59, or  
mine is strange, or I'm getting some sort of cross pollution or you  
are.  Judging from the change, I'd say I am, though, I did think I  
was running virgin 2.59.



What version of autoconf are you using?


Virgin 2.59 (I think).


Re: Even stricter implicit conversions between vectors

2006-10-31 Thread Mike Stump

On Oct 31, 2006, at 10:47 AM, Mark Shinwell wrote:

What do others think?


My only concern is that we have tons of customers with tons of code  
and you don't have any and that you break their code.  I don't have  
any idea if this would be the case or not, I don't usually do the  
vector bugs.


Re: Even stricter implicit conversions between vectors

2006-10-31 Thread Mike Stump

On Oct 31, 2006, at 11:52 AM, Andrew Pinski wrote:

#define vector __attribute__((vector_size(16) ))

vector int f(vector int, vector unsigned int);

int g(void)
{
  vector int t;
  vector int t1;
  vector unsigned int t2;
  t2 = f(t,t1);
}


Our 3.3 compiler gives:

t.c:10: error: incompatible type for argument 2 of `f'
t.c:10: error: incompatible types in assignment

so, if this represents what you want to do, certainly this probably  
would be safe.


Re: return void from void function is allowed.

2006-10-31 Thread Mike Stump

On Oct 31, 2006, at 12:21 PM, Igor Bukanov wrote:

GCC 4.1.2 and 4.0.3 incorrectly accept the following program:

void f();

void g()
{
   return f();
}

No warnings are issued on my Ubuntu Pentium-M box. Is it a known bug?


If you want one:

mrs $ gcc-4.2 -ansi -pedantic-errors t.c
t.c: In function 'g':
t.c:5: error: 'return' with a value, in function returning void

This is valid in C++.

And final thought, wrong mailing list...   gcc-help would have been  
better.


Re: Handling of extern inline in c99 mode

2006-11-01 Thread Mike Stump

On Nov 1, 2006, at 10:48 AM, Steven Bosscher wrote:
I am probably overlooking something, but if the only problematic
system is glibc, maybe this can be fixed with a fixincludes hack?


That would be a massive hack.


Yes, fixincludes is a massive hack.  Yes, it should not exist.  But,  
let's keep in mind, it was invented to solve this type of problem.   
Doing up a fixincludes hack is better than requiring that people  
upgrade glibc.  And, if that work is done, it lessens the impact of
the change and that would be a good thing.


Re: Question about asm on functions/variables

2006-11-02 Thread Mike Stump

On Nov 1, 2006, at 8:10 PM, Andrew Pinski wrote:

We don't reject this TU during compiling but the assembler does.  Is
this correct or should we actually reject this during compiling?


If you add any checking code, also consider:

int i asm("r1");
int j asm("r1");


Re: regenerating configure in gcc

2006-11-03 Thread Mike Stump

On Nov 1, 2006, at 7:56 AM, Jack Howarth wrote:

autoreconf -I ../config


In general, you will want to check the Makefile and see what it uses  
to run aclocal.


In java for example, they use:

ACLOCAL_AMFLAGS = -I . -I .. -I ../config

So, in fact, I think you regenerated the file incorrectly.  You will
either want to --enable-maintainer-mode and use make to regenerate
them, or be careful in how you regenerate them (if you check in
regenerated files).  Each directory is randomly different.  Side
note: if you need config, then you'd want to ensure that you add
-I ../config to ACLOCAL_AMFLAGS if it doesn't already have it, so that
the next person to regenerate the file doesn't `remove' your patches.


Re: 16 byte alignment hint for sse vectorization

2006-11-05 Thread Mike Stump

On Nov 4, 2006, at 11:00 AM, Michael James wrote:

Does anyone have a suggestion?


#define SSE __attribute__((aligned (16)))

typedef float matrix_sub_t[1024] SSE;
typedef matrix_sub_t matrix_t[100];

matrix_t a, b, c;

void calc(matrix_sub_t * restrict ap,
  matrix_sub_t * restrict bp,
  matrix_sub_t * restrict cp) {
  int i, n = 1024;

  for (i=0; i<n; i++)
    (*ap)[i] = (*bp)[i] + (*cp)[i];
}

would seem to be one way to do this that should work.  Doesn't seem
to vectorize for me, though; if I read that right, the alignment
issue isn't the problem but rather the data dependency analysis:


t.c:13: note: === vect_analyze_dependences ===
t.c:13: note: not vectorized: can't determine dependence between  
(*bp_12)[i_4] and (*ap_16)[i_4]

t.c:13: note: bad data dependence.


Re: differences between dg-do compile and dg-do assemble

2006-11-06 Thread Mike Stump

On Nov 5, 2006, at 6:52 PM, Manuel López-Ibáñez wrote:

Although I understand the difference between dg-do compile and
dg-do assemble, I have noticed that there are many testcases that use
either dg-do compile or dg-do assemble and do nothing with the output.



Thus, I would like to know:

Which is faster, {dg-do compile} or {dg-do assemble}?


Our assembler is in the 1-2% range for compile times.  So, using the  
right one might speed things up 1-2%, if we didn't test on a 2-4  
processor box, but we do, so, we don't care.  On a 1 processor box,  
it is so marginal as to almost not worry about it, though, I'd tend  
to pick the right one for tests I author.


Is it appropriate to always use the faster one if the testcase just  
checks for the presence/absence of warnings and errors?


Yes, it is appropriate to use the right one.  As for which one is  
right, the one that FAILs when the compiler has the bug and PASSes  
when the compiler has been fixed is a good first order  
approximation.  Beyond that, if assemble is enough to do that, you  
can use it.  Some testcases will need compile to elicit a FAIL.  It  
is natural that some people will tend to use compile instead of  
assemble occasionally, when assemble would have worked, don't worry  
about it, it is healthy to have a mix.
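
For instance, a diagnostics-only testcase along these lines (a made-up
example, not one from the testsuite) never inspects the assembler
output, so { dg-do compile } is all it needs:

/* { dg-do compile } */
/* { dg-options "-Wunused-variable" } */
void
g (void)
{
  int unused;  /* { dg-warning "unused variable" } */
}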


Re: Compiling gcc 3.2.3, AMD, x86_64,

2006-11-06 Thread Mike Stump


On Nov 6, 2006, at 5:25 PM, Philip Coltharp wrote:

I'm trying to compile gcc v3.2.3 and I'm getting through most of it  
but the make file stops showing the following error:


/bin/sh: ./../../../configure: No such file or directory


I suspect the answer is don't do:

../configure

instead, do ../gcc/configure...  This would be a FAQ if my guess is  
correct.  gcc-help is more appropriate for user questions like this.


Re: Compiling gcc 3.2.3, AMD, x86_64,

2006-11-06 Thread Mike Stump

On Nov 6, 2006, at 6:57 PM, Mike Stump wrote:

On Nov 6, 2006, at 5:25 PM, Philip Coltharp wrote:

I'm trying to compile gcc v3.2.3 and I'm getting through most of  
it but the make file stops showing the following error:


/bin/sh: ./../../../configure: No such file or directory


I suspect the answer is don't do:

../configure

instead, do ../gcc/configure...


Ah, I read through more of your wiki; I guessed wrong.  You didn't
give all the commands you issued and I still suspect the same issue,
try this:


untar gcc
mkdir obj
cd obj
../gcc/configure [...]
make

or annotate your wiki with _all_ the commands you did.  Also, start off
in a new directory; no make clean, it just isn't worth it.


Re: Problem with listing i686-apple-darwin as a Primary Platform

2006-11-06 Thread Mike Stump

On Nov 6, 2006, at 9:10 PM, Eric Christopher wrote:
Oh and 10.0, 10.1, 10.2 compiling with GCC are all broken (so is  
10.3).


I'd probably suggest at least 10.3.9 myself


My take, 10.2 and on should work.  I think it is wrong to put things  
into darwin.[ch] that don't work on earlier systems.  And I don't  
think that on 10.2, dwarf should be the default, as the gdb on that  
system isn't going to work.  I think that for 10.5, the default being  
dwarf would be fine.  For 10.4, I tend to think it won't work, so  
there is no point in even trying.


Geoff?


Re: Abt long long support

2006-11-06 Thread Mike Stump

On Nov 6, 2006, at 9:30 PM, Mohamed Shafi wrote:

My target (non gcc/private one) fails for long long testcases


Does it work flawlessly otherwise?  If not, fix all those problems
first.  After those are all fixed, then you can see if it then just
works.  In particular, you will want to ensure that 32 bit things
work fine, first.



there are cases (with long long) which get through, but not with the
right output.  When I replace long long with long, the testcases run
fine, even those giving wrong output.
The target is not able to properly compile even simple statements like

long long a = 10;

So when I looked into the .md file I saw no patterns with DI machine
mode, used for long long (am I right?), except

define_insn "adddi3"  and  define_insn "subdi3"

The .md file says that this is to prevent gcc from synthesising it,
though I didn't understand what that means.


Does this mean that in your md file you define adddi3 and subdi3?
And, are the definitions right or wrong?  If wrong, why not fix them?
I suspect they are wrong.  For example, if they expand to "", that is
certainly wrong.



Any thoughts?


Yes, having us guess as to why your port doesn't work isn't productive
for us.  If you want to enable us to know why it doesn't work and
help you, you can put the entire port up for easy access somewhere.


To do a port, you have to be able to synthesize testcases, run them  
through the compiler, read the output of the compiler, understand the  
target assembly language to know why it is wrong and then map that  
back to the target files.  How many of these steps were you able to  
do for this case?  For the last step, reading and understanding the  
gcc porting manual is useful, as is studying ports that are similar  
to yours.


You can help us answer your questions by including all the relevant
details.  Things that are relevant include the testcase, the output
of -da and -fdump-tree-all, the generated .s file, and the relevant
portions of the .md file.
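
For example, something as small as this (a sketch of the kind of
reduced testcase meant above) exercises the DImode move and negate
handling in isolation:

long long a = 10;

long long
neg (long long k)
{
  return -k;
}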


Re: How to grow the Fortran I/O parameter struct and keep ABI compatibility

2006-11-07 Thread Mike Stump

On Nov 7, 2006, at 1:59 AM, FX Coudert wrote:
The idea is that common.flags has a bit for every possible member  
such as rec, to indicate whether it's present or not. The question
is that we sometimes need to add another struct member (like rec)  
in this structure, to implement new features. I think we can add it  
at the end, since when code generated by older compilers calls the  
library, the flag for that new member is not set, and the struct  
member is never accessed.


Almost sounds reasonable.

Now, we also sometimes want to increase the size of the private  
stuff, and I don't see how we can do that in a way that keeps ABI  
compatibility, because the bits in the private stuff are always  
used by the library.  So, am I missing something here, or is there
actually no way to keep that scheme and have ABI compatibility?


A layer of indirection can solve almost any problem:

st_parameter_dt *parm = __gfortran_st_parameter_dt_factory ();

The idea is that the only thing that constructs one, calls out to the  
library to construct it.  If the library is updated, it can be  
updated for a new size to allocate as well.  See the factory  
pattern.  Also, this way, you don't need to pad it, nothing should  
know the size except for the library.  If you shift fields around,  
just add accessors for the fields in the library.  As the fields move  
around, the accessor code changes, but since it is in the library,  
hidden behind stable interfaces, it continues to work.
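
As a sketch of the idea (only the factory function's name comes from
the message; the fields, the accessor, and the "present" bit encoding
are assumptions):

#include <stdlib.h>

/* Public header: generated code only ever sees a pointer, so the
   library is free to grow the structure in later releases.  */
typedef struct st_parameter_dt st_parameter_dt;
st_parameter_dt *__gfortran_st_parameter_dt_factory (void);
void __gfortran_st_parameter_dt_set_rec (st_parameter_dt *, long);

/* Library side: the only place that knows size and layout.  */
struct st_parameter_dt
{
  unsigned flags;   /* one "present" bit per optional member */
  long rec;
  /* new members can be appended here without breaking the ABI */
};

st_parameter_dt *
__gfortran_st_parameter_dt_factory (void)
{
  return calloc (1, sizeof (struct st_parameter_dt));
}

void
__gfortran_st_parameter_dt_set_rec (st_parameter_dt *p, long rec)
{
  p->rec = rec;
  p->flags |= 1;   /* hypothetical "rec is present" bit */
}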


Note, once you add an interface, you're stuck with it forever.  Once
you define an interface type, you're stuck with it forever.


Imagine LTO knowing the type, layout and size from a header file,  
squirreling that away in the filesystem, you update fortran  
compilers, the private stuff changes, LTO kicks in, the question is  
asked, is this the same type or different type, some optimizer person  
says, obviously different, the stuff in p (the private structure) is
different, so now we generate bad code.  :-(  Either, the user can't  
use LTO that way, or you can't even change the private details that  
are exposed by headers.


I'd like to think that google can find a comprehensive set of rules
for you to follow; it does seem like we should have a pointer to such a
page from our documentation.  Maybe a glibc type can furnish such a  
pointer.


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mike Stump

On Nov 7, 2006, at 2:13 PM, Richard Kenner wrote:

Like when int and long have the same range on a platform?

The answer is they are different, even when they imply the same
object representation.

The notion of unified type nodes is closer to syntax than semantics.


I'm more than a little confused, then, as to what we are talking about
canonicalizing.  We already have only one pointer to each type, for  
example.


Ok, great, pointers are unique, or, to be precise, they are unique iff  
the types they point to are unique.  Notice how that doesn't actually  
buy you very much.


Anyway, in C++, the entire template mechanism was rife with building  
up duplicates.  I'd propose that this problem can (and should) be
addressed and that we can do it incrementally.  Start with a hello
world, then in comptypes, check to see when it says yes, they are the  
same, but the address equality checker says they might not be the  
same, print a warning.  Fix the dup builders until no warnings.  Then  
rebuild the world and repeat.  :-)  Work can be checked in, as dups  
are eliminated.
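
A sketch of the check described above (hand-written here, not an
actual patch):

  /* In comptypes, after the structural answer is known: equality
     by structure without equality by address means some builder
     created a duplicate type node.  */
  if (result && t1 != t2)
    {
      warning (0, "duplicate type nodes detected");
      debug_tree (t1);
      debug_tree (t2);
    }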


This will tend to reduce memory consumption.  Compile time will be  
sped up when you think you have an entire category of duplicates (same  
TREE_CODE for example) handled, and you limit recursion in comptypes  
and instead return not_equal when you hit the completely handled  
category.


I did an extensive investigation in this area years ago; templates
were the worst offender at that time, with the same instantiation
built as two distinct nodes, so identical template types didn't
compare equal by address.


Re: bootstrap on powerpc fails

2006-11-07 Thread Mike Stump

On Nov 7, 2006, at 3:48 PM, Kaveh R. GHAZI wrote:
Perhaps we could take a second look at this decision?  The average  
system
has increased in speed many times since then.  (Although sometimes I  
feel
like bootstrapping time has increased at an even greater pace than  
chip

improvements over the years. :-)


Odd.  You must not build java.  I'd rather have one person that tests  
it occasionally and save the CPU cycles of all the rest of the folks,  
more scalable.


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mike Stump

On Nov 7, 2006, at 3:53 PM, Mike Stump wrote:
Anyway, in C++, the entire template mechanism was rife with building  
up duplicates.


Oh, and as for why not having a canonical type is bad, callers to  
comptypes are notorious for just beating it to death:


  http://gcc.gnu.org/ml/gcc-patches/2002-11/msg00537.html

My conclusion at the end was, the best speed up possible, isn't to  
mess with the callers, but rather, to get types canonicalized, then  
all the work that comptypes would normally do, hopefully, just all  
goes away.  Though, in the long run those quadratic algorithms have to  
one day die, even if comptypes is fixed.


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mike Stump

On Nov 7, 2006, at 7:13 PM, Doug Gregor wrote:
Now, how much do we worry about the fact that we won't be printing  
typedef names


The only potential language gotcha I can think of is:

5 If the typedef declaration defines an unnamed class (or enum), the
  first typedef-name declared by the declaration to be that class type
  (or enum type) is used to denote the class type (or enum type) for
  linkage purposes only (_basic.link_).  [Example:

  typedef struct { } *ps, S;  // S is the class name for linkage purposes

--end example]

we still gotta get that right.  I think this just changes the  
underlying type and doesn't have any typedef bits to it.


As for warning/error messages, I'd hate to regress on that front.   
Doug, as a heavy template user, I'm sure you've seen plenty of
warning/error messages...  How important is the typedef name in your experience?


when we combine duplicate nodes? If we don't care, our job is much  
easier. If we do care, then we need to introduce a system for  
equivalence classes to keep those typedef names, which will  
complicate matters a bit.


I agree, it will complicate things if we want the typedef name
around.  I'm thinking of an inverse mapping that takes us from
tuple(context, type) to typedef name.  I want to boost the typedef
name out, kinda like a flyweight pattern, but inverted: instead of
boosting the invariant out, I want to raise the variant (the typedef
name) out of the type.  Imagine if we had a map tuple(context, type)
-> typedef name; when we want the possible typedef name, we take the
type and the context in which we got the type and look it up in the
map.  If not found, no typedef name.  If found, it is the typedef
name we're interested in.  The benefit of this scheme would be that
constant time type equality operators remain constant time.
Producing error messages is slow, but we don't care about that.


Each place that `moves' a type around where we want to preserve the
typedef would instead have to call move_typedef (new_context,
old_context, type), which would copy the typedef found at
tuple(old_context, type) into the tuple(new_context, type) slot in the map.


The map can be added after the fact, if we ripped out the typedef  
name, if we think it isn't important and then later we want to add it  
back in.



Since we know that type canonicalization is incremental, could we work
toward type canonicalization in the GCC 4.3 time frame?


If by we you mean you, I don't see why that would be a bad  
idea.  :-)  The risk is if one invests all this effort, and the win  
turns out to be < 1% on real code and 10x on benchmark code, one  
feels bad.  The was the net result of the template-3 patch, nice 10x  
compile time speedup, on more normal code, just a bit slower (2.436%).


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mike Stump

On Nov 7, 2006, at 7:09 PM, Dale Johannesen wrote:
I do understand the advantages of sharing them more.  Perhaps some  
90% solution could be made to work, with most type nodes being  
unified and the problem cases (there would not be any in C++,  
apparently) using the existing inefficient mechanisms.


Unfortunately, I suspect it has to be nearly perfect to get the  
benefit (compile time), anything less, and all it does is save memory  
and increase compile time.  While saving memory is good by itself,  
this would be a project that, if started, you'd really want to press
forward and finish.


I can dig out actual real live numbers, if you're curious.  For  
example, when calling comptypes, the no answers are (were) 34x more  
likely than yes answers.  If you cannot return false immediately when  
pointer_to_type1 != pointer_to_type2, you then have to run a structural
equality tester, and once you do that, you spend 120ns per depth in  
the tree as you fault everything into cache, what's that 300 some  
instructions.  21,980 were fast, 336,523 were slow, the slow path  
dominated.


Now, I'd love to be proven wrong.


Re: Abt long long support

2006-11-09 Thread Mike Stump

On Nov 9, 2006, at 6:39 AM, Mohamed Shafi wrote:

When I diff the rtl dumps for programs passing negative values with and
without frame pointer, I find changes starting from file.greg.


And, is that change bad?  We do expect changes in codegen, you didn't  
say if those changes are invalid, or what was invalid about them.  If  
they are valid, which pass is the first pass that contains invalid  
rtl?  If this was the first pass with invalid rtl, which instruction  
was invalid and why?  What assembly do you get in both cases?  Which  
instruction is wrong?  What's wrong about it?


Did you examine:

  long long l, k;
  l = -k;

for correctness by itself?  Was it valid or invalid?

[ read ahead for spoilers, I'd rather you pull this information out  
of the dump and present it to us... ]


A quick glance at the rtl shows that insn 95 tries to use [a4+4] but  
insn 94 clobbered a4 already, also d3 is used by insn 93, but there  
isn't a set for it.


The way the instructions are numbered suggests that the code went  
wrong before this point.  You have to read and understand all the  
dumps, whether they are right or wrong and why, track down the code  
in the compiler that is creating the wrong code and then see if you  
can guess why.  If not, fire up gdb, and watch it add/remove/reorder  
instructions and why it was doing it and the conditions it checked  
before doing the transformation and then reason about why it is wrong  
and what it should be doing instead.  I'd suspect the bug lies in  
your port file and gcc is using information from it and coming up  
with the bad code.


For example, what pass removed the setting of d3?


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-09 Thread Mike Stump

On Nov 8, 2006, at 5:59 AM, Doug Gregor wrote:

However, this approach could have some odd side effects when there are
multiple mappings within one context. For instance, we could have
something like:

 typedef int foo_t;
 typedef int bar_t;

 foo_t* x = strlen("oops");


x is a decl, the decl has a type, the context of that instance of the  
type is x.


map(int,x) == foo_t.

It is this, because we know that foo_t was used to create x and we
set map(int,x) equal to foo_t as it is created.


It can never be wrong.  It can never be wrong, because any use of the  
type that would have had the wrong value comes from a specific  
context, and that specific context map(type,context) can be set to  
_any_ value, including the right value.  Put another way, any time  
one carries around type outside of a specific context, one also needs  
to also carry around the context the type came from.



The error message that pops out would likely reference "bar_t *"


map(int,x) doesn't yield bar_t.


This approach wouldn't help with the implementation of concepts,
because we need to be able to take two distinct types (say, template
type parameters T and U) and consider them equivalent in the type
system.


I'd need to see more specifics, but from just the above...  Any data
you need that would make them different, you put into
map(type, context); we're not restricted to just the typedef name.
Once you do that, then you discover that what's left is identical,
and since they are identical, they have the same address, and the
same address makes them the same type.


The two things this doesn't work on are if you have two different  
notions of equality, my scheme (unaltered) can only handle 1  
definition for equality, or some of the temporal aspects, like, we  
didn't know T1 and T2 were the same before, but now we do, because  
they are both int.  The latter case I'm expecting to not be an issue,
as to form the type, you do the substitution and after you do it, you  
replace T1 with int (creating data in map(int,context), if you later  
need to know this was a T1 for any reason (debugging, error  
messages)).  These bubble up and one is left with the real type, and  
then equality remains fast, post substitution.  Reasoning about type  
equality pre-substitution remains slow.


You can even get fast unsubstituted comparisons for a particular  
definition of equality.  You boost the substitution bits out as  
variants, notice then, you have nothing left, and nothing is nothing,  
so the values wind up being the same again.  Now to get comptypes to  
work, you just have to add code to compare the boosted variants in  
the top of comptypes.  Now, before you say that that is as bad as  
what we had before, no, it isn't.  If the type address compare says
not equal, then you can immediately fail the comparison; this takes
care of 90% of the calls.  After that you check the variants for
equality and return that.  The one address compare doesn't hit memory
and can answer most of the equations by itself.  The variants are all
on one cache line, and if the cost to compare them is cheap, it is
just two memory hits.



We can't literally combine T and U into a single canonical
type node, because they start out as different types.


?

Granted, we could layer a union-find implementation (that better
supports concepts) on top of this approach.


Ah, but once you break the fundamental quality that different
addresses imply different types, you limit things to structural
equality and that is slow.



 type = type_representative (TREE_TYPE (exp));
 if (TREE_CODE (type) == REFERENCE_TYPE)
   type = TREE_TYPE (type);

We could find all of these places by "poisoning" TREE_CODE for
TYPE_ALIAS_TYPE nodes, then patch up the compiler to make the
appropriate type_representative calls. We'd want to save the original
type for diagnostics.


Or, you can just save the context the type came from:

type = TREE_TYPE (exp);
type_context = &TREE_TYPE (exp);

same amount of work on the use side, but much faster equality checking.


An alternative to poisoning TREE_CODE would be to have TREE_TYPE do
the mapping itself and have another macro to access the original
(named) type:

 #define TREE_TYPE(NODE) type_representative ((NODE)->common.type)
 #define TREE_ORIGINAL_TYPE(NODE) ((NODE)->common.type)


Likewise, given those, we could do:

 #define TREE_TYPE(NODE) ((NODE)->common.type)
 #define TREE_ORIGINAL_TYPE(NODE) \
   (map ((NODE)->common.type, &(NODE)->common.type) \
    ? map ((NODE)->common.type, &(NODE)->common.type) \
    : (NODE)->common.type)

and remain fast for equality.

 Since we know that type canonicalization is incremental, could we
work toward type canonicalization in the GCC 4.3 time frame?

If by we you mean you, I don't see why that would be a bad
idea.  :-)  The risk is if one invests all this effort, and the win
turns out to be < 1% on real code and 10x on benchmark code, one
feels bad.


ConceptGCC has hit the 

Re: Canonical type nodes, or, comptypes considered harmful

2006-11-09 Thread Mike Stump

On Nov 8, 2006, at 7:14 AM, Ian Lance Taylor wrote:

The way to canonicalize them is to have all equivalent types point to
a single canonical type for the equivalence set.  The comparison is
one memory dereference and one pointer comparison, not the current
procedure of checking for structural equivalence.


Once not equal addresses might mean equal types, you have to do a
structure walk to compare types, and you're right back where we
started.  The only way to save yourself is to be able to say:
different addresses _must_ be different types.


An example: are these two types the same:

A
B

given that A and B are the same (deeply nested) type?  Your way, you
need to walk two trees, hitting memory 40 times.  The cost is 40
cache misses, each one takes 100 ns, so we're up to 2000 ns to
compare them.  In my scheme, the addresses are the same, so for
codegen you get:


cmp p1, p2

which is 1 machine instruction, and no memory hits, and this runs in
around 0.2 ns, a 10,000x speedup.  Now, imagine real live template
code.  20 deep is nothing, and the branching is worse than one, I
suspect.


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-09 Thread Mike Stump

On Nov 9, 2006, at 1:06 PM, Joern RENNECKE wrote:

I think in order to handle the C type system with the non-transitive
type compatibility effectively, for each type we have to pre-compute
the most general variant, even if that has no direct representative in
the current program.


The scheme you describe is logically the same as mine, where the  
things I was calling variants are present in the non-most general  
variants bit but not in the most general variant bits.  I think the  
data layout of your scheme is better as then you don't have to do log  
n map lookups to find data, the data are right there and for  
comparison, instead of address equality of the main node, you do  
address equality of the `most general variant'.  In my scheme, I was  
calling that field just the type.


I'm sorry if the others were thinking of that type of scheme; I
thought they weren't.


Now, what are the benefits and weaknesses between mine and yours?
You don't have to carry around type_context the way mine would;
that's a big win.  You don't have to do anything special to move a
reference to a type around; that's a big win.  You have to do a
structural walk if there are any bits that are used for type
equality.  In my scheme, I don't have to; I just have a vector of
items, they are right next to each other, in the same cache line.
In your scheme, you have to walk all of memory to find them, which
is slow.


So, if you want speed, I have a feeling mine is still faster.  If you  
want ease of implementation or conversion yours may be better.


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-09 Thread Mike Stump

On Nov 8, 2006, at 5:11 AM, Richard Kenner wrote:
My confusion here is how can you "canonicalize" types that are
different (meaning have different names) without messing up debug
information.



If you have:

typedef int Foo;
Foo xyz;

then:

TREE_TYPE (xyz) == int
map(int, &TREE_TYPE (xyz)) == Foo

debug information for xyz is name: "xyz", type: map(TREE_TYPE (decl),
&TREE_TYPE (decl)), which happens to be Foo.


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-09 Thread Mike Stump

On Nov 9, 2006, at 5:00 PM, Dale Johannesen wrote:

On Nov 9, 2006, at 4:54 PM, Mike Stump wrote:
   else if (p1->ptr_equality_suffices_for_this_type
            || p2->ptr_equality_suffices_for_this_type)
      not equal
   else
      tree walk


For trivial things, those things that are fast anyway, you make them  
fast, for slow things, you make them slow, so, there isn't a net  
change in speed.


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-09 Thread Mike Stump

On Nov 9, 2006, at 5:11 PM, Joe Buck wrote:

On Thu, Nov 09, 2006 at 04:54:23PM -0800, Mike Stump wrote:

Once not equal addresses might mean equal types, you have to do a
structure walk to compare types, and you're right back where we
started.


Not quite.


Ah, you're right, thanks for spotting that.


A structure walk is required to be certain of equality,


:-(


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-10 Thread Mike Stump

On Nov 9, 2006, at 11:09 PM, Ian Lance Taylor wrote:

I meant something very simple: for every type, there is a
TYPE_CANONICAL field.  This is how you tell whether two types are
equivalent:
TYPE_CANONICAL (a) == TYPE_CANONICAL (b)


Ah, yes, that would work.  Hum, so simple, why was I thinking  
something was not going to work about it.  There are advantages to  
real-time conversations...  anyway, can't think of any down sides  
right now except for the obvious, this is gonna eat 1 extra pointer  
per type.  In my scheme, one would have to collect stats on the sizes  
to figure out if there are enough types that don't have typedefs to  
pay for the data structure for those that do.  I think mine would  
need less storage, but your scheme is so much easier to implement
and transition to that I think it is preferable over an alongside
datatype.  Thanks for bearing with me.
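
A sketch of how the field would get filled in (hand-waved; the lookup
helper is hypothetical, only TYPE_CANONICAL comes from the message):
one structural hash lookup at type-creation time, and every later
comparison is just a pointer compare.

  /* At type-creation time only: */
  tree canon = structural_hash_lookup (t);   /* hypothetical helper */
  TYPE_CANONICAL (t) = canon ? canon : t;    /* t is its own rep */

  /* Everywhere else: */
  if (TYPE_CANONICAL (t1) == TYPE_CANONICAL (t2))
    /* same type */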


Threading the compiler

2006-11-10 Thread Mike Stump
We're going to have to think seriously about threading the compiler.
Intel predicts 80 cores in the near future (5 years):
http://hardware.slashdot.org/article.pl?sid=06/09/26/1937237&from=rss
To use this many cores for a single compile, we have to find ways to
split the work.  The best way, of course, is to have make -j80 do that
for us; this usually results in excellent efficiencies and an ability
to use as many cores as there are jobs to run.  However, for the
edit, compile, debug cycle of development, utilizing many cores is
harder.


To get compile speed in this type of case, we will need to start
thinking about segregating data and work out into hunks; today, I
already have a need for 4-8 hunks.  That puts me 4x to 8x slower than
I'd like to be.  8x slower, well, just hurts.


The competition is already starting to make progress in this area.

I think it is time to start thinking about it for gcc.

We don't want to spend time in locks or spinning and we don't want to
litter our code with such things, so, if we form areas that are fairly
well isolated and independent and then have a manager manage the
compilation process, we can have just it know about and have to deal
with such issues.  The rules would be something like: while working
in a hunk, you'd only have access to data from your own hunk and
global shared read-only data.


The hope is that we can farm compilation of different functions out  
into different cores.  All global state updates would be fed back to  
the manager and then the manager could farm out the results into  
hunks and so on until done.  I think we can also split lexing out
into a hunk.  We can have the lexer give hunks of tokens to the
manager to feed onto the parser.  We can have the parser feed hunks  
of work to do onto the manager and so on.
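
To make the shape of that concrete, here is a rough sketch of the
hunk/manager split (pthreads and every name here are just for
illustration, not a proposal for the actual implementation):

#include <pthread.h>
#include <stddef.h>

/* Work units ("hunks") flow through a queue owned by the manager;
   a worker only ever touches its own hunk plus read-only shared
   data, so no other locking is needed.  */
struct hunk { struct hunk *next; void *work; };

extern void compile_hunk (void *work);   /* hypothetical */

static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static struct hunk *pending;

static struct hunk *
take_hunk (void)
{
  struct hunk *h;
  pthread_mutex_lock (&queue_lock);
  h = pending;
  if (h)
    pending = h->next;
  pthread_mutex_unlock (&queue_lock);
  return h;
}

static void *
worker (void *unused)
{
  struct hunk *h;
  (void) unused;
  while ((h = take_hunk ()) != NULL)
    compile_hunk (h->work);   /* no shared writes in here */
  return NULL;
}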


How many hunks do we need, well, today I want 8 for 4.2 and 16 for  
mainline, each release, just 2x more.  I'm assuming nice, equal sized  
hunks.  For larger variations in hunk size, I'd need even more hunks.


Or, so that is just an off the cuff proposal to get the discussion  
started.


Thoughts?


Re: Threading the compiler

2006-11-10 Thread Mike Stump

On Nov 10, 2006, at 12:46 PM, H. J. Lu wrote:

Will using C++ help or hurt compiler parallelism? Does it really matter?


I'm not an expert, but, in the simple world I want, I want it to not  
matter in the least.  For the people writing most code in the  
compiler, I want clear simple rules for them to follow.


For example, google uses mapreduce
(http://labs.google.com/papers/mapreduce.html) as a primitive, and
there are a few experts that manage that code, and everyone else just
mindlessly uses it.  The rules are explained to them, and they just
follow the rules and it just works.  No locking, no atomic, no
volatile, no clever lock-free code, no algorithmic changes (other
than decomposing into isolated composable parts).  I'd like something
similar for us.


Re: strict aliasing question

2006-11-10 Thread Mike Stump

On Nov 10, 2006, at 9:48 AM, Howard Chu wrote:

Richard Guenther wrote:

If you compile with -O3 -combine *.c -o alias it will break.


Thanks for pointing that out. But that's not a realistic danger for  
the actual application. The accessor function is always going to be  
in a library compiled at a separate time. The call will always be  
from a program built at a separate time, so -combine isn't a factor.


We are building a compiler to outsmart you.  We are presently working
on technology (google "LTO") to break your code.  :-)  Don't cry when
we turn it on by default and it does.  I'd recommend understanding
the rules and following them.


Re: expanding __attribute__((format,..))

2006-11-10 Thread Mike Stump


On Nov 10, 2006, at 9:14 AM, Ian Lance Taylor wrote:


"Nuno Lopes" <[EMAIL PROTECTED]> writes:


I've been thinking that it would be a good idea to extend the current
__attribute__((format,..)) to use an arbitrary user callback.
I searched the mailing list archives and I found some references to
similar ideas. So do you think this is feasible?


I think it would be nice.  We usually founder


I think that a 20% solution would handle 95% of the cases.  :-)

__attribute((extra_formats, "AB"))

for example.  Easy to use, easy to describe, handles things well
enough to keep people happy for 10 years.  The next version after
this would be comprehensive enough to handle describing the values
and the types, the checking rules, and the warning/error messages to
generate.


Re: Planned LTO driver work

2006-11-10 Thread Mike Stump

On Nov 9, 2006, at 11:37 PM, Mark Mitchell wrote:
It might be that we should move the invocation of the real linker  
back into gcc.c, so that collect2's job just becomes


Or move all of collect2 back into gcc.c.  There isn't a reason for it  
being separate any longer.


Re: Threading the compiler

2006-11-10 Thread Mike Stump

On Nov 10, 2006, at 2:19 PM, Kevin Handy wrote:
What will the multi-core compiler design do to the old processors  
(extreme slowness?)


Roughly speaking, I want it to add around 1000 extra instructions per  
function compiled, in other words, nothing.  The compile speed will  
be what the compile speed is.  Now, I will caution, the world doesn't  
look kindly on people trying to bootstrap gcc on a 8 MHz m68k  
anymore, even though it might even be possible.  In 5 years, I'm  
gonna be compiling on an 80 or 160 way box.  :-)  Yeah, Intel  
promised.  If you're trying to compile on a single 1 GHz CPU, it's  
gonna be slow  I don't want to make the compiler any slower, I  
want to make it faster, others will make use of the faster compiler,  
to make it slower, but that is orthogonal to my wanting to make it  
faster.


4. Will you "serialize" error messages so that two compiles of a  
file will always display the errors in the same order?


Yes, I think that messages should feed back into the manager, so that
the manager can `manage' things.  A stable, rational ordering for
messages makes sense.



Also, will the object files created be the same between compiles?


Around here, we predicate life on determinism, you can pry that away  
from my cold dead fingers.  We might have to switch from L472 to  
L10.22 for internal labels for example.  This way, each thread can  
create infinite amounts of labels that don't conflict with other  
threads (functions).
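
For instance (a sketch; only the L10.22-style naming comes from above):

#include <stdio.h>

/* Each hunk numbers its own labels, so "L10.22" (label 22 of
   hunk 10) can never collide with a label made by another
   thread, and the output stays deterministic.  */
static void
make_label (char *buf, size_t len, int hunk_id, int *counter)
{
  snprintf (buf, len, "L%d.%d", hunk_id, (*counter)++);
}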


5. Will more "heavy" optimizations be available?  I.e., will the
 multi-core speed things up enough that really hard optimizations
 (speed wise) become reasonable?


See my first paragraph.


Re: Threading the compiler

2006-11-10 Thread Mike Stump

On Nov 10, 2006, at 5:43 PM, Paul Brook wrote:

Can you make it run on my graphics card too?


:-)  You know all the power on a bleeding edge system is in the GPU  
now.  People are already starting to migrate data processing for  
their applications to it.  Don't bet against it.  In fact, we hide  
such migration behind apis that people already know and love, and you  
might be doing it in your applications already, if you're not  
careful.  And before you start laughing too hard, they are doubling  
every 12 months, we've only managed to double every 18 months.  Let's  
just say, the CPU is doomed.


Seriously though, I don't really understand what sort of response
you're expecting.


Just consensus building.

Do you have any justification for aiming for 8x parallelism in this  
release and 2x increase in parallelism in the next release?


Our standard box we ship today that people do compiles on tends to be  
a 4 way box.  If a released compiler made use of the hardware we ship  
today, it would need to be 4 way.  For us to have had the feature in  
the compiler we ship with those systems, the feature would have had  
to be in gcc-4.0.  Intel has already announced 4 core chips that are  
pin compatible with the 2 core chips.  Their ship date is in 3 days.   
People have already dropped them in our boxes and they have 8 way  
machines, today.  For them to make use of those cores, today, gcc-4.0
would have had to be 8 way capable.  The rate of increase in cores
is 2x every 18 months.  gcc releases are about one every 12-18  
months.  By the time I deploy gcc-4.2, I could use 8 way, by the time  
I stop using gcc-4.2, I could make use of 16-32 cores I suspect.  :-(



Why not just aim for 16x in the first instance?


If 16x is more work than 8x, then I can't yet pony up the work  
required for 16x myself.  If cheap enough, I'll design a system where  
it is just N-way.  Won't know til I start doing code.


You mention that "competition is already starting to make  
progress". Have they found it to be as easy as you imply?


I didn't ask if they found it easy or not.

whole-program optimisation and SMP machines have been around for a  
fair while now, so I'm guessing not.


I don't know of anything that is particularly hard about it, but, if
you know of bits that are hard, or have a pointer to such, I'd be
interested in it.


Re: Threading the compiler

2006-11-11 Thread Mike Stump

On Nov 10, 2006, at 9:08 PM, Geert Bosch wrote:
I'd guess we win more by writing object files directly to disk like  
virtually every other compiler on the planet.


The cost of my assembler is around 1.0% (ppc) to 1.4% (x86) overhead
as measured with -pipe -O2 on expr.c.  If it was converted, what
type of speedup would you expect?


Most of my compilations (on Linux, at least) use close to 100% of
CPU.  Adding more overhead for threading and
communication/synchronization can only hurt.


Would you notice if the cost were under 0.1%?  Would you care?


Re: strict aliasing question

2006-11-12 Thread Mike Stump

On Nov 11, 2006, at 7:56 PM, Howard Chu wrote:
You probably can't, in the case of a shared library, but you  
probably could for a static library.


I think I agree, though, a JIT can peer past a shared boundary as  
well.  A non-JIT can as well, but it has to have some mechanism to  
unpeer across the boundary and notice updates to the other side of  
the boundary.  I don't think we'll be peering across a shared  
boundary in the next 10 years, but, maybe one day.


How will you distinguish these two cases, when all you see is  
"foo.a" on the command line?


You don't, not for foo.a.


Re: optimize option in macros or somevalue (-O2 or -O3)

2006-11-12 Thread Mike Stump
Don't post to both lists; if you want to work on the compiler, gcc is
fine, otherwise gcc-help.


On Nov 12, 2006, at 9:29 AM, Niklaus wrote:

Is there any way to specify in the code the optimization value like
(-O2 or -O3) instead of on the command line.


In Apple's branch, we've added support for #pragma to control these  
things, but in the mainline compiler, no.



The problem is i can't modify the makefile.


This is a rather poor reason to want the feature.  Use make
CC='gcc -O2' instead; that doesn't modify the makefile and yet
optimizes.


Re: gmp/mpfr and multilib

2006-11-12 Thread Mike Stump

On Nov 11, 2006, at 11:19 AM, Jack Howarth wrote:
Will any of the libraries in gcc now require gmp/mpfr such that
both 32-bit and 64-bit versions of gmp/mpfr must be installed?  If
that is the case, will the multilib build look for both a lipo
32-bit/64-bit combined shared library in $prefix/lib as well as
individual versions in lib and lib64 subdirectories?


If you want to build darwin native and cross compilers and canadian  
cross compilers all at once, it is easiest to have built the  
libraries universal, other than that, no, they can be thin for the  
build system.  The search path doesn't change between 32 and 64 bit  
compilers as I recall.


Re: GCC Garbage Collection

2006-11-13 Thread Mike Stump

On Nov 12, 2006, at 10:47 PM, Brendon Costa wrote:
I think I am having trouble with the garbage collector deleting
the memory for tree nodes that I am still using.


You must have a reference to that data from gc managed memory.  If you  
don't use gc to allocate the data structures, it just won't work.
In addition, the roots of all such data structures have to be findable  
from the gc roots.  The compiler is littered with examples of how to  
do this, as an example:


  static GTY(()) tree block_clear_fn;

is findable by the gc system.  All data findable from this tree, will  
be findable by the GC system.


If you _must_ have references outside gc, you can do this, if there is  
at least 1 reference within the gc system.  For example, if you do:


static GTY(()) tree my_references;

void note_reference(tree decl) {
  my_references = tree_cons (NULL_TREE, decl, my_references);
}

and call note_reference (decl) for every single bit you save a  
reference to, it'll work.  Actually, no, it won't work, precompiled  
headers will fail to work because you'd need to modify the PCH writer  
to write your data, because you didn't mark your data structures with  
GTY and didn't use gc to allocate them.  See the preprocessor for an  
example of code that doesn't use GTY and yet writes the data out for  
PCH files.




How can I determine if it is deleting the memory for this node?


It is safe to assume it is.

I have read the GCC Internals manual on garbage collection, and am  
not sure how I should use it in my situation. I have a dynamic array  
of my own structs like below:


struct TypeInfoStruct
{
  tree node;
   my data
};
typedef TypeInfoStruct TypeInfo;


You're missing a GTY(()) marker, see the source code for many  
examples, one such would be:


/* Represent an entry in @TTypes for either catch actions
   or exception filter actions.  */
struct ttypes_filter GTY(())
{
  tree t;
  int filter;
};


and i have an array of these declared in a .c file:

static TypeInfo** registered_types = NULL;


Again, no GTY marker.  Compare with:

  static GTY ((length ("typevec_len"))) struct typeinfo *typevec;


I manage the memory for the registered types array and the TypeInfo  
structure instances and can not give this to the garbage collector  
to do.


If you want this to work, you have to understand the rules to play by,  
and play by them, sorry.



struct TypeInfoStruct
{
  GTY(()) tree node;


Wrong placement.


   my data
};
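
A corrected sketch of those declarations, with the markers where
gengtype expects them (the length variable's name here is made up):

struct TypeInfoStruct GTY(())
{
  tree node;
  /* ... my data ... */
};
typedef struct TypeInfoStruct TypeInfo;

static GTY ((length ("num_registered_types"))) TypeInfo **registered_types;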


Re: GCC Garbage Collection

2006-11-13 Thread Mike Stump

On Nov 13, 2006, at 3:30 PM, Brendon Costa wrote:
I used the idea you showed above and it seems to work (I don't
understand enough to know why you say it won't work, and thus this
email).


It is the difference between all features of gcc working, or just most  
of the features working.  If you want pch to work, you have to think  
about the issue and do up the appropriate code.  However, I bet you  
don't need pch to work.  If you are doing real stuff for a real  
production compiler, well, I retract that.


If you want it to work, the rules are simple, all data must be  
allocated and managed by gc and have GTY(()) markers.  You can escape  
the simplicity of this, if you want, but that requires way more  
thought and more code, slightly beyond the scope of this email.


During PCH writing, all GTY(()) data will be written out to a file
(the output of compiling a .h file), and during #include of a PCH  
file, all that data is read in again.


I don't think I understand the relationship between the PCH files and
the garbage collector.  I thought that the PCH files were the gt-*.h
files which are generated,


Not exactly.  PCH files are the result of compiling a header file.   
The gt-*.h files are used by gc, and the PCH mechanism uses gc to work  
its magic.


None of my data structures are being managed by the garbage
collector; I manually malloc and free them, so I figured that I did
not need to worry about modifying a PCH writer to cater for them?


You have it backwards: if you use gc (and mark them), you don't have
to worry; otherwise, it's a lot more code, if you want everything to
work.


Re: GCC Garbage Collection

2006-11-13 Thread Mike Stump

On Nov 13, 2006, at 5:23 PM, Brendon Costa wrote:
So are you saying that the quick hack that I did will not work for
fixing the memory problem I have, but that it will probably raise
its ugly head again


No.


or just that PCH will not work?


Yes.

Are there any advantages to using PCH besides it may make compiling  
the GCC compiler a little faster?


No, other than compatibility with makefiles that want to use it.


At most there are about 40 lines of code in each of them.


PCH is when you have 500,000 lines of C++ code in the main .h file,  
and 20 lines in the .C file.  :-)


Re: libffi on Macintel?

2006-11-14 Thread Mike Stump

On Nov 12, 2006, at 3:21 PM, Jack Howarth wrote:
Can anyone confirm that the libffi shared libraries are properly  
built in gcc 4.2 branch (or trunk)


No, they aren't built:

The following languages will be built: c,c++,java
*** This configuration is not supported in the following subdirectories:
 target-libmudflap target-libffi target-zlib target-libjava
 target-libada gnattools target-libgfortran target-libobjc target-boehm-gc

(Any other directories should still work fine.)

:-(

This is rather disturbing since I assumed that Sandro's patches were  
all properly checked into gcc trunk before gcc 4.2 branched.


:-(


no one seems to be submitting testresults for i386-apple-darwin8


http://gcc.gnu.org/ml/gcc-testresults/2006-11/msg00621.html


cleaning

2006-11-14 Thread Mike Stump

While trying to clean, I noticed that

  $ make -k -j6 clean

does:

make[5]: *** [insn-recog.o] Interrupt
make[5]: *** [s-attrtab] Interrupt
make[4]: *** [all-stage1-gcc] Interrupt
make[3]: *** [stage1-bubble] Interrupt
Reaping losing child 0x00383f20 PID 18728
make[2]: *** [all] Interrupt
Removing child 0x00383f20 PID 18728 from chain.
make[1]: *** [stage1-start] Interrupt
make: *** [clean-stage1-libiberty] Interrupt

:-(  Building the entire compiler to clean it isn't reasonable, honest.

Comes from:

maybe-clean-stage1-libiberty: clean-stage1-libiberty
clean-stage1: clean-stage1-libiberty
clean-stage1-libiberty:
	@[ -f $(HOST_SUBDIR)/libiberty/Makefile ] \
	  || [ -f $(HOST_SUBDIR)/stage1-libiberty/Makefile ] \
	  || exit 0 ; \
	[ $(current_stage) = stage1 ] || $(MAKE) stage1-start; \
	cd $(HOST_SUBDIR)/libiberty && \
	$(MAKE) $(FLAGS_TO_PASS) \
	  CFLAGS="$(STAGE1_CFLAGS)" LIBCFLAGS="$(STAGE1_CFLAGS)" clean


where we do stage1-start.


Also, now make clean isn't reliable:

$ make -k clean
rm -f stage_current
mv: rename stage1-libdecnumber to prev-libdecnumber/stage1-libdecnumber: Directory not empty

make[1]: *** [stage2-start] Error 1
make[1]: *** No rule to make target `clean'.
make: *** [clean-stage2-gcc] Error 2
rm -f *.a TEMP errs core *.o *~ \#* TAGS *.E *.log
make: Target `clean' not remade because of errors.
morgan $ echo $?
2

If I just configure and then do a make clean, it doesn't work either:

rm -f *.a TEMP errs core *.o *~ \#* TAGS *.E *.log
cat: stage_last: No such file or directory
make: invalid option -- a
Usage: make [options] [target] ...
Options:
  -b, -m  Ignored for compatibility.
[ ... ]
make: Target `clean' not remade because of errors.


:-(




Re: gpl version 3 and gcc

2006-11-15 Thread Mike Stump

On Nov 15, 2006, at 11:07 AM, Ed S. Peschko wrote:

My concern - and I'm sure I'm not the only one so concerned - is that
if gcc goes to version 3, linux distribution maintainers will not  
choose
to go with the new version, or worse, some groups will choose to  
remain
at gpl2 and others will go to version 3, causing a fork in the gcc  
code.


This is mostly off-topic, as strange as that sounds, for this list.   
gnu.misc.discuss is the canonical place for discussions on licensing.


When the FSF is done with v3, I'd expect that gcc will switch over to  
it.  There should be enough distributors of gcc on the GPLv3 review  
teams to provide solid feedback to the FSF so that their needs can be  
met.  I'd hope that in the end, there will be enough balance to allow  
distributors to continue distributing gcc under the v3 license.  So,  
the short answer is wait and see.  Once v3 is published, we should  
know within a couple of months after that, check back then.


Re: regenerating reliably GCC configure files

2006-11-15 Thread Mike Stump

On Nov 15, 2006, at 12:57 PM, Basile STARYNKEVITCH wrote:

But I still cannot figure out how to regenerate *reliably*


My take, aside from the top level, you enable maintainer mode and  
type make with 2.59 in your path.  If it fails to work, file a bug  
report.  For the top level, you should have 2.13 in your path and  
make.  If these fail to work reliably, then I suspect it is a bug and  
you can then build reliably after that bug is fixed.


Re: Testsuite for GlobalGCC: QMTest or DejaGNU?

2006-11-16 Thread Mike Stump

On Nov 16, 2006, at 7:26 AM, Alvaro Vega Garcia wrote:
I'm beginning to work on the GGCC project(1) and I proposed to
continue with the DejaGNU testsuite for this project when I was asked
about a better testing framework.
testing framework.


The main problem is that any framework other than dejagnu is just  
different, and that by itself, is bad.  dejagnu can do canadian cross  
testing to a DOS box, from a UNIX box, testing an SH, few other  
frameworks will just work in that case.  gcc is benefitted I believe  
by such testing in general.


Re: building gcc4-4.3.0-20061104/11 failure on OSX 10.3

2006-11-16 Thread Mike Stump

On Nov 14, 2006, at 3:13 AM, Dominique Dhumieres wrote:
Since the problem is still there in gcc4-4.3.0-2006 and I did  
not get

any answer, I tried the following:

(1) I replaced gcc/config/darwin.h by the file from  
gcc4-4.3.0-20061028,

and the build was done without obvious problem.

(2) Using the gcc/config/darwin.h from gcc4-4.3.0-2006, I  
replaced:


#define PREFERRED_DEBUGGING_TYPE DWARF2_DEBUG

by

#define PREFERRED_DEBUGGING_TYPE DBX_DEBUG


Please remove your changes from your tree, re-pull the current  
mainline and try building again.  See my test results posting  
in http://gcc.gnu.org/ml/gcc-testresults/2006-11/msg00708.html for  
details on how I got those results.   You don't have to apply the  
referenced patch, as I already checked it into mainline.


You can run as -v to see what version of cctools you have.

Let me know if that works for you.


Re: building gcc4-4.3.0-20061104/11 failure on OSX 10.3

2006-11-16 Thread Mike Stump

On Nov 14, 2006, at 12:43 PM, Geoffrey Keating wrote:
Mike was considering simply declaring that GCC 4.3 won't work on  
Mac OS 10.3.


No, not really.  I'll declare that using things older than 10.3.9 are  
gonna be hard, as the required cctools package was built for 10.3.9,  
however, if one gets the sources for cctools and builds them on older  
releases, one might be able to go back farther.  I don't think I care  
enough to do that much work.


10.3 is quite old now, and there will be very few users by the time  
that 4.3 is released.


I tested it out on mainline and it works just fine (now).  :-)


Re: [PATCH] Re: Canonical type nodes, or, comptypes considered harmful

2006-11-22 Thread Mike Stump

On Nov 21, 2006, at 11:06 AM, Doug Gregor wrote:

Make DEPTH=6, we get an 85% speedup:


Yes, this mirrors the type of speed up I expect for _some_ types of  
template code.  I'd love to see us go in this direction.  Anyway, I  
endorse this type of work.


Anyway, on to the review...

Any thoughts of enabling the checking code with --enable-checking and  
running it that way.  The speed type people already know how to  
disable it for speed testing, and releases turn it off automatically,  
and stage 1 runs the checker.  Something like:


#ifdef ENABLE_CHECKING
#define VERIFY_CANONICAL_TYPES 1
#endif

I'd like to see some random larger C++ code shoved through it to  
ensure it doesn't fall over, if you structure it that way; however,  
if instead you just warn:


+#ifdef VERIFY_CANONICAL_TYPES
+  result = structural_comptypes (t1, t2, strict);
+
+  if (result && TYPE_CANONICAL (t1) != TYPE_CANONICAL (t2))
+    {
+      /* The two types are structurally equivalent, but their
+         canonical types were different.  This is a failure of the
+         canonical type propagation code.  */
+      warning (0, "internal: canonical types differ for identical types %T and %T",
+               t1, t2);
+      debug_tree (t1);
+      debug_tree (t2);
+    }
+  else if (!result && TYPE_CANONICAL (t1) == TYPE_CANONICAL (t2))
+    {
+      /* Two types are structurally different, but the canonical
+         types are the same.  This means we were over-eager in
+         assigning canonical types.  */
+      warning (0, "internal: same canonical type node for different types %T and %T",
+               t1, t2);
+      debug_tree (t1);
+      debug_tree (t2);
+    }
+
+  return result;
+#else

or even maybe just a note, it'll make it just a bit safer in the  
short term.  People can then watch for these messages and report  
them.  I've been known to do this type of code before and the warning  
definitely was nicer as the complex cases came in after release in my  
case.


Other than that, this looks like a nice step in the right direction,  
thanks.


Re: what about a compiler probe?

2006-11-26 Thread Mike Stump

On Nov 26, 2006, at 3:20 PM, Basile STARYNKEVITCH wrote:
The textual protocol should permit one to examine, but not change,  
the compilation state.


Sounds like a bad idea.  I agree with the debugger comment.  You're  
free to start up the compilation process with ptrace from your GUI  
and query/display/modify the state in whatever way you like.  The  
problems I see are that any internal details that are exposed to the  
protocol fixes those details forever, or, should they not be fixed,  
breaks the stability of the protocol.


Another suggestion would be to, if you want just a few status bits,  
add them to -Q and just read those bits on the output.


Re: Problem with listing i686-apple-darwin as a Primary Platform

2006-11-26 Thread Mike Stump

On Nov 6, 2006, at 7:49 PM, Andrew Pinski wrote:
Right now after patches by the Apple folks causes you to need a  
newer dwarfutils


I think that is a bug, and that bug has now been fixed.  Let me know  
if there is any other badness I missed (or introduced along the way).


Right now on the PowerPC side, Darwin before 8 (so Panther and  
before) are broken bootstrapping the mainline


Fixed.


Re: [PATCH] Re: Canonical type nodes, or, comptypes considered harmful

2006-11-27 Thread Mike Stump

On Nov 27, 2006, at 7:04 AM, Doug Gregor wrote:

So, here's a variant that might just work: add a flag variable
flag_check_canonical_types.  When it's true, we do the complete
structural checking, verify it against the canonical types result, and
warn if they differ. (This is what we do now when
VERIFY_CANONICAL_TYPES is defined).


This is a slightly softer version of mine...  Sounds reasonable.  The  
flag should be removed after a major release or two after most all the  
bugs are fixed, if any.


Re: Differences in c and c++ anon typedefs

2006-11-27 Thread Mike Stump

On Nov 27, 2006, at 12:49 PM, Brendon Costa wrote:
As a result of C types not having a "class name for linkage  
purposes", I

am finding it difficult to define a "normalised" string


Trivially, you can construct the name by composing one based upon the  
structure.  The is_compatible function then just compares the  
structure (aka name) according to the standard rules.  strcmp isn't,  
itself, powerful enough.


Re: Finding canonical names of systems

2006-11-27 Thread Mike Stump
[ first, this is the wrong list to ask such question, gcc-help is the  
right one ]


On Nov 27, 2006, at 7:25 PM, Ulf Magnusson wrote:
How are you supposed to find the canonical name of a system (of  
known type) in CPU-Vendor-OS form in the general case?


In the general case, you ask someone that has such a machine to run  
config.guess, or failing that, you ask someone, or failing that, you  
just invent the obvious string and use it.
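
For example, from the top of a gcc tree (the output below is  
illustrative, yours will name your machine):

  $ ./config.guess
  powerpc-apple-darwin8.8.0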


Most portable software doesn't much care just what configuration you  
specify, some very non-portable software will fail to function unless  
you provide exactly the right string.  gcc is of the latter type, if  
you're interested in building it.


If you have access to a system of that particular type, you can run  
config.guess to find out, but you might not have, and that approach  
won't work for many systems anyway.


That approach always works on all host systems.  :-)  If it didn't,  
that'd be a bug and someone would fix it.


The canonical name needs to be known e.g. when cross-compiling and  
building cross-compilers.


Ah, for crosses, you have to know what you want, and what you want is  
what you specify.  If your question is, what do you want, well, you  
want what you want.  Either, it works, or, you've not ported the  
compiler.


For example, you can configure --target=arm, if you want an arm, or  
--target=m68k if you want an m68k, or sparc, if you want sparc, or ppc  
if you want ppc, or powerpc if you want powerpc, or x86_64, if you  
want x86_64, or arm-elf, if you want arm-elf, or sparc-aout if you  
want that.  The list _is_ endless.  If you're interested in a specific  
target, tell us which one and we'll answer the question specifically.


If you want pre-formed ideas for targets that might be useful, you  
can check out:


  http://gcc.gnu.org/install/specific.html
  http://gcc.gnu.org/buildstat.html
  http://gcc.gnu.org/ml/gcc-testresults/

I was thinking there was one other that tried to be exhaustive, but  
maybe we removed the complete list years ago.


Aside from that, yes, reading through the config type files is yet  
another way.


Re: [Objective-C PATCH] Canonical types (3/3)

2006-11-28 Thread Mike Stump

On Nov 28, 2006, at 7:56 AM, Doug Gregor wrote:
This is part three of three, containing changes to the Objective-C  
(and, thus, Objective-C++) front end.


Okay for mainline?


Ok, if the base patch goes in, thanks.


Re: GCC Internals Documentation

2006-11-30 Thread Mike Stump

On Nov 30, 2006, at 3:02 PM, Brendon Costa wrote:
Do I need to have any sort of agreement with the FSF in order to  
submit documentation changes?


Change a few lines, no.  Add 100 lines, yes.


Should I update the latex sources for the docs or do it on the wiki?


Updating gcc/doc/*.texi is the preferred way...  If you're updating  
the wiki, just create an account, login and update.


After making the changes do I submit them to the patches list in  
order to be reviewed, or do they go somewhere else first?


Yes.

I will not be able to get around to doing this at least until I have  
made a first release of my project (hopefully within the next  
month), but I think I might start taking notes now of some things I  
would like to add.


Good time to start your paper work...


Re: [PATCH]: Require MPFR 2.2.1

2006-12-04 Thread Mike Stump

On Dec 4, 2006, at 8:23 AM, Richard Guenther wrote:

On 12/3/06, Kaveh R. GHAZI <[EMAIL PROTECTED]> wrote:

This patch updates configure to require MPFR 2.2.1 as promised here:
http://gcc.gnu.org/ml/gcc/2006-12/msg00054.html

Tested on sparc-sun-solaris2.10 using mpfr-2.2.1, mpfr-2.2.0 and  
an older

mpfr included with gmp-4.1.4.  Only 2.2.1 passed (as expected).

I'd like to give everyone enough time to update their personal
installations and regression testers before installing this.  Does  
one
week sound okay?  If there are no objections, that's what I'd like  
to do.


Please don't.  It'll be a hassle for us again and will cause  
automatic testers

to again miss some days or weeks during stage1


I agree, please don't, if it is at all possible to avoid it.  If you  
want to update, let's update once late in the stage3 cycle.


Re: Interface for manipulating the abstract syntax tree

2006-12-05 Thread Mike Stump

On Dec 5, 2006, at 3:14 AM, Ferad Zyulkyarov wrote:

Also, having the opportunity, I would like to ask you if there is any
function to use for deleting a tree


ggc_free if you _know_ it is free.
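
That is, roughly; a sketch, assuming the usual gcc includes, and  
discard_dead_node is my name, not something in the tree:

  /* Return a GC-allocated node to the collector.  Only valid if no
     other live pointer to T remains; otherwise the collector will
     eventually chase a dangling pointer.  */
  static void
  discard_dead_node (tree t)
  {
    ggc_free (t);
  }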


Re: Gfortran and using C99 cbrt for X ** (1./3.)

2006-12-05 Thread Mike Stump

On Dec 5, 2006, at 12:32 PM, Toon Moene wrote:
Couldn't libgfortran just simply borrow, errr, include the glibc  
version ?


No, not without asking the FSF (rms) as I think the license is  
different (GPL v GPL+libgcc exception).


Re: messages in objective-C

2006-12-06 Thread Mike Stump

On Dec 6, 2006, at 8:19 AM, Come Lonfils wrote:
I'm trying to learn more about how messages are sent to objects  
in Objective-C, and how they are stored, ...

In which structures, and how?
Where should I look in the source code of gcc to find out?  I looked  
in libobjc but I'm a bit lost.


I'd probably look at this from a gdb standpoint.  You then just have  
to be able to follow along with gdb.  Do stepi and display/1i $pc  
type things to watch what goes on.  From there, you can read the  
runtime source code to fill in the gaps.
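
A sketch of the kind of session I mean; objc_msgSend is the NeXT  
runtime dispatcher, the GNU runtime spells it objc_msg_lookup:

  (gdb) break objc_msgSend
  (gdb) run
  (gdb) display/1i $pc
  (gdb) stepi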


I'm not aware of a user level doc for this, but google might find  
one, I didn't check.


Re: Question on BOOT_CFLAGS vs. CFLAGS

2006-12-14 Thread Mike Stump

On Dec 14, 2006, at 5:59 PM, Paul Brook wrote:

On Friday 15 December 2006 01:37, Josh Conner wrote:

All -

When I configure with --disable-bootstrap and build with:

  CFLAGS="-g -O0"

The resultant compiler is built with the specified options.   
However, if
I --enable-bootstrap, when I build with the same CFLAGS, these  
options
are not used to build the final compiler.  I can get past this by  
using

BOOT_CFLAGS="-g -O0", but that seems a bit inconsistent.

My question is:  Is this behaving as designed or would it be  
reasonable

to file a bug and/or supply a patch to change the behavior so that
CFLAGS are respected whether bootstrapping is enabled or not?


It is working as documented:

http://gcc.gnu.org/onlinedocs/gccint/Makefile.html
http://gcc.gnu.org/install/build.html


I read that; could you please quote the part that documents the  
current behavior?


Let me offer a counter quote:

If you want to save additional space during the bootstrap and in
the final installation as well, you can build the compiler binaries
without debugging information as in the following example.  This will  
save
roughly 40% of disk space both for the bootstrap and the final  
installation.

(Libraries will still contain debugging information.)

 make CFLAGS='-O' LIBCFLAGS='-g -O2' \
   LIBCXXFLAGS='-g -O2 -fno-implicit-templates' bootstrap

I think that is pretty clear.

I think this is a bug that was unintentionally introduced by someone  
that didn't realize all the nuances of this.  If you examine the  
patch submission where this was changed, I don't recall they called  
out or discussed the changed behavior.  Do you have a pointer that  
shows otherwise?


Now, why do we do this?  Kinda simple: because this is a standard that  
transcends gcc and dates back a long, long time, see  
http://www.gnu.org/prep/standards/standards.html for example.


The fix is simple, we just change BOOT_CFLAGS = -O2 -g to instead be  
$(CFLAGS).  This way, if someone builds as documented in the manual,  
they get the results documented in the manual.


Re: Question on BOOT_CFLAGS vs. CFLAGS

2006-12-15 Thread Mike Stump

On Dec 15, 2006, at 1:02 AM, Paolo Bonzini wrote:

The counter quote is obviously wrong, thanks for the report.


Why is it important to not have CFLAGS influence the build product?   
The standard is for it to so influence the build product.  Why is it  
important for gcc to not follow the standard?


Re: [infrastructure] what about gmp and mpfr on multilibbed builds?

2006-12-15 Thread Mike Stump

On Dec 15, 2006, at 4:11 AM, Christian Joensson wrote:

So, returning to my question here. The way I see it, should the
multilibbed enabled libraries use and gmp and/or mpfr routines, then
the gmp and mpfr libraries are needed in both 32 and 64 bit variants.


Yes.


If, on the other hand, the gmp and mpfr libraries are only needed in
the compiler itself and the libraries that are not multilibbed
enabled, then gmp and mpfr are only needed as 32 bit variants.


Yes.


So, again, if I have a 32 bit compiler multilibbed enabled, then only
32 bit variants of gmp and mpfr libraries requires that gmp and/or
mpfr routines are not used by the multilibbed libraries at all.
Correct?


Yes, exactly.


If gcc development would eventually make use of gmp and/or mpfr in the
multilibbed libraries, that would require both 32 bit and 64 bit
variants installed.


Yes.

If so, I wonder if the header files support multilibbed, 32 bit and  
64 bit, install and use... in other words, I suppose gmp and mpfr  
should be multilibbed :)


You'd get more mileage out of doing this, if you can.  For example,  
around here, we build all variations and glue them all together (ppc,  
ppc64, i386, x86_64).  This enables us to build x86_64 hosted  
compilers, i386 hosted compilers, canadian cross compilers and so on,  
no build we try would then fail because of these libraries.  There is  
a certain niceness about not having to worry about it.
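
The gluing step is just lipo; a sketch, with illustrative paths:

  lipo -create ppc/libgmp.dylib ppc64/libgmp.dylib \
       i386/libgmp.dylib x86_64/libgmp.dylib \
       -output libgmp.dylib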


Re: Question on BOOT_CFLAGS vs. CFLAGS

2006-12-15 Thread Mike Stump

On Dec 15, 2006, at 1:56 AM, Andrew Pinski wrote:

For BOOT_CFLAGS and STAGE1_CFLAGS, if we change them to be affected by
CFLAGS, we are going to run into issues where the compiler you are
building with understand an option but the bootstrapping one does not.
An example of this is building GCC with a non GCC compiler.  So how do
we handle that case, we split out STAGE1_CFLAGS and BOOT_CFLAGS.


This proves the necessity of two different controls, namely  
BOOT_CFLAGS and STAGE1_CFLAGS.  I don't propose getting rid of those  
or removing them.  What it doesn't show is why CFLAGS can't always  
influence the build product (as long as BOOT_CFLAGS isn't set, of  
course).  A setter of CFLAGS promises to not use it when it isn't  
applicable; any time those options would not be valid for both the  
bootstrap and the stage[23] compiler, they promise not to use that  
control.


To be concrete, I'd claim, these are the right semantics:

mrs $ make
stage1 builds with -g
stage2 builds with -O2 -g
mrs $ make CFLAGS=-O2
stage1 builds with -O2
stage2 builds with -O2
mrs $ make CFLAGS=-O2 STAGE1_CFLAGS=-O0
stage1 builds with -O0
stage2 builds with -O2
mrs $ make STAGE1_CFLAGS=-O0
stage1 builds with -O0
stage2 builds with -O2 -g
mrs $ make STAGE1_CFLAGS=-O0 BOOT_CFLAGS=-O3
stage1 builds with -O0
stage2 builds with -O3
mrs $ make CFLAGS=-O0 BOOT_CFLAGS=-O3
stage1 builds with -O0
stage2 builds with -O3
mrs $ make BOOT_CFLAGS=-O3
stage1 builds with -g
stage2 builds with -O3

An argument against the proposal would explain the case that you  
specifically think is wrong, and why it is wrong.  If you need to  
test an invocation not shown above to show why it is wrong, you can  
test with:


CFLAGS := default
DEFAULT_STAGE1_CFLAGS := -g
STAGE1_CFLAGS := $(shell if [ "$(CFLAGS)" = default ]; then echo "$(DEFAULT_STAGE1_CFLAGS)"; else echo "$(CFLAGS)"; fi)

DEFAULT_BOOT_CFLAGS := -O2 -g
BOOT_CFLAGS := $(shell if [ "$(CFLAGS)" = default ]; then echo "$(DEFAULT_BOOT_CFLAGS)"; else echo "$(CFLAGS)"; fi)

all: stage1 stage2
stage1:
	@echo stage1 builds with $(STAGE1_CFLAGS)
stage2:
	@echo stage2 builds with $(BOOT_CFLAGS)

The idea is that configure gets to set up DEFAULT_STAGE1_CFLAGS and  
DEFAULT_BOOT_CFLAGS anyway it wants, if the user doesn't change his  
mind, that is what is used in those situations.  Paolo, is there any  
case that you can identify that is wrong?


Re: alignment attribute for stack allocated objects

2006-12-19 Thread Mike Stump

On Dec 19, 2006, at 5:31 PM, Maurizio Vitale wrote:
I'm trying to hunt down the cause of a bug I'm experiencing and it  
all boils down to a possible misunderstanding on my part on the  
semantics of the 'aligned' attribute.



Is the 'aligned' attribute supposed to work for objects allocated  
on the stack (I'm on x86_64, gcc 4.1.1)?


Short answer, no.


Wrong list.  gcc-help is for help with gcc.

Anyway, you also left out some of the relevant details for us to help  
you, like what OS and what alignment.


A slightly longer answer would be: if you want 1, 2, 4 or 8 byte  
alignment, on most OSes, you're probably OK.


If you want more than that, allocate your objects with the OS  
provided aligned memory allocator.


On some OSes, you can also have things 16 byte aligned (darwin,  
vxworks).
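
For larger alignments, on systems that have posix_memalign, something  
like this sketch works (the 64 is just a for-instance):

  #include <stdlib.h>

  /* Return SIZE bytes aligned to a 64 byte boundary, or NULL.  */
  void *
  alloc_aligned (size_t size)
  {
    void *p;
    if (posix_memalign (&p, 64, size) != 0)
      return NULL;
    return p;
  }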


Re: GCC optimizes integer overflow: bug or feature?

2006-12-19 Thread Mike Stump

On Dec 19, 2006, at 6:33 PM, Dave Korn wrote:

On 20 December 2006 02:28, Andrew Pinski wrote:


Paul Brook wrote:

Compiler can optimize it any way it wants,
as long as result is the same as unoptimized one.


We have an option for that. It's called -O0.

Pretty much all optimization will change the behavior of your  
program.


Now that's a bit TOO strong a statement, critical optimizations like
register allocation and instruction scheduling will generally not  
change
the behavior of the program (though the basic decision to put  
something
in a register will, and *surely* no one suggests avoiding this  
critical

optimization).


Actually they will with multi threaded program, since you can have  
a case
where it works and now it is broken because one thread has speed  
up so much

it writes to a variable which had a copy on another thread's stack.


Why isn't that just a buggy program with wilful disregard for the  
use of

correct synchronisation techniques?


It is that, as well as a program that features a result that is  
different from unoptimized code.


Re: A simple program with a large array segfaults

2007-01-04 Thread Mike Stump

On Jan 4, 2007, at 8:49 AM, Gowri Kumar CH wrote:

Is this one of the things which we come to know by experience?


Yes.


Is there a way to find it out from the core/code generated?


No.  You'd have to have someone tell you, or read up on a UNIX  
internals book or find a good C book.


I'm wondering how difficult it would be find this sort of errors in  
a large program.


You would be able to find this type of problem fairly easily in a  
large program, now.  :-)


Re: mixing VEC-tors of string & GTY?

2007-01-04 Thread Mike Stump

On Jan 4, 2007, at 2:26 AM, Basile STARYNKEVITCH wrote:

I cannot figure out how to have a vector of strings in a GTY-ed file



DEF_VEC_ALLOC_P(locstr,heap);

Any clues?


Do a vec of:

  struct bar {
    char *field;
  };

and skip the field, and add the GTY markers.  Should work.
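
Something like this; a sketch from memory of the vec.h/gengtype  
conventions, and the locstr_entry name is mine:

  typedef struct locstr_entry GTY(()) {
    char * GTY((skip)) str;
  } locstr_entry;

  DEF_VEC_O(locstr_entry);
  DEF_VEC_ALLOC_O(locstr_entry,gc);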


libgcc

2007-01-04 Thread Mike Stump

In libgcc/Makefile I find:

  MAKEINFO = @MAKEINFO@

and

  PERL = @PERL@

Seems like they should be always substituted, if they are going to  
always be in there, or, if they are never used, removed.


java building

2007-01-10 Thread Mike Stump

I tried to build java yesterday:

../../../../../../gcc/libjava/classpath/gnu/javax/crypto/jce/GnuCrypto.java: In class 'gnu.javax.crypto.jce.GnuCrypto$1':
../../../../../../gcc/libjava/classpath/gnu/javax/crypto/jce/GnuCrypto.java: In method 'gnu.javax.crypto.jce.GnuCrypto$1.run()':
../../../../../../gcc/libjava/classpath/gnu/javax/crypto/jce/GnuCrypto.java:410: error: cannot find file for class gnu.javax.crypto.jce.key.TripleDESKeyGeneratorImpl


:-(  Is it just me?


Re: Mis-handled ColdFire submission?

2007-01-10 Thread Mike Stump

On Jan 10, 2007, at 1:13 PM, Richard Sandiford wrote:
I just wanted to gauge the general feeling as to whether I'd  
screwed up, and whether I should have submitted the patches in a  
different way.


I don't see a trivial way that is strictly better.  The problem is  
that some folks don't want the huge patch and some folks don't like  
the spray of 60.  Hard to please both at once.  One strategy that  
might be better would be to do them up on a development branch and  
submit one patch at a time as you develop them and then when all is  
said and done and all reviewed and approved, just merge it in.


I'm used to this style from the Ada folks, and I've managed to find  
the 1 or 2 patches I was interested in taking a closer look at, so I  
don't think the spray of 60 is all that unreasonable.  I do think  
trying to avoid patch spray is a noble goal...  though, I do think it  
is easier to review as 60 as opposed to 1 compressed tar file.


I too look forward to what others might say on the matter, as I'm  
going to be contributing lots of Objective-C patches at some point in  
the near future, I hope.  I'll probably just do it as a few, larger  
combo patches (1-5) to ease the pain of it all for me.  :-(  People  
can take this opportunity to complain in advance if they wish.


Re: Tricky(?) aliasing question.

2007-01-11 Thread Mike Stump

On Jan 11, 2007, at 6:30 AM, Sergei Organov wrote:

So "h1.f" is not an object? If it is not, it brings us back to the
validity of my boo() function from the initial post, for which 2  
persons

(3 including me) thought it's OK:


Would be nice for you to raise the issue directly with the C  
standards committee...  Would be nice to see them improve the wording  
in the tricky corners.


Re: RFC: Wextra digest (fixing PR7651)

2007-01-11 Thread Mike Stump

On Jan 11, 2007, at 3:48 PM, Ian Lance Taylor wrote:

* Taking the address of a variable which has been declared register.


Hmmm.  In the C frontend these are pedwarns.  But the C++ frontend
doesn't have pedwarns.  And maybe C++ doesn't require these warnings
anyhow, I don't know.


Just  FYI... In C++ there is no semantic content to register, it is  
rather like auto.  C is different.


Re: Mis-handled ColdFire submission?

2007-01-12 Thread Mike Stump

On Jan 12, 2007, at 4:35 AM, Nathan Sidwell wrote:
The major chunk of this reworking has been blocked from going into  
mainline because GCC was in stages 2 & 3 for much of this year.


Yeah, spending large amounts of time in stage2 and 3 does have  
disadvantages.  I'd rather have people that have regressions spend a  
year at a time in stage2-3...  :-(  Maybe we should have trunk be  
stage1, and then snap over to a stage2 branch when the stage1  
compiler is ready to begin stage2, and likewise, when the stage2  
compiler is ready to go to stage3, it then snaps to the release  
branch.  This gives a place for the preintegration of stage1 level  
work when ever that work is ready, without having to delay it for  
months at a time.


Another possibility is to use the next unreleased major release  
branch as stage 2, and the current major release branch as  
essentially stage3, with trunk remaining stage1.


Then the work doesn't need to wait for the right stage, rather, the  
work is Oked for the stage it is suitable for.


I've not spent much time thinking about this, this is just off the  
cuff, but, I thought I would post it as 9 months in  stage2-3 seems  
like a long time to me.


Re: debugging capabilities on AIX ?

2007-01-12 Thread Mike Stump

On Jan 12, 2007, at 12:56 AM, Olivier Hainque wrote:

Working on GCC 4 based GNAT port for AIX 5.[23], our testsuite to
evaluate GDB (6.4) debugging capabilities currently yields very
unpleasant results compared to what we obtain with a GCC 3.4 based
compiler (80+ extra failures out of 1800+ tests).


Could you please let us know if this is -O0 or not.  For -O0, I'd  
like to think that the debugging experience shouldn't be worse.  If  
it is at -O0, I'd encourage bug reports for all the problems.


Re: bug management: WAITING bugs that have timed out

2007-01-12 Thread Mike Stump

On Jan 11, 2007, at 10:47 PM, Joe Buck wrote:

The description of WORKSFORME sounds closest: we don't know how to
reproduce the bug.  Should that be used?


No, not generally.  This should only be used if someone says, I  
compile foo on platform bar and it didn't build and then someone  
tries building foo on a bar and it does build for them.  If the  
original didn't say which version of foo or bar or gcc they tested  
with, the person that gets a build out of it should report which  
version of foo and bar and gcc they used.  Another case, is if  
someone says a virtual call doesn't work with no testcase, and if the  
tester tests the canonical virtual call test case and it works, then  
WORKSFORME seems reasonable.



INVALID (we don't know that),


A valid bug is one that is reproducible and shows a problem with  
conformance to a language standard, or a desired direction of the  
compiler.  Everything that fails to meet that standard is invalid by  
this definition.


Now, if the bug people wanted to add an insufficient state, that  
would be better.  Around here, we use verify/insufficient  
information.  The user can then respond, or, eventually if they don't  
it then goes to closed/insufficient information.


Re: Running GCC tests on installed compiler

2007-01-12 Thread Mike Stump

On Jan 12, 2007, at 3:55 PM, Steve Ellcey wrote:

Can someone one with some deja-knowledge help me figure out how to run
the GCC tests on an installed compiler and without having to do a GCC
build?


You must be new around here:

http://gcc.gnu.org/ml/gcc-announce/1997-1998/msg0.html

:-)  Which is the I feel lucky google("site:gcc.gnu.org how to run  
installed GCC_UNDER_TEST") result.
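
In other words, roughly this; a sketch, paths illustrative, and the  
site.exp setup from that old mail still applies:

  cd $objdir/gcc/testsuite   # or any directory with a usable site.exp
  runtest --tool gcc GCC_UNDER_TEST=/usr/local/bin/gcc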


Re: fat binaries for FSF gcc on Darwin?

2007-01-14 Thread Mike Stump

On Jan 13, 2007, at 6:13 AM, Jack Howarth wrote:
Do the Darwin gcc developers ever intend to replicate the use of  
fat binaries for FSF gcc (in gcc 4.3 perhaps) or will we always use  
separate subdirectories for 32-bit and 64-bit shared libraries?


I'd be curious to hear what people might say, pro or con about  
this...  I'm not wedded to either approach.  I do know if mpfr and  
gmp built fat and installed that way, that I wouldn't have to  
configure with the, please select the 32 bit abi option.  :-(


Re: char alignment on ARM

2007-01-17 Thread Mike Stump

On Jan 17, 2007, at 5:23 AM, Inder wrote:

void make(char* a) { *(unsigned long*)a = 0x12345678; }


the starting address of the char array is now unaligned, and it is  
accessed by the instruction

  strb r3, [fp, #-26]

which gives a very wrong result



Is the compiler doing a right thing or is it a bug?


You asked for char alignment, but your program requires long  
alignment; your program is now, and always has been, buggy.  The  
manual covers how to fix this, if you want to use __attribute__;  
otherwise, you can use a union to force any alignment you want.  So,  
in your case, yes, the compiler is doing the right thing.
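
The union trick looks like this (a sketch, reusing the make() from  
your example):

  /* The unsigned long member forces the union, and thus bytes, to
     long alignment, so the cast inside make() is then safe.  */
  union aligned_buf {
    unsigned long align;
    char bytes[32];
  };

  void
  caller (void)
  {
    union aligned_buf buf;
    make (buf.bytes);
  }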


If you were on a processor that handled misaligned data slowly, and  
you saw a general performance drop because of this, I'd encourage you  
to file a bug report as it might be a bug, if the results you see  
apply generally.


You're asking about behavioral differences in compilers that are  
really old.  You increase the odds that you can have these types of  
questions answered here, if you track and test mainline and ask the  
week the behavior changes, if it isn't obvious from glancing at the  
list for the past week.  :-)


Re: Miscompilation of remainder expressions

2007-01-17 Thread Mike Stump

On Jan 17, 2007, at 4:44 PM, Gabriel Dos Reis wrote:

C++ forces compilers to reveal their semantics for built-in types
through numeric_limits<>.  Every time you change the behaviour,
you also implicilty break an ABI.


No, the ABI does not document that the answer never changes between  
translation units, only that for this translation unit, what the  
answer happens to be.  If it said what you thought it said, you'd be  
able to quote it.  If you think I'm wrong, I look forward to the quote.


Consider the ABI document that says that the size of int is 4.  One  
cannot meaningfully use a compiler flag to change the size of an int  
to something other than 4 because then, that flag breaks the ABI.  An  
ABI document _can_ state that the answer to the question must be true  
for float, but, absent it stating that it is true, there isn't a  
document that says that it is true.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Mike Stump

On Jan 17, 2007, at 6:46 PM, Gabriel Dos Reis wrote:

(1) the ABI I was talking about is that of libstdc++


(2) numeric_limits<> cannot change from translation unit to  
translation
unit, within the same program otherwise you break the ODR.  I  
guess

we all agree on that.


Doh!  Did I ever say that I hate abi issues, this truly is plainly  
obvious...  and yet I still missed it.  Thanks.


Anyway, that would just mean that any abi document that attempts a  
C++ ABI must specify these answers.  The issue is that if one  
implemented a new C++ compiler, attempting to match the ABI of the  
previous one, I think it'd be bad form to cheat off the actual  
implementation for the answer, rather the document should specify the  
answer.


The issue reminds me of Sun's attempt to do a C++ abi that didn't talk  
about templates or EH, nice, but not as useful as one that does.


gcc doesn't build on ppc

2007-01-18 Thread Mike Stump
gcc doesn't build on powerpc-apple-darwin9:

/Volumes/mrs3/net/gcc-darwinO2/./prev-gcc/xgcc -B/Volumes/mrs3/net/gcc-darwinO2/./prev-gcc/ \
  -B/Volumes/mrs3/Packages/gcc-061208/powerpc-apple-darwin9/bin/ -c -g -O2 \
  -mdynamic-no-pic -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes \
  -Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros \
  -Wno-overlength-strings -Wold-style-definition -Wmissing-format-attribute \
  -Werror -fno-common -DHAVE_CONFIG_H -I. -I. -I../../gcc/gcc -I../../gcc/gcc/. \
  -I../../gcc/gcc/../include -I./../intl -I../../gcc/gcc/../libcpp/include \
  -I../../gcc/gcc/../libdecnumber -I../libdecnumber \
  ../../gcc/gcc/config/rs6000/rs6000.c -o rs6000.o
cc1: warnings being treated as errors
../../gcc/gcc/config/rs6000/rs6000.c: In function ‘rs6000_emit_vector_compare’:
../../gcc/gcc/config/rs6000/rs6000.c:11904: warning: ISO C90 forbids mixed declarations and code
make[3]: *** [rs6000.o] Error 1
make[3]: *** Waiting for unfinished jobs
rm cpp.pod gcc.pod
make[2]: *** [all-stage2-gcc] Error 2
make[1]: *** [stage2-bubble] Error 2
make: *** [all] Error 2

:-(


Re: Getting a tree node for a field of a variable

2007-01-19 Thread Mike Stump

On Jan 19, 2007, at 3:42 AM, Ferad Zyulkyarov wrote:

Is it possible to write a short example of how one could refer to the
tree of a variable's field?


Sure, just compile up the C code for what you want to do, run the  
debugger, watch what it builds and how it builds it.  If you want to  
know what it looks like in the end, just print the built tree.   
However, if you have a complex language like C++, accessing a field  
can be, well, kinda different.  You cannot employ the same techniques  
that you learn from C to C++.
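
For plain C, what you will see the front end build is a  
COMPONENT_REF, roughly like this; a sketch, where object and field  
are whatever DECL trees you already have in hand:

  /* Refer to object.field; the third operand is only for unusual
     offsets, NULL_TREE is the common case.  */
  tree ref = build3 (COMPONENT_REF, TREE_TYPE (field),
                     object, field, NULL_TREE);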





innovative new build failure

2007-01-19 Thread Mike Stump

Here is an innovative new build failure, as seen on i686-apple-darwin9:

../../gcc/gcc/expmed.c:4179: warning: signed and unsigned type in  
conditional expression

make[3]: *** [expmed.o] Error 1
make[2]: *** [all-stage2-gcc] Error 2


Re: innovative new build failure

2007-01-19 Thread Mike Stump

On Jan 19, 2007, at 8:46 PM, Ian Lance Taylor wrote:

Yikes, my fault.  I wonder why it didn't fail for me?


Trivially, you've not updated your tree...  See, you did an rm -rf of  
the build tree after -Werror was broken on Jan 4th and built, but  
you didn't update to pick up the fix for that breakage in r120947,  
and yet, you checked in r120995.  r120947 < r120995.  :-)


Top-level:

2007-01-18  Mike Stump  <[EMAIL PROTECTED]>

* configure.in: Re-enable -Werror for gcc builds.

fixed it.


Re: raising minimum version of Flex

2007-01-22 Thread Mike Stump

On Jan 21, 2007, at 11:48 PM, Ian Lance Taylor wrote:

That doesn't sound right.  It see flex being run every time I create a
new object directory, even though I don't modify the flex input files.


Sounds like a bug.  I did a quick check with a contrib/gcc_update  
--touch and a c,treelang build and sure enough, flex is run 5 times, so,  
unless this bug is fixed, everyone that has an older flex will get  
build errors.  :-(  I think this bug should be fixed before we update  
the minimum version requirements for flex.  If it was, I'd support  
updating the min version required.


Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Mike Stump

On Jan 23, 2007, at 11:03 PM, Marcin Dalecki wrote:
That's just about a quarter million lines of code to process and you  
think the infrastructure around it isn't crap on the order of 100?


Standard answer, trivially, it is as good as you want it to be.  If  
you wanted it to be better, you'd contribute fixes to make it better.


Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Mike Stump

On Jan 24, 2007, at 11:08 AM, Marcin Dalecki wrote:
This argument fails (trivially) on the assumption that there is an  
incremental way ("fixes") to improve it in time not exceeding the  
expected remaining life span of a developer.


I welcome your existence proof for just one piece that can't be fixed  
incrementally.


Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Mike Stump

On Jan 24, 2007, at 1:12 PM, Marcin Dalecki wrote:
One thing that would certainly help as a foundation for possible  
further improvement in performance in this area would be to have  
xgcc contain all the front ends directly linked into it.


That does seem debatable.

It could be a starting point to help avoiding quite a lot of  
overhead needed to iterate over command line options for example.


Odd.  You think that time is being spent iterating over the command  
line options?  Do you have any data points to back this up?  I suspect  
we're spending less than 0.01% of the compilation time in duplicative  
argument processing.  After a quick check, yup, 0ms out of 1158 ms are  
spent in option processing.  11 ms in xgcc, 1 ms in all of xgcc, and  
10 ms in system calls.  So, I think due to measurement error, I can  
say that no more than 0.17% of the time is spent in duplicative option  
processing.


Re: transfre from c to MIPS

2007-01-24 Thread Mike Stump

On Jan 24, 2007, at 2:19 PM, meltem wrote:

I'm learning MIPS in a course, so I want to practice with some MIPS
code: I will write my code in C and translate it into MIPS assembly,  
and then I
will check it against my hand-written assembly code.  I don't have Linux  
on my machine, but I have cygwin and am using Windows.

Can anyone help me ???


gcc-help is the place to get help with gcc, see the website for the  
address.


Re: reading binarys

2007-01-25 Thread Mike Stump

On Jan 25, 2007, at 2:11 PM, Jason Erickson wrote:

I'm working on a project where every so often one of our games comes
back and we pull the RAM off the game for saving, and sometimes for
analysis.  Currently the only variables in RAM that we can physically
look at are the static members.  The information that we would love to
get to is the heap memory, and to be able to know what dynamically
allocated structure that heap memory belongs to.


Heap objects can be found by looking at stack and global variables.


What I need to know is: is there some way to read the binary with
some program to figure out the order in which everything in memory is
being allocated, so that I can write a program to read the memory dump
and figure out which memory locations belong to which pointer
variables.  Our code is written in C with literally tons of pointers.
It runs on the i960 processor (yeah I know... so old... but it works,
and it costs a lot of money to change once it's been approved).

Any ideas would be appreciated on how to read the binary to figure
out the order in which variables get loaded onto the heap.


First, wrong list.  gcc-help is closer, but that is for compiler  
help.  Your question really has little to do with the compiler, but  
rather, debugging.  You could create a gdb remote stub for your game  
and then just fire up gdb with the `core file'.  Assumes that you've  
enhanced gdb to read your core files.  You can save off the -g built  
game and point the debugger at that code.  You then debug any data  
structures you want, using gdb.  If you just want a memory map, see  
the ld documentation.


If your question is how do I write a debugger: please, don't do  
that, just reuse gdb.  It supports remote debugging and i960s just fine.

