Re: identifying c++ aliasing violations

2005-12-05 Thread Giovanni Bajo
Jack Howarth <[EMAIL PROTECTED]> wrote:

> What exactly is the implication of having a hundred or more of these in
> an application being built with gcc/g++ 4.x at -O3? Does it only risk
> random crashes in the generated code, or does it also impact the quality
> of the generated code in terms of execution speed?


The main problem is wrong-code generation. Assuming the warning is right and
does not mark false positives, you should have those violations fixed. I don't
think the quality of the generated code would improve with this change, though.
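
(Editorial sketch, not code from the original application: the classic
pattern that triggers the warning and that -O2/-O3 may legally miscompile.)

    #include <stdio.h>

    int main(void)
    {
        int i = 1;
        /* Reading int storage through a float * violates the aliasing
           rules; gcc reports "dereferencing type-punned pointer will
           break strict-aliasing rules" here, and with -fstrict-aliasing
           the optimizer may reorder or drop accesses because it assumes
           an int and a float never occupy the same storage.  */
        float f = *(float *) &i;
        printf("%f\n", f);
        return 0;
    }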

However, it's pretty strange that C++ code generation is worse with GCC 4: I
saw many C++ programs which actually got much faster due to higher level
optimizations (such as SRA). You should really try to identify inner loops
which might have been slowed down and submit those as bug reports in our
Bugzilla.

Giovanni Bajo



Re: gcc-4.0-20051124-4.0-20051201.diff.bz2 is TERRIBLE!!!

2005-12-05 Thread Paolo Bonzini

J.C. wrote:

*** gcc-4.0-20051124/gcc/config/i386/i386.c Mon Nov  7 18:55:03 2005
--- gcc-4.0-20051201/gcc/config/i386/i386.c Thu Dec  1 01:53:01 2005

! #if defined(HAVE_GAS_HIDDEN) && defined(SUPPORTS_ONE_ONLY)

! #if defined(HAVE_GAS_HIDDEN) && (SUPPORTS_ONE_ONLY - 0)

Why did he remove the 'defined' and put the unreadable '0'?


Because it is possible that SUPPORTS_ONE_ONLY is defined to 0, in which 
case the older code was wrong.
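
A minimal illustrative sketch (the value 0 here is hypothetical, just to
show the difference between the two tests):

    #define SUPPORTS_ONE_ONLY 0   /* a target that defines it to 0 */

    #if defined(SUPPORTS_ONE_ONLY)
    /* taken: the macro is defined, even though its value is 0 */
    #endif

    #if (SUPPORTS_ONE_ONLY - 0)
    /* not taken: the value is 0.  The "- 0" also keeps the expression
       well-formed if the macro should ever be defined as empty.  */
    #endif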


Paolo


Re: identifying c++ aliasing violations

2005-12-05 Thread Jack Howarth
Giovanni,
I'll see what I can do in terms of profiling the xplor-nih code
with Shark on MacOS X. However, in the near term, I would strongly
urge the gcc developers to backport the changes necessary to have
-Wstrict-aliasing issue warnings for c++ in gcc 4.1. I rebuilt
xplor-nih under gcc trunk (4.2) last night and the python and tcl
interface modules (built using SWIG 1.3.22) are filled with literally
hundreds of warnings that are not shown when the swig c++ code is
compiled with gcc 4.0/4.1. It seems a shame to not provide developers
upstream with that sort of information until gcc 4.2 is released.
 Jack


Re: RFD: C pointer conversions that differ in unsignedness

2005-12-05 Thread schopper-gcc
Shouldn't the compiler behave in the following way, concerning the signedness
of pointer arguments?

  void f (long *l, signed long *sl, unsigned long *ul);

  // - Make NO assumptions about the signedness of *l and accept long,
  //   slong and ulong without a warning
  // - treat *sl as signed long and produce a warning if given a long
  //   or unsigned long
  // - treat *ul as unsigned long and produce a warning if given a
  //   long or signed long

  long *l;
  signed long *sl;
  unsigned long *ul;

  f ( l, sl, ul);   // No Warning

  f (sl, sl, ul);   // No Warning
  f (ul, sl, ul);   // No Warning
  f ( l,  l, ul);   // Warning on second argument
  f ( l, ul, ul);   // Warning on second argument
  f ( l, sl,  l);   // Warning on third argument
  f ( l, sl, sl);   // Warning on third argument

Best regards

RĂ¼diger

> Code such as:
> 
> void f(long *, unsigned long *);
> 
> int main()
> {
> long *scp;
> unsigned long *ucp;
> 
> ucp = scp;
> scp = ucp;
> f(ucp, scp);
> }
> 
> is in violation of the C standard.  We silently accept such code, 
> unless -pedantic (or better) is given.  I didn't happen to see this 
> listed in extensions.texi.
> 
> This was put in:
> 
> revision 1.91
> date: 1993/05/27 04:30:54;  author: rms;  state: Exp;  lines: +6 -7
> (convert_for_assignment): When allowing mixing of
> signed and unsigned pointers, compare unsigned types not type sizes.
> 
> in c-typeck.c, and before that (going back to 1.1), we always gave at 
> least a warning.  In C++, we give the warning.  Other compilers give 
> hard errors for these things.
> 
> I propose to tighten this up to a warning, unconditionally.  How do 
> others feel?  -Wall?  -std=?  Over my dead body?
> 
> Apple Radar 2535328.
> 
> 

> 2003-10-06  Mike Stump  <[EMAIL PROTECTED]>
> 
>   * c-typeck.c (convert_for_assignment): Tighten up pointer conversions
>   that differ in signedness.
> 


Re: GMP on IA64-HPUX

2005-12-05 Thread Steve Ellcey
> > >   So, in short, my questions are: is gmp-4.1.4 supposed to work on
> > >   ia64-hpux?
> > >
> > > No, it is not.  It might be possible to get either the LP64 or
> > > the ILP32 ABI to work, but even that requires the workaround you
> > > mention.  Don't expect any HP compiler to compile GMP correctly
> > > though, unless you switch off optimization.
> > >
> 
> If it's really compiler problems, this is one more reason for pulling
> gmp to the toplevel gcc, so it can be built with a sane compiler.
> 
> Richard.

FYI:  What I do to compile gmp on IA64 HP-UX is to configure gmp with
'--host=none --target=none --build=none'.  This avoids all the target
specific code.  I am sure the performance stinks this way but since it
is used by the compiler and not in the run-time I haven't found it to be
a problem.  Of course I don't compile any big fortran programs either.

Steve Ellcey
[EMAIL PROTECTED]


Re: GMP on IA64-HPUX

2005-12-05 Thread Steve Kargl
On Mon, Dec 05, 2005 at 07:57:43AM -0800, Steve Ellcey wrote:
> > > >   So, in short, my questions are: is gmp-4.1.4 supposed to work on
> > > >   ia64-hpux?
> > > >
> > > > No, it is not.  It might be possible to get either the LP64 or
> > > > the ILP32 ABI to work, but even that requires the workaround you
> > > > mention.  Don't expect any HP compiler to compile GMP correctly
> > > > though, unless you switch off optimization.
> > > >
> > 
> > If it's really compiler problems, this is one more reason for pulling
> > gmp to the toplevel gcc, so it can be built with a sane compiler.
> > 
> > Richard.
> 
> FYI:  What I do to compile gmp on IA64 HP-UX is to configure gmp with
> '--host=none --target=none --build=none'.  This avoids all the target
> specific code.  I am sure the performance stinks this way but since it
> is used by the compiler and not in the run-time I haven't found it to be
> a problem.  Of course I don't compile any big fortran programs either.

GMP/MPFR are used only in the frontend (primarily for constant folding
and Fortran array indices).  The types that use GMP/MPFR are converted
in the trans-*.c files to types appropriate for the underlying hardware.
We need at most 128 bits of precision, and I believe the hardware-specific
GMP code is most useful for thousands of bits of precision.
In other words, the performance impact is probably not that substantial.

-- 
Steve


Re: RFD: C pointer conversions that differ in unsignedness

2005-12-05 Thread Joe Buck
On Mon, Dec 05, 2005 at 03:27:56PM +0100, [EMAIL PROTECTED] wrote:
> Shouldn't the compiler behave in the following way, concerning the signedness
> of pointer arguments?
> 
>   void f (long *l, signed long *sl, unsigned long *ul);

"long" and "signed long" are the same type.  You are confused about how
C and C++ are defined.  Same with "int" and "signed int".  Only for "char"
are things different; it is implementation-defined (can differ from
platform to platform) whether "char" is signed or not.
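
A small illustrative sketch of the char case (g is a hypothetical
function, not code from this thread):

    void g(char *c, signed char *sc, unsigned char *uc);

    void caller(void)
    {
        char ch = 0;
        /* char, signed char and unsigned char are three distinct types,
           even though plain char shares a representation with one of
           the other two.  g++ rejects the second and third arguments
           outright; whether the C front end should warn here is exactly
           what this thread is discussing.  */
        g(&ch, &ch, &ch);
    }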






Re: RFD: C pointer conversions that differ in unsignedness

2005-12-05 Thread schopper-gcc
Oh right, what I really meant was 'char' instead of 'long'.
In fact I just took the type from the referenced article. Sorry for that.

So am I right that the compiler should distinguish between char, signed char
and unsigned char in the proposed way?


> 
> "long" and "signed long" are the same type.  You are confused about how
> C and C++ are defined.  Same with "int" and "signed int".  Only for "char"
> are things different; it is implementation-defined (can differ from
> platform to platform) whether "char" is signed or not.
> 


more strict-aliasing questions

2005-12-05 Thread Jack Howarth
 Is there some place where all the existing forms of strict-aliasing
warnings are documented? So far I have only seen the error...

dereferencing type-punned pointer will break strict-aliasing rules

when building c++ code with gcc trunk (4.2). I am wondering how many other
types of warnings can -Wstrict-aliasing issue besides this one for gcc and
of those how many of those are currently checked in the g++ compiler?
 My second question is how universal are the strict-aliasing rules used
by gcc? In other words, is it safe to say that by correcting source code
upstream to not violate any of the strict-aliasing rules in gcc trunk,
such code might achieve stability benefits as well on other third-party
compilers? Or are these strict-aliasing rules pretty much gcc-specific?
Thanks in advance for any clarifications.
  Jack


Re: more strict-aliasing questions

2005-12-05 Thread Andrew Haley
Jack Howarth writes:

 >  My second question is how universal are the strict-aliasing
 > rules used by gcc?

They are applicable to every compiler that implements ISO C++.  In
other words, code that violates the aliasing constraints is not valid C++.

 > In other words, is it safe to say that by correcting source code
 > upstream to not violate any of the strict-aliasing rules in gcc
 > trunk that such code might achieve stability benefits as well on
 > other third party compilers?

Yes.

Andrew.


Re: identifying c++ aliasing violations

2005-12-05 Thread Dale Johannesen

On Dec 5, 2005, at 12:03 AM, Giovanni Bajo wrote:

Jack Howarth <[EMAIL PROTECTED]> wrote:

What exactly is the implication of having a hundred or more of these in
an application being built with gcc/g++ 4.x at -O3? Does it only risk
random crashes in the generated code, or does it also impact the quality
of the generated code in terms of execution speed?


The main problem is wrong-code generation. Assuming the warning is right and
does not mark false positives, you should have those violations fixed. I don't
think the quality of the generated code would improve with this change, though.

However, it's pretty strange that C++ code generation is worse with GCC 4: I
saw many C++ programs which actually got much faster due to higher level
optimizations (such as SRA). You should really try to identify inner loops
which might have been slowed down and submit those as bug reports in our
Bugzilla.


Could also be inlining differences, and you might check out whether
-fno-threadsafe-statics is applicable; that can make a big difference.
Bottom line, you're going to have to do some analysis to figure out
why it got slower.  (It sounds like you're on a MacOSX system, in
which case Shark is a good tool for this.)



Re: LTO, LLVM, etc.

2005-12-05 Thread Ian Lance Taylor
Mark Mitchell <[EMAIL PROTECTED]> writes:

> There is one advantage I see in the LTO design over LLVM's design.  In
> particular, the LTO proposal envisions a file format that is roughly at
> the level of GIMPLE.  Such a file format could easily be extended to be
> at the source-level version of Tree used in the front-ends, so that
> object files could contain two extra sections: one for LTO and one for
> source-level information.  The latter section could be used for things
> like C++ "export" -- but, more importantly, for other tools that need
> source-level information, like IDEs, indexers, checkers, etc.  (All
> tools that presently use the EDG front end would be candidate clients
> for this interface.)

It seems to me that this is clearly useful anyhow.  And it seems to me
that whether or not we use LTO, LLVM, or neither, we will still want
something along these lines.

So if anybody is inclined to work on this, they could start now.
Anything that writes out our high-level tree representation (GENERIC
plus language-specific codes) is going to work straightforwardly for
our low-level tree representation (GIMPLE).  And we are going to want
to be able to write out the high-level representation no matter what.

In short, while this is an important issue, I don't see it as strongly
favoring either side.  What it means, essentially, is that LTO is not
quite as much work as it might otherwise seem to be, because we are
going to do some of the work anyhow.  So when considering how much
work has to be done for LTO compared to how much work has to be done
for LLVM, we should take that into account.

This is more or less what you said, of course, but I think with a
different spin.

> If we do switch to LLVM, it's not going to happen before at least 4.3,
> and, if I had to guess, not before 4.4.

Allow me to be the first person to say that if we switch to LLVM, the
first release which incorporates it as the default compilation path
should be called 5.0.

Ian


Re: new gcc/g++ 4.1.0 flags?

2005-12-05 Thread Ian Lance Taylor
[EMAIL PROTECTED] (Jack Howarth) writes:

>  Where exactly are the compiler flags new to gcc 4.1.0 described.

http://gcc.gnu.org/gcc-4.1/changes.html

Ian


Re: LTO, LLVM, etc.

2005-12-05 Thread Steven Bosscher
On Saturday 03 December 2005 20:43, Mark Mitchell wrote:
> There is one advantage I see in the LTO design over LLVM's design.  In
> particular, the LTO proposal envisions a file format that is roughly at
> the level of GIMPLE.  Such a file format could easily be extended to be
> at the source-level version of Tree used in the front-ends, so that
> object files could contain two extra sections: one for LTO and one for
> source-level information.  The latter section could be used for things
> like C++ "export" -- but, more importantly, for other tools that need
> source-level information, like IDEs, indexers, checkers, etc.

I actually see this as a disadvantage.

IMVHO dumping for "export" and front-end tools and for the optimizers
should not be coupled like this.  Iff we decide to dump trees, then I
would hope the dumper would dump GIMPLE only, not the full front end
and middle-end tree representation.

Sharing a tree dumper between the front ends and the middle-end would
only make it more difficult again to move to sane data structures for
the middle end and to cleaner data structures for the front ends.

Gr.
Steven




Re: LTO, LLVM, etc.

2005-12-05 Thread Gabriel Dos Reis
Steven Bosscher <[EMAIL PROTECTED]> writes:

| On Saturday 03 December 2005 20:43, Mark Mitchell wrote:
| > There is one advantage I see in the LTO design over LLVM's design.  In
| > particular, the LTO proposal envisions a file format that is roughly at
| > the level of GIMPLE.  Such a file format could easily be extended to be
| > at the source-level version of Tree used in the front-ends, so that
| > object files could contain two extra sections: one for LTO and one for
| > source-level information.  The latter section could be used for things
| > like C++ "export" -- but, more importantly, for other tools that need
| > source-level information, like IDEs, indexers, checkers, etc.
| 
| I actually see this as a disadvantage.
| 
| IMVHO dumping for "export" and front-end tools and for the optimizers
| should not be coupled like this.

I'm wondering what the reasons are.

|  Iff we decide to dump trees, then I
| would hope the dumper would dump GIMPLE only, not the full front end
| and middle-end tree representation.
| 
| Sharing a tree dumper between the front ends and the middle-end would
| only make it more difficult again to move to sane data structures for
| the middle end and to cleaner data structures for the front ends.

Why?

-- Gaby


Accessing const object during constructor without this pointer

2005-12-05 Thread Pankaj Gupta

Hi

I have a question. Consider this code:

#include <iostream>
#include <cstdlib>

void global_init();

class A {
public:
  int i;
  A() : i(10) {
global_init();
  }
};

const A obj;

void global_init() {
  std::cout << "obj.i = " << obj.i << std::endl;
}

int main() {
  return EXIT_SUCCESS;
}



Here, global_init() is accessing a subobject of a const object while it is 
being constructed. I think the standard says that, if the access is not 
being made through the constructor's "this" (directly or indirectly), the 
value of the const object or any of its subobjects is unspecified.


But when I compile using g++, I don't get any warnings about it.

Any idea whether this should give a warning or not?


Best Regards
Pankaj



--

Pankaj Gupta

Infrastructure Team
Tower Research Capital

[EMAIL PROTECTED]  [Work]
[EMAIL PROTECTED]   [Personal]





Re: LTO, LLVM, etc.

2005-12-05 Thread Chris Lattner

On Dec 5, 2005, at 11:48 AM, Steven Bosscher wrote:

On Saturday 03 December 2005 20:43, Mark Mitchell wrote:
There is one advantage I see in the LTO design over LLVM's design.  In
particular, the LTO proposal envisions a file format that is roughly at
the level of GIMPLE.  Such a file format could easily be extended to be
at the source-level version of Tree used in the front-ends, so that
object files could contain two extra sections: one for LTO and one for
source-level information.  The latter section could be used for things
like C++ "export" -- but, more importantly, for other tools that need
source-level information, like IDEs, indexers, checkers, etc.



I actually see this as a disadvantage.

IMVHO dumping for "export" and front-end tools and for the optimizers
should not be coupled like this.  Iff we decide to dump trees, then I
would hope the dumper would dump GIMPLE only, not the full front end
and middle-end tree representation.

Sharing a tree dumper between the front ends and the middle-end would
only make it more difficult again to move to sane data structures for
the middle end and to cleaner data structures for the front ends.


I totally agree with Steven on this one.  It is *good* for the  
representation hosting optimization to be different from the  
representation you use to represent a program at source level.  The  
two have very different goals and uses, and trying to merge them into  
one representation will give you a representation that isn't very  
good for either use.


In particular, the optimization representation really does want  
something in "three-address" form.  The current tree-ssa  
implementation emulates this (very inefficiently) using trees, but at  
a significant performance and memory cost.  The representation you  
want for source-level information almost certainly *must* be a tree.


I think it is very dangerous to try to artificially tie link-time  
(and other) optimization together with source-level clients.  The  
costs are great and difficult to recover from (e.g. as difficult as  
it is to move the current tree-ssa work to a lighter-weight  
representation) once the path has been started.


That said, having a good representation for source-level exporting is  
clearly useful.  To be perfectly clear, I am not against a source- 
level form, I am just saying that it should be *different* than the  
one used for optimization.


-Chris


Re: c++ speed 3.3/4.0/4.1

2005-12-05 Thread Mike Stump

On Dec 4, 2005, at 3:09 PM, Jack Howarth wrote:
I have noticed that there was a significant speed regression in the  
c++ code generation between gcc 3.3 and gcc 4.0.x.


Gotta wonder if changing the inlining parameters would help you.


Problem with bugzilla account

2005-12-05 Thread Eric Weddington

Hello all,

Sorry if this is off-topic; there's not a mailing list described for 
this kind of issue.


I have a problem with making an email change for my bugzilla account. 
The old email address no longer exists, so bugzilla won't allow me to 
update my account to the new email address (because of the confirmation 
process). Is there someone who could help me with this offline?


Thanks for your time.
Eric Weddington


Re: c++ speed 3.3/4.0/4.1

2005-12-05 Thread Jack Howarth
Mike,
Do you mean using -fno-threadsafe-statics or do you have any other
inlining changes in mind?
  Jack


Re: GMP on IA64-HPUX

2005-12-05 Thread John David Anglin
> On Mon, Dec 05, 2005 at 07:57:43AM -0800, Steve Ellcey wrote:
> > > > >   So, in short, my questions are: is gmp-4.1.4 supposed to work on
> > > > >   ia64-hpux?
> > > > >
> > > > > No, it is not.  It might be possible to get either the LP64 or
> > > > > the ILP32 ABI to work, but even that requires the workaround you
> > > > > mention.  Don't expect any HP compiler to compile GMP correctly
> > > > > though, unless you switch off optimization.
> > > > >
> > > 
> > > If it's really compiler problems, this is one more reason for pulling
> > > gmp to the toplevel gcc, so it can be built with a sane compiler.
> > > 
> > > Richard.
> > 
> > FYI:  What I do to compile gmp on IA64 HP-UX is to configure gmp with
> > '--host=none --target=none --build=none'.  This avoids all the target
> > specific code.  I am sure the performance stinks this way but since it
> > is used by the compiler and not in the run-time I haven't found it to be
> > a problem.  Of course I don't compile any big fortran programs either.
> 
> GMP/MPFR are used only in frontend (primarily for constant folding
> and Fortran array indices).  The types that use GMP/MPFR are converted
> in the trans-*.c files to types appropriate for the underlying hardware.
> We at most need 128-bits of precision, and I believe the hardware
> specific GMP code is most useful for thousands of bits of precision.
> In other words, the performance impact is probably not that substantial.

Here's my 2 cents worth:

1) Although GMP/MPFR is buildable under hppa-hpux, I still have to apply
   a patch by hand.  The build/host/target configure options are inconsistent
   with GCC and other GNU packages.  This makes GMP/MPFR somewhat tricky
   to build.

2) The code optimisations are targeted more toward building with HP cc
   than GCC.

3) We have a different perspective on the PA 2.0 runtime.  GMP makes
   use of PA 2.0 64-bit register instructions in the 32-bit runtime.
   However, the upper portion of the 64-bit call-saved registers is
   not saved in the 32-bit runtime, and the EH exception support that
   we currently have doesn't restore the 64-bit context.  Thus, the
   use of 64-bit registers is restricted to code that doesn't make
   calls and can't generate exceptions.

4) We have 128-bit precision floating-point available from the HP C library.

Thus, if possible, I would prefer not to have to use GMP under hppa-hpux.

Dave
-- 
J. David Anglin  [EMAIL PROTECTED]
National Research Council of Canada  (613) 990-0752 (FAX: 952-6602)


Re: RFD: C pointer conversions that differ in unsignedness

2005-12-05 Thread Mike Stump

On Dec 5, 2005, at 9:53 AM, [EMAIL PROTECTED] wrote:

Oh right, what I really meant was 'char' instead of 'long'.
In fact I just took the type from the referenced article. Sorry for  
that.


So am I right that the compiler should distinguish between char,  
signed char

and unsigned char in the proposed way?


Good question.  I don't believe so:

   [#5] Each of the comma-separated sets  designates  the  same
   type,  except  that  for  bit-fields,  it is implementation-
   defined whether the specifier int designates the  same  type
   as signed int or the same type as unsigned int.

[ note, I have a feeling they meant char, not int, I suspect someone  
might be able to provide a pointer to a DR for this. ]


   [#14]  The type char, the signed and unsigned integer types,
   and the floating types are  collectively  called  the  basic
   types.  Even if the implementation defines two or more basic
   types to have the same representation, they are nevertheless
   different types.29)

   [#15] The three types char, signed char, and  unsigned  char
   are   collectively   called   the   character   types.   The
   implementation shall define char to  have  the  same  range,
   representation,  and  behavior  as  either  signed  char  or
   unsigned char.30)

   30) CHAR_MIN, defined in <limits.h>,  will  have  one  of  the
  values   0   or  SCHAR_MIN,  and  this  can  be  used  to
  distinguish the two options.  Irrespective of the  choice
  made,  char  is a separate type from the other two and is
  not compatible with either.


Did you find wording in the standard that mandates no diagnostic on  
the first three?


Re: LTO, LLVM, etc.

2005-12-05 Thread Jim Blandy
On 12/5/05, Chris Lattner <[EMAIL PROTECTED]> wrote:
> That said, having a good representation for source-level exporting is
> clearly useful.  To be perfectly clear, I am not against a source-
> level form, I am just saying that it should be *different* than the
> one used for optimization.

Debug information describes two things: the source program, and its
relationship to the machine code produced by the toolchain.  The
second is much harder to produce; each pass needs to maintain the
relation between the code it produces and the compiler's original
input.  Keeping the two representations separate (which I could easily
see being beneficial for optimization) shifts that burden onto some
new party which isn't being discussed, and which will be quite
complicated.


Re: RFD: C pointer conversions that differ in unsignedness

2005-12-05 Thread Joseph S. Myers
On Mon, 5 Dec 2005, Mike Stump wrote:

> On Dec 5, 2005, at 9:53 AM, [EMAIL PROTECTED] wrote:
> > Oh right, what I really meant was 'char' instead of 'long'.
> > In fact I just took the type from the referenced article. Sorry for that.
> > 
> > So am I right that the compiler should distinguish between char, signed char
> > and unsigned char in the proposed way?
> 
> Good question.  I don't believe so:
> 
>   [#5] Each of the comma-separated sets  designates  the  same
>   type,  except  that  for  bit-fields,  it is implementation-
>   defined whether the specifier int designates the  same  type
>   as signed int or the same type as unsigned int.
> 
> [ note, I have a feeling they meant char, not int, I suspect someone might be
> able to provide a pointer to a DR for this. ]

Bringing bit-fields into the matter is just confusing things since you 
can't have pointers to bit-fields, but anyway char is not in a 
comma-separated set with signed char or unsigned char and for DR#315 it 
was proposed to say that whether char bit-fields have the same signedness 
as non-bit-fields is unspecified.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: c++ speed 3.3/4.0/4.1

2005-12-05 Thread Mike Stump

On Dec 5, 2005, at 2:33 PM, Jack Howarth wrote:

Do you mean using -fno-threadsafe-statics or do you have any other
inlining changes in mind?


That option mentions the word inline 0 times; while it is interesting and
worthwhile to test, I did mean these (from the man page):


-finline-limit=n

and --params:

    max-inline-insns-single
        Several parameters control the tree inliner used in gcc.  This
        number sets the maximum number of instructions (counted in GCC's
        internal representation) in a single function that the tree
        inliner will consider for inlining.  This only affects functions
        declared inline and methods implemented in a class declaration
        (C++).  The default value is 450.

    max-inline-insns-auto
        When you use -finline-functions (included in -O3), a lot of
        functions that would otherwise not be considered for inlining by
        the compiler will be investigated.  To those functions, a
        different (more restrictive) limit compared to functions declared
        inline can be applied.  The default value is 90.

    inline-unit-growth
        Specifies maximal overall growth of the compilation unit caused
        by inlining.  This parameter is ignored when -funit-at-a-time is
        not used.  The default value is 50, which limits unit growth to
        1.5 times the original size.

    max-inline-insns-recursive
    max-inline-insns-recursive-auto
        Specifies the maximum number of instructions an out-of-line copy
        of a self-recursive inline function can grow into by performing
        recursive inlining.

        For functions declared inline, --param max-inline-insns-recursive
        is taken into account.  For functions not declared inline,
        recursive inlining happens only when -finline-functions (included
        in -O3) is enabled and --param max-inline-insns-recursive-auto is
        used.  The default value is 450.

    max-inline-recursive-depth
    max-inline-recursive-depth-auto
        Specifies the maximum recursion depth used by recursive inlining.

        For functions declared inline, --param max-inline-recursive-depth
        is taken into account.  For functions not declared inline,
        recursive inlining happens only when -finline-functions (included
        in -O3) is enabled and --param max-inline-recursive-depth-auto is
        used.  The default value is 450.

    inline-call-cost
        Specify the cost of a call instruction relative to simple
        arithmetic operations (having a cost of 1).  Increasing this cost
        disqualifies inlining of non-leaf functions and at the same time
        increases the size of leaf functions that are believed to reduce
        function size by being inlined.  In effect it increases the
        amount of inlining for code having a large abstraction penalty
        (many functions that just pass the arguments on to other
        functions) and decreases inlining for code with a low abstraction
        penalty.  The default value is 16.


It is trivial to have these slightly alter codegen and show very  
large differences in timings.


Re: LTO, LLVM, etc.

2005-12-05 Thread Steven Bosscher
On Tuesday 06 December 2005 00:23, Jim Blandy wrote:
> Debug information describes two things: (...snip...)
> Keeping the two representations separate (which I could easily 
> see being beneficial for optimization) shifts that burden onto some
> new party which isn't being discussed, and which will be quite
> complicated.

Uh, I'm not sure what you're trying to say, but it looks like you're
missing the point.

Both the source level representation and the optimizer representation
have to represent debug information somehow.  Nobody is shifting any
burden anywhere.


For the source level, we'd have debug information in whatever kind
of data structures the front ends would be using.  In the front ends,
you want to store everything that is needed to represent the source
code as accurately as possible.  So you could have information about
class hierarchy, templates, inheritance, and so on, at this level.

But most of that information is not useful for source level debuggers
(e.g. gdb) at all.  So this detailed source level information doesn't
have to survive in the translation from the front end representation
to the optimizer's representation.

For the optimizers, you'd translate all the relevant debug stuff along
with everything else to something suitable for the optimizers: Line
number information, symbol tables, what variable goes in what register
or stack slot, etc.
Some of these things, you don't even _want_ to represent at the source
level (e.g. what is a stack slot in C++ source code? It's meaningless).


What makes EDG so great is that it represents C++ far closer to the
actual source code than G++ does.  It would be good for G++ to have
a representation that is closer to the source code than what it has
now.

The problem with using the same data structures (i.e. 'tree') for the
very-close-to-language representation and the optimizer representation
is that the two representation have completely different goals, as
Chris Lattner already explained.
I'd be surprised if there exists a compiler that runs optimizations
on EDG's C++-specific representation.  I think all compilers that use
EDG translate EDG's representation to a more low-level representation.

What I'd like for GCC is to have a similar separation of front-end and
middle-end/back-end representation, instead of "everything-is-a-tree".
If that means writing two intermediate representation dumpers, tough.
It would be worth it.

Gr.
Steven



GCC 3.4.5 status?

2005-12-05 Thread Steve Ellcey

Has GCC 3.4.5 been officially released?  I don't recall seeing an
announcement on gcc@gcc.gnu.org or [EMAIL PROTECTED], and when I
looked on the main GCC page I only see references to GCC 3.4.4, not
3.4.5.  But I do see a 3.4.5 download on the GCC mirror site that I
checked, and I see a gcc_3_4_5_release tag in the SVN tags directory.

I also notice we have a "Releases" link under "About GCC" in the top
left corner of the main GCC page that doesn't look like it has been
updated in quite a while for any releases.  Should this be updated or
removed?

Steve Ellcey
[EMAIL PROTECTED]


Re: RFD: C pointer conversions that differ in unsignedness

2005-12-05 Thread Mike Stump

On Dec 5, 2005, at 3:25 PM, Joseph S. Myers wrote:
Bringing bit-fields into the matter is just confusing things since you
can't have pointers to bit-fields, but anyway char is not in a
comma-separated set with signed char or unsigned char, and for DR#315 it
was proposed to say that whether char bit-fields have the same signedness
as non-bit-fields is unspecified.


Ah, yeah, simple misread on my part.  I meant to use it as a way of  
identifying when two types are different.  Though, [ searching ]  
curious how it doesn't state that directly, but leaves one to assume  
it.  :-(  I think I see value in a reference implementation for the  
language.  :-)


Re: MIPS: comparison modes in conditional branches

2005-12-05 Thread Jim Wilson

Adam Nemet wrote:

Now if I am correct and this last thing is really a bug then the
obvious question is whether it has anything to do with the more
restrictive form for conditional branches on MIPS64?  And of course if
I fix it then whether it would be OK to lift the mode restrictions in
the conditional branch patterns.


Yes, the last bit looks like it could be a bug; a missing use of 
TRULY_NOOP_TRUNCATION somewhere.


This isn't directly related to the current situation though.  The MIPS 
port was converted from using CC0 to using a register for condition 
codes on April 27, 1992.  The mistaken use of modes in branch tests 
occurred at that time.  This happened between the gcc-2.1 and gcc-2.2 
releases.  This was long before the 64-bit support was added.  When the 
64-bit support was added later, the mistaken branch modes were expanded 
to include SImode and DImode variants.  Since this occurred long ago, it 
would be difficult to determine exactly why it was done this way.  It 
was perhaps just done that way because it looked obviously correct.


Yes, it looks like fixing the combiner problem would make it possible to 
remove the mistaken mode checks.

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: more strict-aliasing questions

2005-12-05 Thread Jim Wilson

Jack Howarth wrote:

 Is there some place where all the existing forms of strict-aliasing
warnings are documented? So far I have only seen the error...


We don't have such documentation, unfortunately.  There are three warnings.
There is the one you have seen.  There is a similar one for incomplete
types, which says "might break" instead of "will break" because whether
there is a problem depends on how the type is completed.


There is also a third warning which occurs only with -Wstrict-aliasing=2,
which again says "might break" instead of "will break".  This option
will detect more aliasing problems than plain -Wstrict-aliasing, but it
may also report problems where none exist.  Or in other words, for
ambiguous cases where the compiler can't tell whether there may or may
not be an aliasing problem, -Wstrict-aliasing will give no warning, and
-Wstrict-aliasing=2 will give a warning.
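
A hedged sketch of that difference (illustrative only; the exact cases
each level catches may vary between 4.x releases):

    long stored;
    short *alias;

    void take_address(void)
    {
        /* An "ambiguous" case: whether this breaks aliasing depends on
           how the pointer is used later, so at most the stricter
           -Wstrict-aliasing=2 would be expected to flag it.  A direct
           punned dereference such as *(short *) &stored is the
           unambiguous case that plain -Wstrict-aliasing reports.  */
        alias = (short *) &stored;
    }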

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: problem with gcc 3.2.2

2005-12-05 Thread Jim Wilson

Mohamed Ghorab wrote:

linux, it tries to compile some files but outputs the following error:
/usr/include/c++/3.2.2/bits/fpos.h:60: 'streamoff' is used as a type,
but is not defined as a type.


This is a more appropriate question for the gcc-help list than the gcc 
list.  The gcc list is primarily for developers.


In order to help you, we will likely need more info, such as a testcase 
we can compile to reproduce the problem, and info about your operating 
system.  See the info on reporting bugs at

  http://gcc.gnu.org/bugs.html
This is probably more likely user error than a gcc bug, but we need the 
same kind of info to help with user errors.

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: Problem with bugzilla account

2005-12-05 Thread Jim Wilson

Eric Weddington wrote:
I have a problem with making an email change for my bugzilla account. 


sysadmin requests can be sent to [EMAIL PROTECTED]
--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: LTO, LLVM, etc.

2005-12-05 Thread Mark Mitchell
Ian Lance Taylor wrote:

> In short, while this is an important issue, I don't see it as strongly
> favoring either side.  What it means, essentially, is that LTO is not
> quite as much work as it might otherwise seem to be, because we are
> going to do some of the work anyhow.  So when considering how much
> work has to be done for LTO compared to how much work has to be done
> for LLVM, we should take that into account.
> 
> This is more or less what you said, of course, but I think with a
> different spin.

I agree with what you've written, and you've captured my point (that
this effectively reduces the cost of LTO, since it provides something
else we want) nicely.  Again, I don't think that's a definitive
argument; it's just one item to factor in to the overall decision.

Thanks,

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: LTO, LLVM, etc.

2005-12-05 Thread Mark Mitchell
Steven Bosscher wrote:
> On Saturday 03 December 2005 20:43, Mark Mitchell wrote:
> 
>>There is one advantage I see in the LTO design over LLVM's design.  In
>>particular, the LTO proposal envisions a file format that is roughly at
>>the level of GIMPLE.  Such a file format could easily be extended to be
>>at the source-level version of Tree used in the front-ends, so that
>>object files could contain two extra sections: one for LTO and one for
>>source-level information.  The latter section could be used for things
>>like C++ "export" -- but, more importantly, for other tools that need
>>source-level information, like IDEs, indexers, checkers, etc.
> 
> 
> I actually see this as a disadvantage.
> 
> IMVHO dumping for "export" and front-end tools and for the optimizers
> should not be coupled like this.  Iff we decide to dump trees, then I
> would hope the dumper would dump GIMPLE only, not the full front end
> and middle-end tree representation.

You and I have disagreed about this before, and I think we will continue
to do so.

I don't see anything about Tree that I find inherently awful; in fact,
it looks very much like what I see in other front ends.  There are
aspects I dislike (overuse of pointers, lack of type safety, unnecessary
copies of types), but I couldn't possibly justify changing the C++
front-end, for example, to use something entirely other than Tree.  That
would be a big project, and I don't see much benefit; I think that the
things I don't like can be fixed incrementally.

(For example, it occurred to me a while back that by fixing the internal
type-correctness of expressions, which we want to do anyhow, we could
eliminate TREE_TYPE from expression nodes, which would save a pointer.)

It's not that I would object to waking up one day to find out that the
C++ front-end no longer used Tree, but it just doesn't seem very
compelling to me.

> Sharing a tree dumper between the front ends and the middle-end would
> only make it more difficult again to move to sane data structures for
> the middle end and to cleaner data structures for the front ends.

The differences between GIMPLE and C++ Trees are small, structurally;
there are just a lot of extra nodes in C++ that never reach GIMPLE.  If
we had a tree dumper for one, we'd get the other one almost for free.
So, I don't think sharing the tree dumper stands in the way of anything;
you can still switch either part of the compiler to use non-Tree
whenever you like.  You'll just need a new dumper, which you would have
wanted anyhow.

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: ARM spurious load

2005-12-05 Thread Jim Wilson

Shaun Jackman wrote:

The following code snippet produces code that loads a register, r5,
from memory, but never uses the value.


You can report things like this into our bugzilla database, marking them 
as enhancement requests.  We don't keep track of issues reported to the 
gcc list.


I took a quick look.  The underlying problem is that the arm.md file has 
an iordi3 pattern, which gets split late, preventing us from recognizing 
some optimization chances here.  If I just disable the iordi3 pattern, 
then I get much better code.

ldr r0, .L3
mov r1, r0, asr #31
orr r1, r1, #34603008
@ lr needed for prologue
bx  lr
Disabling this pattern may result in worse code for other testcases 
though.  It was presumably added for a reason.

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: LTO, LLVM, etc.

2005-12-05 Thread Mark Mitchell
Chris Lattner wrote:

> I totally agree with Steven on this one.  It is *good* for the 
> representation hosting optimization to be different from the 
> representation you use to represent a program at source level.  The  two
> have very different goals and uses, and trying to merge them into  one
> representation will give you a representation that isn't very  good for
> either use.

I don't think that's entirely true.  One of the nice things about WHIRL,
at least in theory, is that the representation is gradually lowered
throughout the compiler, but is never abruptly transitioned, as with
GCC's Tree->RTL conversion.  So, it's easier to reuse code, instead of
having a Tree routine and an RTL routine that do "the same thing", as we
do in several places in GCC.

As a concrete example, having a control-flow graph in the front-end is
very useful, for optimization purposes, diagnostic purposes, and for
plugging in domain-specific optimizers and analyzers.  It would be nice
to have flow-graph code that could be easily used in both places,
without having to make that code representation-independent, using
adapters to abstract away the actual representation.

That's not to say that I disagree with:

> In particular, the optimization representation really does want 
> something in "three-address" form.  The current tree-ssa  implementation
> emulates this (very inefficiently) using trees, but at  a significant
> performance and memory cost.  The representation you  want for
> source-level information almost certainly *must* be a tree.

Instead, it's a long-winded way of saying that I don't agree that
there's any inherent benefit to using completely different
representations, but that I do agree that one wants the right
representation for the job, and that Tree-SSA is not the best
representation for optimization.  So, if Tree-SSA is not replaced, it
will almost certainly need to evolve.

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: LTO, LLVM, etc.

2005-12-05 Thread Chris Lattner


On Dec 5, 2005, at 5:27 PM, Mark Mitchell wrote:

Steven Bosscher wrote:

IMVHO dumping for "export" and front-end tools and for the optimizers
should not be coupled like this.  Iff we decide to dump trees, then I
would hope the dumper would dump GIMPLE only, not the full front end
and middle-end tree representation.



It's not that I would object to waking up one day to find out that the
C++ front-end no longer used Tree, but it just doesn't seem very
compelling to me.


I agree with you.  The 'tree' data structure is conceptually what we  
want for the front-ends to represent the code.  They are quite  
similar in spirit to many AST representations.



Sharing a tree dumper between the front ends and the middle-end would
only make it more difficult again to move to sane data structures for
the middle end and to cleaner data structures for the front ends.



The differences between GIMPLE and C++ Trees are small, structurally;
there are just a lot of extra nodes in C++ that never reach GIMPLE.  If
we had a tree dumper for one, we'd get the other one almost for free.
So, I don't think sharing the tree dumper stands in the way of anything;
you can still switch either part of the compiler to use non-Tree
whenever you like.  You'll just need a new dumper, which you would have
wanted anyhow.


The point that I'm arguing (and I believe Steven agrees with) is that
trees make a poor representation for optimization.  Their use in
tree-ssa has led to a representation that takes hundreds of bytes and
half a dozen separate allocations for each gimple operation.  From
the efficiency standpoint alone, it doesn't make sense to use trees
for optimization.


Further, I would point out that it actually HURTS the front-ends to  
have the optimizers using trees.  We are getting very close to the  
time when there are not enough tree codes to go around, and there is  
still a great demand for new ones.  Many of these tree codes are  
front-end specific (e.g.  BIND_EXPR and various OpenMP nodes) and  
many of them are backend specific (e.g. the various nodes for the  
vectorizer).  Having the front-end and the back-end using the same  
enum *will* have a short term cost if the size of the tree enum field  
needs to be increased.


-Chris


Re: LTO, LLVM, etc.

2005-12-05 Thread Chris Lattner

On Dec 5, 2005, at 5:43 PM, Mark Mitchell wrote:

Chris Lattner wrote:

I totally agree with Steven on this one.  It is *good* for the
representation hosting optimization to be different from the
representation you use to represent a program at source level.  The two
have very different goals and uses, and trying to merge them into one
representation will give you a representation that isn't very good for
either use.


I don't think that's entirely true.  One of the nice things about WHIRL,
at least in theory, is that the representation is gradually lowered
throughout the compiler, but is never abruptly transitioned, as with
GCC's Tree->RTL conversion.  So, it's easier to reuse code, instead of
having a Tree routine and an RTL routine that do "the same thing", as we
do in several places in GCC.
do in several places in GCC.


I understand where you are coming from here, and agree with it.   
There *is* value to being able to share things.


However, there is a cost.  I have never heard anything good about  
WHIRL from a compilation time standpoint: the continuous lowering  
approach does have its own cost.  Further, continuous lowering makes  
the optimizers more difficult to deal with, as they either need to  
know what 'form' they are dealing with, and/or can only work on a  
subset of the particular forms (meaning that they cannot be freely  
reordered).



In particular, the optimization representation really does want
something in "three-address" form.  The current tree-ssa implementation
emulates this (very inefficiently) using trees, but at a significant
performance and memory cost.  The representation you want for
source-level information almost certainly *must* be a tree.


Instead, it's a long-winded way of saying that I don't agree that
there's any inherent benefit to using completely different
representations, but that I do agree that one wants the right
representation for the job, and that Tree-SSA is not the best
representation for optimization.  So, if Tree-SSA is not replaced, it
will almost certainly need to evolve.


What sort of form do you think it could/would reasonably take? [1]   
Why hasn't it already happened?  Wouldn't it make more sense to do  
this work independently of the LTO work, as the LTO work *depends* on  
an efficient IR and tree-ssa would benefit from it anyway?


-Chris

1. I am just not seeing a better way, this is not a rhetorical question!


Re: Possible size-opt patch

2005-12-05 Thread Bernd Schmidt

Giovanni Bajo wrote:

Bernd,

I read you're interested in code-size optimizations. I'd like to point you
to this patch:
http://gcc.gnu.org/ml/gcc-patches/2005-05/msg00554.html

which was never finished nor committed (I don't know if RTH has a newer
version though). This would be of great help for code size issues in AVR,
but I don't know if and how much it'd help for Blackfin.


Probably not too much, since we only use multi-word regs for DImode, 
which doesn't occur that frequently.



Bernd


Re: LTO, LLVM, etc.

2005-12-05 Thread Andrew Pinski
> I don't see anything about Tree that I find inherently awful; in fact,
> it looks very much like what I see in other front ends.  There are
> aspects I dislike (overuse of pointers, lack of type safety, unnecessary
> copies of types), but I couldn't possibly justify changing the C++
> front-end, for example, to use something entirely other than Tree.  That
> would be a big project, and I don't see much benefit; I think that the
> things I don't like can be fixed incrementally.
> 
> (For example, it occurred to me a while back that by fixing the internal
> type-correctness of expressions, which we want to do anyhow, we could
> eliminate TREE_TYPE from expression nodes, which would save a pointer.)

Steven has a point here in that currently the trees are way over-bloated
because they contain both front-end and middle-end info.  This has been
a problem since trees have been in GCC (which is the beginning).  I should
note that some expression nodes will still need TREE_TYPE.  Though TREE_TYPE
is only part of the issue.  TREE_CHAIN is the worst offender here in general,
as it is used by the front-end to do things which make less sense than what
it is there for.  Steven can comment more on this since he tried to remove
TREE_CHAIN before, but the C++ front-end was (ab)using it more than he could
fix.

There are other bits which are obviously front-end-only bits, the following
for examples in tree_common:
  unsigned private_flag : 1; /* not in CALL_EXPR or RESULT_DECL/PARM_DECL */
  unsigned protected_flag : 1; /* not in CALL_EXPR */
  unsigned deprecated_flag : 1; /* not in IDENTIFIER_NODE, just added too */
  unsigned lang_flag_0 : 1;
  unsigned lang_flag_1 : 1;
  unsigned lang_flag_2 : 1;
  unsigned lang_flag_3 : 1;
  unsigned lang_flag_4 : 1;
  unsigned lang_flag_5 : 1;
  unsigned lang_flag_6 : 1;

in expressions:
  int complexity;

in types:
  unsigned lang_flag_0 : 1;
  unsigned lang_flag_1 : 1;
  unsigned lang_flag_2 : 1;
  unsigned lang_flag_3 : 1;
  unsigned lang_flag_4 : 1;
  unsigned lang_flag_5 : 1;
  unsigned lang_flag_6 : 1;
  unsigned needs_constructing_flag : 1;

binfo in general (why is this even in tree.h?)
And many more examples later on in tree.h.

-- Pinski




Re: LTO, LLVM, etc.

2005-12-05 Thread Mark Mitchell
Chris Lattner wrote:

[Up-front apology: If this thread continues, I may not be able to reply
for several days, as I'll be travelling.  I know it's not good form to
start a discussion and then skip out just when it gets interesting, and
I apologize in advance.  If I'd been thinking better, I would have
waited to send my initial message until I returned.]

> I understand where you are coming from here, and agree with it.   There
> *is* value to being able to share things.
> 
> However, there is a cost.  I have never heard anything good about  WHIRL
> from a compilation time standpoint: the continuous lowering  approach
> does have its own cost.

I haven't heard anything either way, but I take your comment to mean
that you have heard that WHIRL is slow, and I'm happy to believe that.
I'd agree that a data structure capable of representing more things
almost certainly imposes some cost over one capable of representing
fewer things!  So, yes, there's definitely a cost/benefit tradeoff here.

> What sort of form do you think it could/would reasonably take? [1]   Why
> hasn't it already happened?  Wouldn't it make more sense to do  this
> work independently of the LTO work, as the LTO work *depends* on  an
> efficient IR and tree-ssa would benefit from it anyway?

To be clear, I'm really not defending the LTO proposal.  I stand by my
statement that I don't know enough to have a preference!  So, please
don't read anything more into what's written here than just the plain words.

I did think a little bit about what it would take to make Tree-SSA more
efficient.  I'm not claiming that there aren't serious or even fatal
flaws in those thoughts; this is just a brain dump.  I also don't claim
to have measurements showing how much of a difference these changes
would make.

I'm going to leave TYPE nodes out -- because they're shared with the
front-ends, and so will live on anyhow.  Similarly for the DECL nodes
that correspond to global variables and global functions.  So, that
leaves EXPR nodes and (perhaps most importantly!) DECLs for
local/temporary variables.

The first thing to do would be to simplify the local variable DECLs; all
we should really need from such a thing is its type (including its
alignment, which, despite historical GCC practice is part of its type),
its name (for debugging), its location relative to the stack frame (if
we want to be able to do optimizations based on the location on the
stack, which we may or may not want to do at this point), and whatever
mark bits or scratch space are needed by optimization passes.  The type
and name are shared across all SSA instances of "the same" variable --
so we could use a pointer to a canonical copy of that information.  (For
user-visible variables, the canonical copy could be the VAR_DECL from
the front end.)  So, local variables would collapse from 176 bytes (on my
system) to something more like 32 bytes.
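
(A purely illustrative sketch of such a slimmed-down record; these names
are hypothetical, and nothing like this struct exists in GCC today.)

    struct canonical_info;            /* shared type, alignment and name */

    struct local_var
    {
      struct canonical_info *canon;   /* for user-visible variables this
                                         could point at the front end's
                                         VAR_DECL */
      int frame_offset;               /* location relative to the stack
                                         frame, if we track it here */
      unsigned mark : 1;              /* scratch bit for optimizer passes */
      unsigned scratch : 31;          /* remaining per-pass scratch space */
    };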

The second thing would be to modify expression nodes.  As I mentioned,
I'd eliminate their TYPE fields.  I'd also eliminate their
TREE_COMPLEXITY fields, which are already nearly unused.  There's no
reason TREE_BLOCK should be needed in most expressions; it's only needed
on nodes that correspond to lexical blocks.  Those changes would
eliminate a significant amount of the current size (64 bytes) for
expressions.  I also think it ought to be possible to eliminate the
source_locus field; instead of putting it on every expression, insert
line-notes into the statement stream, at least by the time we reach the
optimizers.  I'd also eliminate uses of TREE_LIST to link together the
nodes in CALL_EXPRs; instead use a vector of operands hanging off the
end of the CALL_EXPR corresponding to the number of arguments in the
call.  Similarly, I'd consider using a vector, rather than a linked
list, to store the statements in a block.

Finally, if you wanted, you could flatten expressions so that each
expression was, ala LLVM, an "instruction", and all operands were
"leaves" rather than themselves trees; that's a subset of the current
tree format.  I'm not sure that step would in-and-of-itself save memory,
but it would be more optimizer-friendly.

In my opinion, the reason this work hasn't been done is that (a) it's
not trivial, and (b) there was no sufficiently pressing need.  GCC uses
a lot of memory, and that's been an issue, but it hasn't been a "killer
issue" in the sense that huge numbers of people who would otherwise have
used GCC went somewhere else.  Outside of work done by Apple and
CodeSourcery, attacking that probably hasn't been (as far as I know?)
funded by any companies.

You're correct that LTO, were it to proceed, might make this a killer
issue, and then we'd have to attack it -- and so that work should go on
the cost list for LTO.  You're also correct that some of this work would
also benefit GCC as a whole, in that the front-ends would use less
memory too, and so you're also correct that there is value in doing at
least some of the work independently of LTO.

crtstuff sentinels

2005-12-05 Thread DJ Delorie

The m32c-elf port uses PSImode for pointers, which, for m32c (vs m16c)
only have 24 bits of precision in a 32 bit word.  The address
registers are 24 bit unsigned registers.

The "-1" sentinal used for CTOR_LIST is not a representable address,
and the code gcc ends up using compares 0x (the -1) with
0x00ff (what ends up in $a0) and it doesn't match.
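
For reference, a hedged sketch of the loop where the comparison goes
wrong (this follows the usual crtstuff __do_global_ctors_aux idiom; the
exact code used for m32c may differ):

    typedef void (*func_ptr) (void);
    extern func_ptr __CTOR_END__[];

    static void run_ctors (void)
    {
      func_ptr *p;
      /* The constructor list is terminated by a (func_ptr) -1 sentinel.
         With 24-bit PSImode pointers the loaded sentinel compares as
         0x00ffffff while the -1 literal is 0xffffffff, so the test never
         matches and the loop runs off the end of the list.  */
      for (p = __CTOR_END__ - 1; *p != (func_ptr) -1; p--)
        (*p) ();
    }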

Suggestions?


Re: crtstuff sentinels

2005-12-05 Thread Paul Brook
> The "-1" sentinal used for CTOR_LIST is not a representable address,
> and the code gcc ends up using compares 0x (the -1) with
> 0x00ff (what ends up in $a0) and it doesn't match.
>
> Suggestions?

Use ELF .init_array/.fini_array

Paul


Re: c++ speed 3.3/4.0/4.1

2005-12-05 Thread Jack Howarth
 Well I tried a few different builds of xplor-nih tonight with the 
following optimization flags for the gcc and g++ compilers...
testsuite times in seconds:

                                                      xplor    python      tcl
-O3 -ffastmath -mtune=970                          137.5454  128.7770  48.0390
-O3 -ffastmath -mtune=970 -fno-threadsafe-statics  137.0741  127.4653  48.0205
-O3 -ffastmath -mtune=970 -finline-limit=1200      135.4462  127.5790  48.3680

As you can see, the c/c++ code (mostly c++) in xplor-nih sees essentially no
improvement in gcc 4.2.0 from -fno-threadsafe-statics or
-finline-limit=1200. The same build using Apple's gcc 3.3 executes about
7% faster.
 Is there anything not usually enabled in -O3 that might help? I am
rather confused by the options...

  -ftree-vectorize
  -fipa-cp 

and the rest, as to which ones are already part of -O3 in gcc 4.2.0, which
require explicit enabling, and which are incompatible with each other. I would
be interested in trying to squeeze some more performance out of the gcc 4.2.0
compiles but am at a loss for a logical approach to doing this (short of
resorting to -fprofile-use).
   Thanks in advance for any other advice.
Jack



Re: LTO, LLVM, etc.

2005-12-05 Thread Gabriel Dos Reis
Steven Bosscher <[EMAIL PROTECTED]> writes:

[...]

| I'd be surprised if there exists a compiler that runs optimizations
| on EDG's C++ specific representation.

CFront was very good at implementing optimizations "native" compilers
could not match at the time (or only with a two-year lag).  KCC did a great
job.  Of course, not every optimization needs to be expressed at source
level, but there definitely are, or were, compilers that take (or took)
advantage of higher-level representations for optimization.

Anyway, I think your reduction completely missed the point of the LTO
proposal. 

The point wasn't that LTO would represent programs very close to
the source level (e.g. even higher level than GENERIC) for link-time
optimization purposes.  The observation was that the infrastructure of LTO
would make it *fairly easy to extend* to a higher-level representation.

-- Gaby


Re: GCC 3.4.5 status?

2005-12-05 Thread Gabriel Dos Reis
Steve Ellcey <[EMAIL PROTECTED]> writes:

| Has GCC 3.4.5 been officially released?

Yes, tarballs have been on gcc.gnu.org and ftp.gnu.org since Dec 1st.  Only
the official announcement is missing.

[...]

| I also notice we have a "Releases" link under "About GCC" in the top
| left corner of the main GCC page that doesn't look like it has been
| updated in quite a while for any releases.  Should this be updated or
| removed?

Gerald is our "webmaster" and probably has a word to say.

-- Gaby


Re: GCC 3.4.5 status?

2005-12-05 Thread Kaveh R. Ghazi
 > Steve Ellcey <[EMAIL PROTECTED]> writes:
 > 
 > | Has GCC 3.4.5 been officially released?
 > 
 > Yes, tarballs are on gcc.gnu.org and ftp.gnu.org since Dec 1st.  Only
 > official announcement is missing.

What are you waiting for?

--
Kaveh R. Ghazi  [EMAIL PROTECTED]


Why is -Wstrict-null-sentinel (C++ only)?

2005-12-05 Thread Chris Shoemaker

I want a warning at the use of an unadorned "NULL" as a sentinel value in C
programs.  Why is this option (-Wstrict-null-sentinel) restricted to
C++ programs?  Or is there some other way to get this warning?
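
The case I have in mind is the usual varargs-terminator one; execl here
is just an example:

#include <stddef.h>
#include <unistd.h>

void
spawn_bad (void)
{
  /* If NULL expands to a plain 0, the terminator is passed as an int,
     which need not match the pointer width the callee reads back with
     va_arg on LP64 targets.  */
  execl ("/bin/ls", "ls", "-l", NULL);
}

void
spawn_good (void)
{
  /* Casting makes the sentinel an actual null pointer of pointer
     width.  */
  execl ("/bin/ls", "ls", "-l", (char *) NULL);
}

g++ can warn about the first form via -Wstrict-null-sentinel; I'd like
the same diagnostic when compiling this as C.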

-chris

(Please 'cc'; not subscribed)


Re: LTO, LLVM, etc.

2005-12-05 Thread Mark Mitchell
Steven Bosscher wrote:

> What makes EDG so great is that it represents C++ far closer to the
> actual source code than G++ does.

I know the EDG front-end very well; I first worked with it in 1994, and
I have great respect for both the EDG code and the EDG people.

I disagree with your use of "far closer" above; I'd say "a bit closer".

Good examples of differences are that (before lowering) it has a
separate operator for "virtual function call" (rather than using a
virtual function table explicitly) and that pointers-to-member functions
are opaque objects, not structures.  These are significant differences,
but they're not huge differences, or particularly hard to fix in G++.

The key strengths of the EDG front-end are its correctness (second to
none), cleanliness, excellent documentation, and excellent support.  It
does what it's supposed to do very well.

> It would be good for G++ to have
> a representation that is closer to the source code than what it has
> now.

Yes, closing the gap would be good!  I'm a big proponent of introducing
a lowering phase into G++.  So, while I might disagree about the size of
gap, I agree that we should eliminate it. :-)

> I'd be surprised if there a compiler exists that runs optimizations
> on EDG's C++ specific representation. I think all compilers that use
> EDG translate EDG's representation to a more low-level representation.

I've worked on several compilers that used the EDG front-end.  In all
cases, there was eventually translation to different representations,
and I agree that you wouldn't want to do all your optimization on EDG
IL.  However, one compiler I worked on did do a fair amount of
optimization on EDG IL, and the KAI "inliner" also did a lot of
optimization (much more than just inlining) on EDG IL.

Several of the formats to which I've seen EDG IL translated (WHIRL and a
MetaWare internal format, for example) are at about the level of
"lowered" EDG IL (which is basically C with exceptions), which is the
form of EDG IL that people use when translating into their internal
representation.  In some cases, these formats are then again transformed
into a lower-level, more RTL-ish format at some point during optimization.

I'm not saying that having two different formats is necessarily a bad
thing (we've already got Tree and RTL, so we're really talking about two
levels or three), or that switching to LLVM is a bad idea, but I don't
think there's any inherent reason that we must necessarily have multiple
representations.

My basic point is that I want to see the decision be made on the basis
of the effort required to achieve our goals, not on our opinions about
what we think might be the best design in the abstract.  In other words,
I don't think that the fact that GCC currently uses the same data
structures for front-ends and optimizers is in and of itself a problem
-- but I'm happy to switch to LLVM, if we think that it's easier to make
LLVM do what we want than it is to make Tree-SSA do what we want.

-- 
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: LTO, LLVM, etc.

2005-12-05 Thread Mathieu Lacage
hi mark,

On Mon, 2005-12-05 at 21:33 -0800, Mark Mitchell wrote:

> I'm not saying that having two different formats is necessarily a bad
> thing (we've already got Tree and RTL, so we're really talking about two
> levels or three), or that switching to LLVM is a bad idea, but I don't
> think there's any inherent reason that we must necessarily have multiple
> representations.

In what I admit is a relatively limited experience (compared to that of
you or other gcc contributors) of working with a few large old sucky
codebases, I think I have learned one thing: genericity is most often
bad. Specifically, I think that trying to re-use the same data
structure/algorithms/code for widely different scenarios is what most
often leads to large overall complexity and fragility.

It seems to me that the advantages of using the LTO representation for
frontend-dumping and optimization (code reuse, etc.) are not worth the
cost (a single piece of code used for two very different use-cases will
necessarily be more complex and thus prone to design bugs). Hubris will
lead developers to ignore the latter because they believe they can avoid
the complexity trap of code reuse. It might work in the short term
because you and others might be able to achieve this feat but I fail to
see how you will be able to avoid the inevitable decay of code inherent
to this solution in the long run.

A path where different solutions for different problems are evolved
independently and then merged where it makes sense seems better to me
than a path where a single solution to two different problems is
attempted from the start. 

Which is thus why I think that "there are inherent reasons that you must
necessarily have multiple representations".

regards,
Mathieu

PS: I know I am oversimplifying the problem and your position and I
apologize for this.
--