Information regarding -fPIC support for Interix gcc

2007-03-22 Thread Mayank Kumar

Hi
I am currently looking at Interix gcc and found that -fPIC-generated binaries
crash, although not all binaries crash. This has been known for quite some time,
since I found a lot of posts about it. I want to know whether this issue has
already been fixed and a patch submitted, or whether it is still pending for
Interix gcc. I am about to start investigating it myself and would like to know
beforehand whether somebody has already fixed it.


Thanks
Mayank



Re: Using SSA

2007-03-22 Thread Paolo Bonzini

> The tree_opt_pass for my pass has PROP_ssa set in the properties_required
> field.  Is this all I need to do?

You need to put your pass after pass_build_ssa.  Setting PROP_ssa does
not build SSA itself, but it will cause an assertion failure if the
pass is run while SSA is not (yet) available.
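
For concreteness, here is a minimal sketch of what such a pass declaration
looks like in the GCC 4.x pass manager (the pass name and execute function
are hypothetical; the field list and the exact signature of the execute
hook vary between releases, so check struct tree_opt_pass in tree-pass.h
of your tree before copying anything):

  /* Hypothetical example pass.  Listing PROP_ssa in properties_required
     only asserts that SSA must already be live when the pass runs; it
     does not cause SSA to be built.  */
  static unsigned int
  execute_my_ssa_pass (void)
  {
    /* ... walk statements in SSA form here ... */
    return 0;
  }

  struct tree_opt_pass pass_my_ssa_pass =
  {
    "my-ssa-pass",             /* name */
    NULL,                      /* gate */
    execute_my_ssa_pass,       /* execute */
    NULL,                      /* sub */
    NULL,                      /* next */
    0,                         /* static_pass_number */
    0,                         /* tv_id */
    PROP_cfg | PROP_ssa,       /* properties_required */
    0,                         /* properties_provided */
    0,                         /* properties_destroyed */
    0,                         /* todo_flags_start */
    TODO_dump_func,            /* todo_flags_finish */
    0                          /* letter */
  };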

Paolo


RE: Using SSA

2007-03-22 Thread Alexander Lamaison
> > The tree_opt_pass for my pass has PROP_ssa set in the properties_required
> > field.  Is this all I need to do?
> 
> You need to put your pass after pass_build_ssa.  Setting PROP_ssa does
> not build SSA itself, but it will cause an assertion failure if the
> pass is run while SSA is not (yet) available.
> 
> Paolo

I think (if I'm correctly interpreting the list in passes.c) it is.  It's
right after pass_warn_function_noreturn, just before pass_mudflap_2.  Is
this right?  I don't get any assertion about SSA not being available.

Thanks.
Alex



Re: Using SSA

2007-03-22 Thread Paolo Bonzini
> I think (if I'm correctly interpreting the list in passes.c) it is.  It's
> right after pass_warn_function_noreturn, just before pass_mudflap_2.  Is
> this right?  I don't get any assertion about SSA not being available.

In this case, it is also after pass_del_ssa, which means SSA has already
been destroyed.
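
To make the ordering concrete, here is a rough, abridged sketch of how the
relevant passes are chained in init_optimization_passes in passes.c (the
pass names are the ones mentioned in this thread; pass_my_ssa_pass stands
for the hypothetical pass being discussed):

  NEXT_PASS (pass_build_ssa);
  /* ... the tree-SSA optimization passes run in here, with SSA live;
     a pass that requires PROP_ssa belongs somewhere in this region ... */
  NEXT_PASS (pass_del_ssa);                 /* SSA is destroyed here */
  NEXT_PASS (pass_warn_function_noreturn);
  NEXT_PASS (pass_my_ssa_pass);             /* too late: SSA is already gone */
  NEXT_PASS (pass_mudflap_2);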

Paolo 


RE: Using SSA

2007-03-22 Thread Alexander Lamaison
> > I think (if I'm correctly interpreting the list in passes.c) it is.  It's
> > right after pass_warn_function_noreturn, just before pass_mudflap_2.  Is
> > this right?  I don't get any assertion about SSA not being available.
> 
> In this case, it is also after pass_del_ssa, which means SSA has already
> been destroyed.

Oh, ok. Thanks! I had assumed the mudflap passes would have SSA available,
since the 'Tree-SSA passes' section of the GCC internals manual lists them:
http://gcc.gnu.org/onlinedocs/gccint/Tree_002dSSA-passes.html#Tree_002dSSA-passes

Thanks.
Alex.



Re: GCC priorities [Was Re: We're out of tree codes; now what?]

2007-03-22 Thread Daniel Berlin

On 3/21/07, Nicholas Nethercote <[EMAIL PROTECTED]> wrote:

On Wed, 21 Mar 2007, Paul Brook wrote:

> The problem is that I don't think writing a detailed "mission statement" is
> actually going to help anything. It's either going to be gcc contributors
> writing down what they're doing anyway, or something invented by the SC or
> FSF. In the latter case nothing's going to change because neither the SC nor
> the FSF have any practical means of compelling contributors to work on a
> particular feature.
>
> It's been said before that Mark (the GCC release manager) has no real power to
> make anything actually happen. All he can do is delay the release and hope
> things get better.

Then it will continue to be interesting, if painful, to watch.


It's not clear what you think would happen, or what would get fixed, if he
did have that power.

Realistically, compile time will not be solved until someone with an
interest in solving it does the hard work (and, before starting any
huge project, proposes a reasonable way to do whatever major surgery
is needed, so they can get community buy-in.  Note that this is not
actually hard, but it does require more than just submitting huge
patches that do something you've never discussed on the mailing list).

This won't change no matter what you do.
You simply can't brow-beat people into fixing huge things, and I'm not
aware of a well-functioning open-source project where that works.

If you want to help fix compile time, then get started, instead of
commenting from the sidelines.
As they say, "Patches welcome".

--Dan


Re: Using SSA

2007-03-22 Thread Daniel Berlin

On 3/22/07, Alexander Lamaison <[EMAIL PROTECTED]> wrote:

> > The tree_opt_pass for my pass has PROP_ssa set in the properties_required
> > field.  Is this all I need to do?
>
> You need to put your pass after pass_build_ssa.  Setting PROP_ssa does
> not build SSA itself, but it will cause an assertion failure if the
> pass is run while SSA is not (yet) available.
>
> Paolo

I think (if I'm correctly interpreting the list in passes.c) it is.  It's
right after pass_warn_function_noreturn, just before pass_mudflap_2.  Is
this right?  I don't get any assertion about SSA not being available.



This is a bug then, btw.
You should file it.
It should have asserted that PROP_ssa was not available, because it
was destroyed by del_ssa.
In particular, this code:

#ifdef ENABLE_CHECKING
  do_per_function (verify_curr_properties,
                   (void *)(size_t)pass->properties_required);
#endif

should have triggered.
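
For reference, that verification amounts to roughly the following (a
from-memory sketch of verify_curr_properties in passes.c, not a verbatim
copy):

  static void
  verify_curr_properties (void *data)
  {
    unsigned int props = (size_t) data;
    /* Every property the pass requires must currently be set on cfun.  */
    gcc_assert ((cfun->curr_properties & props) == props);
  }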

(I have long believed that our properties mechanism should be used as
a mechanism to decide what analysis needs to be run, not a static
assertion mechanism, and should recompute what is necessary on demand,
but that is not currently the case).


Re: GCC priorities [Was Re: We're out of tree codes; now what?]

2007-03-22 Thread Jeffrey Law
On Thu, 2007-03-22 at 08:17 +1100, Nicholas Nethercote wrote:
> On Wed, 21 Mar 2007, Paul Brook wrote:
> 
> > The problem is that I don't think writing a detailed "mission statement" is
> > actually going to help anything. It's either going to be gcc contributors
> > writing down what they're doing anyway, or something invented by the SC or
> > FSF. In the latter case nothing's going to change because neither the SC nor
> > the FSF have any practical means of compelling contributors to work on a
> > particular feature.
> >
> > It's been said before that Mark (the GCC release manager) has no real power
> > to make anything actually happen. All he can do is delay the release and hope
> > things get better.
> 
> Then it will continue to be interesting, if painful, to watch.
True, but the structure you see is the only structure that (IMHO) could
have worked for GCC.  For better or worse, it is what it is.

If you've got ideas for how to change things for the better, they can
certainly be discussed.  In the end, everyone here just wants to build
a better compiler -- they primarily  differ in what "better" means.

Jeff




Re: Information regarding -fPIC support for Interix gcc

2007-03-22 Thread Joe Buck
On Thu, Mar 22, 2007 at 04:22:37PM +0800, Mayank Kumar wrote:

> I am currently looking at Interix gcc and found that -fPIC-generated
> binaries crash, although not all binaries crash. This has been known for
> quite some time, since I found a lot of posts about it. I want to know
> whether this issue has already been fixed and a patch submitted, or
> whether it is still pending for Interix gcc.  I am about to start
> investigating it myself and would like to know beforehand whether
> somebody has already fixed it.

I took a look at Bugzilla, and the first thing I see is

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=15212

which indicates that the compiler hasn't even bootstrapped on Interix in a
long time, though workarounds are proposed and a patch is given against a
4.2 snapshot.  It looks like no one has treated the platform as a
priority.  It's just a matter of not having a sufficiently motivated
volunteer, or else the sufficiently motivated volunteers are not
contributing their patches to gcc.gnu.org.

So, where are you getting your Interix gcc?  Are you using an FSF release,
or is someone trying to support a fork?






SoC Project: Finish USE_MAPPED_LOCATION

2007-03-22 Thread Per Bothner

Is this an appropriate SoC project?

Gcc can optionally be configured with --enable-mapped-location.
This sets a conditional, USE_MAPPED_LOCATION, which changes how
line and column numbers are represented in the various data
structures.  We'd like to switch gcc to use this representation,
for various reasons (availability of column numbers in error
messages and debug info; hopefully improved memory efficiency;
making the various parts of the compiler more consistent).
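
Roughly, the difference in representation is the following (simplified from
input.h of that era, written from memory rather than copied verbatim):

  #ifndef USE_MAPPED_LOCATION
  /* The old representation: file and line carried around explicitly in
     every location, with no column information.  */
  typedef struct
  {
    const char *file;
    int line;
  } location_t;
  #else
  /* The mapped representation: a single integer that indexes the libcpp
     line maps; file, line and column are recovered on demand, e.g.:
         expanded_location xloc = expand_location (loc);
         ... xloc.file, xloc.line, xloc.column ...  */
  typedef source_location location_t;
  #endif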

The goal of this project is to fix the remaining functionality so that
USE_MAPPED_LOCATION works at least as well as the current
default does, so we can make it the default, and then soon
after remove the old non-USE_MAPPED_LOCATION code.  This means
investigating test-suite regressions and fixing/implementing
any missing functionality.  The main known missing "chunk"
is support for precompiled headers (PCH).  In addition, the
Ada language front end does not handle USE_MAPPED_LOCATION.

I'm available to advise/mentor on this project.  However, as
I'm somewhat rusty about the current state of Gcc I'm hoping
I can get one more mentor who is more current on what has been
happening with Gcc.

(I don't consistently monitor the gcc mailing lists, so please
cc me on any gcc email you want me to see in a timely manner.)
--
--Per Bothner
[EMAIL PROTECTED]   http://per.bothner.com/


Re: We're out of tree codes; now what?

2007-03-22 Thread Ian Lance Taylor
"Doug Gregor" <[EMAIL PROTECTED]> writes:

> +#undef TREE_CODE
> +/* We redefine TREE_CODE here to omit the explicit cast to "enum
> +   tree_code", which has the side-effect of silencing the "case value
> +   NNN not in enumerated type" warnings.  */
> +#define TREE_CODE(NODE) ((NODE)->base.code)
> +
> +/* Extracts an extended tree code from a node. */
> +#define LANG_TREE_CODE(NODE)\
> +  (TREE_CODE (NODE) == LANG_TYPE?   \
> + (enum cplus_tree_code)(LANG_TYPE_SUBCODE (NODE) + MAX_TREE_CODES)\
> + : (enum cplus_tree_code)(TREE_CODE (NODE)))
> +
> +/* Access the SUBCODE of a LANG_TYPE node.  */
> +#define LANG_TYPE_SUBCODE(NODE) (TYPE_LANG_SPECIFIC (NODE)->u.h.subcode)
> +
>  /* Language-specific tree checkers.  */
>  
>  #define VAR_OR_FUNCTION_DECL_CHECK(NODE) \
> @@ -176,7 +191,7 @@ struct diagnostic_context;
>TREE_CHECK4(NODE,VAR_DECL,FUNCTION_DECL,TYPE_DECL,TEMPLATE_DECL)
>  
>  #define BOUND_TEMPLATE_TEMPLATE_PARM_TYPE_CHECK(NODE) \
> -  TREE_CHECK(NODE,BOUND_TEMPLATE_TEMPLATE_PARM)
> +  LANG_TREE_CHECK(NODE,BOUND_TEMPLATE_TEMPLATE_PARM)
>  
>  #if defined ENABLE_TREE_CHECKING && (GCC_VERSION >= 2007)
>  #define NON_THUNK_FUNCTION_CHECK(NODE) __extension__ \
> @@ -192,10 +207,41 @@ struct diagnostic_context;
>   || !__t->decl_common.lang_specific->decl_flags.thunk_p) \
>tree_check_failed (__t, __FILE__, __LINE__, __FUNCTION__, 0);  \
>   __t; })
> +#define LANG_TREE_CHECK(T,CODE) __extension__                        \
> +({  const tree __x = (T);                                            \
> +    if (LANG_TREE_CODE ((T)) != (CODE))                              \
> +      tree_check_failed (__x, __FILE__, __LINE__, __FUNCTION__,      \
> +                         (CODE), 0);                                 \
> +    __x; })
> +#define LANG_TREE_CHECK3(T, CODE1, CODE2, CODE3) __extension__       \
> +({  const tree __y = (T);                                            \
> +    const enum cplus_tree_code __code = LANG_TREE_CODE (__y);        \
> +    if (__code != (CODE1)                                            \
> +        && __code != (CODE2)                                         \
> +        && __code != (CODE3))                                        \
> +      tree_check_failed (__y, __FILE__, __LINE__, __FUNCTION__,      \
> +                         (CODE1), (CODE2), (CODE3), 0);              \
> +    __y; })
>  #else
>  #define NON_THUNK_FUNCTION_CHECK(NODE) (NODE)
>  #define THUNK_FUNCTION_CHECK(NODE) (NODE)
> +#define LANG_TREE_CHECK(T,CODE) (T)
> +#define LANG_TREE_CHECK3(T, CODE1, CODE2, CODE3) (T)
>  #endif

Would it make a --enable-checking build faster if you did this:

#define LANG_TREE_CHECK(T,CODE) __extension__                          \
({  const tree __x = (T);                                              \
    if (TREE_CODE (__x) != LANG_TYPE                                   \
        || LANG_TYPE_SUBCODE (__x) != (CODE) - MAX_TREE_CODES)         \
      tree_check_failed (__x, __FILE__, __LINE__, __FUNCTION__,        \
                         (CODE), 0);                                   \
    __x; })

and similar for LANG_TREE_CHECK3. 

Ian


Re: SoC Project: Finish USE_MAPPED_LOCATION

2007-03-22 Thread Ian Lance Taylor
Per Bothner <[EMAIL PROTECTED]> writes:

> Is this an appropriate SoC project?

Yes, certainly.  I added a link to gcc's SoC project page
(http://gcc.gnu.org/wiki/SummerOfCode).

Ian


Re: GCC 4.1.2 generates different pentium instructions

2007-03-22 Thread fafa

On 21.03.2007 23:38, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:


"H. J. Lu" <[EMAIL PROTECTED]> writes:


On Wed, Mar 21, 2007 at 09:19:44PM +0100, fafa wrote:
> Hi all,
>
> I noticed that G++ 4.1.2 (on a Pentium 4) generates different
> instructions for
>   lea    0x0(%esi),%esi
> or
>   lea    0x0(%edi),%edi
> with the same meaning but different encoding depending on the switch
> "-momit-leaf-frame-pointer".
>

They are generated by the assembler for different alignment adjustments.


To expand on that, note that those instructions do nothing.  They are
nops inserted for alignment purposes.  The size of the instruction
varies depending upon how many bytes it has to take up.

Ian



I see.  But why not simple "nop" instructions?
Is it just for the compactness of the listing, or does it
optimize the instruction prefetching of the CPU?

Maett



Re: GCC 4.1.2 generates different pentium instructions

2007-03-22 Thread Mike Stump

On Mar 22, 2007, at 12:03 PM, fafa wrote:

I see.  But why not simple "nop" instructions?


They are the wrong size or too slow.  Anyway, this is the wrong list  
for such questions generally.  This list is for developers of gcc.


Re: GCC 4.1.2 generates different pentium instructions

2007-03-22 Thread Ian Lance Taylor
fafa <[EMAIL PROTECTED]> writes:

> >> > I noticed that G++ 4.1.2 (on a Pentium 4) generates different
> >> > instructions for
> >> >   lea    0x0(%esi),%esi
> >> > or
> >> >   lea    0x0(%edi),%edi
> >> > with the same meaning but different encoding depending on the switch
> >> > "-momit-leaf-frame-pointer".
> >> >
> >>
> >> They are generated by the assembler for different alignment adjustments.
> >
> > To expand on that, note that those instructions do nothing.  They are
> > nops inserted for alignment purposes.  The size of the instruction
> > varies depending upon how many bytes it has to take up.
> >
> > Ian
> >
> 
> I see.  But why not simple "nop" instructions?
> Is it just for the compactness of the listing, or does it
> optimize the instruction prefetching of the CPU?

It is faster for a typical CPU to execute a single long instruction
with no effect than it is for it to execute a series of small
instructions with no effect.

Ian


Re: We're out of tree codes; now what?

2007-03-22 Thread Mike Stump

On Mar 22, 2007, at 9:13 AM, Doug Gregor wrote:

8-bit tree code (baseline):

real    0m51.987s
user    0m41.283s
sys     0m0.420s

subcodes (this patch):

real    0m53.168s
user    0m41.297s
sys     0m0.432s

9-bit tree code (alternative):

real    0m56.409s
user    0m43.942s
sys     0m0.429s


I hate to ask, but did we see the time for 16-bit codes?  If it is faster
than subcodes (51-53), maybe we just take the 4.5% memory hit and
move on.  I ask because I'd hate to saddle everyone with subcodes in the
name of compile time when there is a better alternative that doesn't cost
as much in compile time.


I did some quick C measurements compiling expr.o from the top of the  
tree, with an -O0 built compiler with checking:


for a -g 8-bit code compile:
real    0m4.234s
user    0m4.104s
sys     0m0.126s

for a -g -O2 8-bit code compile:
real    0m23.202s
user    0m22.408s
sys     0m0.773s

for a -g 16-bit code compile:
real    0m4.249s    (0.35% slower)
user    0m4.121s
sys     0m0.124s

for a -g -O2 16-bit compile:
real    0m23.391s   (0.81% slower)
user    0m22.613s
sys     0m0.767s

If the disable-checking numbers hold up...  I think I'd rather eat  
the memory, retain most of the speed, and eliminate the complexity.


For a stage2 compiler (-O2) compiling expr.c:

for a -g 8-bit code compile:
real    0m2.633s
user    0m2.510s
sys     0m0.120s

for a -g -O2 8-bit code compile:
real    0m12.961s
user    0m12.195s
sys     0m0.755s

for a -g 16-bit code compile:
real    0m2.629s    (0.15% slower)
user    0m2.504s
sys     0m0.121s

for a -g -O2 16-bit code compile:
real    0m12.958s   (0.023% slower)
user    0m12.190s
sys     0m0.754s

All timings are best of 5, to get nice stable, comparable numbers.
I'd anticipate that the slowdowns for C hold true for all languages
and optimization passes.  I didn't try to optimize the memory
savings, layout or time in creating my 16-bit patch; I applied the
obvious 8->16 version.  My hope is that the disable-checking numbers
hold up reasonably well, and we can just use that version.  I'll
accept a 0.15% compiler.


Re: We're out of tree codes; now what?

2007-03-22 Thread Joe Buck
On Thu, Mar 22, 2007 at 12:28:15PM -0700, Mike Stump wrote:
> On Mar 22, 2007, at 9:13 AM, Doug Gregor wrote:
> > 8-bit tree code (baseline):
> >
> > real    0m51.987s
> > user    0m41.283s
> > sys     0m0.420s
> >
> > subcodes (this patch):
> >
> > real    0m53.168s
> > user    0m41.297s
> > sys     0m0.432s
> >
> > 9-bit tree code (alternative):
> >
> > real    0m56.409s
> > user    0m43.942s
> > sys     0m0.429s
> 
> I hate to ask, but did we see the time for 16-bit codes?  If it is faster
> than subcodes (51-53), maybe we just take the 4.5% memory hit and
> move on.  I ask because I'd hate to saddle everyone with subcodes in the
> name of compile time when there is a better alternative that doesn't cost
> as much in compile time.

But these numbers show that subcodes don't cost *ANY* time, or the
cost is in the noise, unless enable-checking is on.  The difference
in real-time seems to be an artifact, since the user and sys times
are basically the same.


Re: We're out of tree codes; now what?

2007-03-22 Thread Steven Bosscher

On 3/22/07, Joe Buck <[EMAIL PROTECTED]> wrote:

But these numbers show that subcodes don't cost *ANY* time, or the
cost is in the noise, unless enable-checking is on.  The difference
in real-time seems to be an artifact, since the user and sys times
are basically the same.


The subcodes cost complexity. And the cost with checking enabled is
IMHO unacceptable.

Gr.
Steven


Re: We're out of tree codes; now what?

2007-03-22 Thread Steven Bosscher

On 3/22/07, Doug Gregor <[EMAIL PROTECTED]> wrote:

The results, compile time:


For what test case?


For a bootstrapped, --disable-checking compiler:

8-bit tree code (baseline):

real    0m51.987s
user    0m41.283s
sys     0m0.420s

subcodes (this patch):

real    0m53.168s
user    0m41.297s
sys     0m0.432s

9-bit tree code (alternative):

real    0m56.409s
user    0m43.942s
sys     0m0.429s


Did the 9-bit tree code include Alexandre Oliva's latest bitfield
optimization improvements patch
(http://gcc.gnu.org/ml/gcc-patches/2007-03/msg01397.html)?

What about the 16-bit tree code?

Gr.
Steven


RE: Information regarding -fPIC support for Interix gcc

2007-03-22 Thread Mayank Kumar
I work for the Microsoft SFU (Services for Unix) group, and I am currently
investigating this -fPIC issue for gcc 3.3, which is the version shipped with
SFU 3.5.  I currently have no intention of supporting the latest gcc for
Interix, but I am interested in fixing this -fPIC issue for Interix and in
contributing the fix to gcc users on Interix.
Has anybody looked at this issue before, or is anyone aware of it?



Thanks
Mayank

-Original Message-
From: Joe Buck [mailto:[EMAIL PROTECTED]
Sent: Thursday, March 22, 2007 10:20 PM
To: Mayank Kumar
Cc: gcc@gcc.gnu.org
Subject: Re: Information regarding -fPIC support for Interix gcc

On Thu, Mar 22, 2007 at 04:22:37PM +0800, Mayank Kumar wrote:

> I am currently looking at Interix gcc and found that -fPIC-generated
> binaries crash, although not all binaries crash. This has been known for
> quite some time, since I found a lot of posts about it. I want to know
> whether this issue has already been fixed and a patch submitted, or
> whether it is still pending for Interix gcc.  I am about to start
> investigating it myself and would like to know beforehand whether
> somebody has already fixed it.

I took a look at Bugzilla, and the first thing I see is

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=15212

which indicates that the compiler hasn't even bootstrapped on Interix in a
long time, though workarounds are proposed and a patch is given against a
4.2 snapshot.  It looks like no one has treated the platform as a
priority.  It's just a matter of not having a sufficiently motivated
volunteer, or else the sufficiently motivated volunteers are not
contributing their patches to gcc.gnu.org.

So, where are you getting your Interix gcc?  Are you using an FSF release,
or is someone trying to support a fork?






Re: Information regarding -fPIC support for Interix gcc

2007-03-22 Thread Paul Brook
On Thursday 22 March 2007 20:20, Mayank Kumar wrote:
> I work for Microsoft SFU(services for unix) group and I am currently
> investigating this fPIC issue for gcc 3.3 which is available with sfu 3.5.

gcc3.3 is really quite old, and hasn't been maintained for quite some time.
You're unlikely to get a particularly useful response from this list (or any 
volunteer gcc developers) unless you're working with current gcc.

Of course there are organisations who will provide you with commercial support 
for older gcc releases. That's a separate issue though.

Paul


We're out of tree codes; now what?

2007-03-22 Thread Tarmo Pikaro
> As for what is best to do, I don't know. But I do know that complexity is 
> bad, and that GCC is very complex. You are absolutely right about there 
> being hard limits. There are trade-offs required. Whether the current and 
> ongoing trade-offs are the right ones is an open question.

I'm a complete newcomer to the gcc mailing list; I came here because I've
become interested in the compiler and what it does.  I have already run
into the problem that it is often necessary not only to produce some
object, but also to perform a reverse operation on the produced object
(I'm talking about file generators / parsers here).  I've come to realize
that a one-way transformation discards some important information that was
present in the input and that could be useful later on.

In the computer world we deal with three main levels of code:
1. the input source code (C++ code, easy for a human to write),
2. its altered representation - how the compiler sees it - which in recent
gcc is called GENERIC (run-time tree structures, easy for a program to
understand),
3. and the output (a binary executable, easy for the CPU to understand).

Gcc converts 1 into 3.  (Speaking at a very high level.)

But in order not to lose relevant/important information, I would like to
get rid of the compiler as such and "edit" a program directly in some form
of "generic" language.  Instead of editing source code, I would prefer to
edit the "application tree".  One question is what kind of editor that
would be, and when the input data would be translated into CPU assembly
instructions.  The UI also matters, since it determines how fast you can
edit or create an application.  I imagine anything from a normal text
editor up to some sort of mind-map graphical editor could be developed.

If you consider different languages - C, C++, Java - they are not that
different: the syntax varies, but you can basically create the same
application in any of them.  GENERIC tries to generalize the structures
available in all languages into a common form.  I think a common form is
good, but why on earth should we stick to the existing languages?  Take
this more common language, remove all syntax that is not commonly used,
simplify a bit, and voila - we have a completely new language that is not
bound to lexical and semantic analysis (because it is edited directly),
that can be edited much faster, and that requires minimal effort for
recompilation (no need to recompile the whole application just because you
edited one line of code).  A language whose syntax can change more easily
(since you don't have to worry about what kind of reduce/shift conflict
you run into).  A language for which you no longer need a compiler/linker
at all.

 
--
Have a nice day!
Tarmo.


 



Re: We're out of tree codes; now what?

2007-03-22 Thread Brooks Moses

Tarmo Pikaro wrote:

If you consider different languages - C, C++, Java - they are not that
different: the syntax varies, but you can basically create the same
application in any of them.  GENERIC tries to generalize the structures
available in all languages into a common form.  I think a common form is
good, but why on earth should we stick to the existing languages?  Take
this more common language, remove all syntax that is not commonly used,
simplify a bit, and voila - we have a completely new language that is not
bound to lexical and semantic analysis (because it is edited directly),
that can be edited much faster, and that requires minimal effort for
recompilation (no need to recompile the whole application just because
you edited one line of code).  A language whose syntax can change more
easily (since you don't have to worry about what kind of reduce/shift
conflict you run into).  A language for which you no longer need a
compiler/linker at all.


One advantage of most computer languages (with the arguable exception of 
C, but even it has preprocessor macros) is that they provide high-level 
constructs that make it easier to write programs.  I believe that many 
of these high-level constructs are reduced to more verbose lower-level 
constructs in some of the language front ends (I know that this is true 
in Fortran; I'm not as sure about other front ends), which means that 
programming in GENERIC will require programming at a much lower level.
I don't think the advantages you expect from editing the compiler's
representation directly will counteract that disadvantage.


But I could be wrong.

- Brooks



Re: We're out of tree codes; now what?

2007-03-22 Thread Mike Stump

On Mar 22, 2007, at 12:28 PM, Mike Stump wrote:

for a -g 16-bit code compile:
real    0m2.629s    (0.15% slower)
user    0m2.504s
sys     0m0.121s

for a -g -O2 16-bit code compile:
real    0m12.958s   (0.023% slower)
user    0m12.190s
sys     0m0.754s


Oops, both of those should say faster.


My hope is that the disable-checking numbers hold up reasonably well


Anyway, for --disable-checking, expr.c, I get:

-g 8-bit code:
real    0m0.950s
user    0m0.867s
sys     0m0.081s

-g -O2 8-bit:
real    0m3.107s
user    0m2.956s
sys     0m0.147s

-g 16-bit code:
real    0m0.957s    (0.74% slower)
user    0m0.872s
sys     0m0.083s

-g -O2 16-bit code:
real    0m3.127s    (0.64% slower)
user    0m2.974s
sys     0m0.148s

I think I want to argue for the 16-bit patch version.  I think the  
hit in compile speed is paid for by the flexibility of not having to  
ever again worry about the issue, and never having to subtype.  In  
addition, this speeds up compilation of any language that would be  
forced to use subcodes.


Also, the correctness of:

Doing diffs in tree.h.~1~:
--- tree.h.~1~  2007-03-20 19:07:00.0 -0700
+++ tree.h  2007-03-22 15:05:03.0 -0700
@@ -363,7 +363,7 @@ union tree_ann_d;
struct tree_base GTY(())
{
-  ENUM_BITFIELD(tree_code) code : 8;
+  ENUM_BITFIELD(tree_code) code : 16;
   unsigned side_effects_flag : 1;
   unsigned constant_flag : 1;
--

is more obvious than the correctness of the subcoding. Thoughts?


Re: We're out of tree codes; now what?

2007-03-22 Thread Steven Bosscher

On 3/22/07, Mike Stump <[EMAIL PROTECTED]> wrote:

is more obvious than the correctness of the subcoding. Thoughts?


I fully agree.

Gr.
Steven


Re: We're out of tree codes; now what?

2007-03-22 Thread Mark Mitchell
Steven Bosscher wrote:
> On 3/22/07, Joe Buck <[EMAIL PROTECTED]> wrote:
>> But these numbers show that subcodes don't cost *ANY* time, or the
>> cost is in the noise, unless enable-checking is on.  The difference
>> in real-time seems to be an artifact, since the user and sys times
>> are basically the same.
> 
> The subcodes cost complexity. And the cost with checking enabled is
> IMHO unacceptable.

Doug, thanks for doing all the experiments!

OK, we've got to pick our poison.

1. We can go to 9-bit codes.  They're slower, independent of checking.
Maybe we can make bitfields go faster, and get some of that back.  Of
course, if we can make bitfields go faster, GCC would probably still go
even faster with 8-bit codes than it will with 9-bit codes, since all
the other bitfield accesses in GCC will go faster.  The good news is
that this is simple, from a coding perspective, and probably uses no
more memory.

2. We can go to 16-bit codes.  They're slower than 8-bit codes, perhaps
because they use more memory, but not much -- and certainly seem less
slower than 9-bit codes.  This is just as simple, from a coding
perspective as (1).

3. We can go to subcodes.  These are no slower and use no less memory
for production compilers, but they make checking compilers unbelievably
slow.  (Frankly, I already find checking painful.  I run tests that way,
of course, but I'm always amazed how much faster release-branch
compilers go.)  They're complex.

I think I'm inclined to agree with Mike: I'm starting to prefer (2).
It's a simple solution, and pretty efficient.  Anyone want to champion
(1) or (3)?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: We're out of tree codes; now what?

2007-03-22 Thread Doug Gregor

On 3/22/07, Steven Bosscher <[EMAIL PROTECTED]> wrote:

On 3/22/07, Doug Gregor <[EMAIL PROTECTED]> wrote:
> The results, compile time:

For what test case?


All the numbers I've reported are for tramp3d, compiled with -O2
-funroll-loops -ffast-math on i686-pc-linux-gnu.


Did the 9-bit tree code include Alexandre Oliva's latest bitfield
optimization improvements patch
(http://gcc.gnu.org/ml/gcc-patches/2007-03/msg01397.html)?


It did not. Here are the 8-bit and 9-bit numbers with that patch:

--enable-bootstrap, --disable-checking, Alexandre's patch, 8-bit tree codes:

real    0m51.815s
user    0m41.282s
sys     0m0.470s

--enable-bootstrap, --disable-checking, Alexandre's patch, 9-bit tree codes:

real    0m54.627s
user    0m43.574s
sys     0m0.406s

Looks like a 1% improvement. Not bad, but not spectacular.


What about the 16-bit tree code?


I don't have these around, and I mistakenly updated my tree, so the
numbers below are, unfortunately, incomparable to the numbers above.
The disturbing fact is that mainline seems to be significantly slower
now than it was in my previous tests (from just a few days ago), and
the slowdown (20%) is much greater than any of the slowdowns we've
been discussing in this thread. Has anyone else noticed this, or
perhaps it's something in my environment?


Anyway, today's results for tramp3d, run a couple times until the
numbers stabilized:

8-bit tree code, --enable-bootstrap, --disable-checking:

real    0m50.445s
user    0m49.648s   (baseline)
sys     0m0.481s

9-bit tree code, --enable-bootstrap, --disable-checking:

real    0m53.550s
user    0m52.645s   (6% slower)
sys     0m0.477s

9-bit tree code, --enable-bootstrap, --disable-checking, with
Alexandre's latest patch:

real    0m52.787s
user    0m52.304s   (5% slower)
sys     0m0.464s

16-bit tree code, --enable-bootstrap, --disable-checking:

real    0m50.965s
user    0m50.315s   (1% slower)
sys     0m0.477s

 Cheers,
 Doug


Re: We're out of tree codes; now what?

2007-03-22 Thread Doug Gregor

On 3/22/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:

Doug, thanks for doing all the experiments!

OK, we've got to pick our poison.

1. We can go to 9-bit codes.  They're slower, independent of checking.
Maybe we can make bitfields go faster, and get some of that back.  Of
course, if we can make bitfields go faster, GCC would probably still go
even faster with 8-bit codes than it will with 9-bit codes, since all
the other bitfield accesses in GCC will go faster.  The good news is
that this is simple, from a coding perspective, and probably uses no
more memory.

2. We can go to 16-bit codes.  They're slower than 8-bit codes, perhaps
because they use more memory, but not much -- and certainly seem less
slower than 9-bit codes.  This is just as simple, from a coding
perspective as (1).

3. We can go to subcodes.  These are no slower and use no less memory
for production compilers, but they make checking compilers unbelievably
slow.  (Frankly, I already find checking painful.  I run tests that way,
of course, but I'm always amazed how much faster release-branch
compilers go.)  They're complex.

I think I'm inclined to agree with Mike: I'm starting to prefer (2).
It's a simple solution, and pretty efficient.


I, too, prefer solution (2).

I'd like to tweak it in two ways:

 (a) We should put a comment in struct tree_base like the following:

  /* There are 24 remaining padding bits in this structure.  DO NOT USE
     THESE BITS.  When we are able to remove 8 more bits, the size of
     all tree nodes will shrink by one word, improving memory usage
     by ~4%.  */

 (b) We should accept the part of the 9-bit code patch that removes
lang_flag_5 from tree_base, and try to encourage movement away from
using the lang_flag_x bits in tree_base. One day, we'll kill all of
those bits and get our word back in tree_base, just like the comment
above says.

 Cheers,
 Doug


Re: We're out of tree codes; now what?

2007-03-22 Thread Richard Kenner
> 1. We can go to 9-bit codes.  They're slower, independent of checking.
> Maybe we can make bitfields go faster, and get some of that back.  

I think it worth understanding why this is.  One or even two instructions
should be lost in the noise on a modern machine when they are always
accompanied by a load.


Re: We're out of tree codes; now what?

2007-03-22 Thread Richard Henderson
On Thu, Mar 22, 2007 at 03:58:43PM -0700, Mike Stump wrote:
> Also, the correctness of: ...
> is more obvious than the correctness of the subcoding. Thoughts?

Totally agreed.


r~


Re: We're out of tree codes; now what?

2007-03-22 Thread Paul Brook
On Thursday 22 March 2007 23:24, Richard Kenner wrote:
> > 1. We can go to 9-bit codes.  They're slower, independent of checking.
> > Maybe we can make bitfields go faster, and get some of that back.
>
> I think it worth understanding why this is.  One or even two instructions
> should be lost in the noise on a modern machine when they are always
> accompanied by a load.

If nothing else you've got bigger code (increased icache pressure).

Paul


Re: why not use setjmp/longjmp within gcc?

2007-03-22 Thread Jim Wilson

Basile STARYNKEVITCH wrote:

It has been quite standard for a long time, and I don't understand why it
should be avoided (as some old ChangeLog entries suggest).


Which old ChangeLog?  What exactly does it say?  We can't help you if we 
don't know what you are talking about.


There used to be setjmp calls in cse.c and fold-const.c.  This was in 
the FP constant folding code.  We would call setjmp, then install a 
signal handler that called longjmp, in case an FP instruction generated 
a signal, so we could recover gracefully and continue compiling.  But 
nowadays we use software emulated FP for folding operations, even when 
native, so we no longer have to worry about getting FP signals here, and 
the setjmp calls are gone.
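
For anyone curious, the old pattern looked roughly like this (a minimal
sketch of the idea, not the actual cse.c/fold-const.c code):

  #include <setjmp.h>
  #include <signal.h>

  static jmp_buf fold_env;

  static void
  fp_trap_handler (int sig)
  {
    longjmp (fold_env, 1);              /* abandon the folding attempt */
  }

  static int
  try_fold_div (double x, double y, double *result)
  {
    void (*old_handler) (int) = signal (SIGFPE, fp_trap_handler);
    if (setjmp (fold_env))
      {
        /* The host FP operation trapped; skip folding this expression.  */
        signal (SIGFPE, old_handler);
        return 0;
      }
    *result = x / y;                    /* may raise SIGFPE on some hosts */
    signal (SIGFPE, old_handler);
    return 1;
  }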

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: Adding Profiling support - GCC 4.1.1

2007-03-22 Thread Jim Wilson

Rohit Arul Raj wrote:

1. The function mcount: While building with native gcc, the mcount
function is defined in glibc.  Is the same mcount function available in
newlib, or do we have to define it in our back end, as SPARC does
(gmon-sol2.c)?


Did you try looking at newlib?  Try something like this
  find . -type f | xargs grep mcount
That will show you all of the mcount support in newlib/libgloss.

sparc-solaris is a special case.  Early versions of Solaris shipped 
without the necessary support files.  (Maybe it still does?  I don't 
know, and don't care to check.)  I think that they were part of the
add-on extra-cost compiler.  This meant that people using gcc only were 
not able to use profiling unless gcc provided the mcount library. 
Otherwise it never would have been put here.  mcount belongs in the C 
library.



2. Is it possible to reuse the existing mcount definition or is it
customized for every backend?


It must be customized for every backend.


3. Any other existing back-ends that support profiling.


Pretty much all targets do, at least ones for operating systems.  It is 
much harder to make mcount work for an embedded target with no file system.


If you want to learn how mcount works, just pick any existing target 
with mcount support, and study it.
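
For orientation, the shape of the hook is roughly this (a hypothetical
sketch, not taken from newlib or glibc; real implementations are
target-specific and usually written in assembly so they can preserve the
caller's registers):

  /* With -pg, the compiler arranges a call to the profiling hook in every
     instrumented function's prologue.  The hook records the
     (caller, callee) arc so gprof can build its call graph.  */
  void
  _mcount (void *from_pc, void *self_pc)
  {
    /* Look up (or allocate) the counter for the arc from_pc -> self_pc in
       the profiling buffers set up at program startup, and increment it.  */
  }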

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: We're out of tree codes; now what?

2007-03-22 Thread Richard Kenner
> If nothing else you've got bigger code (increased icache pressure).

Yeah, but a 3% performance hit due to that?  It's hard to argue with
measurements, but something sounds fishy to me...


GCC 4.2.0 Status Report (2007-03-22)

2007-03-22 Thread Mark Mitchell
There are still a number of GCC 4.2.0 P1s, including the following which
are new in GCC 4.2.0 (i.e., did not occur in GCC 4.1.x), together with
-- as near as I can tell, based on Bugzilla -- the responsible parties.

PR 29585 (Novillo): ICE-on-valid
PR 30700 (Sayle): Incorrect constant generation
PR 31136 : Bitfield code-generation bug
PR 31187 (Merrill): C++ declaration incorrectly considered to have
internal linkage
PR 31273 (Mitchell): C++ bitfield conversion problem

Diego, Roger, Jason, would you please let me know if you can work on the
issues above?  I'm going to try to test Jim's patch for PR 31273 tonight.

Joseph, would you please take a look at PR 31136?  Andrew believes this
to be a front-end bug.

My hope is that we can fix these PRs, and then declare victory on the
GCC 4.2.0 release.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.2.0 Status Report (2007-03-22)

2007-03-22 Thread Mark Mitchell
Mark Mitchell wrote:
> There are still a number of GCC 4.2.0 P1s, including the following which
> are new in GCC 4.2.0 (i.e., did not occur in GCC 4.1.x), together with
> -- as near as I can tell, based on Bugzilla -- the responsible parties.
> 
> PR 29585 (Novillo): ICE-on-valid
> PR 30700 (Sayle): Incorrect constant generation

Sorry, that's PR 30704.

PR 30700 is also critical:

PR 30700 (Guenther): Incorrect undefined reference

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.2.0 Status Report (2007-03-22)

2007-03-22 Thread Joseph S. Myers
On Thu, 22 Mar 2007, Mark Mitchell wrote:

> Joseph, would you please take a look at PR 31136?  Andrew believes this
> to be a front-end bug.

I don't think this is a front-end bug.

I applied the patch below to make the dumps give more meaningful
information than <unnamed type>.  The format of the output is the same as
used by the C pretty-printer.  (OK to commit this patch to mainline
subject to the usual testing?)  With it, the .gimple dump is:

main ()
{
  <unnamed-unsigned:6> D.1530;
  <unnamed-unsigned:4> D.1531;
  <unnamed-unsigned:4> D.1532;
  <unnamed-unsigned:6> D.1533;
  int D.1534;
  short unsigned int D.1535;
  short unsigned int D.1536;

  s.b6 = 31;
  D.1530 = s.b6;
  D.1531 = (<unnamed-unsigned:4>) D.1530;
  s.b4 = D.1531;
  D.1532 = s.b4;
  D.1533 = (<unnamed-unsigned:6>) D.1532;
  s.b6 = D.1533;
  D.1535 = BIT_FIELD_REF <s, 16, 0>;
  D.1536 = D.1535 & 1008;
  D.1534 = D.1536 != 240;
  return D.1534;
}

As far as I can see, this has all the required conversions.  The 
conversions seem correct as of the .ccp dump, so I think the problem is in 
the FRE pass as identified in the original bug report.

Specifically, I blame use of STRIP_NOPS or STRIP_SIGN_NOPS somewhere in 
the optimizers, removing a conversion that preserves the mode when such 
conversion is necessary to truncate to a narrower bit-field type.  If I 
make those macros require the precision to be unchanged then the testcase 
passes.  But such a change clearly needs more testing, and it should 
probably be more conservative (conversions to a wider precision should be 
OK to remove as well as those to the same precision, except when they 
convert signed to unsigned).  If this is the cause, the problem is 
probably present in 4.1 as well, though whether it can be triggered 
depends on how much the tree optimizers do with conversions to bit-field 
types (which in normal code only arise through stores to bit-fields).
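
For illustration, a testcase of the shape the dump above suggests (this is
my reconstruction from the constants in the dump, not the actual testcase
attached to PR 31136):

  struct S { unsigned b4 : 4; unsigned b6 : 6; } s;

  int
  main (void)
  {
    s.b6 = 31;          /* b6 holds 31 */
    s.b4 = s.b6;        /* the store must truncate to 4 bits: b4 holds 15 */
    s.b6 = s.b4;        /* b6 holds 15 */
    return s.b6 != 15;  /* should return 0; dropping the truncating
                           conversion makes it return 1 */
  }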

Note, I haven't tested at all for C++, and the bug is described as for 
both C and C++.

Index: tree-pretty-print.c
===================================================================
--- tree-pretty-print.c (revision 123147)
+++ tree-pretty-print.c (working copy)
@@ -539,6 +539,14 @@
     dump_generic_node (buffer, TREE_TYPE (node),
                        spc, flags, false);
   }
+  else if (TREE_CODE (node) == INTEGER_TYPE)
+    {
+      pp_string (buffer, (TYPE_UNSIGNED (node)
+                          ? "<unnamed-unsigned:"
+                          : "<unnamed-signed:"));
+      pp_decimal_int (buffer, TYPE_PRECISION (node));
+      pp_string (buffer, ">");
+    }
   else
     pp_string (buffer, "<unnamed type>");


-- 
Joseph S. Myers
[EMAIL PROTECTED]


gcj install failed

2007-03-22 Thread Annapoorna R

Hi,

I am trying to install GCJ 4.1.2 on my SunOS machine.

Steps I followed:

1. Downloaded the GCC 4.1.2 core and java tarballs from the GNU site and
extracted them.


After extraction, the folder gcc-4.1.2 is created automatically.


The front-end part (the java tarball) was extracted to gcc-4.1.2/libjava.

Ran ./configure from the libjava folder -- successful.

Ran make from libjava -- it gives compilation errors.

Please let me know whether I went wrong in the steps I followed.

--
Regards,
Annapoorna.R






Re: gcj install failed

2007-03-22 Thread Brooks Moses

Annapoorna R wrote:

Steps I followed:

1. Downloaded the GCC 4.1.2 core and java tarballs from the GNU site and
extracted them.


After extraction, the folder gcc-4.1.2 is created automatically.


The front-end part (the java tarball) was extracted to gcc-4.1.2/libjava.

Ran ./configure from the libjava folder -- successful.

Ran make from libjava -- it gives compilation errors.

Please let me know whether I went wrong in the steps I followed.


This process is, indeed, incorrect.  There are four errors.

1.) The appropriate mailing list for asking for help using GCC is
[EMAIL PROTECTED].  The gcc@gcc.gnu.org list, which you sent this to,
is for people who are developing GCC.


2.) You need to build the entire GCC compiler, not just the Java runtime 
library (which is what "libjava" is).  Thus, the relevant directory for 
running "configure" from is the top-level folder -- in your case, gcc-4.1.2.


3.) For a number of reasons, it is not recommended to build GCC within 
the source directory.  Instead, you should create an empty build 
directory, and then run gcc-4.1.2/configure from within that empty build 
directory, and then run "make" in the build directory.


4.) In order to build GCJ, you need to specify the
"--enable-languages=java" option to the configure command.  There are
other options you may wish to specify as well (see the example commands
after this list).
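
A minimal example of that sequence (the prefix path and the language list
here are only illustrative; adjust them for your setup):

  mkdir gcc-build
  cd gcc-build
  ../gcc-4.1.2/configure --prefix=/usr/local/gcc-4.1.2 \
      --enable-languages=c,c++,java
  make
  make install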


All of this is explained in more detail in the GCC installation manual, 
which is online at http://gcc.gnu.org/install/, and is also included in 
the INSTALL subdirectory -- as is explained in the README file which you 
will find at gcc-4.1.2/README on your computer.


- Brooks