Re: Inlining and estimate_num_insns

2005-03-01 Thread Richard Guenther
On Tue, 1 Mar 2005 02:03:57 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:

> OK, I will put this higher in the TODO list (but this is not 4.0
> either).  What was those less unrealistic tests?  I remember seeing node
> removal in profiles, but edge removal comes first here.  Looks like I
> finally recovered everything from disc crash so will have time to look
> into this in the rest of the week...

I see those functions high up in the profiles with tramp3d and leafify turned
on, too.  I remember bugging you about this a few months ago...

http://gcc.gnu.org/ml/gcc/2004-11/msg00737.html

Richard.


Re: GIV optimizations

2005-03-01 Thread Zdenek Dvorak
Hello,

> Giv optimizations are just features which not 
> implemented yet in the new loop unroller, so I think 
> put it in bugzilla is not appropriate.

it most definitely is appropriate.  This is a performance
regression.  Even if it were not, feature requests
can be filed in Bugzilla.

The best of course would be if you could create a small testcase
demonstrating what you would like the compiler to achieve.

Zdenek


at web: /install/specific.html

2005-03-01 Thread Alec Voropay
Hi!

 It seems the local link on the GCC web page
http://gcc.gnu.org/install/specific.html
does not work due to malformed HTML.

--
-=AV=-


Re: GCC 4.1 Projects

2005-03-01 Thread Richard Earnshaw
On Mon, 2005-02-28 at 00:05, Zdenek Dvorak wrote:

> at the beginning of the stage 1, there always is lot of major changes
> queued up.  It never lead to unmanageable amount of "breakage and
> disruption".

Then you clearly haven't tried to maintain a port other than x86-* or
*-linux.

The fact is that the vast majority of changes are only being tested on
these platforms and get very little exposure elsewhere.  If one change
goes into the trunk and breaks things that's manageable.  If two get in
at the same time then the nightmare starts to unfold.

Merging too many things at a time makes tracking down the root cause of
the problem a nightmare.  How do you bootstrap the latest version to
check that a fix has worked when something else has been broken in the
meantime?

It takes time to test things properly and some platforms are slower at
doing this than others.  There are maintained architectures where a full
bootstrap/test sequence takes at least 48 hours.  So introducing changes
more rapidly than this is going to lead to problems sometime, and the
bigger the change the more likely the problem.

R.


Re: [arm] possible bug in G++ 3.4.x

2005-03-01 Thread Richard Earnshaw
On Mon, 2005-02-28 at 12:51, Vladimir Ivanov wrote:
> Hello all,
> 
> While compiling this:
>   http://sourceforge.net/projects/raytracer/
> 
> I think I've spotted a bug in ARM port of G++.
> 
> The problem is that many method functions tend to save all callee-saved FP 
> registers, while they use few or none of them.
> 
> Here's a small snippet from "base3d.cpp" file:
> 
> 0528 <_ZN6Base3d8rotateV1Ed>:
>   528:   e1a0c00d    mov     ip, sp
>   52c:   e92ddff0    stmdb   sp!, {r4, r5, r6, r7, r8, r9, sl, fp, ip, lr, pc}
>   530:   ed2d420c    sfm     f4, 4, [sp, #-48]!
>   534:   e24cb004    sub     fp, ip, #4      ; 0x4
>   538:   e24dd0cc    sub     sp, sp, #204    ; 0xcc
>   53c:   e59f3288    ldr     r3, [pc, #648]  ; 7cc <.text+0x7cc>
>   ...
> 
> Function uses only F0 register, although F4-F7 are saved/restored. This 
> leads to great speed penalty, especially when coprocessors like Crunch 
> have many registers.
> 
> PowerPC port shows no such problem, so I think it's something in ARM port.
> 
> Sorry I cannot provide small enough example, C++ is not my area of 
> expertise.

[I tried to reply to this yesterday, but the response has failed to show
up here.  I suspect it was mailer problems at my end, but if you get
this twice I apologise.]

I think this is most likely a consequence of SJLJ exceptions.  It should
be fixed when we move to the EABI unwinding tables.

R.


Re: [arm] possible bug in G++ 3.4.x

2005-03-01 Thread Vladimir Ivanov
Hello,
[Richard]:
Does this mean that GCC-3.4.x won't be fixed?
[Paul]:
Is this problem present also in CSL-3.4.x branch?
Best regards,
  -- Vladimir
On Tue, 1 Mar 2005, Richard Earnshaw wrote:
> On Mon, 2005-02-28 at 12:51, Vladimir Ivanov wrote:
> > Hello all,
> > While compiling this:
> > http://sourceforge.net/projects/raytracer/
> > I think I've spotted a bug in ARM port of G++.
> > The problem is that many method functions tend to save all callee-saved FP
> > registers, while they use few or none of them.
> > Here's a small snippet from "base3d.cpp" file:
> > 0528 <_ZN6Base3d8rotateV1Ed>:
> >   528:   e1a0c00d    mov     ip, sp
> >   52c:   e92ddff0    stmdb   sp!, {r4, r5, r6, r7, r8, r9, sl, fp, ip, lr, pc}
> >   530:   ed2d420c    sfm     f4, 4, [sp, #-48]!
> >   534:   e24cb004    sub     fp, ip, #4      ; 0x4
> >   538:   e24dd0cc    sub     sp, sp, #204    ; 0xcc
> >   53c:   e59f3288    ldr     r3, [pc, #648]  ; 7cc <.text+0x7cc>
> >   ...
> > Function uses only F0 register, although F4-F7 are saved/restored. This
> > leads to great speed penalty, especially when coprocessors like Crunch
> > have many registers.
> > PowerPC port shows no such problem, so I think it's something in ARM port.
> > Sorry I cannot provide small enough example, C++ is not my area of
> > expertise.
>
> [I tried to reply to this yesterday, but the response has failed to show
> up here.  I suspect it was mailer problems at my end, but if you get
> this twice I apologise.]
>
> I think this is most likely a consequence of SJLJ exceptions.  It should
> be fixed when we move to the EABI unwinding tables.
>
> R.


Re: [arm] possible bug in G++ 3.4.x

2005-03-01 Thread Richard Earnshaw
On Tue, 2005-03-01 at 10:54, Vladimir Ivanov wrote:
> Hello,
> 
> [Richard]:
>   Does this mean that GCC-3.4.x won't be fixed?
> 

Most certainly it won't be.  3.4 is in regression-fix only mode and this
is not a regression.

R.



Different sized data and code pointers

2005-03-01 Thread Thomas Gill
Hi all.
I'm working on a GCC backend for a small embedded processor. We've got a
 Harvard architecture with 16 bit data addresses and 24 bit code
addresses. How well does GCC support having different sized pointers for
this sort of thing? The macros POINTER_SIZE and Pmode seem to suggest 
that there's one pointer size for everything.

The backend that I've inherited gets most of the way with some really
horrible hacks, but it would be nice if those hacks weren't necessary. 
In any case, the hacks don't cope with casting function pointers to 
integers.

Thanks,
Ned.


Re: Inlining and estimate_num_insns

2005-03-01 Thread Jan Hubicka
> On Tue, 1 Mar 2005 02:03:57 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> 
> > OK, I will put this higher in the TODO list (but this is not 4.0
> > either).  What was those less unrealistic tests?  I remember seeing node
> > removal in profiles, but edge removal comes first here.  Looks like I
> > finally recovered everything from disc crash so will have time to look
> > into this in the rest of the week...
> 
> I see those functions high up in the profiles with tramp3d and leafify turned
> on, too.  I remember bugging you about this a few month ago...
> 
> http://gcc.gnu.org/ml/gcc/2004-11/msg00737.html

OK, thanks, this time I will remember it ;))  I believe that when I
oprofiled it, remove_edge had dropped out of the profile, but I have to
re-try.

Honza
> 
> Richard.


Re: Inlining and estimate_num_insns

2005-03-01 Thread Richard Guenther
On Tue, 1 Mar 2005 13:30:41 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> > On Tue, 1 Mar 2005 02:03:57 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> >
> > > OK, I will put this higher in the TODO list (but this is not 4.0
> > > either).  What was those less unrealistic tests?  I remember seeing node
> > > removal in profiles, but edge removal comes first here.  Looks like I
> > > finally recovered everything from disc crash so will have time to look
> > > into this in the rest of the week...
> >
> > I see those functions high up in the profiles with tramp3d and leafify 
> > turned
> > on, too.  I remember bugging you about this a few month ago...
> >
> > http://gcc.gnu.org/ml/gcc/2004-11/msg00737.html
> 
> OK, thanks, this time I will remember this ;))  I believe that I
> oprofiled it that shot remove_edge down from profile, but I have to
> re-try.

Try this; a bootstrap on i686-pc-linux-gnu is in progress.

Richard.


cgraph
Description: Binary data


Re: Inlining and estimate_num_insns

2005-03-01 Thread Steven Bosscher
On Mar 01, 2005 01:35 PM, Richard Guenther <[EMAIL PROTECTED]> wrote:
> Try this,. bootstrapped on i686-pc-linux-gnu in progress.

If this works, maybe we should consider this for 4.0 (as a compiler
speedup), but for 4.1 we should really look into VECs instead of
doubly linked lists.
If your patch works, I'll try it for 8361 with -fobey-inline too ;-)

Gr.
Steven




Re: Inlining and estimate_num_insns

2005-03-01 Thread Jan Hubicka
> On Tue, 1 Mar 2005 13:30:41 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> > > On Tue, 1 Mar 2005 02:03:57 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> > >
> > > > OK, I will put this higher in the TODO list (but this is not 4.0
> > > > either).  What was those less unrealistic tests?  I remember seeing node
> > > > removal in profiles, but edge removal comes first here.  Looks like I
> > > > finally recovered everything from disc crash so will have time to look
> > > > into this in the rest of the week...
> > >
> > > I see those functions high up in the profiles with tramp3d and leafify 
> > > turned
> > > on, too.  I remember bugging you about this a few month ago...
> > >
> > > http://gcc.gnu.org/ml/gcc/2004-11/msg00737.html
> > 
> > OK, thanks, this time I will remember this ;))  I believe that I
> > oprofiled it that shot remove_edge down from profile, but I have to
> > re-try.
> 
> Try this,. bootstrapped on i686-pc-linux-gnu in progress.

Looks nice; you might also consider turning the next_clone list into a
doubly linked list to speed up remove_node somewhat.  Not sure how much that
one counts.  Can you post --disable-checking benchmarks on your testcase
with leafify?

Honza
> 
> Richard.




Re: Inlining and estimate_num_insns

2005-03-01 Thread Richard Guenther
On Tue, 1 Mar 2005 13:41:27 +0100 (CET), Steven Bosscher
<[EMAIL PROTECTED]> wrote:
> On Mar 01, 2005 01:35 PM, Richard Guenther <[EMAIL PROTECTED]> wrote:
> > Try this,. bootstrapped on i686-pc-linux-gnu in progress.
> 
> If this works, maybe we should consider this for 4.0 (as a compiler
> speedup), but for 4.1 we should really look into VECs instead of
> doubly linked lists.
> If your patch works, I'll try it for 8361 with -fobey-inline too ;-)

Bootstrapped ok.

tramp3d-v3 with -O2 and new size estimate:
 integration   :  16.95 (12%) usr   0.21 ( 4%) sys  17.90 (12%) wall
 TOTAL : 138.06 5.37   144.96

above with the cgraph doubly-linked-list patch:
 integration   :  12.88 (10%) usr   0.17 ( 3%) sys  13.41 (10%) wall
 TOTAL : 128.46 5.17   136.07

That's 7% faster.

In addition to the patch, we can replace the commonly used

  while (node->callees)
    cgraph_remove_edge (node->callees);

with

  cgraph_node_remove_callees (node);

I'll re-bootstrap with that and see if it even helps more.


Pascal front-end integration

2005-03-01 Thread James A. Morrison
Hi,

  I've decided I'm going to take the time to clean up and update the
Pascal frontend for GCC and try to get it integrated into the upstream source.
I'm doing this because I would like to see GPC work with GCC 4+.
I don't care at all about supporting GPC on anything less than GCC 4.1, so
I've started by ripping out as much obvious compatibility code as I can and
removing any traces of GPC being a separate project.

  So far I have only accomplished converting lang-options.h to lang.opt.
I'm going to continue cleaning up the GPC code; then, once I am happy with
how the code looks with respect to the rest of the GCC code, I'm going to
get it to compile with the current version of GCC mainline.  I'm starting
with the boring, conflict-happy whitespace changes first so the code is
easier for me to read and so that I can try to get an idea of what the GPC
frontend is doing.

  My current changes are available through bazaar (an arch implementation) which
people can get with:
 baz register-archive http://www.csclub.uwaterloo.ca/~ja2morri/arch
 baz get [EMAIL PROTECTED]/gcc-pascal--mainline--0.3

 Once I get the front-end to compile I'll look into creating a branch on
gcc.gnu.org, but until that point it really isn't worthwhile.  Patches are
welcome, but please have your copyright assignment filed with the FSF and
add a changelog entry to ChangeLog.gcc.

 I don't have any timeline for this work.

-- 
Thanks,
Jim
http://phython.blogspot.com


RE: Plan for cleaning up the "Addressing Modes" macros

2005-03-01 Thread Dave Korn
Original Message
>From: Zack Weinberg
>Sent: 28 February 2005 20:48

> I think I've addressed all the points you bring up in responses to
> other people.  If I missed something, please let me know?
> 
> zw

  Nope, everything seems sound to me.  This is a worthy bit of tidying up,
good luck and thanks!  :)


cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Pascal front-end integration

2005-03-01 Thread Steven Bosscher
On Mar 01, 2005 02:17 PM, James A. Morrison <[EMAIL PROTECTED]> wrote:

> Hi,
> 
>   I've decided I'm going to try to take the time and cleanup and update the
> Pascal frontend for gcc and try it get it integrated into the upstream source.

Since the GNU Pascal maintainers have publicly stated they do not
want to be integrated, I suppose you are doing this without their
support?

Gr.
Steven



Question about GTY machinery (cgraph_edge)

2005-03-01 Thread Richard Guenther
Hi!

struct cgraph_edge is currently a member of two lists, i.e.
it contains two "next" pointers, but is annotated like

struct cgraph_edge GTY((chain_next ("%h.next_caller")))
{
  struct cgraph_node *caller;
  struct cgraph_node *callee;
  struct cgraph_edge *next_caller;
  struct cgraph_edge *next_callee;
  tree call_expr;
  PTR GTY ((skip (""))) aux;
  /* When NULL, inline this call.  When non-NULL, points to the
     explanation why function was not inlined.  */
  const char *inline_failed;
};

Is it possible and beneficial to have both next pointers
annotated with chain_next?  The internals documentation doesn't
say anything about this.  For optimization purposes, the
cgraph_node where the edges are hanging from has:

struct cgraph_node GTY((chain_next ("%h.next"), chain_prev
("%h.previous")))
{
  tree decl;
  struct cgraph_edge *callees;
  struct cgraph_edge *callers;
  struct cgraph_node *next;
...

i.e. callees first.  Would it then be beneficial to chain
the callee list in the edge structure, to not have the marker
recursively go down this list first?

Thanks,
Richard.

--
Richard Guenther 
WWW: http://www.tat.physik.uni-tuebingen.de/~rguenth/



Re: Pascal front-end integration

2005-03-01 Thread Steven Bosscher
Oh, and please do not include [EMAIL PROTECTED] in the CC, because that
is not a public list so reply-to-all messages bounce.

Gr.
Steven



Re: Question about GTY machinery (cgraph_edge)

2005-03-01 Thread Steven Bosscher
On Mar 01, 2005 02:28 PM, Richard Guenther <[EMAIL PROTECTED]> wrote:
> Is it possible and beneficial to have both next pointers
> annotated with chain_next?

Unfortunately it is not.  There are other places where this would
be useful, but gengtype does not support this at the moment.

Gr.
Steven



Re: [arm] possible bug in G++ 3.4.x

2005-03-01 Thread Paul Brook
> [Paul]:
>   Is this problem present also in CSL-3.4.x branch?

That depends on which target you are using.  If Richard's analysis is correct, this
is an ABI limitation rather than a compiler problem.

The "old" arm-none-elf and arm-linux targets still use SJLJ exceptions. They 
will probably never be "fixed" as this would involve an ABI change.

The newer eabi based arm-none-eabi and arm-linux-gnueabi targets use unwinding 
tables. They should not have this problem.

Paul


Re: Inlining and estimate_num_insns

2005-03-01 Thread Richard Guenther
On Tue, 1 Mar 2005 13:46:22 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:

> Looks nice, you might also consider turing next_clone list into doubly
> linked list to speedup remove_node somewhat.  Not sure how much that
> one counts.

I'd like to see profiles after the patch first - changing that one would only
help cgraph_remove_node, and we would still have a list walk of the same
complexity to check whether we still need the function body (solvable with
the introduction of another counter, though).  The edge stuff really was sort
of O(N^2); this one is clearly linear.

> Can you post --disable-checking benchmarks on your testcase
> with leafify?

I'll try to do some more benchmarking, but for this I need to move over the
patch to my custom tree.  Will do so and submit the patch for 4.0/4.1.

Richard.


Re: Pascal front-end integration

2005-03-01 Thread Prof A Olowofoyeku (The African Chief)
On 1 Mar 2005 at 8:17, James A. Morrison  wrote:

> Hi,
> 
>   I've decided I'm going to try to take the time and cleanup and update
>   the
> Pascal frontend for gcc and try it get it integrated into the upstream
> source. I'm doing this because I wouldn't like to see GPC work with GCC
> 4+. I don't care at all at supporting GPC on anything less than GCC 4.1
> so I've started by ripping out as much obviously compatibility code as I
> can and removing any traces of GPC being a separate project.
[...]

Instead of starting a totally separate project, wouldn't it be better 
to coordinate your efforts with the GPC development team?

Best regards, The Chief

Prof. Abimbola A. Olowofoyeku (The African Chief) 
web:  http://www.greatchief.plus.com/



Re: Pascal front-end integration

2005-03-01 Thread James A. Morrison

Steven Bosscher <[EMAIL PROTECTED]> writes:

> On Mar 01, 2005 02:17 PM, James A. Morrison <[EMAIL PROTECTED]> wrote:
> 
> > Hi,
> > 
> >   I've decided I'm going to try to take the time and cleanup and update the
> > Pascal frontend for gcc and try it get it integrated into the upstream 
> > source.
> 
> Since the GNU Pascal maintainers have publicly stated they do not
> want to be integrated, I suppose you are doing this without their
> support?
> 
> Gr.
> Steven

 I made this post, with my changes posted, to see if I would get any support.

-- 
Thanks,
Jim

http://www.student.cs.uwaterloo.ca/~ja2morri/
http://phython.blogspot.com
http://open.nit.ca/wiki/?page=jim


Re: Different sized data and code pointers

2005-03-01 Thread E. Weddington
Thomas Gill wrote:
> Hi all.
> I'm working on a GCC backend for a small embedded processor. We've got a
> Harvard architecture with 16 bit data addresses and 24 bit code
> addresses. How well does GCC support having different sized pointers for
> this sort of thing? The macros POINTER_SIZE and Pmode seem to suggest
> that there's one pointer size for everything.
>
> The backend that I've inherited gets most of the way with some really
> horrible hacks, but it would be nice if those hacks weren't necessary.
> In any case, the hacks don't cope with casting function pointers to
> integers.

I too would be interested in different size pointer support as this may 
be needed to further improve the AVR port (add new bigger devices).

Eric


Re: Inlining and estimate_num_insns

2005-03-01 Thread Jan Hubicka
Hi,
so after reading the whole discussion, here are some of my thoughts,
combined into one sanity-check mail to reduce inbox pollution ;)

Concerning Richard's cost tweaks:

There is a longer story about why I like it ;)
I originally considered further tweaking of the cost function a mostly lost
cause, as the representation of the program at that point is way too close
to the source level to estimate its cost properly.  This got even a bit
worse in 4.0 with the gimplifier introducing not very predictable noise.

Instead I went with the plan of optimizing functions early, which ought to
give better estimates.  It seems to me that we need to know both the code
size and the expected time consumed by a function to have a chance of
predicting the benefits in some way.  With tree-profiling and some of my
local patches that I hope to sort out soonish I am mostly there, and I did
some limited benchmarking.  Overall the early optimization seems to do a
good job for SPEC (over 1% speedup in whole-program mode is more than I
expected), but it does almost nothing for the C++ testcases (about 10%
speedup for POOMA and about 0 for Gerald's application).  I believe the
reason is that the C++ testcases consist of little functions that are
unoptimizable by themselves, so the context is not big enough.

In parallel with Richard's efforts, I thought that the problem there is
indeed with the "abstraction functions", i.e. functions just accepting
arguments and calling another function or returning some field.  There is
an extremely high number of those (from early profiling one can see that
for every operation executed in the resulting program, there are hundreds
of function calls eliminated by inlining).  Clearly, with any inlining
limits, if the cost function assigns a non-zero cost to such forwarders,
we are going to have a difficult time finding thresholds.

I planned to write pattern matching for these functions to bump them to
0 cost, but Richard's patch looks like a pretty interesting idea.
His results with the limits set to 50 show that he indeed managed to make
those forwarders very cheap, so I believe this idea might indeed work
well, with some additional tweaking.

The only thing I am afraid of is that the number of inlines will no
longer be a linear function of the code size estimate increase, which is
limited to a linear fraction of the whole unit.  However, only "forwarders"
having at most one call come out free, so this is still dominated by the
longest path in the callgraph consisting of these in the program.
Unfortunately this can be high, and we can produce _a lot_ of garbage
inlining these.

One trick that I came across is to do two-stage inlining: first inline
just the functions whose growth estimates are <= 0 in a bottom-up approach,
do early optimizations to reduce garbage, and then do the "real inlining
job".  This way we might throttle the amount of garbage produced by the
inliner and get more realistic estimates of the function bodies, but I am
not at all sure about this.  It would definitely also help profiling
performance on tramp3d.

Concerning -fobey-inline:

I really doubt this is going to help C++ programmers.  I think it might
be useful for the kernel, and I can make a slightly cleaner implementation
(without changes in the frontends) if there is a really good use for it.
Can someone point me to an existing codebase where -fobey-inline brings
considerable improvements over the default inlining heuristics?  I've seen
a lot of arguing in this direction, but never an actual bigger application
that needs it.

It might also be possible to strengthen the effect the "inline" keyword
has on the heuristics: either multiply the priority by 2 for functions
declared inline so those candidates get in first, or do two-stage
inlining, first for inline functions and then for auto-inlining.  But this
is probably not going to help those folks complaining mostly about -O2
ignoring inline, right?

Concerning multiple heuristics:

I really don't like this idea ;)  I still think we can make the heuristics
adapt to the programming style they are fed, precisely because programs
often consist of multiple such styles.

Concerning compilation time/speed tradeoffs:

Since the whole task of the inliner is to slow down the compiler in order
to improve the resulting code, it is difficult to blame it for doing its
job.  While I was in an easy position with the original heuristics, where
the pre-cgraph code produced so much inlining that it was easy to speed up
both, now we obviously do too little inlining, so we need to expect some
slowdowns.  I would call the heuristics a success if they result in faster
and smaller code; compilation time is kind of secondary.  However,
definitely for code bases (like SPEC) where extra inlining doesn't help,
we should not slow down seriously (over 1%, I guess).

Concerning growth limits:

If you take a look at when the -finline-unit-growth limit hits, it is clear
that it hits very often on small units (several times in the kernel, the
testsuite and such) just because there is tiny space to maneuver.  It
hits almost never on medium units (in GCC bootstrap it hits almost
never) and almost always on big units.

My intuition always has been that for larger units the limits should be
much smaller, and POOMA was the major counterexample.  If we succeed in
solving this, I would guess we can introduce something like a
small-unit-insns limit and limit the size of units that exceed it.  Does
this sound sane?

Re: [arm] possible bug in G++ 3.4.x

2005-03-01 Thread Petko Manolov
On Tue, 1 Mar 2005, Paul Brook wrote:
> The "old" arm-none-elf and arm-linux targets still use SJLJ exceptions. They
> will probably never be "fixed" as this would involve an ABI change.

Didn't understand that.  How is saving all non-scratch FP registers in the
prologue related to the exceptions?

		Petko


3.3.2 <--> 3.4.2 compatibility?

2005-03-01 Thread Steve Snyder
Should there be binary compatibility between GCC versions 3.3.2 and 
3.4.2?

I have a large (~40 source files) C++ application that builds and runs 
correctly with GCC v3.3.2 (in Fedora Core #1).  With GCC v3.4.2 (FC3) 
the application builds correctly, yet fails to run correctly.

No errors or warnings are seen in either build.  I did static builds 
both times to minimize environmental factors when I do test runs on 
the same machine.  The code is supposed to communicate with a PCI 
device.  Communications are fine with the v3.3.2 build and fail 
completely with the v3.4.2 build.

Since it would be a major job to determine which snippet of code had 
been silently mis-compiled, my brainstorm was to replace the object files 
in the "good" build, one by one, until the app failed.  Alas, this 
scheme consistently gets me unresolved external references.

Specifically, I get many, many occurrences of this (on the 3.4.3 
system):

/usr/include/c++/3.3.2/bits/stl_alloc.h:232: undefined reference to 
`std::__default_alloc_template::allocate(unsigned int)'

Shouldn't there be binary compatibility between these 2 versions of 
GCC?

Thanks.


Re: [arm] possible bug in G++ 3.4.x

2005-03-01 Thread Paul Brook
On Tuesday 01 March 2005 15:29, Petko Manolov wrote:
> On Tue, 1 Mar 2005, Paul Brook wrote:
> > The "old" arm-none-elf and arm-linux targets still use SJLJ exceptions.
> > They will probably never be "fixed" as this would involve an ABI change.
>
> Didn't understand that.  How is all non scratch FP registers save at the
> prologue related to the exceptions?

Because SJLJ exception handling is implemented using 
__builtin_setjmp/__builtin_longjmp. These do not save/restore FP register 
state.
If an exception is thrown the FP state will be that at the throwing location.
Thus the prologue must save all call-saved registers if there is a possibility 
that an exception will be caught.

Paul


Re: [arm] possible bug in G++ 3.4.x

2005-03-01 Thread Richard Earnshaw
On Tue, 2005-03-01 at 15:29, Petko Manolov wrote:
> On Tue, 1 Mar 2005, Paul Brook wrote:
> 
> > The "old" arm-none-elf and arm-linux targets still use SJLJ exceptions. They
> > will probably never be "fixed" as this would involve an ABI change.
> 
> Didn't understand that.  How is all non scratch FP registers save at the 
> prologue related to the exceptions?

SJLJ exceptions implement unwinding by using built-in setjmp and
longjump (hence SJLJ).  The built-in setjmp is what you are seeing. 
It's saving all the registers that might possibly be affected by any
functions that this routine might call and which might throw exceptions.

It has no real information about which registers might be altered by
other routines, so it must save all call-saved registers.

SJLJ exceptions are very slow to set up, but can be very fast to unwind
if throwing from deep down a call-stack.  The converse is generally true
for table-based unwinders.

R.


Re: 3.3.2 <--> 3.4.2 compatibility?

2005-03-01 Thread Paolo Carlini
Steve Snyder wrote:
> Specifically, I get many, many occurrences of this (on the 3.4.3
> system):
>
> /usr/include/c++/3.3.2/bits/stl_alloc.h:232: undefined reference to
> `std::__default_alloc_template::allocate(unsigned int)'
>
> Shouldn't there be binary compatibility between these 2 versions of
> GCC?

The quick answer is no, sorry.  Have a look at this document for details:
   http://gcc.gnu.org/onlinedocs/libstdc++/abi.html
You can see that going from 3.3.3 to 3.4.0 the major *.so number jumped from
5 to 6, signaling a hard incompatibility.

Paolo.


Re: Inlining and estimate_num_insns

2005-03-01 Thread Richard Guenther
On Tue, 1 Mar 2005 16:14:14 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:

> Concerning growth limits:
> 
> If you take a look on when -finline-unit-growth limit hits, it is clear
> that it hits very often on small units (several times in the kernel,
> testsuite and such) just because there is tinny space to manuever.  It
> hits almost never on medium units (in GCC bootstrap it hits almost
> never) and almost always on big units
> 
> My intuition alwas has been that for larger units the limits should be
> much smaller and pooma was major counter example.  If we suceed solving
> this, I would guess we can introduce something like small-unit-insns
> limit and limit size of units that exceeds this.  Does this sound sane?

POOMA hitting the unit-growth limit is caused by the abstraction penalty
of the inliner and is no longer an issue with the new code size estimate.
Though with the new estimate our INSNS_PER_CALL is probably too high,
so we're reducing the unit size too much inlining one-statement functions,
and as such getting more room for further inlining, and finally regress
badly in compile time (as max-inline-insns-single is so high we're inlining
stuff we shouldn't, but as it fits the unit-growth limit now, we do ...)

Richard.


Re: Inlining and estimate_num_insns

2005-03-01 Thread Richard Guenther
On Tue, 1 Mar 2005 16:49:04 +0100, Richard Guenther
<[EMAIL PROTECTED]> wrote:
> On Tue, 1 Mar 2005 16:14:14 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> 
> > Concerning growth limits:
> >
> > If you take a look on when -finline-unit-growth limit hits, it is clear
> > that it hits very often on small units (several times in the kernel,
> > testsuite and such) just because there is tinny space to manuever.  It
> > hits almost never on medium units (in GCC bootstrap it hits almost
> > never) and almost always on big units
> >
> > My intuition alwas has been that for larger units the limits should be
> > much smaller and pooma was major counter example.  If we suceed solving
> > this, I would guess we can introduce something like small-unit-insns
> > limit and limit size of units that exceeds this.  Does this sound sane?
> 
> POOMA hitting the unit-growth limit is caused by the abstraction penalty
> of the inliner and is no longer an issue with the new code size estimate.
> Though with the new estimate our INSNS_PER_CALL is probably too high,
> so we're reducing the unit-size too much inlining one-statement functions
> and as such getting more room for further inlining and finally regress badly
> in compile-time (as max-inline-insns-single is so high we're inlining stuff
> we shouldn't, but as it fit's the unit-growth limit now, we do ...)

Experimenting further with replacing INSNS_PER_CALL by one plus the
move cost of the function arguments shows promising results; also, not
artificially overestimating RDIV_EXPR and friends helps us not regress
on some testcases if we lower the default limits to 100.  But I have too
many patches pending now - I'll let the dust settle for now.  Maybe we
should create a branch for the inlining stuff, or use the tree-profiling
branch (though that probably has too much unrelated stuff).

Richard.


Re: Identifying rule responsible for lookahead

2005-03-01 Thread Henrik Sorensen
On Tuesday 01 March 2005 19.02, Soumitra Kumar wrote:
> Henrik,
> So, if I get the following output (rule no after a
> lookahead symbol), finding the ambiguous rules is
> trivial.
Well, you can search for all the states that have a goto to your state 10.


Re: No way to scan-tree-dump .i01.cgraph?

2005-03-01 Thread Janis Johnson
On Mon, Feb 28, 2005 at 10:23:56AM -0700, Jeffrey A Law wrote:
> On Mon, 2005-02-28 at 17:08 +0100, Richard Guenther wrote:
> > Hi!
> > 
> > It seems the current dg infrastructure does not support scanning
> > tree-dumps dumped via -fdump-ipa-XXX because they are labeled
> > differently.  I worked around this by replacing
> > 
> > set output_file "[glob [file tail $testcase].t??.[lindex $args 1]]"
> > 
> > with
> > 
> > set output_file "[glob [file tail $testcase].???.[lindex $args 1]]"
> > 
> > but I'm not sure if this is the right way.
> It's as good as any.  If you wanted to solve an even bigger problem,
> find a clean way that we can delete the bloody files.  I got lost in
> the maze of tcl/expect code when I tried :(

I also find it annoying that the dump files aren't cleaned up.  Should
the dump files for failing tests be left, or would it be OK to remove
all of them?

> > Also I need to do more complex matching like the number X in line
> > matching PATTERN should be the same as Y in line matching PATTERN2.
> > Is there a way to do this with dg?  Or is it better to output
> > an extra line to the dump file during compile for the condition
> > I want to check?
> I'm not immediately aware of a way to do this.  One of the major 
> limitations of the framework is the inability to do anything other
> than scan for simple patterns and count how often the pattern
> occurs.
> jeff

Adding extra lines in dump files for use by tests seems good to me, as
long as there are comments in the code explaining what they're for, so they
don't get changed accidentally.

Janis


Re: testsuite execution question

2005-03-01 Thread Janis Johnson
On Mon, Feb 28, 2005 at 08:45:17PM -0500, Daniel Jacobowitz wrote:
> On Mon, Feb 28, 2005 at 04:14:12PM -0800, Janis Johnson wrote:
> > > DejaGnu's definition of ${tool}_load has an optional argument for flags
> > > to pass to the test program, but none of the procedures in DejaGnu or in
> > > gcc/testsuite/* are set up to pass such flags.  It would be fairly
> > > straightforward to provide a local version of gfortran_load to intercept
> > > calls to the global one, and have it add flags specified with a new test
> > > directive to the DejaGnu version of ${tool}_load.  That directive could
> > > be something like:
> > > 
> > >   { dg-program-options options [{ target selector }] }
> > > 
> > > Would something like this be useful for other languages as well, or is
> > > Fortran the only one in GCC that has support to process a program's
> > > command line?
> > > 
> > > I'm willing to implement something like this if it looks worthwhile.
> > 
> > It's supposed to be possible to drop in replacements to DejaGnu in the
> > GCC testsuite; do other test frameworks of interest handle passing
> > arguments to the test program in a way that could support this?  (Sorry
> > for talking to myself here.)
> 
> I don't think that's the concern here - it's more a matter of whether
> the target, and DejaGNU, support this.  Lots of embedded targets seem
> to have trouble with it.  Take a look at "noargs" in the DejaGNU board
> files for a couple of examples, IIRC.  GDB jumps through some hoops to
> test this, and gets it wrong in a bunch of places too.

Is command line processing relevant for embedded targets?  (I have no
idea.)  Tests that pass options to the test program could be skipped
for embedded targets and for other kinds of testing where it isn't
reliable.  The dg-program-options directive could warn when it's used
in an environment for which it's not supported.

Janis


Re: No way to scan-tree-dump .i01.cgraph?

2005-03-01 Thread Andrew Pinski
On Mar 1, 2005, at 1:25 PM, Janis Johnson wrote:
I also find it annoying that the dump files aren't cleaned up.  Should
the dump files for failing tests be left, or would it be OK to remove
all of them?
I find it even more annoying: on targets which use case-insensitive
file storage, some of the C++ testcases fail because the file names
differ only in case.  (powerpc-darwin by default uses HFS, which is
case-insensitive.)

-- Pinski 



Re: Question about GTY machinery (cgraph_edge)

2005-03-01 Thread Geoffrey Keating
Richard Guenther <[EMAIL PROTECTED]> writes:

> Hi!
> 
> struct cgraph_edge is currently member of two lists, i.e.
> it contains two "next" pointers, but is annotated like
> 
> struct cgraph_edge GTY((chain_next ("%h.next_caller")))
> {
>   struct cgraph_node *caller;
>   struct cgraph_node *callee;
>   struct cgraph_edge *next_caller;
>   struct cgraph_edge *next_callee;
>   tree call_expr;
>   PTR GTY ((skip (""))) aux;
>   /* When NULL, inline this call.  When non-NULL, points to the
> explanation
>  why function was not inlined.  */
>   const char *inline_failed;
> };
> 
> Is it possible and beneficial to have both next pointers
> annotated with chain_next?

No, it's not possible.  I doubt it's beneficial, either.



Re: Inlining and estimate_num_insns

2005-03-01 Thread Steven Bosscher
On Tuesday 01 March 2005 02:30, Steven Bosscher wrote:
> On Tuesday 01 March 2005 02:03, Jan Hubicka wrote:
> > You still didn't get into the fun part of actually inlining all the
> > inlines in in Gerald's testcase ;)
>
> I'll let it run to the end tomorrow, for at most one full day ;-)

It got killed on a box with 4GB of RAM after 11 hours and 43 minutes
with "virtual memory exhausted: Cannot allocate memory".  In the
latest oprofiles, the cgraph functions still accounted for more than
95% of the time, so probably it was still inlining things.

Gr.
Steven




Re: No way to scan-tree-dump .i01.cgraph?

2005-03-01 Thread Janis Johnson
On Tue, Mar 01, 2005 at 01:29:48PM -0500, Andrew Pinski wrote:
> 
> On Mar 1, 2005, at 1:25 PM, Janis Johnson wrote:
> 
> >I also find it annoying that the dump files aren't cleaned up.  Should
> >the dump files for failing tests be left, or would it be OK to remove
> >all of them?
> 
> I find it even more annoying: on targets which use case-insensitive
> file storage, some of the C++ testcases fail because the file names
> differ only in case.  (powerpc-darwin by default uses HFS, which is
> case-insensitive.)

So fix the names of the test files so the names of generated files won't
conflict.  In this case it's a problem because the generated files aren't
removed, but there are also tests that fail only when two sets of tests
happen to run at the same time.

Janis


Re: Inlining and estimate_num_insns

2005-03-01 Thread Jan Hubicka
> On Tue, 1 Mar 2005 16:49:04 +0100, Richard Guenther
> <[EMAIL PROTECTED]> wrote:
> > On Tue, 1 Mar 2005 16:14:14 +0100, Jan Hubicka <[EMAIL PROTECTED]> wrote:
> > 
> > > Concerning growth limits:
> > >
> > > If you take a look at when the -finline-unit-growth limit hits, it is clear
> > > that it hits very often on small units (several times in the kernel,
> > > testsuite and such) just because there is tiny space to maneuver.  It
> > > hits almost never on medium units (in a GCC bootstrap it hits almost
> > > never) and almost always on big units.
> > >
> > > My intuition has always been that for larger units the limits should be
> > > much smaller, and POOMA was a major counterexample.  If we succeed in solving
> > > this, I would guess we can introduce something like a small-unit-insns
> > > limit and limit the size of units that exceed this.  Does this sound sane?
> > 
> > POOMA hitting the unit-growth limit is caused by the abstraction penalty
> > of the inliner and is no longer an issue with the new code size estimate.
> > Though with the new estimate our INSNS_PER_CALL is probably too high,
> > so we're reducing the unit size too much by inlining one-statement functions,
> > and as such getting more room for further inlining, and finally regressing badly
> > in compile time (as max-inline-insns-single is so high we're inlining stuff
> > we shouldn't, but as it fits the unit-growth limit now, we do ...)
> 
> Experimenting further with replacing INSNS_PER_CALL by one plus the
> move cost of the function arguments shows promising results, also not
> artificially overestimating RDIV_EXPR and friends helps us not regress
> on some testcases if we lower the default limits to 100.  But I have too many
> patches pending now - I'll let the dust settle down for now.  Maybe we
> should create a branch for the inlining stuff, or use the tree-profiling branch
> (though that probably has too much unrelated stuff).

The problem with a separate branch would be that tree-profiling already
contains a lot of code for 4.1 that seriously affects inlining decisions,
so probably the 4.1 stuff should be tuned against that.  We need some
stabilization on tree-profiling right now, and I hope to move the inlining
stuff there later this week.

Honza
> 
> Richard.


Re: No way to scan-tree-dump .i01.cgraph?

2005-03-01 Thread Diego Novillo
Janis Johnson wrote:
I also find it annoying that the dump files aren't cleaned up.  Should
the dump files for failing tests be left, or would it be OK to remove
all of them?
Much as I don't use the failing executables left behind by the 
testsuite, I wouldn't use the dump files.  They can be easily recreated.

But I can see valid reasons for wanting the dump files for failing tests to be
left behind.  The dump files for successful tests should be removed, though.

Diego.


Re: Hot and Cold Partitioning (Was: GCC 4.1 Projects)

2005-03-01 Thread Caroline Tice
I apologize for not responding to these messages sooner; I was out of 
town for a few days and only
just read them.

In the first place, I am a little confused about exactly what Joern is 
objecting to.  If I am reading your
emails correctly, you seem to feel that the hot/cold partitioning 
optimization, as currently designed,
has a problem because sometimes it will increase the size of the hot 
section by an amount that
will not be compensated for by the removal of the cold code to another 
section.  You also seem to
be expressing concerns that some branch instructions will not be able 
to span the distance between
hot and cold sections, and it appears that you therefore don't want 
this optimization to be put in.  It
sounds as if you don't want this optimization to go in at all, but in 
actuality it is already there, and what
I am proposing to do is fix parts of it that are still a little bit 
broken.

As with all optimizations, hot/cold partitioning is an educated guess 
at how to improve the program.
Therefore it will on occasion make a wrong guess.  By using profiling 
data (as other people indicated)
the number of wrong guesses will be greatly reduced, but not entirely 
eliminated.  While most of the
time it will either have no effect or will improve program performance, 
it can and will occasionally
slow it down.  This is one of the reasons that the optimization is 
controlled by a flag, and is not
turned on by default.  If you find the optimization is giving you 
trouble, you can always turn it off.

The optimization was designed to take into account the fact that on 
many architectures, various
branch instructions might not be able to span the distance between 
hot/cold sections.  As others
have indicated, this is done by adding a level of indirection to the 
jumps.  This is conditioned on
macros that can (should) be defined by each architecture, so the 
indirection won't be performed on
architectures where it isn't needed.

There might be some validity in the idea of modifying this 
optimization, in the future, to consider
the size of a basic block in addition to its "hot-ness", when deciding 
which partition to put it into.
I expect this would not be that difficult to implement, and would 
probably address your concerns.

However, at the moment, I would first like to get the "correctness" 
fixes for the hot/cold partitioning
optimization into FSF mainline.  But I am open to persuasion, and if 
the FSF community in general
feels that I really ought to add the size test as well at this time, I 
will do so.

What do other people think?
-- Caroline Tice
[EMAIL PROTECTED]
On Feb 28, 2005, at 12:09 PM, Joern RENNECKE wrote:
Dale Johannesen wrote:

No, you should not turn on partitioning in situations where code 
size is important to you.

You are missing the point.  In my example, with perfect profiling 
data, you still end up with
more code in the hot section,

Yes.
i.e. more pages are actually swapped in.

Unless the cross-section branch is actually executed, there's no 
reason the unconditional
jumps should get paged in, so this doesn't follow.
If you separate the unconditional jumps from the rest of the function, 
you just have created a
per-function cold section.  Except for corner cases, there would have 
to be a lot of them to
save a page of working set.  And if you have that many, it will mean 
that the condjump can't
reach.  And it is still utterly pointless to put blocks into the 
inter-function cold section
if that only makes the intra-function cold section larger.
So we've come from 4 bytes, one cycle:

bf 0f
mov #0,rn
via 6 bytes, a BR issue slot during one cycle:
bt L2
L1:
..
L2:
bra L1
mov #0,rn
to 10 bytes in hot part of the hot section, 12 bytes in cold part of 
the hot
section, and another 10 to 12 bytes in the cold section, while the 
execution
time in the hot path is now two cycles (if we manage to get a good
schedule, we might execute two other instructions in these cycles, but 
still,
this is no better than we started out with):

.hotsection:
bf L2
mov.w 0f,rn
braf @rn
nop
0: .word L2-0b
L1:
...
L2:
mov.l 0f,rn
jmp @rn
nop
.balign 4
0: .long L3
.coldsection
L3:
mov.l 0f,rn
jmp @rn
mov #0,rn
.balign 4
0: .long L1




Re: Hot and Cold Partitioning (Was: GCC 4.1 Projects)

2005-03-01 Thread Mark Mitchell
Caroline Tice wrote:
There might be some validity in the idea of modifying this optimization, 
in the future, to consider
the size of a basic block in addition to its "hot-ness", when deciding 
which partition to put it into.
I expect this would not be that difficult to implement, and would 
probably address your concerns.

However, at the moment, I would first like to get the "correctness" 
fixes for the hot/cold partitioning
optimization into FSF mainline. 
I agree.  The optimization is already checked in; as I understand it, 
all you are trying to do is make it work better on more systems, such as 
those with DWARF.  I think that Joern's objections can be dealt with 
after you get those fixes in place; I would imagine that you would just 
mark small basic blocks directly reachable from hot blocks as "hot", 
whether or not profiling data actually suggested them to be hot.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Question regarding c++ constructors

2005-03-01 Thread Mike Stump
On Feb 21, 2005, at 3:45 AM, Mile Davidovic wrote:
Functions are completely the same.
What is the reason for such compiler behaviour?
Just lack of code in the compiler to do better, see 
http://gcc.gnu.org/ml/gcc-patches/2002-08/msg00354.html for some of the 
details and a starting point, should you want to develop the code 
further.  With that compiler, one can get:

__ZN4testC2Ev:
LFB4:
b __ZN4testC4Ev
LFE4:
.align 2
.globl __ZN4testC1Ev
.section __TEXT,__text,regular,pure_instructions
.align 2
__ZN4testC1Ev:
LFB6:
b __ZN4testC4Ev
LFE6:
.align 2
.globl __ZN4testC4Ev
.section __TEXT,__text,regular,pure_instructions
.align 2
__ZN4testC4Ev:
LFB7:
blr
LFE7:
While this case doesn't show it, imagine if the code were long to very
long: the savings increase.




SVN repo updated, http access, and commit mail format

2005-03-01 Thread Daniel Berlin
The SVN repo has been updated again.

Again, because different tags were included in this dump, you will have
to recheckout a working copy.
The only tags excluded from this run were tags with "merge" in the name,
and tags with ".*-ss-.*" and ".*_ss_.*".


For those who want http access, i can give you access to the dberlin.org
copy of the repo.

I have begun moving our various hooks over.
Rather than move over log_accum, I plan on replacing it.

The commit mails currently look like this:

> Author: dberlin
> Date: 2005-03-01 15:26:25 -0500 (Tue, 01 Mar 2005)
> New Revision: 77017
> 
> Log:
> Undo me changes
> 
> 
> WebSVN: http://dberlin.org/cgi-bin/viewcvs.cgi?
> view=rev&root=gccrepo&rev=77017
> 
> Modified:
>trunk/gcc/ChangeLog
>trunk/gcc/tree-ssa-alias.c
>trunk/gcc/version.c
> 
> 


(The >'s are not in the message, obviously)
This seemed close to what we have now.  Note that the link included
above does work and will take you to the web page giving you the
differences.  Please don't browse the revision logs of files using
viewcvs. I don't have the network bandwidth on dberlin.org

Note that in the commit message, there is only need for one link,
because the revision view will display the entire changeset.

If you had added or deleted files, those would show up under "Added" or
"Deleted" headings respectively.

Merges are significantly faster for me than CVS.

You can merge roughly as fast as the server can send you differences in
between revisions, which is fast.

I will next convert the syncing scripts that we use to sync the web
pages.

Note that due to the fact that svn.toolchain.org runs svn 1.1, blame
will still be *very* slow, and various operations may be slower than
they could be due to delta combination problems solved in 1.2 (server
side problems, these were).

If anyone really wants to complain about the speed of operations, I'm happy
to let you play against dberlin.org, but you will be limited by my
network bandwidth.

--Dan




re: cross compiling

2005-03-01 Thread Daniel Kegel
vivek sukumaran <[EMAIL PROTECTED]> wrote:
Are there any ready-to-use gcc RPMs for
 host: x86, Red Hat 9.0
 target: alpha
The right mailing list to ask is the one at
   http://sources.redhat.com/ml/crossgcc/
When you do post there, be sure to mention what OS
the target will be running.
If the target is alpha-linux, you might want to build
your own using http://kegel.com/crosstool
http://kegel.com/crosstool/crosstool-0.28-rc37/buildlogs/0.28/
shows that crosstool-0.28-rc37 builds at least semi-working
alpha-linux toolchains.
- Dan


Re: SVN repo updated, http access, and commit mail format

2005-03-01 Thread Joseph S. Myers
On Tue, 1 Mar 2005, Daniel Berlin wrote:

> > Date: 2005-03-01 15:26:25 -0500 (Tue, 01 Mar 2005)

I take it the time will be shown in gcc.gnu.org's timezone (fixed at UTC), 
not depending on the timezone of the person making the commit?

> > WebSVN: http://dberlin.org/cgi-bin/viewcvs.cgi?
> > view=rev&root=gccrepo&rev=77017
> > 
> > Modified:
> >trunk/gcc/ChangeLog
> >trunk/gcc/tree-ssa-alias.c
> >trunk/gcc/version.c
> > 
> > 
> 
> 
> (The >'s are not in the message, obviously)

Does the message also have the URL on one line?  It ought to.

> This seemed close to what we have now.  Note that the link included
> above does work and will take you to the web page giving you the
> differences.  Please don't browse the revision logs of files using
> viewcvs. I don't have the network bandwidth on dberlin.org
> 
> Note that in the commit message, there is only need for one link,
> because the revision view will display the entire changeset.

Can viewcvs be configured so that the links at the URL go to unidiffs by 
default, as in the present URLs?  I find unidiffs much more readable than 
coloured diffs.

What will the message / the diffs look like for changes to properties 
rather than file contents?

What will things (both the messages and the diffs) look like where files 
or directories are copied (including copies of the whole tree, as for 
tagging) or moved or copied-and-modified in one commit?  (Of course that 
there are messages for tags at all is an improvement on the present 
situation.)

I take it with SVN the messages to gcc-cvs-wwwdocs can work like the 
gcc-cvs ones, i.e. including the URL for diffs?  That is, there won't be 
any problems like those mentioned in comments in bugzilla-checkout and 
htdocs-checkout and cgibin-checkout with the new scripts.

Please ensure that the new script ends up version controlled, whether in 
maintainer-scripts in GCC SVN (maybe preferable) or in sourceware 
infra/bin CVS.  I don't like the present situation where 
log_accum_bugzillafied isn't version-controlled although its predecessor 
was.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: testsuite execution question

2005-03-01 Thread Daniel Jacobowitz
On Tue, Mar 01, 2005 at 10:29:45AM -0800, Janis Johnson wrote:
> Is command line processing relevant for embedded targets?  (I have no
> idea.)  Tests that pass options to the test program could be skipped
> for embedded targets and for other kinds of testing where it isn't
> reliable.  The dg-program-options directive could warn when it's used
> in an environment for which it's not supported.

Sounds good to me, at least in theory.

-- 
Daniel Jacobowitz
CodeSourcery, LLC


Re: SVN repo updated, http access, and commit mail format

2005-03-01 Thread Daniel Berlin
On Tue, 2005-03-01 at 21:00 +, Joseph S. Myers wrote:
> On Tue, 1 Mar 2005, Daniel Berlin wrote:
> 
> > > Date: 2005-03-01 15:26:25 -0500 (Tue, 01 Mar 2005)
> 
> I take it the time will be shown in gcc.gnu.org's timezone (fixed at UTC), 
> not depending on the timezone of the person making the commit?

I believe this can be configured. 

> 
> > > WebSVN: http://dberlin.org/cgi-bin/viewcvs.cgi?
> > > view=rev&root=gccrepo&rev=77017
> > > 
> > > Modified:
> > >trunk/gcc/ChangeLog
> > >trunk/gcc/tree-ssa-alias.c
> > >trunk/gcc/version.c
> > > 
> > > 
> > 
> > 
> > (The >'s are not in the message, obviously)
> 
> Does the message also have the URL on one line?  It ought to.

Yes, it does.
Evolution decided it would be cute to turn my paste into a word-wrapped
message.

> 
> > This seemed close to what we have now.  Note that the link included
> > above does work and will take you to the web page giving you the
> > differences.  Please don't browse the revision logs of files using
> > viewcvs. I don't have the network bandwidth on dberlin.org
> > 
> > Note that in the commit message, There is only need for one link,
> > because the revision view will display the entire changeset.
> 
> Can viewcvs be configured so that the links at the URL go to unidiffs by 
> default, as in the present URLs?  I find unidiffs much more readable than 
> coloured diffs.
Yes, it can be configured to do so.


> 
> What will the message / the diffs look like for changes to properties 
> rather than file contents?


Author:
Revision:
Property name:
Action: (added, modified, or deleted)



> 
> What will things (both the messages and the diffs) look like where files 
> or directories are copied (including copies of the whole tree, as for 
> tagging) or moved or copied-and-modified in one commit?

Just like they do for regular adds/deletes/modifications, except the
path names will be different.

Tagging just looks like a directory add with history.

...
Log:
Testing a branch merge in a second

Added:
   branches/testbranch/
  - copied from r77013, /trunk/



> I take it with SVN the messages to gcc-cvs-wwwdocs can work like the 
> gcc-cvs ones, i.e. including the URL for diffs? 
Yes.


>  That is, there won't be 
> any problems like those mentioned in comments in bugzilla-checkout and 
> htdocs-checkout and cgibin-checkout with the new scripts.

I don't understand what the real problem is, because the comments claim
something that is so strange to me that I have a hard time believing it.
I can tell you you can run multiple scripts, and it will properly
substitute things, etc.
So in fact, it seems like the checkout scripts don't need to send mail
like they do now.
The mailer script can be told that mail for paths matching certain
regexps should include diffs in the mail, or go to different mailing lists.
Same with repos matching different regexps.


> Please ensure that the new script ends up version controlled,

All hooks will be version controlled, with a script to sync the version
controlled versions with the ones in the actual hooks dir of the repo.

>  whether in 
> maintainer-scripts in GCC SVN (maybe preferable) or in sourceware 
> infra/bin CVS.  I don't like the present situation where 
> log_accum_bugzillafied isn't version-controlled although its predecessor 
> was.

This is in part because i didn't have write access to the directory
containing the rcs version.





Re: Inlining and estimate_num_insns

2005-03-01 Thread Benjamin Redelings I
Hello,
	I would be interested in testing patches that you produce.  It seems 
that inlining heuristics have quite a large effect on my code, compared 
to other codes.

	As an aside, do you (or anyone) know what kind of compile-time speedup 
can be gained by bootstrapping with more extreme options?  For example 
"-O3 -mpentium4 -fomit-frame-pointer" on PIV processors?

-BenRI


Re: SVN repo updated, http access, and commit mail format

2005-03-01 Thread Joseph S. Myers
On Tue, 1 Mar 2005, Daniel Berlin wrote:

> >  That is, there won't be 
> > any problems like those mentioned in comments in bugzilla-checkout and 
> > htdocs-checkout and cgibin-checkout with the new scripts.
> 
> I don't understand what the real problem is, because the comments claim
> something that is so strange to me that I have a hard time believing it.
> I can tell you you can run multiple scripts, and it will properly
> substitute things, etc.
> So in fact, it seems like the checkout scripts don't need to send mail
> like they do now.

I don't know the real cause, but the mere existence of the log_accum 
system where CVS runs a script for each directory and the script needs to 
merge the logs and send a single message if it's on the last directory 
(rather than just having a hook for the whole commit) seems rather fragile 
and a way in which CVS is simply broken which can be avoided with SVN.

Will wwwdocs be handled as a separate repository (with its own repository 
revision numbers) or as a directory in a single GCC repository?  (I don't 
know if anything else in the existing repository, e.g. "benchmarks", is 
live to need conversion to SVN rather than just staying available 
read-only from the old repository.)

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: Constant pointer to (member) function, indirect call

2005-03-01 Thread Mark Mitchell
Helge Bahmann wrote:
void (A::*function2)(void) throw()=&A::function2;
(a.*function2)();

however for the call through pointer function2 gcc will always generate an
indirect call, i386 assembly for example looks like:
Yes, it should be able to do so, but it doesn't.  This is probably 
something that could be done via range propagation.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: SVN repo updated, http access, and commit mail format

2005-03-01 Thread Daniel Berlin
On Tue, 2005-03-01 at 22:21 +, Joseph S. Myers wrote:
> On Tue, 1 Mar 2005, Daniel Berlin wrote:
> 
> > >  That is, there won't be 
> > > any problems like those mentioned in comments in bugzilla-checkout and 
> > > htdocs-checkout and cgibin-checkout with the new scripts.
> > 
> > I don't understand what the real problem is, because the comments claim
> > something that is so strange to me that I have a hard time believing it.
> > I can tell you you can run multiple scripts, and it will properly
> > substitute things, etc.
> > So in fact, it seems like the checkout scripts don't need to send mail
> > like they do now.
> 
> I don't know the real cause, but the mere existence of the log_accum 
> system where CVS runs a script for each directory and the script needs to 
> merge the logs and send a single message if it's on the last directory 
> (rather than just having a hook for the whole commit) seems rather fragile 
> and a way in which CVS is simply broken which can be avoided with SVN.

Right.  SVN has a hook for the whole commit.
> 
> Will wwwdocs be handled as a separate repository (with its own repository 
> revision numbers) or as a directory in a single GCC repository?  (I don't 
> know if anything else in the existing repository, e.g. "benchmarks", is 
> live to need conversion to SVN rather than just staying available 
> read-only from the old repository.)

Your choice. It really doesn't matter either way to me.
Viewcvs allows multiple repos with different names, and putting the
right repo in the url is already done.



Re: Pascal front-end integration

2005-03-01 Thread Mark Mitchell
James A. Morrison wrote:
 I made this post, with my changes posted, to see if I would get any support.
I'd also suggest contacting the GCC SC to see what their reaction 
would be.

Personally, I'm not necessarily convinced that adding Pascal to GCC is a 
good idea.  I like Pascal just fine, but every new language adds 
to the load on everyone.  (In my ideal world, we'd have stable enough 
interfaces that it was easy to maintain front ends separately from the 
rest of the compiler, but, though I've been extolling that vision for 
years, I've made little progress in realizing it...)

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Pascal front-end integration

2005-03-01 Thread E. Weddington
Mark Mitchell wrote:
James A. Morrison wrote:
 I made this post, with my changes posted, to see if I would get any 
support.

I'd also suggest contacting the GCC SC to see what their reaction 
would be.

Personally, I'm not necessarily convinced that adding Pascal to GCC is 
a good idea.  I like Pascal just fine, but every new language 
adds to the load on everyone.  (In my ideal world, we'd have stable 
enough interfaces that it was easy to maintain front ends separately 
from the rest of the compiler, but, though I've been extolling that 
vision for years, I've made little progress in realizing it...)

So, the changes for 4.0/4.1 still don't help the situation enough? Or is 
it too early to tell?

Or would you just want to make sure that the Pascal maintainers play "by 
the rules, as everybody else", a la Ada?

Eric


Re: Pascal front-end integration

2005-03-01 Thread Mark Mitchell
E. Weddington wrote:
Personally, I'm not necessarily convinced that adding Pascal to GCC is 
a good idea.  I like Pascal just fine, but every new language 
adds to the load on everyone.  (In my ideal world, we'd have stable 
enough interfaces that it was easy to maintain front ends separately 
from the rest of the compiler, but, though I've been extolling that 
vision for years, I've made little progress in realizing it...)

So, the changes for 4.0/4.1 still don't help the situation enough? Or is 
it too early to tell?

Or would you just want to make sure that the Pascal maintainers play "by 
the rules, as everybody else", a la Ada?
I've no cause to worry about that; I don't know the Pascal maintainers 
at all.  It's just that every new language imposes costs, such as:

* Changes to interfaces require more places to be updated.
* Major reworks require buy-in/work from more languages.  (For example, 
it took a long time to get rid of the RTL inliner because we needed to 
convert all of the languages.)

* Bugs show up that can only ever affect one language, but then people 
feel they ought to fix them, rather than fixing something else.

* Things get added to the middle end that help only one language, 
sometimes causing breakage.

* Downloads, cvs update, etc. get slower.
It's my personal opinion (not that of the SC, or the FSF) that before we 
add a language we ought to convince ourselves of more than just the fact 
that someone's willing to maintain it; we ought to convince ourselves 
that the benefit to users will be sufficiently great that it's worth 
imposing these costs.

--
Mark Mitchell
CodeSourcery, LLC
[EMAIL PROTECTED]
(916) 791-8304


Re: Pascal front-end integration

2005-03-01 Thread Joseph S. Myers
On Tue, 1 Mar 2005, Prof A Olowofoyeku (The African Chief) wrote:

> Instead of starting a totally separate project, wouldn't it be better 
> to coordinate your efforts with the GPC development team?

Effective coordination will require, for a start, the GPC mailing list to 
accept messages from nonsubscribers.  As it is, those who CC messages there 
just get bounces, so it's best not to try to CC them; anyone reading this 
discussion only on the GPC list will get a very partial view.

If GPC developers are interested in having GPC integrated in GCC 4.1 and 
are willing to have it play by the same rules as the rest of GCC - note 
that the Ada maintainers made substantial changes to how they contributed 
patches to GCC in order to follow usual GCC practice more closely - then 
of course coordination would be desirable.  If the GPC developers would 
prefer to continue to develop GPC independently of GCC, this need not stop 
integration of some version of GPC in GCC.  I would hope in that case, 
however, there would still be better and closer cooperation between the 
two lines of development than there has been after the g95/gfortran fork 
(for example, that the GPC developers would be willing to make the version 
control repository used for actual development accessible to the public so 
individual patches can be extracted and merged as such).

The GCC development processes are documented on .  
Some of the technical requirements on front ends are documented at 
 but that is just a 
checklist of pieces of a front end rather than full details of what is 
good practice (for example, the rules in the Make-lang.in file should be 
as similar as possible to those used for other languages where there is a 
common pattern used).

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: Pascal front-end integration

2005-03-01 Thread James A. Morrison

Mark Mitchell <[EMAIL PROTECTED]> writes:

> James A. Morrison wrote:
> 
> >  I made this post, with my changes posted, to see if I would get any 
> > support.
> 
> I'd also suggesting contacting the GCC SC to see what their reaction
> would be.

 That's a good point.  However, if I do get far enough I'm willing to
start a branch on gcc.gnu.org for Pascal.  Once the branch works, then it
would be time to go to the SC to see if merging the branch in is a good idea
or not.
 
> Personally, I'm not necessarily convinced that adding Pascal to GCC is
> a good idea.  I like Pascal just fine, but because every new language
> adds to the load on everyone.  (In my ideal world, we'd have stable
> enough interfaces that it was easy to maintain front ends separately
> from the rest of the compiler, but, though I've been extolling that
> vision for years, I've made little progress in realizing it...)

-- 
Thanks,
Jim

http://www.student.cs.uwaterloo.ca/~ja2morri/
http://phython.blogspot.com
http://open.nit.ca/wiki/?page=jim


Re: Extension compatibility policy

2005-03-01 Thread Bernardo Innocenti
Giovanni Bajo wrote:
Mike Hearn <[EMAIL PROTECTED]> wrote:

In your __FUNCTION__ case, we are basically in the latter group. __FUNCTION__
is a well-documented extension in C90 (it's part of C99 in some form now), and
it was never documented to be a macro. The fact that it was named like a macro and
worked like a macro for years is indeed unfortunate. Notwithstanding that, GCC
maintainers acknowledged its widespread use and the bug of it working like a
macro was deprecated for around 3 years. We cannot do more than that.
For the record, I've recently faced an unsolvable problem
due to __FUNCTION__ now working as a variable definition.
In an embedded AVR application, I use a macro like this
for debugging purposes:
 #define TRACEMSG(msg,...) __tracemsg(__func__, msg, ## __VA_ARGS__)
This causes __func__ to be allocated in the data section,
thus wasting lots of precious data memory.
To move strings into program memory, there's a macro like this:
#define PSTR(s) ({ static const char __c[] PROGMEM = (s); &__c[0]; })
But this wouldn't work because __func__ does not work like
a string literal:
#define TRACEMSG(msg,...) __tracemsg(PSTR(__func__), msg, ## __VA_ARGS__)
C99's __func__ is supposed to work as if a "const char __func__[]".
The __FUNCTION__ extension could instead be made to work like a
string literal.   We could live without string pasting capabilities
if it helps keep the interface between cpp and the C frontend
cleaner.
--
 // Bernardo Innocenti - Develer S.r.l., R&D dept.
\X/  http://www.develer.com/


Re: Different sized data and code pointers

2005-03-01 Thread Paul Schlie
> Thomas Gill wrote:
> I'm working on a GCC backend for a small embedded processor. We've got a
> Harvard architecture with 16 bit data addresses and 24 bit code
> addresses. How well does GCC support having different sized pointers for
> this sort of thing? The macros POINTER_SIZE and Pmode seem to suggest that
> there's one pointer size for everything.
>
> The backend that I've inherited gets most of the way with some really
> horrible hacks, but it would be nice if those hacks weren't necessary. In
> any case, the hacks don't cope with casting function pointers to integers.

With the arguable exception of function pointers (which need not be literal
addresses), all pointers are presumed to point to data, not code; therefore it
may be simplest to define pointers as being 16 bits, and to call functions
indirectly through a lookup table constructed at link time from program
memory, assuming it's readable via some mechanism. The call penalty
incurred would likely be insignificant relative to the complexity of
attempting to support 24-bit code pointers in the rare circumstances
they're typically used on an otherwise native 16-bit machine.

(And just as a heads-up: there seems to be no existing mechanism to enable
 convenient access to static constant data stored in an address space
 orthogonal to read-write data memory. One could perhaps implement a scheme
 in which every address is discriminated at run time based on some address
 range split, but that is likely not worth the run-time overhead; it should
 work, though, if you're desperate to conserve RAM.)





Re: Extension compatibility policy

2005-03-01 Thread Joseph S. Myers
On Wed, 2 Mar 2005, Bernardo Innocenti wrote:

> To move strings into program memory, there's a macro like this:
> 
> #define PSTR(s) ({ static const char __c[] PROGMEM = (s); &__c[0]; })
> 
> 
> But this wouldn't work because __func__ does not work like
> a string literal:
> 
> #define TRACEMSG(msg,...) __tracemsg(PSTR(__func__), msg, ## __VA_ARGS__)
> 
> C99's __func__ is supposed to work as if a "const char __func__[]".
> The __FUNCTION__ extension could instead be made to work like a
> string literal.   We could live without string pasting capabilities
> if it helps keeping the interface between cpp and the C frontend
> cleaner.

How about calling decl_attributes from fname_decl so a target 
insert_attributes hook can add attributes to __func__?  Would that suffice 
to solve your problem?

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


matching constraints in asm operands question

2005-03-01 Thread Peter Barada

I'm trying to improve atomic operations for ColdFire in a 2.4 kernel, and
I tried the following, based on the current online manual at:
http://gcc.gnu.org/onlinedocs/gcc-3.4.3/gcc/Extended-Asm.html#Extended-Asm

static __inline__ void atomic_inc(atomic_t *v)
{
__asm__ __volatile__("addql #1,%0" : "=m" (*v) : "0" (*v));
}

but that generates *lots* of warning messages about "matching
constraint doesn't allow a register".  The manual states that if I
*don't* use "0", then the compiler may place the input and output
operands in separate locations, and predicts that unknown bad things can
happen.

Searching the archives I see that people are using:

static __inline__ void atomic_inc(atomic_t *v)
{
__asm__ __volatile__("addql #1,%0" : "=m" (*v) : "m" (*v));
}

which seems to work, but I'm really concerned about the manual's
warning about the input and output operands being in separate places.

Which form is correct?

-- 
Peter Barada
[EMAIL PROTECTED]


Re: matching constraints in asm operands question

2005-03-01 Thread Andrew Pinski
On Mar 1, 2005, at 7:54 PM, Peter Barada wrote:
which seems to work, but I'm really concerned about the manual's
warning of the input and output operands being in separate places.
Which form is correct?
static __inline__ void atomic_inc(atomic_t *v)
{
__asm__ __volatile__("addql #1,%0" : "+m" (*v));
}
Works just fine everywhere I know of.  It is also the same as your last
example.
-- Pinski



Re: Extension compatibility policy

2005-03-01 Thread Paul Schlie
> Joseph S. Myers writes:
> How about calling decl_attributes from fname_decl so a target
> insert_attributes hook can add attributes to __func__?  Would that suffice
> to solve your problem?

Might it be possible to alternatively add an attribute symbol hook so that a
target may easily define an arbitrary target specific named attribute which
may be utilized without having to patch the parser, etc. to do so?

Thereby one could easily define a ROM and/or PMEM attribute hypothetically
for not only __FUNCTION__, but any arbitrary declared type or parameter
declaration, preserved through to the back end to aid in target specific
code generation and/or memory allocation?

PMEM __FUNCTION__ 

ROM static const x[] = "some string"

char y[] = ROM "some string"

struct {int a; int b;} z = PMEM {5312, 3421};

For example? (With a little luck this could kill two birds with one stone.)





Re: matching constraints in asm operands question

2005-03-01 Thread Hans-Peter Nilsson
On Tue, 1 Mar 2005, Peter Barada wrote:
>
> I'm trying to improve atomic operations for ColdFire in a 2.4 kernel, and
> I tried the following following the current online manual at:
> http://gcc.gnu.org/onlinedocs/gcc-3.4.3/gcc/Extended-Asm.html#Extended-Asm
>
> static __inline__ void atomic_inc(atomic_t *v)
> {
>   __asm__ __volatile__("addql #1,%0" : "=m" (*v) : "0" (*v));
> }
>
> but that generates *lots* of warning messages about "matching
> constraint doesn't allow a register".

Sounds like a bug to me.

brgds, H-P


Re: hacking frameworks for linux

2005-03-01 Thread Mike Stump
On Feb 25, 2005, at 9:35 AM, Rogelio M.Serrano Jr. wrote:
what is darwin_register_objc_includes in gcc/config/darwin-c.c for? is
it needed for linux?
/* Register the GNU objective-C runtime include path if STDINC.  */
  /* Register the GNU OBJC runtime include path if we are compiling  
OBJC
with GNU-runtime.  */

?  Do you have a next-runtime for linux?  I suspect not, if not, then 
it would seem you don't need this.




Re: reinventing frameworks in gnu toolchain (fwd)

2005-03-01 Thread Mike Stump
On Feb 25, 2005, at 7:41 PM, Rogelio M.Serrano Jr. wrote:
I have also moved all my changes to gcc.c and c-incpath.c into
config/linux.h and config/frameworks.c. the latter is just darwin-c.c
with the pragma stuff removed.
Sounds reasonable.
I also have a problem with the -F switch: it makes gcc hang.
Surely this should be a few minutes to debug.
Maybe I will use "-FDIR" instead. Same for binutils.
No.  Don't do that.
Since you didn't post your work in progress, I cannot comment on it 
further.  Try posting it, maybe someone can spot what is wrong with it; 
that, or fire up gdb on it.




Re: hacking frameworks for linux

2005-03-01 Thread Rogelio M . Serrano Jr .
On 2005-03-02 10:03:56 +0800 Mike Stump <[EMAIL PROTECTED]> wrote:
On Feb 25, 2005, at 9:35 AM, Rogelio M.Serrano Jr. wrote:
what is darwin_register_objc_includes in gcc/config/darwin-c.c for? 
is
it needed for linux?
/* Register the GNU objective-C runtime include path if STDINC.  */
  /* Register the GNU OBJC runtime include path if we are compiling 
OBJC
with GNU-runtime.  */

?  Do you have a next-runtime for linux?  I suspect not, if not, then 
it 
would seem you don't need this.



Yes I took it out. Frameworks is working fine for me now.
--
Got Sharapova?


Re: reinventing frameworks in gnu toolchain (fwd)

2005-03-01 Thread Rogelio M . Serrano Jr .
On 2005-03-02 10:07:25 +0800 Mike Stump <[EMAIL PROTECTED]> wrote:
On Feb 25, 2005, at 7:41 PM, Rogelio M.Serrano Jr. wrote:
I have also moved all my changes to gcc.c and c-incpath.c into
config/linux.h and config/frameworks.c. the latter is just darwin-c.c
with the pragma stuff removed.
Sounds reasonable.
I also have a problem with the -F switch: it makes gcc hang.
Surely this should be a few minutes to debug.
Maybe I will use "-FDIR" instead. Same for binutils.
No.  Don't do that.
Since you didn't post your work in progress, I cannot comment on it 
further. 
Try posting it, maybe someone can spot what is wrong with it; that, 
or fire 
up gdb on it.



Binutils is using "-F" for the filter object; that's why it's hanging.  I
find the -F switch unnecessary, actually.  In my binutils hack the only
switch that matters is "-framework".  Well, I'm using frameworks almost
exclusively now, so I hijacked the library search path and added the
default framework search dirs to it.  That way I don't need to change
ldso at all.
--
Got Sharapova?


Re: Extension compatibility policy

2005-03-01 Thread Joseph S. Myers
On Tue, 1 Mar 2005, Paul Schlie wrote:

> Might it be possible to alternatively add an attribute symbol hook so that a
> target may easily define an arbitrary target specific named attribute which
> may be utilized without having to patch the parser, etc. to do so?
> 
> Thereby one could easily define a ROM and/or PMEM attribute hypothetically
> for not only __FUNCTION__, but any arbitrary declared type or parameter
> declaration, preserved through to the back end to aid in target specific
> code generation and/or memory allocation?

The insert_attributes hook *already exists*.  It can insert arbitrary 
attributes on arbitrary declarations completely under target control.  
You can have target command-line options to control what it does.  You can 
have target pragmas that control what it does.  And of course source code 
can explicitly use attributes in any place documented in "Attribute 
Syntax".  I was simply suggesting filling a lacuna by applying this hook 
to implicit __func__ declarations as well as to explicit declarations.

> PMEM __FUNCTION__ 

This syntax doesn't make sense and you haven't explained what you want it 
to mean.  But you can have a pragma or command line option to put 
__FUNCTION__ in a special section if you want.

> ROM static const x[] = "some string"

You can already apply attributes to variables if you want.

> char y[] = ROM "some string"

We know that being able to control sections of string constants is 
desirable.  Bug 192 concerns one case.  Attributes in addition to 
command-line options is a natural extension.  (I noted some implementation 
issues in , though 
without considering the issue of suitable syntax for attributes on the 
string constants themselves which doesn't involve syntactic ambiguity for 
C or C++; it may be best not to allow attributes on individual strings, 
only a strings_section attribute to control the section of strings within 
an object or function definition.)

> struct {int a; int b;} z = PMEM {5312, 3421};

This syntax makes even less sense.  A brace-enclosed initializer is not an 
object!  If z has static storage duration, put the attribute on z.  If it 
doesn't, how it is initialized is a matter of compiler optimization and 
specifying attributes of where a copy of the initializer might go doesn't 
seem to make sense.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: about gcc -XLinker -M

2005-03-01 Thread Mike Stump
On Feb 26, 2005, at 8:01 AM, [EMAIL PROTECTED] wrote:
gcc -XLinker -M test.c 2>test.map
would output some usful information about locating
function to lib and ...
The detail analyze of them would be very useful.
Where can I find some introduce document about them?
This list isn't for such questions...  It is for the development _of_ 
gcc.




Re: Pascal front-end integration

2005-03-01 Thread Ed Smith-Rowland
On 1 Mar 2005 at 8:17, James A. Morrison wrote:
Hi,
  I've decided I'm going to try to take the time to clean up and update the
Pascal frontend for gcc and try to get it integrated into the upstream
source. I'm doing this because I wouldn't like to see GPC work with GCC 4+.
I don't care at all about supporting GPC on anything less than GCC 4.1, so
I've started by ripping out as much obvious compatibility code as I can and
removing any traces of GPC being a separate project.
My guess is that inclusion of Pascal into gcc would give that language
more exposure and would lead to faster development.
By many accounts gcc-4 is getting faster.  It would be nice to see pascal
take advantage of this rather than being marooned on 3.x.
I, for one, am more likely to play with a gpascal that bootstraps with
mainline than to try to build one with, perhaps unusual, dependencies
and some different version of gcc.
I am learning gcc internals slowly (this is a part-time after-work effort :-P)
but I would be interested in helping wherever I can.
  So far I have only accomplished converting lang-options.h to lang.opt.
I'm going to continue cleaning up the GPC code, then once I am happy with
how the code looks with respect to the rest of the GCC code, I'm going to
get it to compile with the current version of GCC mainline.  I'm starting
with the boring, conflict-happy whitespace changes first so the code is
easier for me to read and so that I can try to get an idea what the GPC
frontend is doing.
Before we get too far with this I think we should keep an eye on a trend in
gcc at least through 3.4 and 4.0: front ends are increasingly written by
hand rather than with flex and bison.  This is true for C++ as of 3.4 and
for C as of 4.1.  I'm pretty sure it's true for gfortran too.  I think this
is true for gcjx too.  The latter is written in C++ to boot.
My understanding is that gpc uses flex/bison, as in p2c - a Pascal-to-C
translator.  I would like to know why folks think hand-written parsers are
better.  My guess is that they are easier to maintain and that they support
more lookahead.
A gpascal front-end effort might do well to take a hard look at the new
front ends for C and C++ (and Java) and consider a rewrite from scratch
using these as models.
  My current changes are available through bazaar (an arch implementation) which
people can get with:
 baz register-archive http://www.csclub.uwaterloo.ca/~ja2morri/arch 

 baz get [EMAIL PROTECTED]/gcc-pascal--mainline--0.3
There is another trend in gcc: a move toward Subversion from CVS.  I
realize this is a first-try effort, but there would probably be less regret
later if we adopt the standard toolchain.  The decision to go to Subversion
was not taken lightly.
Ed Smith-Rowland



Re: Pascal front-end integration

2005-03-01 Thread Marcin Dalecki
On 2005-03-02, at 03:22, Ed Smith-Rowland wrote:
On 1 Mar 2005 at 8:17, James A. Morrison wrote:
Hi,
  I've decided I'm going to try to take the time and cleanup and update the
Pascal frontend for gcc and try it get it integrated into the upstream
source. I'm doing this because I wouldn't like to see GPC work with GCC 4+.
I don't care at all at supporting GPC on anything less than GCC 4.1 so I've
started by ripping out as much obviously compatibility code as I can and
removing any traces of GPC being a separate project.

My guess is that inclusion of Pascal into gcc would give that language
more exposure and would lead to faster development.
I object to it. There is no single application of importance in Pascal for
me other than TeX, which GPC doesn't handle anyway. It's not worth the
bandwidth for me, and like Java it's another candidate which will drag a
ton of library framework with itself later on. Like Java it would
significantly impede any attempt to do full-coverage builds of the whole
compiler tree.



Re: Pascal front-end integration

2005-03-01 Thread Andrew Pinski
On Mar 1, 2005, at 9:29 PM, Marcin Dalecki wrote:
On 2005-03-02, at 03:22, Ed Smith-Rowland wrote:
On 1 Mar 2005 at 8:17, James A. Morrison wrote:
Hi,
  I've decided I'm going to try to take the time and cleanup and 
update
  the
Pascal frontend for gcc and try it get it integrated into the 
upstream
source. I'm doing this because I wouldn't like to see GPC work with 
GCC
4+. I don't care at all at supporting GPC on anything less than GCC 
4.1
so I've started by ripping out as much obviously compatibility code 
as I
can and removing any traces of GPC being a separate project.
My guess is that inclusion of Pascal into gcc would give that language
more exposure and would lead to faster development.
I object it. There is no single application of importance in pascal 
for me
other then TeX, which GPC doesn't handle anyway. It's not worth the 
bandwidth for me and
like java it's another candidate which will drag a ton of library 
framework with
itself later on. Like java it would significantly impede any attempt 
to do full coverage
builds of the whole compiler tree.
Actually I disagree with you: GPC is much smaller than Java, and doing full
coverage for a large project like GCC is sometimes a hard thing to do
anyway.  In fact it is even harder than you think, especially with the code
added (in reload), to do any full coverage.  Who cares if it gets slower or
more impractical to do what you are doing, as it gives more coverage to GCC
and the middle end than doing nothing.

I don't know why I replied to this thread but I did.  Well, I think we
should have no more on this thread unless it is about technical reasons why
GPC cannot be included, or political ones (FSF/SC decides it is not a good
thing).

-- Pinski


Fortran libs.

2005-03-01 Thread Marcin Dalecki
After trying to build the Fortran compiler I'm convinced that a cut-down
version of the multi-precision libraries it requires should be included
in the compiler tree. The reasons are as follows:

1. They should be there for reference in bug checking.
2. This would make installation much easier on systems which don't fall
   into the category of JBLD (Joe User's Bloated Linux Distro).
3. Stuff not required for the proper operation of the compiler could be
   taken out. It's actually just a tiny subset of the library that the
   compiler truly requires.
4. It would just be consistent, given the in-tree copy of the zip library.
5. It would make it easier to guarantee that the source-code setup choices
   match between what the Fortran compiler expects and how the library was
   built.
6. Since there are multiple releases of the libraries in question, this
   would reduce the combinatorial complexity of the maintenance issues.



Re: Pascal front-end integration

2005-03-01 Thread Marcin Dalecki
On 2005-03-02, at 03:36, Andrew Pinski wrote:
Actually I disagree with you GPC is much smaller than Java,
If you have only UCSD Pascal in mind, yes. But not if you look at any of
the usable, aka Delphi, implementations of it. You always have to have
runtime libraries.

and doing full converage
for a large project like GCC is sometimes a hard thing to do anyways.
So the reasoning is: "it is pain, give me more of it."?
In fact it is even harder than you thing, especially with code added 
(in reload) to do any full coverage.
Huh? I see the argument that another front end will exercise more of the
back end, since chances are that it will trigger code paths in it which
other languages don't use.  However I can hardly see any Pascal language
feature/construct which wouldn't already be covered by the C++ or Java ABI.

Who cares it gets slower or more impractical to do what you are doing
I care :-).


Re: Pascal front-end integration

2005-03-01 Thread Andrew Pinski
On Mar 1, 2005, at 9:46 PM, Marcin Dalecki wrote:
Hugh? I see the argument that another front-end will exercise more of 
the back-end, since
chances are that it will trigger code paths in it which other 
languages don't use.
However I can hardly see any Pascal language feature/construct, which 
wouldn't be already
covered by the C++ or Java ABI.
Actually Pascal exercises non-local gotos and nested functions, among other
things (nested functions are used a lot in Ada, yes, but another language
makes such bugs easier to catch, and the Ada testsuite is not really
complete, even with a lot of bugs fixed lately).

See, I replied, but I wanted to set you straight on a technical detail
which you keep forgetting.

-- Pinski


Re: Question about ObjC++ state

2005-03-01 Thread Mike Stump
On Feb 28, 2005, at 3:41 AM, Lars Sonchocky-Helldorf wrote:
I'd like to know what the 'official' position regarding ObjC++ is now.

Anybody willing to clear up?
Sure, why not...  Either someone will submit a clean, safe patch and it
will be reviewed, OKed, and checked in, or that won't happen.  I've added
a poll to the wiki pages where you can register your vote.  If no patch is
submitted, there is little point in considering/contemplating/arguing about
what would happen.  If such a patch were submitted, it would have to go
into mainline first anyway; if it proves safe there and people want to
propose a version of it for 4.0.x, then I think the RM would have to
evaluate it on its merits, risks, and timing.  I don't see the need for the
RM to declare at this point what a future decision would be.  I suspect
that if done well, the RM would entertain allowing it in 4.0.[n+1]; but
that is just speculation.

P.S.: cc'ed to the GNUstep list just for informational purpose
[ Assuming that list is still closed, since you didn't say they opened
it. ]  Will you please stop doing that...  It drives us nuts, absolutely
nuts.  If it is now an open list, never mind...




Re: Question about ObjC++ state

2005-03-01 Thread Rogelio M . Serrano Jr .
On 2005-03-02 10:52:38 +0800 Mike Stump <[EMAIL PROTECTED]> wrote:
[snipped..]
P.S.: cc'ed to the GNUstep list just for informational purpose
[ assuming that list is still closed, since, you didn't say they 
opened it ] 
Will you please stop doing that...  It drives us nuts, absolutely 
nuts.  If 
it is now an open list, never mind...



Several people on that list, yours truly included, are holding their
breath waiting for ObjC++ to happen. Although that language is ugly,
there are a lot of tangible benefits to its availability. I'm pretty
willing to help but I just started climbing the learning curve.
--
Got Sharapova?


Re: Pascal front-end integration

2005-03-01 Thread James A. Morrison

Ed Smith-Rowland <[EMAIL PROTECTED]> writes:

> On 1 Mar 2005 at 8:17, James A. Morrison wrote:
> 
> > Hi,
> >   I've decided I'm going to try to take the time and cleanup and
> > update
> >   the
> > Pascal frontend for gcc and try it get it integrated into the upstream
> > source. I'm doing this because I wouldn't like to see GPC work with GCC
> > 4+. I don't care at all at supporting GPC on anything less than GCC 4.1
> > so I've started by ripping out as much obviously compatibility code as I
> > can and removing any traces of GPC being a separate project.
> 
> My guess is that inclusion of Pascal into gcc would give that language
> more exposure and would lead to faster development.
> 
> By many accounts gcc-4 is getting faster.  It would be nice to see pascal
> take advantage of this rather than being marooned on 3.x.
> 
> I, for one, am more likely to play with a gpascal that bootstraps with
> mainline than to try to build one with, perhaps unusual, dependencies
> and some different version of gcc.
> 
> I am learning gcc internals slowly (this is a part-time after-work effort :-P)
> but I would be interested in helping wherever I can.

 Grab the source and see what you can do.
 
> >   So far I have only accomplished converting lang-options.h to
> > lang.opt.  I'm going
> > to continue cleaning up the GPC code, then once I am happy with how the code
> > looks with respect to the rest of the GCC code, I'm going to get it to
> > compile with
> > the current version of GCC mainline.  I'm starting with the boring
> > conflict happy
> > whitespace changes first so the code is easier for me to read and so that I 
> > can
> > try to get an idea what the GPC frontend is doing.
> 
> Before we get too far with this I think we should keep an eye on a trend
> in gcc at least through 3.4 and 4.0:  Front ends are increasingly written
> by hand rather than with flex and bison.  This is true for C++ as of 3.4
> and for C as of 4.1.  I'm pretty sure it's true for gfortran too.  I
> think this is true for gcjx too.  The latter is written in C++ to boot.
> 
> My understanding is that gpc uses flex/bison, as in p2c - a Pascal-to-C
> translator.  I would like to know why folks think hand-written parsers
> are better.  My guess is that they are easier to maintain and that they
> support more lookahead.
> 
> A gpascal front-end effort might do well to take a hard look at the new
> front ends for C and C++ (and Java) and consider a rewrite from scratch
> using these as models.

 Feel free to write your own parser, I have no desire to do that.

> >   My current changes are available through bazaar (an arch
> > implementation) which people can get with:
> >  baz register-archive http://www.csclub.uwaterloo.ca/~ja2morri/arch 
> > 
> >  baz get [EMAIL PROTECTED]/gcc-pascal--mainline--0.3
> 
> There is another trend in gcc: a move toward Subversion from CVS.  I
> realize this is a first-try effort but there would probably be less
> regret later if we adopt the standard toolchain.  The decision to go to
> Subversion was not taken lightly.
> 
> Ed Smith-Rowland

 I don't think it makes a difference.  If this little project of mine does
start moving I'll put the code in CVS/SVN at that time.  Until then, I'm
taking an opportunity to play with bazaar.

-- 
Thanks,
Jim

http://www.student.cs.uwaterloo.ca/~ja2morri/
http://phython.blogspot.com
http://open.nit.ca/wiki/?page=jim


Re: Pascal front-end integration

2005-03-01 Thread Waldek Hebisch
James A. Morrison wrote:
> I've decided I'm going to try to take the time and cleanup and update
> the Pascal frontend for gcc and try it get it integrated into the 
> upstream source.

Nice to hear that you want to work on Pascal. However, did you notice
that gpc _is_ changing? In particular, the latest snapshot is
gpc-20050217.tar.bz2 (it looks like you started from an earlier version).
Since you plan to work on your own branch, it is wise to plan for an
easy merge with frontend changes.

Joseph S. Myers wrote:

> If GPC developers are interested in having GPC integrated in GCC 4.1 and
> are willing to have it play by the same rules as the rest of GCC - note
> that the Ada maintainers made substantial changes to how they contributed
> patches to GCC in order to follow usual GCC practice more closely - then
> of course coordination would be desirable.  

I would like to see GPC integrated in GCC.  However, I feel that playing
by the GCC rules I could do substantially less work for GPC than I am
doing now.  GCC rules pay off when there is a critical mass of developers.
My impression was that GPC did not have that critical mass -- so it was
better to keep GPC outside of GCC.  Jim's contribution can change that.

> If the GPC developers would
> prefer to continue to develop GPC independently of GCC, this need not stop
> integration of some version of GPC in GCC.  I would hope in that case,
> however, there would still be better and closer cooperation between the
> two lines of development than there has been after the g95/gfortran fork
> (for example, that the GPC developers would be willing to make the version
> control repository used for actual development accessible to the public so
> individual patches can be extracted and merged as such).

ATM GPC does not use version control. Frank Heckenbach just periodically
collects flowing patches and his changes into releases. 

-- 
  Waldek Hebisch
[EMAIL PROTECTED] 


Re: Extension compatibility policy

2005-03-01 Thread Paul Schlie
> From: "Joseph S. Myers" <[EMAIL PROTECTED]>
>> On Tue, 1 Mar 2005, Paul Schlie wrote:
>> Might it be possible to alternatively add an attribute symbol hook so that a
>> target may easily define an arbitrary target specific named attribute which
>> may be utilized without having to patch the parser, etc. to do so?
>> 
>> Thereby one could easily define a ROM and/or PMEM attribute hypothetically
>> for not only __FUNCTION__, but any arbitrary declared type or parameter
>> declaration, preserved through to the back end to aid in target specific
>> code generation and/or memory allocation?
> 
> The insert_attributes hook *already exists*.  It can insert arbitrary
> attributes on arbitrary declarations completely under target control.
> You can have target command-line options to control what it does.  You can
> have target pragmas that control what it does.  And of course source code
> can explicitly use attributes in any place documented in "Attribute
> Syntax".  I was simply suggesting filling a lacuna by applying this hook
> to implicit __func__ declarations as well as to explicit declarations.

- Got it, I think. Sorry for being dense. So in summary:

  - an attribute may be defined, such as:

#define ROM __attribute__((ROM))

  - and used, following the above-referenced "Attribute Syntax", as
either a variable or function parameter declaration/implementation:

  int ROM x = 3;

  int foo (int ROM y)

where the parameter's attribute is visible whenever that parameter
is used as an operand within the function tree, and correspondingly
during rtl/template matching; and further, if there's an attribute
mismatch between the function argument and its parameter, the
compiler will warn? (is there any way to force an error instead?)

And just to double check with respect to your other comments:

>> char y[] = ROM "some string"
>
> We know that being able to control sections of string constants is
> desirable ... it may be best not to allow attributes on individual strings,
> only a strings_section attribute to control the section of strings within
> an object or function definition.

- understood.

>> struct {int a; int b;} z = PMEM {5312, 3421};
> 
> This syntax makes even less sense.  A brace-enclosed initializer is not an
> object!  If z has static storage duration, put the attribute on z.  If it
> doesn't, how it is initialized is a matter of compiler optimization and
> specifying attributes of where a copy of the initializer might go doesn't
> seem to make sense.

- except that when GCC treats it as a static constant value, accessed during
  run-time to initialize the declared variable, just as strings and arrays
  are, it must also have the same attribute the back end is relying on
  to identify such references as needing to be accessed differently than
  references to other variables. (so I suspect it would be appropriate to
  be able to define programmatically an attribute which may be attached to
  all such references to initializing data, not just strings, if not
  optimized away, although I agree it's not necessary to specify each
  individually)


Thanks again, and I apologize for my confusion.

-paul-




Re: Pascal front-end integration

2005-03-01 Thread Ed Smith-Rowland
James A. Morrison wrote:
> Ed Smith-Rowland <[EMAIL PROTECTED]> writes:
>
>> On 1 Mar 2005 at 8:17, James A. Morrison wrote:
>>
>>> Hi,
>>>  I've decided I'm going to try to take the time and cleanup and update the
>>> Pascal frontend for gcc and try to get it integrated into the upstream
>>> source.  I'm doing this because I would like to see GPC work with GCC 4+.
>>> I don't care at all about supporting GPC on anything less than GCC 4.1,
>>> so I've started by ripping out as much obvious compatibility code as I
>>> can and removing any traces of GPC being a separate project.
>>
>> My guess is that inclusion of Pascal into gcc would give that language
>> more exposure and would lead to faster development.
>> By many accounts gcc-4 is getting faster.  It would be nice to see Pascal
>> take advantage of this rather than being marooned on 3.x.
>> I, for one, am more likely to play with a gpascal that bootstraps with
>> mainline than to try to build one with, perhaps unusual, dependencies
>> and some different version of gcc.
>> I am learning gcc internals slowly (this is a part-time after-work
>> effort :-P) but I would be interested in helping wherever I can.
>
> Grab the source and see what you can do.

I will do that.
 

>>> So far I have only accomplished converting lang-options.h to lang.opt.
>>> I'm going to continue cleaning up the GPC code, then once I am happy
>>> with how the code looks with respect to the rest of the GCC code, I'm
>>> going to get it to compile with the current version of GCC mainline.
>>> I'm starting with the boring, conflict-happy whitespace changes first
>>> so the code is easier for me to read and so that I can try to get an
>>> idea what the GPC frontend is doing.
>>
>> Before we get too far with this I think we should keep an eye on a trend
>> in gcc at least through 3.4 and 4.0: front ends are increasingly written
>> by hand rather than with flex and bison.  This is true for C++ as of 3.4
>> and for C as of 4.1.  I'm pretty sure it's true for gfortran too.  I
>> think this is true for gcjx too.  The latter is written in C++ to boot.
>> My understanding is that gpc uses flex/bison in p2c - a Pascal-to-C
>> translator.
>> I would like to know why folks think hand-written parsers are better.
>> My guess is that they are easier to maintain and that they support more
>> lookahead.
>> A gpascal front-end effort might do well to take a hard look at the new
>> front ends for C and C++ (and Java) and consider a rewrite from scratch
>> using these as models.
>
> Feel free to write your own parser, I have no desire to do that.

I have no desire to write my own parser.  I want to steal someone else's ;-).

I was just wondering what the best one to steal was given the long term 
goals of
1) Integration into GCC-4.x
2) Happy maintainable code
3) Entertainment

In fact, I'm somewhat curious what caused folks to jump into the breach 
with parsers.  From reading the lists it seems to be maintainability and 
stomping out corner case problems for the most part.

Perhaps a parser toolset is emerging that will decouple the front ends 
from the middle and back ends to a greater degree.  I think I will look 
at the new C/C++ parsers and see what's what.

>>> My current changes are available through bazaar (an arch implementation)
>>> which people can get with:
>>>   baz register-archive http://www.csclub.uwaterloo.ca/~ja2morri/arch
>>>   baz get [EMAIL PROTECTED]/gcc-pascal--mainline--0.3
>>
>> There is another trend in gcc: a move toward Subversion from CVS.  I
>> realize this is a first-try effort but there would probably be less
>> regret later if we adopt the standard toolchain.  The decision to go to
>> Subversion was not taken lightly.
>> Ed Smith-Rowland
>
> I don't think it makes a difference.  If this little project of mine does
> start moving I'll put the code in CVS/SVN at that time.  Until then, I'm
> taking an opportunity to play with bazaar.
 

I'm all for grabbing new tools and playing with them.  I have a huge 
relational database collection and a script language collection.  In 
fact, I think I'll check out bazaar and arch and start a revision 
control system collection :-).

Ed Smith-Rowland


Re: Pascal front-end integration

2005-03-01 Thread Marcin Dalecki
On 2005-03-02, at 05:20, Ed Smith-Rowland wrote:
> In fact, I'm somewhat curious what caused folks to jump into the
> breach with parsers.  From reading the lists it seems to be
> maintainability and stomping out corner case problems for the most
> part.
>
> Perhaps a parser toolset is emerging that will decouple the front ends
> from the middle and back ends to a greater degree.  I think I will
> look at the new C/C++ parsers and see what's what.

You know this "shift/reduce conflict" stuff the former parsers were
barking at you?  Contrary to C and C++, an LR-grammar parser generator
like bison or yacc is a fully adequate tool for Pascal.



Re: matching constraints in asm operands question

2005-03-01 Thread Peter Barada

>> which seems to work, but I'm really concerned about the manual's
>> warning about the input and output operands being in separate places.
>>
>> Which form is correct?
>
>static __inline__ void atomic_inc(atomic_t *v)
>{
>   __asm__ __volatile__("addql #1,%0" : "+m" (*v));
>}
>
>Works just fine, everywhere I know of.  It is the same as your last
>example also.

Ugh, in the hope of simplifying the example, I made it somewhat too trivial...

static __inline__ void atomic_add(atomic_t *v, int i)
{
__asm__ __volatile__("addl %2,%0" : "=m" (*v) : "m" (*v), "d" (i));
}

Is that correct?  And if so, then isn't the documentation wrong?

-- 
Peter Barada
[EMAIL PROTECTED]