Wiki pages on test cases

2005-11-27 Thread Jonathan Wakely

Why are there two separate wiki pages on test case writing?

http://gcc.gnu.org/wiki/HowToPrepareATestcase
http://gcc.gnu.org/wiki/TestCaseWriting

The second one seems fairly gfortran-specific, but doesn't mention that
fact anywhere.  If the second page adds info that is generally useful to
the whole compiler, that info should be in the first page.  If it
doesn't, it should clearly say it is gfortran-specific, or should be
removed.

Yes, I know it's a wiki and I can do this myself, but I only have so
much spare time and maybe the second one was added for a good reason.

jon



Re: Wiki pages on test cases

2005-11-27 Thread Giovanni Bajo
Jonathan Wakely <[EMAIL PROTECTED]> wrote:

> http://gcc.gnu.org/wiki/HowToPrepareATestcase
> http://gcc.gnu.org/wiki/TestCaseWriting
>
> The second one seems fairly gfortran-specific, but doesn't mention
> that fact anywhere.  If the second page adds info that is generally
> useful to the whole compiler, that info should be in the first page.
> If it doesn't, it should clearly say it is gfortran-specific, or
> should be removed.
>
> Yes, I know it's a wiki and I can do this myself, but I only have so
> much spare time and maybe the second one was added for a good reason.


I think they should be merged. The second page (which I had never seen
before) takes a more tutorial-like approach. Maybe its content could be
inserted near the start of the first page as a quick example.

Giovanni Bajo



20040309-1.c vs overflow being undefined

2005-11-27 Thread Andrew Pinski
If we look at this testcase, we have a function like:
int foo(unsigned short x)
{
  unsigned short y;
  y = x > 32767 ? x - 32768 : 0;
  return y;
}


x is promoted to a signed int by the front-end as the type
of 32768 is signed.  So when we pass 65535 to foo (like in the testcase),
we get some large negative number for (signed int)x, and then we subtract
even more from that number, which causes an overflow.

Does this sound right?  Should the testcase have -fwrapv or change 32768 to
32768u?  (This might also be wrong in SPEC and gzip too; I have not looked
yet.)

Thanks,
Andrew Pinski


Re: 20040309-1.c vs overflow being undefined

2005-11-27 Thread Andreas Schwab
Andrew Pinski <[EMAIL PROTECTED]> writes:

> x is promoted to a signed int by the front-end as the type
> of 32768 is signed.  So when we pass 65535 to foo (like in the testcase),
> we get some large negative number for (signed int)x

I don't see how you can get a large negative number for that.  With 16 bit
ints you'll get -1, but for anything bigger you'll keep 65535.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: 20040309-1.c vs overflow being undefined

2005-11-27 Thread Falk Hueffner
Andrew Pinski <[EMAIL PROTECTED]> writes:

> If we look at this testcase, we have a function like:
> int foo(unsigned short x)
> {
>   unsigned short y;
>   y = x > 32767 ? x - 32768 : 0;
>   return y;
> }
>
>
> x is promoted to a signed int by the front-end as the type
> of 32768 is signed.  So when we pass 65535 to foo (like in the testcase),
> we get some large negative number for (signed int)x

That shouldn't happen. Promoting from unsigned short to int shouldn't
sign extend. If you see it happening, then that's a bug.

> Should the testcase have -fwrapv or change 32768 to 32768u?

I don't see any reason to.

-- 
Falk


Re: 20040309-1.c vs overflow being undefined

2005-11-27 Thread Andrew Pinski
> 
> Andrew Pinski <[EMAIL PROTECTED]> writes:
> 
> > x is promoted to a signed int by the front-end as the type
> > of 32768 is signed.  So when we pass 65535 to foo (like in the testcase),
> > we get some large negative number for (signed int)x
> 
> I don't see how you can get a large negative number for that.  With 16 bit
> ints you'll get -1, but for anything bigger you'll keep 65535.

Sorry, wrong number; I meant 32769:
  if (foo (32769) != 1)
    abort ();

-- Pinski


Re: 20040309-1.c vs overflow being undefined

2005-11-27 Thread Andrew Pinski
> 
> Andrew Pinski <[EMAIL PROTECTED]> writes:
> 
> > If we look at this testcase, we have a function like:
> > int foo(unsigned short x)
> > {
> >   unsigned short y;
> >   y = x > 32767 ? x - 32768 : 0;
> >   return y;
> > }
> >
> >
> > x is promoted to a signed int by the front-end as the type
> > of 32768 is signed.  So when we pass 65535 to foo (like in the testcase),
> > we get some large negative number for (signed int)x
> 
> That shouldn't happen. Promoting from unsigned short to int shouldn't
> sign extend. If you see it happening, then that's a bug.

Actually, I am not seeing that.  Instead I am seeing the subtraction done
in short rather than in int (which, as far as I can tell from reading the
C promotion rules, is what should happen).

-- Pinski


Re: 20040309-1.c vs overflow being undefined

2005-11-27 Thread Andreas Schwab
Andrew Pinski <[EMAIL PROTECTED]> writes:

> Sorry, wrong number; I meant 32769:
>   if (foo (32769) != 1)
>     abort ();

I think with 16 bit ints you should get 0 here, since (int)32769 ==
-32767, which is less than 32767.

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: 20040309-1.c vs overflow being undefined

2005-11-27 Thread Richard Henderson
On Sun, Nov 27, 2005 at 12:25:28PM -0500, Andrew Pinski wrote:
> Actually, I am not seeing that.  Instead I am seeing the subtraction done
> in short rather than in int (which, as far as I can tell from reading the
> C promotion rules, is what should happen).

You're wrong about the C promotion rules.


r~


Re: Thoughts on LLVM and LTO

2005-11-27 Thread Devang Patel
> >
> > With our limited resources, we cannot really afford to go off on a
> > multi-year tangent nurturing and growing a new technology just to add
> > a
> > new feature.
> >
> What makes you think implementing LTO from scratch is different here?

I read the entire thread (the last message I read was from Mike Stump),
but I did not see any discussion of the productivity of GCC developers.

If one approach provides tools that make developers very productive,
it may blow the initial work estimates out of the water.

Here are the questions for the LLVM as well as the LTO folks.  (To be
fair, Chris gave us some hints on a few of these, but it is OK if people
ask him for clarifications. :)  And I have not read anything about this
in the LTO proposal, so I take it that this may need extra work not
considered in the LTO time estimates.)

1) Documentation

How good is the documentation? Can a _new_ compiler engineer become
productive quickly?

2) Testability of optimization passes

How precisely can one test a particular feature or optimization pass?

3) Integrated tools to investigate/debug/fix optimizer bugs

4) Basic APIs needed to implement various optimization techniques

-
Devang


Re: Thoughts on LLVM and LTO

2005-11-27 Thread Daniel Berlin
On Sun, 2005-11-27 at 11:58 -0800, Devang Patel wrote:
> > >
> > > With our limited resources, we cannot really afford to go off on a
> > > multi-year tangent nurturing and growing a new technology just to add
> > > a
> > > new feature.
> > >
> > What makes you think implementing LTO from scratch is different here?
> 
> I read the entire thread (the last message I read was from Mike Stump),
> but I did not see any discussion of the productivity of GCC developers.
> 
> If one approach provides tools that make developers very productive,
> it may blow the initial work estimates out of the water.
> 
> Here are the questions for the LLVM as well as the LTO folks.  (To be
> fair, Chris gave us some hints on a few of these, but it is OK if people
> ask him for clarifications. :)  And I have not read anything about this
> in the LTO proposal, so I take it that this may need extra work not
> considered in the LTO time estimates.)
> 
> 1) Documentation
> 
> How good is the documentation? Can a _new_ compiler engineer become
> productive quickly?

There is no question that LLVM has much better documentation of the IR
and its semantics than we do.

See, e.g., http://llvm.cs.uiuc.edu/docs/LangRef.html


It has tutorials on writing a pass, as well as example passes:
http://llvm.cs.uiuc.edu/docs/WritingAnLLVMPass.html

> 
> 2) Testability of optimization passes
> 
> How precisely can one test a particular feature or optimization pass?

You can run one pass at a time, if you want to, using opt (or two, or
three).

> 
> 3) Integrated tools to investigate/debug/fix optimizer bugs

bugpoint beats pretty much anything we have, IMHO :).

> 
> 4) Basic APIs needed to implement various optimization techniques

All the basics are there for scalar opts.  There is no data dependence
yet, but they have a fine working SCEV, so it's only a few months to
implement, at most.


Re: 20040309-1.c vs overflow being undefined

2005-11-27 Thread Neil Booth
Andreas Schwab wrote:-

> Andrew Pinski <[EMAIL PROTECTED]> writes:
> 
> > Sorry, wrong number; I meant 32769:
> >   if (foo (32769) != 1)
> >     abort ();
> 
> I think with 16 bit ints you should get 0 here, since (int)32769 ==
> -32767, which is less than 32767.

int foo(unsigned short x)
{
  unsigned short y;
  y = x > 32767 ? x - 32768 : 0;
  return y;
}

With 16-bit ints, unsigned short promotes to unsigned int, since int
cannot hold all of its values.  Similarly, 32768 has type unsigned int,
not int, from inception.  So everything is unsigned and it should be:

  y = 32769U > 32767U ? 32769U - 32768U : 0;

which is 1 to my understanding.

Neil.


Re: LEGITIMIZE_RELOAD_ADDRESS vs address_reloaded

2005-11-27 Thread Alan Modra
On Fri, Nov 25, 2005 at 07:20:52PM +0100, Ulrich Weigand wrote:
> > c) Modify the ppc 'Z' constraint to match the indexed address reload
> > generates.  This would rely on the pattern we generate in
> > LEGITIMIZE_RELOAD_ADDRESS never being generated elsewhere.
[snip]
> 
> Overall, I'd tend to prefer something along the lines of (c), in
> particular as it would also catch the cases where 
> LEGITIMIZE_RELOAD_ADDRESS isn't actually involved, as you note:

Thanks.  I went ahead and implemented this, and yes, the testcase in
PR24997 now gets better code in other places too.

-- 
Alan Modra
IBM OzLabs - Linux Technology Centre


Re: Thoughts on LLVM and LTO

2005-11-27 Thread Chris Lattner

On Sun, 27 Nov 2005, Daniel Berlin wrote:

> On Sun, 2005-11-27 at 11:58 -0800, Devang Patel wrote:
>
>> What makes you think implementing LTO from scratch is different here?
>>
>> Here are the questions for the LLVM as well as the LTO folks.
>>
>> 1) Documentation
>>
>> How good is the documentation? Can a _new_ compiler engineer become
>> productive quickly?
>
> There is no question that LLVM has much better documentation of the IR
> and its semantics than we do.
>
> See, e.g., http://llvm.org/docs/LangRef.html
>
> It has tutorials on writing a pass, as well as example passes:
> http://llvm.org/docs/WritingAnLLVMPass.html


Yup; in addition, LLVM has several pretty good docs for various
subsystems; the full set is included here: http://llvm.org/docs/


Another good tutorial (aimed at people writing mid-level optimization 
passes) is here: 
http://llvm.org/pubs/2004-09-22-LCPCLLVMTutorial.html


Note that the organization of the 'llvm-gcc' compiler reflects the old 
compiler, not the new one.  Other than that it is up-to-date.


For a grab bag of various LLVM APIs that you may run into, this document
is useful: http://llvm.org/docs/ProgrammersManual.html



>> 2) Testability of optimization passes
>>
>> How precisely can one test a particular feature or optimization pass?
>
> You can run one pass at a time, if you want to, using opt (or two, or
> three).


Yup, and there is one specific reason this is important/useful: with the
ability to write out the IR and a truly modular pass manager, you can
write really good regression tests, i.e. regression tests for
optimizers/analyses that specify the exact input to a pass.


With traditional GCC regtests, you write your test in C (or some other
language).  If you're testing the 7th pass from the parser, the regression
test may fail to test what you want as time progresses and the 6 passes
before you (or the parser) change.  With LLVM, this isn't an issue.


Note that the link-time proposal could also implement this, but would 
require some hacking (e.g. implementing a text form for the IR) and time 
to get right.



>> 3) Integrated tools to investigate/debug/fix optimizer bugs
>
> bugpoint beats pretty much anything we have, IMHO :).


For those that are not familiar with it, here's some info:
http://llvm.org/docs/Bugpoint.html

If you are familiar with delta, it is basically a far faster and more
powerful (but similar in spirit) automatic debugger.  It can reduce test
cases, identify which pass is the problem, debug ICEs and
miscompilations, and debug the optimizer, native backend, or JIT
compiler.



>> 4) Basic APIs needed to implement various optimization techniques
>
> All the basics are there for scalar opts.  There is no data dependence
> yet, but they have a fine working SCEV, so it's only a few months to
> implement, at most.


Yup.  LLVM has the scalar optimizations basically covered, but is weak on
loop optimizations.  This is something that we intend to cover in time (if
no one else ends up helping), but it will come after debug info and other
things are complete.


-Chris

--
http://nondot.org/sabre/
http://llvm.org/


Re: Thoughts on LLVM and LTO

2005-11-27 Thread Chris Lattner

On Wed, 23 Nov 2005, Ian Lance Taylor wrote:


> Chris Lattner <[EMAIL PROTECTED]> writes:
>
>>> You will need to get the University of Illinois and past/present LLVM
>>> developers to assign the copyright over to the FSF.  Yes, you've
>>> claimed it's easy, but it needs to be done.  Otherwise, we are in
>>> limbo.  We cannot do anything with LLVM until this is finalized.
>>
>> I would definitely like to get this process running, but unfortunately
>> it will have to wait until January.  The main person I have to talk to
>> has gone to India for Christmas, so I can't really start the process
>> until January.  Yes, I'm incredibly frustrated with this as well. :(
>
> You, or somebody, can start the process by writing to the FSF, at
> [EMAIL PROTECTED], to see what forms the FSF would like to see.  Ideally
> those forms will be acceptable to all concerned.  More likely there
> will have to be some negotiation between the FSF and the University.


For the record, I sent an email to the FSF to get the ball rolling and
find out what needs to be done.


-Chris

--
http://nondot.org/sabre/
http://llvm.org/


Re: ppc-linux and s390-linux compilers requiring 64-bit HWI?

2005-11-27 Thread Ulrich Weigand
Jakub Jelinek wrote:

> What's the reason why ppc-linux- and s390-linux-targeted GCC
> requires a 64-bit HWI these days?
> If it is a native compiler, this means using long long in a huge part
> of the compiler, and therefore slower compile times and bigger memory
> consumption.
> Neither the ppc-linux-configured compiler nor the s390-linux one is
> multilibbed, and ppc-linux only supports -m32 (the s390-linux one
> apparently supports both -m31 and -m64 on the GCC side, but without
> multilibs that isn't very helpful).

There are two reasons why we require a 64-bit HWI on s390: we want to
support -m64 (multilibs can easily be added), and a 64-bit HWI simplifies
constant handling significantly.  There are multiple places in the
s390 backend that rely on HWI being wider than 32 bits, and that would
likely cause overflow problems otherwise ...

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  Linux on zSeries Development
  [EMAIL PROTECTED]