Open optimization repository updated

2009-06-05 Thread Grigori Fursin
Hi all,

Just wanted to mention that we updated the Collective Optimization Database
with various optimization cases (for Intel and AMD processors at the moment)
for multiple benchmarks such as EEMBC, SPEC2006, etc., and started collecting
data from several users to compare different compilers including GCC, LLVM,
Open64, Intel, etc.  You can access it at:
http://ctuning.org/cdatabase

I would be very happy to have your feedback about this repository, and to
hear whether you would be interested in contributing optimization data to
it to improve GCC and help end-users optimize their programs.

Yours,
Grigori Fursin


Grigori Fursin, INRIA, France
http://fursin.net/research




New Interactive Compilation Interface (ICI) v2.0 has been released

2009-06-05 Thread Zbigniew Chamski
Dear all,

We just finished and released the new Interactive Compilation
Interface v2.0 for GCC 4.4.0, based on the official GCC 4.4.0 release code.
Full source code, documentation and a variety of plugin examples
are available on the cTuning.org website at http://ctuning.org/ici and
http://ctuning.org/wiki/index.php/CTools:ICI:Documentation.

The ICI plugin infrastructure used in the ICI 2.0 release is
complementary to the developments on the GCC plugins branch, and provides
a high-level API aimed at supporting fast experimentation/prototyping of
new analyses and optimizations.  Specifically, the ICI API introduces the
following (a sketch of a minimal plugin follows the list):

* an abstraction of compiler state through the "feature" mechanism (with
the possibility of limited adjustments to, e.g., the values of internal
compiler parameters);

* a simple interface for pass management: substitution of the pass
manager, overriding of pass gates, tracking of the current/next pass;

* dynamic registration of new plugin event kinds.
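
For illustration only, here is a hedged sketch of what an ICI-style plugin
could look like in C.  All identifiers in it (ici_register_event,
ici_get_feature, "pass_execution") are illustrative assumptions, not the
released API; see the documentation linked above for the real names.

#include <stdio.h>

/* Hypothetical declarations standing in for the ICI headers.  */
extern const char *ici_get_feature (const char *name);
extern void ici_register_event (const char *event,
                                void (*callback) (void));

/* Print the pass about to run; a real plugin could instead
   override the pass gate to skip or reorder passes.  */
static void
on_pass (void)
{
  printf ("about to run pass: %s\n", ici_get_feature ("pass_name"));
}

/* Hypothetical plugin entry point called by an ICI-enabled GCC.  */
void
plugin_init (void)
{
  ici_register_event ("pass_execution", on_pass);
}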

The integration of the ICI API with the GCC plugin support developed on
the plugins-branch was already demonstrated last February (cf.
http://gcc.gnu.org/ml/gcc-patches/2009-02/msg01242.html).  Future
releases of ICI will be based on - and will extend - the mainstream
GCC plugin infrastructure.

We hope that using ICI will make GCC more attractive and more accessible
to researchers.  We currently use it to pursue long-term research
objectives such as automatic tuning of compiler optimization heuristics
and automatic program optimization for multiple architectures.

There are several ICI extension projects that we hope to finish this summer,
such as:

* automatic selection and tuning of fine-grain program transformations,
* generic function cloning for program run-time adaptation,
* program instrumentation,
* GCC4CLI split-compilation for VMs.

You are warmly encouraged to use ICI and other collaborative tools of
the cTuning project.  Join the effort and follow the developments on the
cTuning community mailing list:
http://groups.google.com/group/ctuning-discussions

Yours,

  Zbigniew Chamski and the cTuning.org community


Re: LLVM as a gcc plugin?

2009-06-05 Thread Grigori Fursin
Hi guys,

Just saw this discussion, so I wanted to mention that we at HiPEAC are now
interested in using both GCC as a static compiler and LLVM as a run-time
infrastructure for research, and several colleagues wanted to port the ICI
framework (the recent release is based on the "official" gcc plugin branch)
to LLVM.  We want to have both official gcc plugins and the ICI addition on
top of them, since we already have a relatively large community around
those tools and ICI plugins, and additional tools for automatic program
optimization.

I am unlikely to be involved in that now because I just don't have time, so
I CCed this email to Andy Nisbet, who has been interested in providing a
plugin system for LLVM; Zbigniew Chamski, who supports ICI for GCC; and
also Albert Cohen and Ayal Zaks, who are coordinating those activities
within HiPEAC.

The idea is to make GCC and LLVM more attractive to researchers (i.e. make
it easy to use the compilers without knowing much about their internals) so
that research ideas can go back into the compilers much faster, improving
GCC and LLVM ...

Cheers,
Grigori


> On Jun 3, 2009, at 11:30 PM, Uros Bizjak wrote:
>
>> Hello!
>>
>> Some time ago, there was a discussion about integrating LLVM and GCC
>> [1].  However, with plugin infrastructure in place, could LLVM be
>> plugged into GCC as an additional optimization plugin?
>>
>> [1] http://gcc.gnu.org/ml/gcc/2005-11/msg00888.html
>
> Hi Uros,
>
> I'd love to see this, but I can't contribute to it directly.  I think
> the plugin interfaces would need small extensions, but there are no
> specific technical issues preventing it from happening.  LLVM has
> certainly progressed a lot since that (really old) email went out :)
>
> -Chris



VTA merge?

2009-06-05 Thread Alexandre Oliva
It's been a very long time since I started working in the
var-tracking-assignments branch.  It is finally approaching a state in
which I'm comfortable enough to propose that it be integrated.

Alas, it's not quite finished yet and, unless it is merged, it might
very well never be.  New differences in final RTL between -g and -g0
keep popping up: I just found out one more that went in as recently as
last week (lots of -O3 -g torture testsuite failures in -fcompare-debug
test runs).  After every merge, I spent an inordinate amount of time
addressing such -fcompare-debug regressions.

I'm not complaining.  I actually enjoy doing that, it is fun.  But it
never ends, and it took time away from implementing the features that,
for a long time, were missing.  I think it's close enough to ready now
that I feel it is no longer unfair to request others to share the burden
of keeping GCC from emitting different code when given -g compared with
-g0, a property we should have always ensured.


== What is VTA?

This project aims at getting GCC to emit debug information for local
variables that is always correct, and as complete as possible.  By
correct, I mean, if GCC says a variable is at a certain location at a
certain point in the program, that location must hold the value of the
variable at that point.  By complete, I mean if the value of the
variable is available somewhere, or can be computed from values
available somewhere, then debug information for the variable should tell
the debug information consumer how to obtain or compute it.

The key to keeping the mapping between SL (source-level) variables and IR
objects from being corrupted or lost was to introduce explicit IR
mappings that, on the SL side, remained stable fixed points and, on the
IR side, were expressions that got naturally adjusted as part of the
optimization process, without any changes to the optimization passes.
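
For illustration (this example and the dump syntax in its comments are
approximations, not taken from the branch), the binding mechanism behaves
roughly like this:

/* A minimal sketch: "tmp" may be optimized away entirely, but the
   binding statement (shown as "# DEBUG tmp => ..." in the comments,
   GIMPLE dump syntax approximate) is carried along and updated by
   the passes, so debug info can still say how to compute tmp.  */
int
scale (int a)
{
  int tmp = a + 1;   /* # DEBUG tmp => a_1(D) + 1               */
  return tmp * 2;    /* forward propagation may fold tmp into   */
                     /* the multiply; the binding is adjusted,  */
                     /* not lost.                                */
}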

Alas, no changes to the passes would be too good to be true.  It was
indeed true for several of them, but many that dealt with special
boundary cases such as single or no occurrences of references to a
value, or that counted references to make decisions, had to be taught
how to disregard references that appeared in these new binding IR
elements.  Others had to be taught to disregard these elements when
checking for the absence of intervening code between a pair of
statements or instructions.

In nearly all cases, the changes were trivial, and the need for them was
shown in -fcompare-debug or bootstrap-debug testing.  In a few cases,
changes had to be more elaborate, for disregarding debug uses during
analysis ended up requiring them to be explicitly adjusted afterwards.
For example, substituting a set into its single non-debug use required
adding code to substitute into the debug uses as well.  In most of these
cases, adjusting them would merely avoid loss of debug information.  In
a few, failing to do so could actually cause incorrect debug information
to be output, but there are safety nets in place that avoid this at the
SSA level, causing debug information to be dropped instead.

Overall, the amount of changes to the average pass was ridiculously
small, compared both with the amount of code in the pass, and with the
amount of code that would have to be added for the pass to update debug
info mappings as it performs its pass-specific transformations.  It
might be possible to cover some of these updates by generic code, but
it's precisely in the non-standard transformations that they'd require
additional code.  Simply letting them apply their work to the debug
stuff proved to be quite a successful approach, as I hope anyone who
bothers to look at the patches will verify.


After the binding points are carried and updated throughout
optimizations and IR conversions, we arrive at the var-tracking pass,
where we used to turn register and memory attributes into var_location
annotations.  It is here that VTA does more of its magic.

Using something that vaguely resembles global value numbering, but
without the benefits of SSA, we propagate the bindings and analyze
loads, stores, copies and computations, determining where all copies
of the value of each variable are, so that, if one location is
modified, we can still use another to refer to it in the debug
information.

At control flow confluences, we merge the known locations, known values,
computing expressions, etc, as expected.  This is where some work is
still required: although we merge stuff in registers perfectly, we still
don't deal with stack slots properly.  Sometimes they work, but mostly
by chance.  It is the lack of this feature that makes VTA debug
information not uniformly superior to current debug information at this
point.

This feature is next on my to-do list, and it shouldn't take long, but I
wanted to post the bulk of the changes before the GCC Summit, so that
you get a chance to discuss it there.  Unfortunately, I won't be there;
by the time budget for my attendance became available ...

Re: VTA merge?

2009-06-05 Thread Richard Guenther
On Fri, Jun 5, 2009 at 12:05 PM, Alexandre Oliva  wrote:
> It's been a very long time since I started working in the
> var-tracking-assignments branch.  It is finally approaching a state in
> which I'm comfortable enough to propose that it be integrated.
>
> Alas, it's not quite finished yet and, unless it is merged, it might
> very well never be.  New differences in final RTL between -g and -g0
> keep popping up: I just found out one more that went in as recently as
> last week (lots of -O3 -g torture testsuite failures in -fcompare-debug
> test runs).  After every merge, I spent an inordinate amount of time
> addressing such -fcompare-debug regressions.
>
> I'm not complaining.  I actually enjoy doing that, it is fun.  But it
> never ends, and it took time away from implementing the features that,
> for a long time, were missing.  I think it's close enough to ready now
> that I feel it is no longer unfair to request others to share the burden
> of keeping GCC from emitting different code when given -g compared with
> -g0, a property we should have always ensured.
>
>
> == What is VTA?
>
> This project aims at getting GCC to emit debug information for local
> variables that is always correct, and as complete as possible.  By
> correct, I mean, if GCC says a variable is at a certain location at a
> certain point in the program, that location must hold the value of the
> variable at that point.  By complete, I mean if the value of the
> variable is available somewhere, or can be computed from values
> available somewhere, then debug information for the variable should tell
> the debug information consumer how to obtain or compute it.
>
> The key to keeping the mapping between SL (source-level) variables and IR
> objects from being corrupted or lost was to introduce explicit IR
> mappings that, on the SL side, remained stable fixed points and, on the
> IR side, were expressions that got naturally adjusted as part of the
> optimization process, without any changes to the optimization passes.
>
> Alas, no changes to the passes would be too good to be true.  It was
> indeed true for several of them, but many that dealt with special
> boundary cases such as single or no occurrences of references to a
> value, or that counted references to make decisions, had to be taught
> how to disregard references that appeared in these new binding IR
> elements.  Others had to be taught to disregard these elements when
> checking for the absence of intervening code between a pair of
> statements or instructions.
>
> In nearly all cases, the changes were trivial, and the need for them was
> shown in -fcompare-debug or bootstrap-debug testing.

So if I understand the above right then VTA is a new source of
code-generation differences with -g vs. -g0.  A possibly quite
bad one (compared to what we have now).

IMHO a much more convincing way to avoid code generation
differences with -g vs. -g0 and VTA would be to _always_ have
the debug statements/instructions around, regardless of -g/-g0
or -fvta or -fno-vta (that would merely switch var-tracking into
the new mode).  This would also ensure we keep a very good
eye on compile-time/memory-usage overhead of the debug
instructions.

As for the var-tracking changes - do they make sense even with
the current state of affairs?  I remember us enhancing var-tracking
for the var-mappings approach as well.

Richard.


Re: LLVM as a gcc plugin?

2009-06-05 Thread Andrew Nisbet
Hello,
I am interested in developing LLVM functionality to support the interfaces
in GCC ICI.  I plan to spend some time investigating feasibility in a
couple of weeks' time, once all exam boards are finished.  My initial goal
would be to enable LLVM to be used for iterative compilation using the
HiPEAC ICI framework, either as a drop-in replacement for GCC or as a
plugin.  I'd welcome focussed discussion and collaboration with this goal
in mind.

My previous work in LLVM was in trying to develop a backend for a soft-core
processor written in HandelC.  I am also interested in developing/extending
teaching resources to support compiler-based undergraduate/postgraduate
courses and projects.  One of my long-term research goals is to investigate
(and implement) iterative feedback-directed compilation and design space
exploration tools/techniques for hybrid multicore processor architectures
comprised of hard and reconfigurable logic.

Thanks,

   Andy

 Dr. Andy Nisbet: URL http://www.docm.mmu.ac.uk/STAFF/A.Nisbet
Department of Computing and Mathematics, John Dalton Building, Manchester
  Metropolitan University, Chester Street, Manchester M1 5GD, UK.
Email: a.nis...@mmu.ac.uk, Phone:(+44)-161-247-1556; Fax:(+44)-161-247-6840.


"Before acting on this email or opening any attachments you
should read the Manchester Metropolitan University's email
disclaimer available on its website
http://www.mmu.ac.uk/emaildisclaimer "






Re: VTA merge?

2009-06-05 Thread Joseph S. Myers
On Fri, 5 Jun 2009, Alexandre Oliva wrote:

> testsuite-guality (16K) - (still small) debug info quality testsuite

Has this been reworked as per my previous comments to use DejaGnu
interfaces to execute the debugger and test programs, so that any host and
target board files that work for the GDB testsuite will also work for
running this testsuite?  (Or so board files will at most need small
changes, since such changes are commonly needed in practice to support a
new testsuite - but in any case, using the DejaGnu interfaces to support
the wide range of supported hosts and targets with runtest --host_board
--target_board.)

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: LLVM as a gcc plugin?

2009-06-05 Thread Steven Bosscher
On Fri, Jun 5, 2009 at 12:40 PM, Andrew  Nisbet wrote:
> Hello,
> I am interested in developing LLVM functionality to support the interfaces in 
> GCC ICI.

*sigh*

GCC != LLVM.  And this is a GCC list. Can LLVM topics please be
discussed on an LLVM mailing list?

Ciao!
Steven


Re: VTA merge?

2009-06-05 Thread Alexandre Oliva
On Jun  5, 2009, Richard Guenther  wrote:

> So if I understand the above right then VTA is a new source of
> code-generation differences with -g vs. -g0.

It was, but that was before I spent several months stopping it from
being one ;-)

And once VTA is on and bootstrap-debug is the rule rather than the
exception (with RTH's suggestion, it will again be faster than normal
bootstrap, and catch even some regressions that current
BUILD_CONFIG=bootstrap-debug doesn't), it won't be just me catching and
fixing these ;-)

FTR, in the last two or three merges, I've had more -fcompare-debug
regressions with VTA disabled than with it enabled.  Perhaps we should
default to BUILD_CONFIG=bootstrap-debug?  It would be a start, but it
wouldn't have caught all of the recent regressions.  Some of them only
affected C++ and Ada testcases, and bootstrap-debug won't catch these.
It takes -fcompare-debug for the testsuite run or something equivalent
to do so.

Hopefully people who run automated testers can be talked into using the
-fcompare-debug option for the library builds and testsuite runs.

> IMHO a much more convincing way to avoid code generation
> differences with -g vs. -g0 and VTA would be to _always_ have
> the debug statements/instructions around, regardless of -g/-g0

That's an option I haven't discarded, but I wouldn't be able to claim
VTA had zero cost when disabled if that was so.

It might make sense to have an option that emitted all notes but just
discarded them at the end, rather than actually emitting location notes
out of them.  Although I'm not sure how useful it would be: as long as
you can still get debug info without VTA (and you can), you can get the
same effect of such an option:

-fno-var-tracking-assignments, with -g0 or -g, will get you the same
debug info we emit nowadays

-fvar-tracking-assignments followed by strip will get you the same
object code you'd have gotten with the approach you suggest

Since stripping is trivial, and probably the most common use, the most
interesting case is probably the one in which you start out from a
binary that fails and then find out the failure can't be duplicated once
you build with VTA.  Building with -fcompare-debug will let you know
you're running into one of these cases, and then you can resort to
disabling VTA and trying to make do with the sucky debug info we emit
today.

> This would also ensure we keep a very good eye on
> compile-time/memory-usage overhead of the debug instructions.

We can probably think of better ways to waste memory and compile time
;-)

Not that keeping them in check isn't something we should all strive to
do, mind you.

> As for the var-tracking changes - do they make sense even with
> the current state of affairs?

Most of it would just fit in, but it would obviously have to be
retargeted to take the input of known bindings from something else.

> I remember us enhancing var-tracking for the var-mappings approach as
> well.

Yeah, it should be pretty easy to retarget VTA to take, instead of debug
insns, any other source of information that correlates user variables
with locations at points in which they are known at first, and all the
machinery should propagate that information and figure out the rest:
equivalences, confluences, etc.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


[ANN] ForestGOMP 0.1, a libgomp-compatible OpenMP run-time

2009-06-05 Thread Ludovic Courtès
Hello,

We are pleased to announce the release of ForestGOMP 0.1, a
libgomp-compatible OpenMP 2.5 run-time support library targeting
high-performance computing.

  http://gforge.inria.fr/frs/download.php/22409/forestgomp-0.1.tar.gz
  SHA1: 18cb967cc21ee9effc3e4b3b2ee59ef838247a6a

More information is available at:

  http://runtime.bordeaux.inria.fr/forestgomp/

ForestGOMP builds on the Marcel user-level NxM threading library
(http://runtime.bordeaux.inria.fr/marcel/).  Marcel's salient points
include lightweight thread creation, and topology-aware scheduling over
hierarchical architectures ("bubble scheduling").  The run-forest(1)
tool's `--scheduler' option offers a simple way to experiment with this
feature on OpenMP applications.
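
For illustration (this example is ours, not part of the announcement):
since ForestGOMP is libgomp-compatible at the ABI level, an ordinary
GCC-built OpenMP program such as the following should run on top of it
unchanged:

#include <stdio.h>
#include <omp.h>

/* Build with "gcc -fopenmp"; when run against ForestGOMP, the OpenMP
   threads are backed by Marcel user-level threads (per the
   announcement above).  */
int
main (void)
{
#pragma omp parallel
  printf ("hello from thread %d of %d\n",
          omp_get_thread_num (), omp_get_num_threads ());
  return 0;
}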

Marcel 2.90, which is compatible with ForestGOMP 0.1, is available at:

  http://gforge.inria.fr/frs/download.php/22408/marcel-2.90.tar.gz
  SHA1: eace75589d26ff4018184da5afffcc2219665456

On behalf of the ForestGOMP team,
Ludovic.




Re: VTA merge?

2009-06-05 Thread Alexandre Oliva
On Jun  5, 2009, "Joseph S. Myers"  wrote:

> On Fri, 5 Jun 2009, Alexandre Oliva wrote:
>> testsuite-guality (16K) - (still small) debug info quality testsuite

> Has this been reworked as per my previous comments

Sorry, no, I didn't complete the rework, although I made some changes
towards that end.  But it's still stuck to native testing.

I've reworked the harness so that communication with the debugger is now
much simpler, which should make the next step easier.  But I have still
had my focus on adding missing features and fixing compare-debug
regressions, so I could never get 'round to teaching myself enough
dejagnu/expect to refit the harness as you suggested.

But don't think your suggestions were forgotten.  I even mentioned I
wasn't there yet last time I posted the patch, and it's high on my to-do
list.  Hopefully once we manage to avoid new -fcompare-debug regressions
I'll have more time to complete that task.  Of course I wouldn't mind if
someone beat me to it or taught me the basics on how to do it.  I'm not
even sure where to begin.  I'd start out by looking at the GDB testsuite
to try to figure out how they do it, but any more specific pointers
would be definitely welcome.

Thanks,

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: VTA merge?

2009-06-05 Thread Richard Guenther
On Fri, Jun 5, 2009 at 12:53 PM, Alexandre Oliva  wrote:
> On Jun  5, 2009, Richard Guenther  wrote:
>
>> So if I understand the above right then VTA is a new source of
>> code-generation differences with -g vs. -g0.
>
> It was, but that was before I spent several months stopping it from
> being one ;-)

Obviously ;)

> And once VTA is on and bootstrap-debug is the rule rather than the
> exception (with RTH's suggestion, it will again be faster than normal
> bootstrap, and catch even some regressions that current
> BUILD_CONFIG=bootstrap-debug doesn't), it won't be just me catching and
> fixing these ;-)

IMHO we should make bootstrap-debug (that's the one building
stage2 w/o debug info and stage3 with debug info, correct?) the
default regardless of VTA going in or not.  If it works on the
primary and secondary targets of course ;)

Can you submit a separate patch to do so? (maybe you did already)

> FTR, in the last two or three merges, I've had more -fcompare-debug
> regressions with VTA disabled than with it enabled.  Perhaps we should
> default to BUILD_CONFIG=bootstrap-debug?  It would be a start, but it
> wouldn't have caught all of the recent regressions.  Some of them only
> affected C++ and Ada testcases, and bootstrap-debug won't catch these.
> It takes -fcompare-debug for the testsuite run or something equivalent
> to do so.

bootstrap-debug by default would be a start.

Honestly I don't care too much about -g vs. -g0 differences as we
build everything with -g and strip debug info later.  But passing
bootstrap-debug is a release goal that I will support.

> Hopefully people who run automated testers can be talked into using the
> -fcompare-debug option for the library builds and testsuite runs.
>
>> IMHO a much more convincing way to avoid code generation
>> differences with -g vs. -g0 and VTA would be to _always_ have
>> the debug statements/instructions around, regardless of -g/-g0
>
> That's an option I haven't discarded, but I wouldn't be able to claim
> VTA had zero cost when disabled if that was so.

So what is the overhead of having the debug stmts/insns if you
throw them away before var-tracking and do debug info the old way?

Thanks,
Richard.


Re: VTA merge?

2009-06-05 Thread David Edelsohn
I thought a number of people had concerns that VTA was too expensive
and disruptive for the perceived benefit.

David


i370 port

2009-06-05 Thread Paul Edwards

Hello GCC maintainers.

There used to be an i370 target for GCC.  It was written in 1989,
and put into GCC 2.5 in 1993.

It has always been semi-working, but never good enough to
actually use.

It was dropped from GCC 4 when there was supposedly no
maintainer available.  Actually, Dave Pitts and myself were
both maintaining it at that time, but we were both still working
on an old version of it (3.2). So gcc 3.4.6, circa 2004, was the
last time it was included in the normal GCC distribution.

We were both maintaining it, and continue to maintain it,
because MVS doesn't have any alternate free C compiler
available.

As of yesterday, after years of work, an i370 version was 
released that is now fully working.  The code generator has 
no known bugs that would stop it from being C90-compliant,
and GCC can fully regenerate itself, with full optimization.  


You can see it here:

http://gccmvs.sourceforge.net

It's based on GCC 3.2.3.

There is also a free i370 emulator (Hercules) and a free i370-based
operating system (MVS/380) that enable the compiler to be fully tested
and to fully regenerate itself in its native environment.  Not only
that, but there is an effort underway to make this free environment
available on the internet so that Unix users can do an MVS build
(takes around 4 hours if done properly - ie 3-stage, full optimization,
on an emulated machine), from the safety of their Unix box.


Both of those products are also under active development by a
community of mainframe enthusiasts.

In addition, that code has been ported to GCC 3.4.6, which is now
working as a cross-compiler at least.  It's still some months away
from working natively though.  It takes a lot of effort to convert
the Posix-expecting GCC compiler into C90 compliance.  This has
been done though, in a way that has minimal code changes to the
GCC mainline.

There is a lot of other activity (e.g. availability of REXX, PL/1, Cobol)
that rests on the C compiler being available.

So, my question is - what is required to get the i370 port reinstated
into the GCC mainline?

Yes, I'm aware that there is an S/390 port, but it isn't EBCDIC, isn't
HLASM, isn't 370, isn't C90, isn't MVS.  It may well be possible to
change all those things, and I suspect that in a few years from now
I may be sending another message asking what I need to do to get
all my changes to the s390 target into the s390 target.  At that time,
I suspect there will be a lot of objection to "polluting" the s390 target
with all those "unnecessary" things.

So, here's hoping that the i370 port can end up where it was originally
intended to end up, and that all the effort that was spent in the GCC
mainline to get rid of the ASCII assumptions can now be put to good
use.

BFN.  Paul.



Re: i370 port

2009-06-05 Thread Joseph S. Myers
On Fri, 5 Jun 2009, Paul Edwards wrote:

> It was dropped from GCC 4 when there was supposedly no
> maintainer available.  Actually, Dave Pitts and myself were
> both maintaining it at that time, but we were both still working
> on an old version of it (3.2). So gcc 3.4.6, circa 2004, was the
> last time it was included in the normal GCC distribution.

(For reference, the port was removed in SVN revision 77216; before then it 
had had various largely mechanical changes as part of changes to multiple 
back ends or target-independent code, with r69086 as the last vaguely 
i370-only change but no changes appearing to come from someone 
specifically working and testing on i370 for some years before then.  "svn 
log svn://gcc.gnu.org/svn/gcc/trunk/gcc/config/i370@77215" shows the 
history.)

> We were both maintaining it, and continue to maintain it,
> because MVS doesn't have any alternate free C compiler
> available.

To merge back into FSF GCC, the people who have made changes that would be 
merged back will need to have copyright assignments on file at the FSF 
(and disclaimers from any relevant employers).  I don't have a current 
copy of the assignments list (my very old copy does show assignments from 
David G. Pitts with an employer disclaimer dating from 1993).

> So, my question is - what is required to get the i370 port reinstated
> into the GCC mainline?

The basic requirements for a resurrected port are the same as for a new 
port; it needs to be assigned to the FSF, to pass the normal technical 
review, and the SC needs to approve someone as a maintainer of the port 
(there may be a bottleneck with the last stage, since there are currently 
at least three new ports pending approval).  It is a very good idea if you 
can run the testsuite for the port and will be posting results to 
gcc-testresults regularly.

I would encourage going through all the changes made to the i370 port on 
GCC mainline, after 3.1/3.2 branched and before the port was removed, to 
see what should be merged into your version for mainline; ultimately it 
would be up to you how you get it updated for all the mechanical changes 
on mainline since 3.2, but those changes (see command above to get logs) 
may be a useful guide to how to do some of the updates.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: [RFC] enabling -fshow-column by default

2009-06-05 Thread Aldy Hernandez
On Fri, Jun 05, 2009 at 12:09:57AM +0100, Jonathan Wakely wrote:
> 2009/5/20 Aldy Hernandez:
> >>
> >> My only worry is that the testsuite may confuse column and line
> >> numbers and pass/fail tests because of it.
> >
> > Janis has a patch for the testsuite to handle all this.
> 
> I'm seeing exactly this in the libstdc++ testsuite with some new tests
> I've written - is a fix on the way soon?

The fix is already in the tree.

Which test is this?  Can you send it to me?

Aldy


Re: i370 port

2009-06-05 Thread Paul Edwards

>> We were both maintaining it, and continue to maintain it,
>> because MVS doesn't have any alternate free C compiler
>> available.


> To merge back into FSF GCC, the people who have made changes that would be
> merged back will need to have copyright assignments on file at the FSF
> (and disclaimers from any relevant employers).  I don't have a current
> copy of the assignments list (my very old copy does show assignments from
> David G. Pitts with an employer disclaimer dating from 1993).


There are only three people who have made changes: Dave Pitts, Linas
Vepstas and myself.  Dave's assignment you already have, apparently.
Linas's code is largely already merged - just his last set of changes
didn't get put in.  That leaves me.  I work as a contractor and I'd
probably be sacked if I tried to either bring in or attempt to maintain
gcc.  All my work was done at home.


>> So, my question is - what is required to get the i370 port reinstated
>> into the GCC mainline?


> The basic requirements for a resurrected port are the same as for a new
> port; it needs to be assigned to the FSF, to pass the normal technical
> review, and the SC needs to approve someone as a maintainer of the port
> (there may be a bottleneck with the last stage, since there are currently
> at least three new ports pending approval).  It is a very good idea if you
> can run the testsuite for the port and will be posting results to
> gcc-testresults regularly.


The port is to a pure C90 environment (ie not posix, not unix).  It was a
major effort to achieve that, and it has only just been completed to the
point where the compiler recompiles itself with full optimization.  The
environment where it runs is not set up to run shell scripts or makes
or test suites.  It's set up to run JCL, and there's a stack of JCL card
decks to allow GCC to compile, which would be good to have included
in the i370 directory.

It's difficult enough just to get GCC to know to open dd:include(xxx)
for an include of "xxx.h" and dd:sysincl(xxx) for an include of <xxx.h>.
That logic was revamped in gcc 3.4.6 so I haven't put it in yet.  It'll
probably be months before I do that, because I can't test it until it
gets up onto the mainframe.  And once again, in preparation for that,
I need to make it a pure C90 application.  So that is where I spend
my effort.
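
To illustrate the mapping being described (this sketch is ours, not
GCCMVS code, and the MVS naming rules are simplified):

#include <stdio.h>
#include <string.h>

/* Map an include file name to a DD-based dataset member reference:
   "xxx.h" included with quotes => dd:include(xxx), and with angle
   brackets => dd:sysincl(xxx).  "out" must hold at least 21 bytes.  */
static void
map_include (const char *name, int is_system, char *out)
{
  char member[9];               /* PDS member names: max 8 chars */
  size_t len = strlen (name);

  if (len > 2 && strcmp (name + len - 2, ".h") == 0)
    len -= 2;                   /* drop the ".h" suffix */
  if (len > 8)
    len = 8;
  memcpy (member, name, len);
  member[len] = '\0';
  sprintf (out, "dd:%s(%s)",
           is_system ? "sysincl" : "include", member);
}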

Note that the i370 port used to be in GCC even though it was riddled
with bugs.  It took literally years to get rid of them.  I would have
thought that GCC recompiling itself was a damn good start for
inclusion, irrespective of any test suites (assuming those test
suites are even C90 code - as they would need to be to work).


> I would encourage going through all the changes made to the i370 port on
> GCC mainline, after 3.1/3.2 branched and before the port was removed, to
> see what should be merged into your version for mainline; ultimately it
> would be up to you how you get it updated for all the mechanical changes
> on mainline since 3.2, but those changes (see command above to get logs)
> may be a useful guide to how to do some of the updates.


I have already merged the changes made from 3.2.3 to 3.4.6 into my
code, and have a diff against 3.4.6 available already.  ie, that covers
all known code changes.  But it only works as a cross-compiler at the
moment.  It's probably a few months away from being native.

BFN.  Paul.



Re: i370 port

2009-06-05 Thread Joseph S. Myers
On Sat, 6 Jun 2009, Paul Edwards wrote:

> The port is to a pure C90 environment (ie not posix, not unix).  It was a
> major effort to achieve that, and it has only just been completed to the
> point where the compiler recompiles itself with full optimization.  The
> environment where it runs is not set up to run shell scripts or makes
> or test suites.  It's set up to run JCL, and there's a stack of JCL card
> decks to allow GCC to compile, which would be good to have included
> in the i370 directory.

You can test a cross compiler if you have some way of copying a test 
executable to the i370 system, running it and getting its output and exit 
status back (actually you don't need to be able to get the exit status 
since DejaGnu has wrappers to include it in the output if needed).  There 
is no need for the target to be able to run shell scripts or makes.  You 
would need to write your own DejaGnu board file that deals with copying 
to/from the i370 system and running programs there.  The testsuite 
routinely runs for much more limited embedded systems (using appropriate 
board files).

-- 
Joseph S. Myers
jos...@codesourcery.com


New Toshiba Media Processor (mep-elf) port and maintainer

2009-06-05 Thread Gerald Pfeifer
As everyone here surely has seen, DJ Delorie has been submitting a
new port for the Toshiba Media Processor (mep-elf) and done a number
of adjustments based on feedback already.

Pending initial (technical) approval the steering committee welcomes
this new port and is happy to appoint DJ as maintainer.

Congratulations, DJ!  Please adjust the MAINTAINERS file accordingly
when the port goes in, and Happy Hacking,

Gerald


Re: i370 port

2009-06-05 Thread Ulrich Weigand
Paul Edwards wrote:

> In addition, that code has been ported to GCC 3.4.6, which is now
> working as a cross-compiler at least.  It's still some months away
> from working natively though.  It takes a lot of effort to convert
> the Posix-expecting GCC compiler into C90 compliance.  This has
> been done though, in a way that has minimal code changes to the
> GCC mainline.

You're referring to building GCC for a non-Posix *host*, right?
I assume those changes are not (primarily) in the back-end, but
throughout GCC common code?

> Yes, I'm aware that there is an S/390 port, but it isn't EBCDIC, isn't
> HLASM, isn't 370, isn't C90, isn't MVS.  It may well be possible to
> change all those things, and I suspect that in a few years from now
> I may be sending another message asking what I need to do to get 
> all my changes to the s390 target into the s390 target.  At that time,
> I suspect there will be a lot of objection to "polluting" the s390 target
> with all those "unnecessary" things.

Actually, I would really like to see the s390 target optionally support
the MVS ABI and HLASM assembler format, so I wouldn't have any objection
to patches that add these features ...

I understand current GCC supports various source and target character
sets a lot better out of the box, so it may be EBCDIC isn't even an
issue any more.   If there are other problems related to MVS host
support, I suppose those would need to be fixed in common code anyway,
no matter whether the s390 or i370 back-ends are used.

The only point in your list I'm sceptical about is 370 architecture
support -- I don't quite see why this is still useful today (the s390
port does require at a minimum a S/390 G2 with the branch relative
instructions ... but those have been around for nearly 15 years).

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  ulrich.weig...@de.ibm.com


Re: i370 port

2009-06-05 Thread Paul Edwards

>> The port is to a pure C90 environment (ie not posix, not unix).  It was a
>> major effort to achieve that, and it has only just been completed to the
>> point where the compiler recompiles itself with full optimization.  The
>> environment where it runs is not set up to run shell scripts or makes
>> or test suites.  It's set up to run JCL, and there's a stack of JCL card
>> decks to allow GCC to compile, which would be good to have included
>> in the i370 directory.


> You can test a cross compiler if you have some way of copying a test
> executable to the i370 system


It doesn't build executables either.

Only the "-S" option is used.

With that restriction, GCC merely reads a bunch of text files and
writes a text file, and thus is amenable to being a pure C90
application.  That's how it manages to work at all.

> running it and getting its output and exit
> status back (actually you don't need to be able to get the exit status
> since DejaGnu has wrappers to include it in the output if needed).


It so happens that MVS/380 has the ability to be run in batch, and
extracting the exit code won't be a problem either.

Note however that I normally do all my GCC work in Windows,
and the batch running etc is done with batch files.

> There
> is no need for the target to be able to run shell scripts or makes.  You
> would need to write your own DejaGnu board file that deals with copying
> to/from the i370 system and running programs there.  The testsuite
> routinely runs for much more limited embedded systems (using appropriate
> board files).


I have a large backlog of work to do with the i370 port already, starting
with getting gcc 3.4.6 running natively.  Isn't that a more productive
thing to do?  Even after 3.4.6 is done, so that every scrap of code is
available, then there's version 4 to do!

BFN.  Paul.



Expanding a load instruction

2009-06-05 Thread fearyourself
Dear all,

In the instruction set of my architecture, the offsets of a half-load
(HImode) have to be multiples of 2. However, if I set up a structure
in a certain way, the compiler will generate:

(mem/s/j:HI (plus:DI (reg:DI 134 [ ivtmp.23 ])
(const_int 1 [0x1])) [0 .geno+0 S2 A16])

as the memory operand for the load.
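
For illustration, here is a reconstruction (ours; the field names are
guesses based on the RTL above) of the kind of layout that can produce
such an odd offset:

/* The packed struct puts the 16-bit field at byte offset 1, so a
   load of p->h can become a mem:HI at (reg + const_int 1).  */
struct geno { char tag; short h; } __attribute__ ((packed));

short
load_h (struct geno *p)
{
  return p->h;
}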

Now, one solution I am going to try to fix this is to use
define_expand and add a move into another register before this load
and then load from that register (thus removing the offset of 1).

My question is: Is that how it should be done or is there another solution?

Thanks again for your help,
Jc


Re: Expanding a load instruction

2009-06-05 Thread Dave Korn
fearyourself wrote:

> In the instruction set of my architecture, the offsets of a half-load
> (HImode) have to be multiples of 2. However, if I set up a structure
> in a certain way, the compiler will generate:
> 
> (mem/s/j:HI (plus:DI (reg:DI 134 [ ivtmp.23 ])
> (const_int 1 [0x1])) [0 .geno+0 S2 A16])
> 
> As the memory operand for the load.
> 
> Now, one solution I am going to try to fix this is to use
> define_expand and add a move into another register before this load
> and then load from that register (thus removing the offset of 1).
> 
> My question is: Is that how it should be done or is there another solution?

  This looks like what you need:

 -- Macro: STRICT_ALIGNMENT
 Define this macro to be the value 1 if instructions will fail to
 work if given data not on the nominal alignment.  If instructions
 will merely go slower in that case, define this macro as 0.
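
For illustration (file placement hypothetical), a port would set this in
its target header:

/* In the port's target header, e.g. config/<port>/<port>.h: declare
   that misaligned accesses fail outright, so the middle end extracts
   unaligned fields piecewise instead of emitting a misaligned
   mem:HI.  Whether this alone cures the RTL quoted above depends on
   the rest of the port.  */
#define STRICT_ALIGNMENT 1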

cheers,
  DaveK



Re: i370 port

2009-06-05 Thread Paul Edwards

Wow, what a lot of responses.  Last time I tried making
contact, I didn't get a single response!


>> In addition, that code has been ported to GCC 3.4.6, which is now
>> working as a cross-compiler at least.  It's still some months away
>> from working natively though.  It takes a lot of effort to convert
>> the Posix-expecting GCC compiler into C90 compliance.  This has
>> been done though, in a way that has minimal code changes to the
>> GCC mainline.


> You're referring to building GCC for a non-Posix *host*, right?


Yep.


> I assume those changes are not (primarily) in the back-end, but
> throughout GCC common code?


Yes.  Or rather, they would be, if it weren't for sleight-of-hand to
minimize that.  I dummied up all the Posix calls to point back to
C90 functions.

Please take a look at the actual changes to GCC.  There's not
a lot:

Here's the exact file:

https://sourceforge.net/project/downloading.php?group_id=195127&filename=gccmvs-3_2_3-7_0.zip&a=50206324

Most of the size is generated code from the md, or other new files,
and not changes to GCC proper.

However, in fact, GCC is turned on its head.  It's a single executable.
C90 doesn't guarantee, and the host doesn't support, the ability to do
a fork() and exec().
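
As a sketch of that restructuring (ours, with hypothetical names; the
actual GCCMVS layout may differ):

#include <string.h>

/* Where a Posix driver would fork () and exec () the subprograms, a
   pure C90 build links the phases into one image and dispatches by
   function call.  cpp_main/cc1_main are illustrative names only.  */
extern int cpp_main (int argc, char **argv);
extern int cc1_main (int argc, char **argv);

static int
run_phase (const char *phase, int argc, char **argv)
{
  if (strcmp (phase, "cpp") == 0)
    return cpp_main (argc, argv);
  return cc1_main (argc, argv);
}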


>> Yes, I'm aware that there is an S/390 port, but it isn't EBCDIC, isn't
>> HLASM, isn't 370, isn't C90, isn't MVS.  It may well be possible to
>> change all those things, and I suspect that in a few years from now
>> I may be sending another message asking what I need to do to get
>> all my changes to the s390 target into the s390 target.  At that time,
>> I suspect there will be a lot of objection to "polluting" the s390 target
>> with all those "unnecessary" things.


> Actually, I would really like to see the s390 target optionally support
> the MVS ABI and HLASM assembler format, so I wouldn't have any objection
> to patches that add these features ...


Great.


> I understand current GCC supports various source and target character
> sets a lot better out of the box, so it may be EBCDIC isn't even an
> issue any more.


It looks that way from what I've seen of 3.4.6 so far.  However, I
won't know for sure until it's on the host and self-generating.


> If there are other problems related to MVS host
> support, I suppose those would need to be fixed in common code anyway,
> no matter whether the s390 or i370 back-ends are used.


Well, I would like to see that - I don't know why there are all
those calls to open() instead of fopen() etc in the first place,
but I don't see that happening.


> The only point in your list I'm sceptical about is 370 architecture
> support -- I don't quite see why this is still useful today (the s390
> port does require at a minimum a S/390 G2 with the branch relative
> instructions ... but those have been around for nearly 15 years).


The last free MVS was MVS 3.8j which runs on the S/370 architecture.
If you want to write MVS software, for free, your only option is to
pick that up.  It doesn't run on S/390 hardware.

However.  :-)

That's where MVS/380 comes in.

http://mvs380.sourceforge.net

So yes, we can sort of cope with S/390 instructions.  It's just not as
widely supported as the proper S/370 (emulated) machine.  Or rather,
I think we can.  What restrictions S/380 actually has in its current
form is uncharted territory.  It's known that it's enough to run
31-bit GCC using 20 MB of memory though.  Which again, is a damn
good start.

BFN.  Paul.



Re: i370 port

2009-06-05 Thread Joseph S. Myers
On Fri, 5 Jun 2009, Ulrich Weigand wrote:

> I understand current GCC supports various source and target character
> sets a lot better out of the box, so it may be EBCDIC isn't even an
> issue any more.   If there are other problems related to MVS host

I think the EBCDIC support is largely theoretical and not tested on any 
actual EBCDIC host (or target).  cpplib knows the character set name 
UTF-EBCDIC, but whenever it does anything internally that involves the 
encoding of its internal character set it uses UTF-8 rules (which is not 
something valid to do with UTF-EBCDIC).

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: i370 port

2009-06-05 Thread Joseph S. Myers
On Sat, 6 Jun 2009, Paul Edwards wrote:

> > There is no need for the target to be able to run shell scripts or makes.
> > You would need to write your own DejaGnu board file that deals with copying
> > to/from the i370 system and running programs there.  The testsuite routinely
> > runs for much more limited embedded systems (using appropriate board files).
> 
> I have a large backlog of work to do with the i370 port already, starting
> with getting gcc 3.4.6 running natively.  Isn't that a more productive
> thing to do?  Even after 3.4.6 is done, so that every scrap of code is
> available, then there's version 4 to do!

It's up to you what priorities you assign to different things involved in 
getting this on mainline, but we use test results postings as evidence of 
what sort of state each port is in, where a particular test failure is 
appearing and whether each port is being used, so there may be reluctance 
to accept a port that will not have test results posted regularly for 
mainline.  (This is much less of a problem for OS ports than for CPU 
ports; if one OS for a given CPU has results routinely posted, it doesn't 
matter so much if other OSes don't, though having results for different 
OSes is still useful.)

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: i370 port

2009-06-05 Thread Daniel Jacobowitz
On Sat, Jun 06, 2009 at 01:39:07AM +1000, Paul Edwards wrote:
>> I understand current GCC supports various source and target character
>> sets a lot better out of the box, so it may be EBCDIC isn't even an
>> issue any more.
>
> It looks that way from what I've seen of 3.4.6 so far.  However, I
> won't know for sure until it's on the host and self-generating.

Why are you migrating to 3.4.6 now, instead of to a current version?
If you want to include this in mainline some day, then eventually it
has to be caught up - and 3.4.6 is older than it may appear from the
release date, since it branched off of mainline five years ago.  A lot
has changed since then.

-- 
Daniel Jacobowitz
CodeSourcery


Re: New Toshiba Media Processor (mep-elf) port and maintainer

2009-06-05 Thread DJ Delorie

Thanks!


Re: i370 port

2009-06-05 Thread Paul Edwards

>> I understand current GCC supports various source and target character
>> sets a lot better out of the box, so it may be EBCDIC isn't even an
>> issue any more.   If there are other problems related to MVS host


> I think the EBCDIC support is largely theoretical and not tested on any
> actual EBCDIC host (or target).  cpplib knows the character set name
> UTF-EBCDIC, but whenever it does anything internally that involves the
> encoding of its internal character set it uses UTF-8 rules (which is not
> something valid to do with UTF-EBCDIC).



From the hercules-os380 files section, here's the relevant change to 3.4.6
to stop it being theoretical:

Index: gccnew/gcc/cppcharset.c
diff -c gccnew/gcc/cppcharset.c:1.1.1.1 gccnew/gcc/cppcharset.c:1.6
*** gccnew/gcc/cppcharset.c:1.1.1.1 Wed Apr 15 16:26:16 2009
--- gccnew/gcc/cppcharset.c Wed May 13 11:07:08 2009
***
*** 23,28 
--- 23,30 
 #include "cpplib.h"
 #include "cpphash.h"
 #include "cppucnid.h"
+ #include "coretypes.h"
+ #include "tm.h"

 /* Character set handling for C-family languages.

***
*** 529,534 
--- 531,561 
   return conversion_loop (one_utf32_to_utf8, cd, from, flen, to);
 }

+ #ifdef MAP_OUTCHAR
+ /* convert ASCII to EBCDIC */
+ static bool
+ convert_asc_ebc (iconv_t cd ATTRIBUTE_UNUSED,
+  const uchar *from, size_t flen, struct _cpp_strbuf *to)
+ {
+   size_t x;
+   int c;
+
+   if (to->len + flen > to->asize)
+ {
+   to->asize = to->len + flen;
+   to->text = xrealloc (to->text, to->asize);
+ }
+   for (x = 0; x < flen; x++)
+ {
+   c = from[x];
+   c = MAP_OUTCHAR(c);
+   to->text[to->len + x] = c;
+ }
+   to->len += flen;
+   return true;
+ }
+ #endif
+
 /* Identity conversion, used when we have no alternative.  */
 static bool
 convert_no_conversion (iconv_t cd ATTRIBUTE_UNUSED,
***
*** 606,611 
--- 633,641 
   { "UTF-32BE/UTF-8", convert_utf32_utf8, (iconv_t)1 },
   { "UTF-16LE/UTF-8", convert_utf16_utf8, (iconv_t)0 },
   { "UTF-16BE/UTF-8", convert_utf16_utf8, (iconv_t)1 },
+ #if defined(TARGET_EBCDIC)
+   { "UTF-8/UTF-EBCDIC", convert_asc_ebc, (iconv_t)0 },
+ #endif
 };

 /* Subroutine of cpp_init_iconv: initialize and return a
***
*** 683,688 
--- 713,722 

   bool be = CPP_OPTION (pfile, bytes_big_endian);

+ #if defined(TARGET_EBCDIC)
+   ncset = "UTF-EBCDIC";
+   wcset = "UTF-EBCDIC";
+ #else
   if (CPP_OPTION (pfile, wchar_precision) >= 32)
 default_wcset = be ? "UTF-32BE" : "UTF-32LE";
   else if (CPP_OPTION (pfile, wchar_precision) >= 16)
***
*** 696,701 
--- 730,736 
 ncset = SOURCE_CHARSET;
   if (!wcset)
 wcset = default_wcset;
+ #endif

   pfile->narrow_cset_desc = init_iconv_desc (pfile, ncset, 
SOURCE_CHARSET);

   pfile->wide_cset_desc = init_iconv_desc (pfile, wcset, SOURCE_CHARSET);
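
For context (this is our sketch, not part of the posted patch):
MAP_OUTCHAR would be supplied by the target headers, along the lines of:

/* Hypothetical target-header definition activating the
   convert_asc_ebc path above; the 256-entry ASCII-to-EBCDIC table
   contents are assumed and elided here.  */
extern const unsigned char ascii_to_ebcdic[256];
#define MAP_OUTCHAR(C) (ascii_to_ebcdic[(unsigned char) (C)])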


The generated code appears to be fine from visual inspection.

BFN.  Paul.



Re: i370 port

2009-06-05 Thread Paul Edwards

>>> I understand current GCC supports various source and target character
>>> sets a lot better out of the box, so it may be EBCDIC isn't even an
>>> issue any more.


>> It looks that way from what I've seen of 3.4.6 so far.  However, I
>> won't know for sure until it's on the host and self-generating.


> Why are you migrating to 3.4.6 now, instead of to a current version?
> If you want to include this in mainline some day, then eventually it
> has to be caught up - and 3.4.6 is older than it may appear from the
> release date, since it branched off of mainline five years ago.  A lot
> has changed since then.


3.4.6 made some revamps to the i370 port (compared to 3.2.3), and I
need to make sure those changes have been digested, and no code
has been lost, so that I can pick up the final i370 port and move it.

It's less daunting to get 3.4.6 working first.  At least there the target
actually exists and does ostensibly appear to work!

Migrating from 3.4.6 to 4.x is not going to become any bigger a deal
because of that.  It's not like someone else is making incompatible
changes to the i370 port at the same time as me!

BFN.  Paul.



Re: "plugin"-ifying the MELT branch.

2009-06-05 Thread Tom Tromey
> "Basile" == Basile STARYNKEVITCH  writes:

Basile> Can a branch be simply a plugin, or should I close (soon) the
Basile> melt-branch and start a melt-plugin-branch on the SVN. If I do that,
Basile> do I need some authorization? from whom?

I think what you do on your branch is up to you.  If you want to
repurpose it to be a plugin branch, I don't think there's any problem
with that.

You can also start a new branch.  Any maintainer can start a new
branch for any reason.

I don't know the answer to your other questions.  I assume there will
be at least one plugin on gcc trunk, for testing purposes if nothing
else.

Tom


Re: LLVM as a gcc plugin?

2009-06-05 Thread Chris Lattner


On Jun 5, 2009, at 3:43 AM, Steven Bosscher wrote:

> On Fri, Jun 5, 2009 at 12:40 PM, Andrew Nisbet wrote:
>> Hello,
>> I am interested in developing LLVM functionality to support the
>> interfaces in GCC ICI.
>
> *sigh*
>
> GCC != LLVM.  And this is a GCC list. Can LLVM topics please be
> discussed on an LLVM mailing list?

How is LLVM any different than another external imported library (like
GMP or MPFR) in this context?

-Chris


Re: [RFC] enabling -fshow-column by default

2009-06-05 Thread Janis Johnson
On Fri, 2009-06-05 at 10:37 -0400, Aldy Hernandez wrote:
> On Fri, Jun 05, 2009 at 12:09:57AM +0100, Jonathan Wakely wrote:
> > 2009/5/20 Aldy Hernandez:
> > >>
> > >> My only worry is that the testsuite may confuse column and line
> > >> numbers and pass/fail tests because of it.
> > >
> > > Janis has a patch for the testsuite to handle all this.
> > 
> > I'm seeing exactly this in the libstdc++ testsuite with some new tests
> > I've written - is a fix on the way soon?
> 
> The fix is already in the tree.
> 
> Which test is this?  Can you send it to me?

I think the libstdc++ testsuite doesn't use the overrides for dg-error
and friends, and so isn't handling the column numbers in the new way.
I'll take a look, but it might be a while before I get to it.
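
For illustration (a made-up reduced case): with -fshow-column a
diagnostic reads "file:line:column: error: ...", so a harness that only
expects "file:line:" can mishandle the extra field unless the dg-error
overrides are in use:

int f (void) { return y; }  /* { dg-error "undeclared" } */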

Janis



Re: VTA merge?

2009-06-05 Thread Alexandre Oliva
On Jun  5, 2009, David Edelsohn  wrote:

> I thought a number of people had concerns that VTA was too expensive
> and disruptive for the perceived benefit.

There were such concerns, indeed.

All we knew back then was that there was room for a lot of improvement
in the quality of debug information, and that debug info quality was a
priority for some and a non-concern for others.

Time went by, code was written, adjustments were made, initial steps
towards measuring debug info quality in our testsuite were taken.  I
guess it is now time to assess whether the concerns voiced before the
implementation started, that I shared myself and took into account in
its design, were sufficiently addressed in the design and in the
implementation.

We can measure some of these things now.  Some can even be measured
objectively ;-)

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: i370 port

2009-06-05 Thread Joseph S. Myers
On Sat, 6 Jun 2009, Paul Edwards wrote:

> 3.4.6 made some revamps to the i370 port (compared to 3.2.3), and I
> need to make sure those changes have been digested, and no code
> has been lost, so that I can pick up the final i370 port and move it.

But there were probably also (mechanical) changes to the port between when 
3.4 branched (long before 3.4.6 was released) and when the port was 
removed from trunk - that's why the last revision before it was removed 
from trunk may be a better starting point.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: i370 port

2009-06-05 Thread Paul Edwards

>> 3.4.6 made some revamps to the i370 port (compared to 3.2.3), and I
>> need to make sure those changes have been digested, and no code
>> has been lost, so that I can pick up the final i370 port and move it.


> But there were probably also (mechanical) changes to the port between when
> 3.4 branched (long before 3.4.6 was released) and when the port was
> removed from trunk - that's why the last revision before it was removed
> from trunk may be a better starting point.


Ok, I understand now.  So there were some changes that would
nominally have made it into GCC 4, but were in fact never officially
released, because the port was dropped before release.

So, prior to starting a GCC 4 port (ie the changes may not be
desirable for the GCC 3.4.6 port I am currently working on), I
need to get GCC 3.4 as a base, GCC 3.4.6 as one derivative,
and SVN 77215, then do a 3-way diff/merge to obtain the
"nominal GCC 4" changes.

Or perhaps not.

I don't want the 3.4.6 changes at that point, since anything of
value will be covered by SVN 77215.  So I need to use GCC 3.4.6
as the base, my personal version as one derivative, SVN 77215
as the second derivative, and feed that into the 3-way diff.

Ok, I'll do that when I'm in a position to do the GCC 4 port
attempt.  I'm still months away from completing the GCC 3.4.6
port, and there are other MVS-related projects that are more
important than what is basically a transparent 3.2.3 to 4
upgrade that will only start being useful when it enables something
else to happen (such as the PL/1 front-end).

So I'll be working on that stable 3.4.6 before taking my chances
with what is basically an axed beta (SVN 77215).  There is still no
indication of whether even a perfectly working, self-compiling
i370 target will be accepted unless the testsuite is working first
(and even if it were working, that still may not be enough - the
next hoop may be an s390 merge, and a requirement to
switch from 370 to 390).

BFN.  Paul.



Re: [RFC] enabling -fshow-column by default

2009-06-05 Thread Jonathan Wakely
2009/6/5 Janis Johnson:
>
> I think the libstdc++ testsuite doesn't use the overrides for dg-error
> and friends and so isn't handling the column numbers in the new way.
> I'll take a look, but it might be a while before I get to it.

Thanks, Janis. I have added -fno-show-column to
libstdc++-v3/scripts/testsuite_flags for now.

Jonathan


Re: [RFC] enabling -fshow-column by default

2009-06-05 Thread Jonathan Wakely
2009/6/5 Aldy Hernandez:
>
> Which test is this?  Can you send it to me?

It tests a header that isn't checked in yet, so sending the test alone
wouldn't help much :)

I'll try to come up with a self-contained example tomorrow.

Jonathan


Re: VTA merge?

2009-06-05 Thread Daniel Berlin
>
> We can measure some of these things now.  Some can even be measured
> objectively ;-)

Do you have any of them handy (memory use, compile time with release
checking only, etc.) so that we can start the public
argument^H^H^H^H^H^Hdiscussion?

;)


Re: LLVM as a gcc plugin?

2009-06-05 Thread Joe Buck


On Fri, Jun 5, 2009 at 12:40 PM, Andrew Nisbet wrote:
> >> Hello,
> >> I am interested in developing LLVM functionality to support the
> >> interfaces in GCC ICI.

On Jun 5, 2009, at 3:43 AM, Steven Bosscher wrote:
> > GCC != LLVM.  And this is a GCC list. Can LLVM topics please be
> > discussed on an LLVM mailing list?

On Fri, Jun 05, 2009 at 09:48:52AM -0700, Chris Lattner wrote:
> How is LLVM any different than another external imported library (like
> GMP or MPFR) in this context?

GMP and MPFR are required components of GCC, and every developer has to
deal with them.  For interfacing between GCC and LLVM, the experts who'll
be able to answer the questions are generally going to be found on the
LLVM lists, not the gcc list, and those (like you) who participate on
both lists, well, you're on both lists.

So as a practical matter, it seems that LLVM lists are more suitable.
If it's ever decided that LLVM becomes a required piece of GCC, like
GMP and MPFR, that would change.


Machine Description Template?

2009-06-05 Thread Graham Reitz


Is there a machine description template in the gcc source tree?

If there is also a template for the 'C header file of macro definitions',
that would be good to know too.


I did a file search for '.md' and there are tons of examples,
though I was curious if there was a generic template.


graham 


Re: Machine Description Template?

2009-06-05 Thread Ramana Radhakrishnan
On Fri, Jun 5, 2009 at 11:11 PM, Graham Reitz wrote:
>
> Is there a machine description template in the gcc source tree?

There is no template as such, but you could look at existing ports for
the basic templates.  Google should give you results for previous
questions on this list regarding new ports.  There are some links to
other documents about starting new ports in the gcc wiki under the
tutorials and documentation section.
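
As a rough sketch of the kind of construct every port's .md file is
built from (this is generic, not taken from any real port, and the
"add %0,%1,%2" assembler string is made up):

;; A minimal define_insn sketch: it names the standard "addsi3"
;; pattern, matches a three-register SImode add, and emits a
;; hypothetical "add dest,src1,src2" instruction.
(define_insn "addsi3"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (plus:SI (match_operand:SI 1 "register_operand" "r")
                 (match_operand:SI 2 "register_operand" "r")))]
  ""
  "add %0,%1,%2")

An .md file is essentially a long list of such patterns, plus
define_expand templates and attribute definitions.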


>
> If there is also a template for the 'C header file of macro definitions',
> that would be good to know too.

Most of the header files in config/<target>/*.h have a
description of the target macros and some values in them.  You should
be able to find something there, though the best description should be
read from the internals documentation.
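
For a flavour of what those headers contain, a hand-written sketch
(the macro names below are real target macros, but the values are
made-up placeholders for an imaginary 32-bit target, not from any
real port):

/* Hypothetical excerpt from a port's config/<target>/<target>.h.  */
#define BITS_PER_UNIT 8            /* 8-bit addressable units */
#define UNITS_PER_WORD 4           /* 32-bit general registers */
#define POINTER_SIZE 32            /* pointers fit in one word */
#define FIRST_PSEUDO_REGISTER 17   /* 16 GPRs plus a condition-code reg */

Each of these macros is documented in the internals manual alongside
its default and the constraints on legal values.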

>
> I did a file search for '.md' and there are tons of examples,
> though I was curious if there was a generic template.

Sadly, you'd have to keep such a template in sync with every version
of gcc, and no one has taken up maintaining something like that.

Best of luck - HTH

Ramana
>
> graham
>


Re: Machine Description Template?

2009-06-05 Thread Michael Hope
I've found the MMIX port to be a good place to start.  It's a bit old,
but the architecture is nice and simple and the implementation nice and
brief.  Watch out, though, as it is a pure 64-bit machine - you'll need
to think SI every time you see DI.

The trick from there is to compare the significant features of your
machine with those of existing machines.  For example, GCC prefers a
68000-style machine with a set of condition codes; many machines,
however, only have one condition flag that changes meaning based on
what you are doing.

-- Michael

2009/6/6 Graham Reitz:
>
> Is there a machine description template in the gcc source tree?
>
> If there is also a template for the 'C header file of macro definitions',
> that would be good to know too.
>
> I did a file search for '.md' and there are tons of examples,
> though I was curious if there was a generic template.
>
> graham
>


Re: Machine Description Template?

2009-06-05 Thread Graham Reitz

Excellent!  Thanks Ramana and Michael.

I have been working through sections 16 & 17 of the gccint.info
document and have also read through Hans' 'Porting GCC for Dunces'.

He sure wasn't kidding when he said you would need to read them
several times.


graham


On Jun 5, 2009, at 5:46 PM, Michael Hope wrote:


> I've found the MMIX port to be a good place to start.  It's a bit old,
> but the architecture is nice and simple and the implementation nice and
> brief.  Watch out, though, as it is a pure 64-bit machine - you'll need
> to think SI every time you see DI.
>
> The trick from there is to compare the significant features of your
> machine with those of existing machines.  For example, GCC prefers a
> 68000-style machine with a set of condition codes; many machines,
> however, only have one condition flag that changes meaning based on
> what you are doing.
>
> -- Michael
>
> 2009/6/6 Graham Reitz:
> >
> > Is there a machine description template in the gcc source tree?
> >
> > If there is also a template for the 'C header file of macro definitions',
> > that would be good to know too.
> >
> > I did a file search for '.md' and there are tons of examples,
> > though I was curious if there was a generic template.
> >
> > graham





Re: Machine Description Template?

2009-06-05 Thread Jeff Law

Graham Reitz wrote:


> Is there a machine description template in the gcc source tree?
>
> If there is also a template for the 'C header file of macro definitions',
> that would be good to know too.
>
> I did a file search for '.md' and there are tons of examples,
> though I was curious if there was a generic template.



Cygnus/Red Hat once had a generic template for ports; however, I 
seriously doubt it has been kept up-to-date.


The best suggestion I could give would be to identify supported chips
with characteristics similar to your chip's, then review how those
ports handle each common characteristic.


Jeff



sched2, ret, use, and VLIW bundling

2009-06-05 Thread DJ Delorie

I'm working on a VLIW coprocessor for MeP.  One thing I noticed is
that sched2 won't bundle the function's RET with the insn that sets
the return value register, apparently because there's an intervening
USE of that register (insn 30 in the example below).

Is there any way around this?  The return value obviously isn't
actually used there, nor does the return insn need it - that USE is
just to keep the return value live until the function exits.
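
To make the shape concrete, here is a generic sketch (plain C, not the
actual dj.c, which uses coprocessor intrinsics): any function whose
return value is produced by its last real insn hits this, because
epilogue expansion emits a USE of the return-value hard register to
keep it live until the return.

/* Generic illustration: the add sets the return-value register
   (cf. insn 27 below), the epilogue's USE keeps it live (insn 30),
   and only then comes the RET (insn 36).  */
long long
f (long long a, long long b)
{
  return a + b;
}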


sched_reorder: clock 3 nready 1
insn  671   27  cgen_intrinsic_cpadd3_h_P0S_P1  p0s,p1

;;  Ready list (t =   3):27
;;3-->27 $c0=unspec[$c16,$c0] 1774 
:(ivc2_p0+ivc2_slot_p0s)|(ivc2_p1+ivc2_slot_p1)
;;  dependencies resolved: insn 30
;;  tick updated: insn 30 into ready
;;  Ready list (t =   3):30
;;  Ready list after queue_to_ready:30
;;  Ready list after ready_sort:30

sched_reorder: clock 4 nready 1
insn   -1   30  {unknown}  none

;;  Ready list (t =   4):30
;;4-->30 use $c0   :nothing
;;  dependencies resolved: insn 36
;;  tick updated: insn 36 into ready
;;  Ready list (t =   4):36
;;4-->36 {return;use $lp;} 
:(ivc2_core+ivc2_slot_c16)
;;  Ready list (t =   4):  
;;  Ready list (final):  
;;   total time = 4
;;   new head = 9
;;   new tail = 36


(insn:TI 27 19 30 2 dj.c:9 (set (reg/i:DI 48 $c0)
(unspec:DI [
(reg:DI 64 $c16 [140])
(reg:DI 48 $c0 [143])
] 1774)) 671 {cgen_intrinsic_cpadd3_h_P0S_P1} (expr_list:REG_DEAD 
(reg:DI 64 $c16 [140])
(nil)))

(insn 30 27 35 2 dj.c:9 (use (reg/i:DI 48 $c0)) -1 (nil))

(note 35 30 36 2 NOTE_INSN_EPILOGUE_BEG)

(jump_insn:TI 36 35 37 2 dj.c:9 (parallel [
(return)
(use (reg:SI 17 $lp))
]) 979 {return_internal} (expr_list:REG_DEAD (reg:SI 17 $lp)
(nil)))