Re: CRX and CR16 port maintainer

2008-03-11 Thread Pompapathi V Gadad
I am extremely thankful to the GCC Steering Committee for appointing me as 
maintainer for the CR16 and CRX ports.


I will submit the patch adding myself to the MAINTAINERS file for trunk. 
Can I also submit the patch for the 4.3 branch?


Please suggest.
Thanks,
Pompa

David Edelsohn wrote:

I am pleased to announce that the GCC Steering Committee has
accepted the CR16 port for inclusion in GCC and appointed
Pompapathi Gadad as maintainer for the CRX and CR16 ports.  The initial CR16
patch needs approval from a GCC GWP maintainer before it may be committed.

Please join me in congratulating Pompa on his new role.
Please update your listing in the MAINTAINERS file.

Happy hacking!
David







GCC 4.3 - Error: Link tests are not allowed after GCC_NO_EXECUTABLES

2008-03-11 Thread Hans Kester
Ping!


When building GCC 4.3.0 for any newlib target I get:

...
supports shared libraries... yes
checking dynamic linker characteristics... no
checking how to hardcode library paths into programs... immediate
checking for shl_load... configure: error: Link tests are not allowed
after GCC_NO_EXECUTABLES.
make[1]: *** [configure-target-libstdc++-v3] Error 1
make[1]: Leaving directory `/gcc/build-gcc'
make: *** [all] Error 2

I am using binutils 2.18, newlib 1.16.0 and Cygwin 1.5.24.
The configure line is:
../gcc-4.3.0/configure --prefix=/usr/local/gcc-4.3.0-i686-elf
--target=i686-elf --with-gnu-as --with-gnu-ld --with-newlib
--with-headers=/gcc/newlib-1.16.0/newlib/libc/include --disable-shared
--disable-libssp --enable-languages=c,c++

I searched for this error and found:
http://gcc.gnu.org/ml/gcc/2007-09/msg00421.html
Wasn't this patched? How do I fix this?

Regards,

Hans Kester


Re: RTL definition

2008-03-11 Thread Fran Baena
Hi,

>  By the way, RTL is not really machine-independent.  The data
>  structures are machine independent.  But the contents are not.  You
>  can not, even in principle, take the RTL generated for one processor
>  and compile it on another processor.

I thought that RTL represented something close to the target machine,
but was not machine-dependent. At first I thought that the output of the
middle end was a machine-independent RTL representation, to which a few
machine-independent low-level optimization passes were applied, and that
it was then translated into machine-dependent RTL on which the other
optimization passes run.

I read the rtl.def and rtl.h files, which are very interesting, and I now
better understand the whole process. But reading the dump files produced
by the debugging option -fdump-rtl-all, I have seen instructions like this:

(insn 8 6 10 1 (set (mem/c/i:SI (plus:DI (reg/f:DI 54 virtual-stack-vars)
(const_int -8 [0xfff8])) [0 a+0 S4 A64])
(const_int 1 [0x1])) -1 (nil)
(nil))

Among the many questions that arise, I have a particular one:
what does "8 6 10 1" represent? Is it the "print format" defined in
rtl.def?

Thanks,

Fran


howto run cross testings with help of translators

2008-03-11 Thread Lijuan Hai
Hi all,
  I am developing a cross compiler, sparc-sun-solaris2.10-gcc, on x86.
There is a binary translator available that can execute SPARC ELF
binaries on x86 machines, so I want to run the test suite with runtest.
It would definitely help a lot if anyone could give clues on how to set
this up.  Thanks in advance.

-- 
Best wishes!
Yours,
Lijuan Hai


The effects of closed loop SSA and Scalar Evolution Const Prop.

2008-03-11 Thread Pranav Bhandarkar
Hi,

I am writing about a problem I noticed with the code generated for
memcpy by arm-none-eabi-gcc.

Now, memcpy has three distinct loops - one that copies (4 *sizeof
(long) ) bytes per iteration, one that copies sizeof (long) bytes per
iteration and the last one that copies one byte per iteration. The
registers used for the src and destination pointers should, IMHO, be
the same across all three loops. However, what I noticed is that
after the first loop the src and dest registers aren't reused; instead, the
address of the next byte to be copied is recalculated as (original src
address + (number of iterations of 1st loop * 16)). Similarly for the
destination address. See the assembly snippet below.

.L3:
   
bhi .L3 @,
sub r2, r4, #16 @ D.1312, len0,
mov r3, r2, lsr #4  @ D.1314, D.1312,
sub r1, r2, r3, asl #4  @ len, D.1312, D.1314,
add r3, r3, #1  @ tmp225, D.1314,
mov r3, r3, asl #4  @ D.1320, tmp225,
cmp r1, #3  @ len,
add r4, r5, r3  @ aligned_src.60, src0, D.1320
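
For reference, here is a minimal C sketch of the three-loop shape being
described (this is only an illustration, not newlib's actual memcpy source;
alignment handling is omitted).  Written this way, the src/dst pointers
simply carry over from one loop to the next, which is exactly what the
generated code above fails to do:

#include <stddef.h>

void *
sketch_memcpy (void *dst0, const void *src0, size_t len)
{
  char *dst = dst0;
  const char *src = src0;

  /* Loop 1: copy 4 * sizeof (long) bytes per iteration.  */
  while (len >= 4 * sizeof (long))
    {
      long *d = (long *) dst;
      const long *s = (const long *) src;
      d[0] = s[0]; d[1] = s[1]; d[2] = s[2]; d[3] = s[3];
      dst += 4 * sizeof (long);
      src += 4 * sizeof (long);
      len -= 4 * sizeof (long);
    }

  /* Loop 2: copy sizeof (long) bytes per iteration.  */
  while (len >= sizeof (long))
    {
      *(long *) dst = *(const long *) src;
      dst += sizeof (long);
      src += sizeof (long);
      len -= sizeof (long);
    }

  /* Loop 3: copy the remaining bytes one at a time.  */
  while (len--)
    *dst++ = *src++;

  return dst0;
}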


Adding new section attribute

2008-03-11 Thread Mohamed Shafi
Hello all,

For a GCC port I need to add a new section. This new section has both
variations, i.e. initialized and uninitialized, say .sdata and .sbss.
After going through the internals documentation, this is what I understood:

1. Only one section attribute is needed. Depending on whether the
variable is initialized or not, it can be placed in .sdata or
.sbss.

2. For .sdata we need to define EXTRA_SECTIONS and
EXTRA_SECTION_FUNCTIONS. The variable should be handled in the target
hook TARGET_ASM_SELECT_SECTION by looking for the attribute value.

3. For .sbss we need to handle the variable in the macro
ASM_OUTPUT_BSS, again by looking for the attribute value. For the .sbss
section there is no need to define EXTRA_SECTIONS and
EXTRA_SECTION_FUNCTIONS.
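
For illustration, a rough sketch of what step 2 might look like in the
port's target file (the "small" attribute name and the function name are
invented for this example; it assumes an ELF target so that
default_elf_select_section is available as the fallback):

/* Hypothetical TARGET_ASM_SELECT_SECTION hook.  Initialized variables
   carrying the (made-up) "small" attribute go to .sdata; everything
   else falls back to the default selection.  Uninitialized ones would
   be caught in ASM_OUTPUT_BSS as in step 3.  */
static section *
myport_select_section (tree decl, int reloc, unsigned HOST_WIDE_INT align)
{
  if (TREE_CODE (decl) == VAR_DECL
      && lookup_attribute ("small", DECL_ATTRIBUTES (decl)))
    return get_named_section (decl, ".sdata", reloc);

  return default_elf_select_section (decl, reloc, align);
}

#undef TARGET_ASM_SELECT_SECTION
#define TARGET_ASM_SELECT_SECTION myport_select_section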

Can anyone please confirm this?

Regards,
Shafi


Re: GCC 4.3 - Error: Link tests are not allowed after GCC_NO_EXECUTABLES

2008-03-11 Thread Brian Dessent
Hans Kester wrote:

> I searched for this error and found:
> http://gcc.gnu.org/ml/gcc/2007-09/msg00421.html
> Wasn't this patched? How do I fix this?

It wasn't actually fixed.  That was a very long thread and it's easy to
get confused, particularly since the mailing list archives don't
crossthread across month boundaries very gracefully.  Let me see if I
can summarize the history of the issue for posterity.

Target libraries like libstdc++ need to know a lot of details about the
system they are being configured for -- what C library functions and
capabilities are present, and so on.  The library can get this
information either by performing configure tests that link test
programs, or by having the needed answers somehow supplied in another
form.

The problem is that with embedded bare-metal newlib targets (i.e.
*-elf), the amount of functionality that the target has depends
significantly on the board support package (BSP) which typically
consists of things like startup code (crt*.o), linker scripts, and low
level syscall implementations (or stubs thereof).  Since these are
things outside the scope of gcc and newlib, it means that without some
external source (be it from the hardware vendor or generic stubs
from libgloss) a bare metal gcc target cannot link.  And that is
what the GCC_NO_EXECUTABLES error means: some configure script is trying
to do a link test, but linking was earlier determined to be impossible
because no linker script (or crt*.o or whatever) is present.

This presents a dilemma if you want to build a bare metal gcc with
anything but plain C support.  The libstdc++ configury in particular
dealt with this by special-casing newlib bare metal cross targets: a
fixed list of capabilities known to be present in newlib was coded
into crossconfig.m4 so that no link tests were necessary.

At some point last year, this stopped working: bare metal newlib targets
were again failing with GCC_NO_EXECUTABLES when trying to configure
libstdc++.  The reason was related to an upgraded libtool and
AC_LIBTOOL_DLOPEN, which wants to check for dlopen() support in the
target; that is why you see the error about searching for shl_load().
Ideally, since it's common for bare metal targets not to have shared
libraries at all, it should be possible to avoid the question of dlopen()
support by just specifying --disable-shared, but apparently this doesn't
work because AC_LIBTOOL_DLOPEN has to be called before libtool has been
initialized with AM_PROG_LIBTOOL.

The controversy, and the bulk of the long thread, was about how to rectify
this.  The first workaround, which started the whole thread, was a
Blackfin-specific patch that papered over the problem by making -msim
implicit when -mfdpic or -mid-shared-library was given, which artificially
allowed link tests to succeed.  Obviously that doesn't help with other
targets, but I think it was enough to get the discussion rolling as to
what the real nature of the problem was.

One proposed remedy was apparently to have the user add libgloss to
their tree, and add code to detect this and pass the appropriate flags
down when configuring target libraries, allowing link tests to work. 
But there was objection to this, because people were uncomfortable with
letting link tests succeed without the user having specified a BSP,
since that increases the potential for a user to build a toolchain whose
configuration does not match what the actual hardware supports.

Another idea (the one in the link in your message) was to disable that
particular libtool test when cross compiling.  That was not acceptable
because it violates the philosophy that cross compilers should produce
identical code to native compilers for a given target.  If simply the
fact that you built gcc as a cross caused a change in behavior, then the
utility of cross compilation would be substantially decreased.

Yet another idea was presented to provide a framework wherein the user
could provide a config.cache-like file that would simulate the results
of all of the configure tests having been performed.  It seems that
people generally thought that was a good idea in the abstract but there
was some worry that since it required supplying answers to all configure
tests that it would be hard to come up with a suitable "generic" one to
ship with gcc.  There seemed to be agreement that it should remain an
optional feature that the user could use if he had a suitable
config.cache, but not something to make the default building of a
toolchain work again.

There was one additional solution discussed: disable libtool's checking
of dlopen() for newlib targets.  I think
everyone agreed that this was a suitable compromise, and that the way
forward was to revert the libgloss hacks, implement the config.cache
idea as an optional alternative, and commit the patch to disable
dlopen() checking on newlib.

However, nothing seemed to ever actually get committed

Re: RTL definition

2008-03-11 Thread Ian Lance Taylor
"Fran Baena" <[EMAIL PROTECTED]> writes:

>>  By the way, RTL is not really machine-independent.  The data
>>  structures are machine independent.  But the contents are not.  You
>>  can not, even in principle, take the RTL generated for one processor
>>  and compile it on another processor.
>
> I thought that RTL represented something close to the target machine,
> but was not machine-dependent. At first I thought that the output of the
> middle end was a machine-independent RTL representation, to which a few
> machine-independent low-level optimization passes were applied, and that
> it was then translated into machine-dependent RTL on which the other
> optimization passes run.

RTL is created using named patterns in the MD file, so even the
creation process is machine-dependent.  This is most obvious in the
use of unspec, but it is true in general.


> I read the rtl.def and rtl.h files, which are very interesting, and I now
> better understand the whole process. But reading the dump files produced
> by the debugging option -fdump-rtl-all, I have seen instructions like this:
>
> (insn 8 6 10 1 (set (mem/c/i:SI (plus:DI (reg/f:DI 54 virtual-stack-vars)
> (const_int -8 [0xfff8])) [0 a+0 S4 A64])
> (const_int 1 [0x1])) -1 (nil)
> (nil))
>
> Among the many questions that arise, I have a particular one:
> what does "8 6 10 1" represent? Is it the "print format" defined in
> rtl.def?

8 is the insn uid.  6 is the previous insn uid.  10 is the next
insn uid.  1 is the number of the basic block holding the insn.  In
general RTL is printed according to the format in rtl.def.  There are
a couple of exceptions; one of those exceptions is that field 4 of an
insn, INSN_LOCATOR, is only printed if it is present.  See line 391 of
print-rtl.c.

Ian


Re: The effects of closed loop SSA and Scalar Evolution Const Prop.

2008-03-11 Thread Zdenek Dvorak
Hi,

> Now tree scalar evolution goes over PHI nodes and realises that
> aligned_src_35 has a scalar evolution {aligned_src_22 + 16, +, 16}_1)
> where aligned_src_22 is
> (const long int *) src0_12(D), i.e. the original src pointer.  Therefore,
> to calculate aligned_src_62 before the second loop, computations are
> introduced based on aligned_src_22.
> 
> My question is, shouldn't scalar evolution ignore PHI nodes with one
> argument (implying a copy)

no, it should not (scev_cprop only deals with phi nodes with one
argument).

> or, if not, at least pay heed to the cost of
> additional computations introduced?

Yes, it should, in some form; however, it would probably not help this
testcase anyway, as computing x + 16 * y is too cheap.  Final value
replacement is often profitable even if it introduces some additional
computation, as performing it may make other loop optimizations
(vectorization, loop nest optimizations) possible.
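
As a hypothetical illustration of the trade-off (not taken from the
testcase above): final value replacement rewrites a loop-carried value
into its closed form, which is cheap here but is exactly the kind of
recomputation the original report complains about:

/* Before final value replacement: the pointer is advanced in the loop
   and only its final value is used afterwards.  */
const char *
before (const char *p, unsigned n)
{
  unsigned i;
  for (i = 0; i < n; i++)
    p += 16;
  return p;
}

/* After: the final value is computed directly from the trip count, so
   the (otherwise empty) loop can be removed entirely.  */
const char *
after (const char *p, unsigned n)
{
  return p + 16 * (unsigned long) n;
}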

One solution would be to add a pass that would replace the computations
with final values in a loop, undoing this transformation, after the
mentioned optimizations are performed (FRE could do this if value
numbering were strong enough, but that might not be feasible).

Zdenek


Re: [tuples] gimple_assign_subcode for GIMPLE_SINGLE_RHS

2008-03-11 Thread Diego Novillo

On 03/10/08 08:24, Richard Guenther wrote:


> You could either do
>
> GIMPLE_ASSIGN 

But 'cond' would be an unflattened tree expression.  I'm trying to avoid 
that.

> or invent COND_GT_EXPR, COND_GE_EXPR, etc. (at least in GIMPLE
> we always have a comparison in COND_EXPR_COND, never a plain
> boolean variable).


Yeah, that would mean adding 5 more tree codes, though.  Unless we gave 
up and invented a new set of subcodes exclusively for gimple.  That 
seems like a waste, since tree.def neatly defines all the subcodes we 
want already.



Thanks.  Diego.


Re: [tuples] gimple_assign_subcode for GIMPLE_SINGLE_RHS

2008-03-11 Thread Zdenek Dvorak
Hi,

> On 03/10/08 08:24, Richard Guenther wrote:
> 
> >You could either do
> >
> >GIMPLE_ASSIGN 
> 
> But 'cond' would be an unflattened tree expression.  I'm trying to avoid 
> that.
> 
> >or invent COND_GT_EXPR, COND_GE_EXPR, etc. (at least in GIMPLE
> >we always have a comparison in COND_EXPR_COND, never a plain
> >boolean variable).
> 
> Yeah, that would mean adding 5 more tree codes, though.  Unless we gave 
> up and invented a new set of subcodes exclusively for gimple.  That 
> seems like a waste, since tree.def neatly defines all the subcodes we 
> want already.

another possibility would be to represent a = b < c ? d : e as

GIMPLE_ASSIGN (LT_EXPR, a, b, c, d, e)

and a = (b < c) as

GIMPLE_ASSIGN (LT_EXPR, a, b, c, true, false)

Zdenek


Re: [tuples] gimple_assign_subcode for GIMPLE_SINGLE_RHS

2008-03-11 Thread Diego Novillo
On Tue, Mar 11, 2008 at 08:02, Zdenek Dvorak <[EMAIL PROTECTED]> wrote:

>  another possibility would be to represent a = b < c ? d : e as
>
>  GIMPLE_ASSIGN (LT_EXPR, a, b, c, d, e)
>
>  and a = (b < c) as
>
>  GIMPLE_ASSIGN (LT_EXPR, a, b, c, true, false)

Yeah, I think I like this one.  We don't have enough assignments of
the form a = (b < c) for this to be a problem (it adds two extra operand
slots to the tuple).


Diego.


Re: The effects of closed loop SSA and Scalar Evolution Const Prop.

2008-03-11 Thread Daniel Berlin
On Tue, Mar 11, 2008 at 9:41 AM, Zdenek Dvorak <[EMAIL PROTECTED]> wrote:
> Hi,
>
>
>  > Now tree scalar evolution goes over PHI nodes and realises that
>  > aligned_src_35 has a scalar evolution {aligned_src_22 + 16, +, 16}_1)
>  > where aligned_src_22 is
>  > (const long int *) src0_12(D), i.e. the original src pointer.  Therefore,
>  > to calculate aligned_src_62 before the second loop, computations are
>  > introduced based on aligned_src_22.
>  >
>  > My question is, shouldn't scalar evolution ignore PHI nodes with one
>  > argument (implying a copy)
>
>  no, it should not (scev_cprop only deals with phi nodes with one
>  argument).
>
>
>  > or, if not, at least pay heed to the cost of
>  > additional computations introduced?
>
>  Yes, it should, in some form; however, it would probably not help this
>  testcase anyway, as computing x + 16 * y is too cheap.  Final value
>  replacement is often profitable even if it introduces some additional
>  computation, as performing it may make other loop optimizations
>  (vectorization, loop nest optimizations) possible.
>
>  One solution would be to add a pass that would replace the computations
>  with final values in a loop, undoing this transformation, after the
>  mentioned optimizations are performed (FRE could do this if value
>  numbering were strong enough, but that might not be feasible).

SCCVN could certainly be taught to look at the SCEV values and
incorporate them into value numbers (i.e. we fall back to saying a name
is equivalent to its scalar evolution rather than to itself, or
something).  I looked into this once, and it got quite expensive
because SCEV is not particularly fast.


Re: howto run cross testings with help of translators

2008-03-11 Thread Ben Elliston
> I am developing a cross compiler, sparc-sun-solaris2.10-gcc, on x86.
> There is a binary translator available that can execute SPARC ELF
> binaries on x86 machines, so I want to run the test suite with runtest.
> It would definitely help a lot if anyone could give clues on how to set
> this up.

You want a SPARC simulator that can run under DejaGnu.  If you can find
a setup that would allow you to run "sparcsim ", that would be
best, as it would make the DejaGnu setup quite simple -- see the other
DejaGnu baseboard files for examples.

Another way is to find a full system simulator, boot an operating system
with a telnet/ftp server running, and have DejaGnu copy the test cases
into the simulator and then log in to execute them.  In that case, the
setup is more like a conventional cross-testing arrangement where the
machine doing the testing logs into the target.

Anyway, this is probably off-topic for the GCC list.  Please direct any
follow-ups to the DejaGnu mailing list.  Thanks.

Ben



libtool for shared objects?

2008-03-11 Thread Basile STARYNKEVITCH

Hello All,

in my MELT branch http://gcc.gnu.org/wiki/MiddleEndLispTranslator I need 
to compile a C file into a shared object library which can be loaded by 
lt_dlopenext


First, I have the impression that the libtool in e.g. 
libjava/Makefile.in or libgomp/Makefile.in or libmudflap/Makefile.in is 
not the usual one (I mean the Debian/Sid libtool package version 
1.5.26-1, for example). I would like to use this tool to compile some 
(generated) warm-basilys.c file into a warm-basilys.la in the most 
portable way (on Linux/ELF systems I would just use gcc -fPIC -shared 
warm-basilys.c -o warm-basilys.so and use the warm-basilys.so shared 
library to dlopen).
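
For the loading side, a minimal libltdl sketch of what I have in mind
(the module base name and the "module_init" entry point are only
placeholders for this example; it assumes nothing beyond the standard
lt_dlinit/lt_dlopenext/lt_dlsym API):

#include <ltdl.h>
#include <stdio.h>

/* Load a libtool module given its name without extension, e.g.
   "warm-basilys"; lt_dlopenext tries the platform-specific extensions
   (.la, .so, ...) by itself.  */
int
load_generated_module (const char *modbase)
{
  lt_dlhandle handle;
  void (*entry) (void);

  if (lt_dlinit () != 0)
    return -1;

  handle = lt_dlopenext (modbase);
  if (!handle)
    {
      fprintf (stderr, "lt_dlopenext failed: %s\n", lt_dlerror ());
      return -1;
    }

  /* "module_init" is a made-up entry point name for this sketch.  */
  entry = (void (*) (void)) lt_dlsym (handle, "module_init");
  if (entry)
    entry ();
  return 0;
}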


What exactly is the $(LIBTOOL) in the Makefile.in-s (i.e. the @LIBTOOL@ from 
some autoconf stuff)?


What is the right way to produce a dynamically loadable "library" which 
would be the most portable?


Regards
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***


Re: libtool for shared objects?

2008-03-11 Thread Basile STARYNKEVITCH

Hello All

and a big thanks to David Fang

(I, Basile, asked)
What is the right way to produce a dynamically loadable "library" 
which would be the most portable?


and David Fang kindly replied to me:

Hi, for starters:

in Makefile.am:

lib_LTLIBRARIES = mymodule.la
mymodule_la_SOURCES = foo.c
mymodule_la_LDFLAGS = -module

This creates a wrapper mymodule.la, whose real shared object will reside 
in .libs/mymodule.{so,dylib} until it is installed (in this example, 
$(prefix)/lib).



Thanks, but the gcc/ subdirectory of GCC trunk (or my MELT branch) has 
only a Makefile.in, no Makefile.am


And the MELT branch is supposed to generate some *.so (or *.la) (from a 
*.c file which itself can be generated) in cc1 during its execution, and 
then dlopen it in the same process.


My impression is that I should execute a libtool subcommand.

Currently, in my MELT branch (rev 133113, file gcc/basilys.c, function 
compile_to_dyl near line 3629), I have a function which executes (using 
pex_run) a script basilys-gcc, which I would like to avoid, since I 
believe libtool --mode=compile might be enough.


But I'm not sure I understand the relation between libtool and $(LIBTOOL) 
(i.e. @LIBTOOL@).


Any clues?

Thanks
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***


Re: libtool for shared objects?

2008-03-11 Thread Ralf Wildenhues
Hello Basile,

* Basile STARYNKEVITCH wrote on Tue, Mar 11, 2008 at 09:18:54PM CET:
> First, I have the impression that the libtool in e.g.  
> libjava/Makefile.in or libgomp/Makefile.in or libmudflap/Makefile.in is  
> not the usual one (I mean the Debian/Sid libtool package version  
> 1.5.26-1 for example).

GCC uses a slightly older snapshot of Libtool 2.2.

> I would like to use this tool to compile some  
> (generated) warm-basilys.c file into a warm-basilys.la in the most
> portable way (on Linux/ELF systems I would just use gcc -fPIC -shared  
> warm-basilys.c -o warm-basilys.so and use the warm-basilys.so shared  
> library to dlopen).

In which directory?  In those using automake you should be able to just
use the normal automake machinery for this.  AFAIK, in the gcc/ subdir
neither libtool nor automake is used.

Cheers,
Ralf


Re: libtool for shared objects?

2008-03-11 Thread Basile STARYNKEVITCH

Hello All,

Ralf Wildenhues wrote:

Hello Basile,

* Basile STARYNKEVITCH wrote on Tue, Mar 11, 2008 at 09:18:54PM CET:
First, I have the impression that the libtool in e.g.  
libjava/Makefile.in or libgomp/Makefile.in or libmudflap/Makefile.in is  
not the usual one (I mean the Debian/Sid libtool package version  
1.5.26-1 for example).


GCC uses a slightly older snapshot of Libtool 2.2.

I would like to use this tool to compile some  
(generated) warm-basilys.c file into a warm-basilys.la in the most  
portable way (on Linux/ELF systems I would just use gcc -fPIC -shared  
warm-basilys.c -o warm-basilys.so and use the warm-basilys.so shared  
library to dlopen).


In which directory?  In those using automake you should be able to just
use the normal automake machinery for this.  AFAIK, in the gcc/ subdir
neither libtool nor automake is used.


I need all this in the gcc/ subdir, since I want cc1 (the same process) to
  1. generate a C file
  2. compile it to some dynamically loadable stuff (*.so on Linux/Elf, 
perhaps *.la with libtool)

  3. lt_dlopenext it


Perhaps my question becomes: how to use libtool (and which one) within 
the gcc/ subdir?


A big thanks for the help!

Regards.


--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***


Re: libtool for shared objects?

2008-03-11 Thread Andreas Schwab
Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:

> I need all this in the gcc/ subdir, since I want cc1 (the same process) to
>   1. generate a C file
>   2. compile it to some dynamically loadable stuff (*.so on Linux/Elf,
> perhaps *.la with libtool)
>   3. lt_dlopenext it

Why do you need to do it in the gcc directory?  When would cc1 need it?

Andreas.

-- 
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756  01D3 44D5 214B 8276 4ED5
"And now for something completely different."


Re: libtool for shared objects?

2008-03-11 Thread Basile STARYNKEVITCH

Andreas Schwab wrote:

Basile STARYNKEVITCH <[EMAIL PROTECTED]> writes:


I need all this in the gcc/ subdir, since I want cc1 (the same process) to
  1. generate a C file
  2. compile it to some dynamically loadable stuff (*.so on Linux/Elf,
perhaps *.la with libtool)
  3. lt_dlopenext it


Why do you need to do it in the gcc directory?  When would cc1 need it?


The details & motivations are explained in my GCC 2007 summit paper. The 
basic ideas are on http://gcc.gnu.org/wiki/MiddleEndLispTranslator (I 
just added a small paragraph).



--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***


minor mistake in http://gcc.gnu.org/gcc-4.3/changes.html = SSSE3 --> SSE3

2008-03-11 Thread Oliver Hessling

Hi,

there is a minor mistake in
http://gcc.gnu.org/gcc-4.3/changes.html

at chapter: New Targets and Target Specific Improvements
sub-section: IA-32/x86-64

"SSSE3" should be replaced by
"SSE3"

cheers
Oliver

--
(o-(:¬)liver  ]-[essling
//\+33 (0)6 31 82 83 84
V_/_   http://openwifiphone.free.fr 



Re: minor mistake in http://gcc.gnu.org/gcc-4.3/changes.html = SSSE3 --> SSE3

2008-03-11 Thread Andrew Pinski
On Tue, Mar 11, 2008 at 2:21 PM, Oliver Hessling
<[EMAIL PROTECTED]> wrote:
> Hi,
>
>  there is a minor mistake in
>  http://gcc.gnu.org/gcc-4.3/changes.html
>
>  at chapter: New Targets and Target Specific Improvements
>  sub-section: IA-32/x86-64
>
>  "SSSE3" should be replaced by
>  "SSE3"

No, there is an SSSE3.  Yes, Intel went crazy with the naming (again).

-- Pinski


gcc 4.2.3 : make: *** [bootstrap] Error 2

2008-03-11 Thread Dennis Clarke

I had sent this to the wrong mailing list, I think. Yet another error. :-\

In any case .. here it is :

---
Subject:gcc 4.2.3 : make: *** [bootstrap] Error 2
From:   "Dennis Clarke" <[EMAIL PROTECTED]>
Date:   Tue, March 11, 2008 16:02

This error occurs well into the stage 2 of the bootstrap process.

Here are the specifics :

configure line used was

../gcc-4.2.3/configure --with-as=/home/dclarke/local/bin/as
--with-ld=/home/dclarke/local/bin/ld --enable-threads=posix --disable-nls
--prefix=/home/dclarke/local --with-local-prefix=/home/dclarke/local
--enable-shared --enable-languages=c,c++,objc,fortran
--with-gmp=/home/dclarke/local --with-mpfr=/home/dclarke/local
--enable-bootstrap

GNU Binutils 2.18
GNU Make 3.81
flex 2.5.35
autoconf (GNU Autoconf) 2.61
automake (GNU automake) 1.10.1

Both GMP and MPFR are built and pass all tests and the libraries are created
fine :

gmp-4.2.2
mpfr-2.3.1

[EMAIL PROTECTED]:~/build/first_pass/gcc/gcc-4.2.3-build$ ls -lap
$HOME/local/lib | grep -E "gmp|mpfr"
-rw-r--r--  1 dclarke csw  592016 Mar  9 17:48 libgmp.a
-rwxr-xr-x  1 dclarke csw 794 Mar  9 17:48 libgmp.la
lrwxrwxrwx  1 dclarke csw  15 Mar  9 17:48 libgmp.so -> libgmp.so.3.4.2
lrwxrwxrwx  1 dclarke csw  15 Mar  9 17:48 libgmp.so.3 -> libgmp.so.3.4.2
-rwxr-xr-x  1 dclarke csw  317869 Mar  9 17:48 libgmp.so.3.4.2
-rw-r--r--  1 dclarke csw 1812424 Mar  9 22:06 libmpfr.a
-rwxr-xr-x  1 dclarke csw 985 Mar  9 22:06 libmpfr.la
lrwxrwxrwx  1 dclarke csw  16 Mar  9 22:06 libmpfr.so -> libmpfr.so.1.1.1
lrwxrwxrwx  1 dclarke csw  16 Mar  9 22:06 libmpfr.so.1 -> libmpfr.so.1.1.1
-rwxr-xr-x  1 dclarke csw  985371 Mar  9 22:06 libmpfr.so.1.1.1
[EMAIL PROTECTED]:~/build/first_pass/gcc/gcc-4.2.3-build$



existing GCC was :

[EMAIL PROTECTED]:~/build/first_pass/gcc/gcc-4.2.3-build$ gcc -v
Using built-in specs.
Target: powerpc-linux-gnu
Configured with: ../src/configure -v
--enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr
--enable-shared --with-system-zlib --libexecdir=/usr/lib
--without-included-gettext --enable-threads=posix --enable-nls
--program-suffix=-4.1 --enable-__cxa_atexit --enable-clocale=gnu
--enable-libstdcxx-debug --enable-mpfr --disable-softfloat
--enable-targets=powerpc-linux,powerpc64-linux --with-cpu=default32
--enable-checking=release powerpc-linux-gnu
Thread model: posix
gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)

Stage 1 of the bootstrap process resulted in this :

[EMAIL PROTECTED]:~/build/first_pass/gcc/gcc-4.2.3-build$
/home/dclarke/build/first_pass/gcc/gcc-4.2.3-build/gcc/xgcc -v
Using built-in specs.
Target: powerpc-unknown-linux-gnu
Configured with: ../gcc-4.2.3/configure --with-as=/home/dclarke/local/bin/as
--with-ld=/home/dclarke/local/bin/ld --enable-threads=posix --disable-nls
--prefix=/home/dclarke/local --with-local-prefix=/home/dclarke/local
--enable-shared --enable-languages=c,c++,objc,fortran
--with-gmp=/home/dclarke/local --with-mpfr=/home/dclarke/local
--enable-bootstrap
Thread model: posix
gcc version 4.2.3


The ERROR in stage 2 was thus :


checking for powerpc-unknown-linux-gnu-gfortran...
/home/dclarke/build/first_pass/gcc/gcc-4.2.3-build/./gcc/gfortran
-B/home/dclarke/build/first_pass/gcc/gcc-4.2.3-build/./gcc/
-B/home/dclarke/local/powerpc-unknown-linux-gnu/bin/
-B/home/dclarke/local/powerpc-unknown-linux-gnu/lib/ -isystem
/home/dclarke/local/powerpc-unknown-linux-gnu/include -isystem
/home/dclarke/local/powerpc-unknown-linux-gnu/sys-include
checking whether we are using the GNU Fortran compiler... no
checking whether
/home/dclarke/build/first_pass/gcc/gcc-4.2.3-build/./gcc/gfortran
-B/home/dclarke/build/first_pass/gcc/gcc-4.2.3-build/./gcc/
-B/home/dclarke/local/powerpc-unknown-linux-gnu/bin/
-B/home/dclarke/local/powerpc-unknown-linux-gnu/lib/ -isystem
/home/dclarke/local/powerpc-unknown-linux-gnu/include -isystem
/home/dclarke/local/powerpc-unknown-linux-gnu/sys-include accepts -g... no
checking whether the GNU Fortran compiler is working... no
configure: error: GNU Fortran is not working; the most common reason for
that is that you might have linked it to shared GMP and/or MPFR libraries,
and not set LD_LIBRARY_PATH accordingly. If you suspect any other reason,
please report a bug in http://gcc.gnu.org/bugzilla, attaching
/home/dclarke/build/first_pass/gcc/gcc-4.2.3-build/powerpc-unknown-linux-gnu/libgfortran/config.log
make[1]: *** [configure-target-libgfortran] Error 1
make[1]: Leaving directory `/home/dclarke/build/first_pass/gcc/gcc-4.2.3-build'
make: *** [bootstrap] Error 2
[EMAIL PROTECTED]:~/build/first_pass/gcc/gcc-4.2.3-build$


This has happened repeatedly now -- twice, actually. Not too sure what to do
to get out of this little bind.

Should I just set LD_LIBRARY_PATH=$HOME/local/lib  ??

Any thoughts ?

Dennis Clarke


Re: API for callgraph and IPA passes for whole program optimization

2008-03-11 Thread Diego Novillo

On 3/9/08 7:26 AM, Jan Hubicka wrote:


> compensate testsuite and documentation for the removal of RTL dump
> letters so I would rather do that just once.  Does this seem OK?


Yup, thanks for doing this.



> The patch includes the read/write methods that will be just placeholders
> on mainline.  Naturally I can remove them for the time being, at least as
> long as we think the RTL_PASS/TREE_PASS macros are a good idea.


Nitpick on the name, can you s/TREE/GIMPLE/?



> quite easily see that those are stepping back from the plan of not making
> the pass manager polluted by ugly macros, but on the other hand, since the
> PM is now doing RTL/IPA/tree passes, it needs at least a little flexibility
> to be able to update the API of one without affecting the others.

How about doing the simple hierarchy as you outlined in your last 
message?  Both gimple and rtl passes would inherit everything from base, 
and ipa would have the additional hooks for summary generation and whatnot.
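
Something along these lines, say, using plain C "inheritance by
embedding" (the field and type names here are only illustrative, not the
actual patch):

#include <stdbool.h>

/* Fields common to every pass.  */
struct opt_pass
{
  const char *name;
  bool (*gate) (void);
  unsigned int (*execute) (void);
  /* ... todo flags, required/provided properties, etc.  */
};

/* GIMPLE and RTL passes need nothing beyond the base.  */
struct gimple_opt_pass { struct opt_pass pass; };
struct rtl_opt_pass    { struct opt_pass pass; };

/* IPA passes carry the extra summary hooks.  */
struct ipa_opt_pass
{
  struct opt_pass pass;
  void (*generate_summary) (void);
  void (*write_summary) (void);
  void (*read_summary) (void);
};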



Diego.


Fwd: Porting gcc to a new architecture

2008-03-11 Thread Schmave
Hi, I was wondering what steps I need to take to port GCC to a new
architecture.


thanks!


Porting gcc

2008-03-11 Thread Schmave
Hi, I would like to know what I need to do to port GCC to a new
architecture.


Thanks


Re: Porting gcc

2008-03-11 Thread Ben Elliston
On Wed, 2008-03-12 at 16:56 +1100, Schmave wrote:
> Hi, I would like to know what I need to do to port GCC to a new
> architecture.

You can start by reading the GCC internals documentation:

  http://gcc.gnu.org/onlinedocs/gccint/

You can also look at the source code, in particular, the gcc/config
directory.

Finally, you have the option of hiring someone to do the port for you:

  http://www.fsf.org/resources/service

Cheers, Ben




Re: RTL definition

2008-03-11 Thread Ben Elliston
> I thought that RTL represented something close to the target machine,
> but was not machine-dependent. At first I thought that the output of the
> middle end was a machine-independent RTL representation, to which a few
> machine-independent low-level optimization passes were applied, and that
> it was then translated into machine-dependent RTL on which the other
> optimization passes run.

There are certain details that you will discover in the RTL that make
it highly machine-dependent.  For example, the register number in which
functions place their return values is defined by the ABI and implemented
by the backend.  RTL is by no means machine-neutral.

Cheers, Ben

-- 
Ben Elliston <[EMAIL PROTECTED]>
Australia Development Lab, IBM



Re: libtool for shared objects?

2008-03-11 Thread Roberto Bagnara

Basile STARYNKEVITCH wrote:
But I'm not sure I understand the relation between libtool and $(LIBTOOL) 
(i.e. @LIBTOOL@).


Any clues?


Hi Basile,

I will tell you what (I think) is the relation in projects using Autoconf,
Automake and Libtool.

@LIBTOOL@ is a placeholder that stands for the Libtool main script.
In the Makefile.in files, you will find lines of the form

LIBTOOL = @LIBTOOL@

At configure time, what has to take the place of the placeholder
is computed.  In the generated config.status file, you will find
something like

s,@LIBTOOL@,|#_!!_#|$(SHELL) $(top_builddir)/libtool,g

and, consequently, in your Makefile files you will have

LIBTOOL = $(SHELL) $(top_builddir)/libtool

Thus the right way to invoke the libtool command is to use $(LIBTOOL)
in the makefiles, as in

install-data-local: ppl_sicstus.so
$(LIBTOOL) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) \
$< $(DESTDIR)$(pkglibdir)/$<

Some examples of use can be found in the Parma Polyhedra Library's
Makefile.am files, but I am sure there are more authoritative sources
out there (i.e., we may well misuse Libtool).
I hope it helps,

   Roberto

--
Prof. Roberto Bagnara
Computer Science Group
Department of Mathematics, University of Parma, Italy
http://www.cs.unipr.it/~bagnara/
mailto:[EMAIL PROTECTED]


Re: libtool for shared objects?

2008-03-11 Thread Basile STARYNKEVITCH

Roberto Bagnara wrote:

Basile STARYNKEVITCH wrote:
But I'm not sure I understand the relation between libtool and 
$(LIBTOOL) (i.e. @LIBTOOL@).




I will tell you what (I think) is the relation in projects using Autoconf,
Automake and Libtool.

@LIBTOOL@ is a placeholder that stands for the Libtool main script.
In the Makefile.in files, you will find lines of the form

LIBTOOL = @LIBTOOL@


Thanks for the hint. Apparently, this line alone is not enough. Besides, 
the set of autoconf/automake/libtool versions is not the same in the gcc/ 
subdirectory as in the others. I remember having read that there is some 
issue, but I don't remember which one. The gcc/ subdir is a strange untamed 
beast. And other sub-directories using libtool (like libmudflap/) have 
both a Makefile.am and an (automake-generated?) Makefile.in - it seems 
that the gcc/ subdirectory doesn't use automake (yet?).


So I tried to add to gcc/configure.ac the following lines (which exist 
in libmudflap/configure.ac)


  AC_LIBTOOL_DLOPEN
  AM_PROG_LIBTOOL
  AC_SUBST(enable_shared)
  AC_SUBST(enable_static)

and it does not work:

(cd /usr/src/Lang/basile-melt-gcc/gcc && autoconf)
configure.ac:434: error: possibly undefined macro: AC_LIBTOOL_DLOPEN
  If this token and others are legitimate, please use m4_pattern_allow.
  See the Autoconf documentation.
configure.ac:435: error: possibly undefined macro: AM_PROG_LIBTOOL
(cd /usr/src/Lang/basile-melt-gcc/gcc && autoheader)

Regards.
--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***