RE: Serious SPEC CPU 2006 FP performance regressions on IA32

2006-12-11 Thread Meissner, Michael
> -Original Message-
> From: H. J. Lu [mailto:[EMAIL PROTECTED]
> Sent: Monday, December 11, 2006 1:09 PM
> To: Menezes, Evandro
> Cc: gcc@gcc.gnu.org; [EMAIL PROTECTED]; rajagopal, dwarak; Meissner,
> Michael
> Subject: Re: Serious SPEC CPU 2006 FP performance regressions on IA32
> 
> On Mon, Dec 11, 2006 at 11:35:27AM -0600, Menezes, Evandro wrote:
> > HJ,
> >
> > > > Gcc 4.3 revision 119497 has serious SPEC CPU 2006 FP performance
> > > > regressions on P4, Pentium M and Core Duo, compared against
> > > > gcc 4.2 20060910. With -O2, the typical regressions look like
> > > >
> > > >                       Gcc 4.2  Gcc 4.3   Change
> > > > 410.bwaves             9.89     9.14    -7.58342%
> > > > 416.gamess             7.17     7.16    -0.13947%
> > > > 433.milc               7.68     7.65    -0.390625%
> > > > 434.zeusmp             5.57     5.55    -0.359066%
> > > > 435.gromacs            3.99     4.02     0.75188%
> > > > 436.cactusADM          4.59     4.50    -1.96078%
> > > > 437.leslie3d           5.78     3.98    -31.1419%
> > > > 444.namd               6.25     6.18    -1.12%
> > > > 447.dealII            11.3     11.3      0%
> > > > 450.soplex             8.61     8.59    -0.232288%
> > > > 453.povray             6.70     6.72     0.298507%
> > > > 454.calculix           2.81     2.74    -2.4911%
> > > > 459.GemsFDTD           6.07     4.95    -18.4514%
> > > > 465.tonto              4.44     4.45     0.225225%
> > > > 470.lbm               10.6     10.7      0.943396%
> > > > 481.wrf                4.56     4.50    -1.31579%
> > > > 482.sphinx3           11.2     11.1    -0.892857%
> > > > Est. SPECfp_base2006   6.42     6.15    -4.20561%
> > > >
> > > > Evandro, what do you get on AMD?
> > > >
> > > > Is that related to recent i386 backend FP changes?
> >
> > Here's what we got:
> >
> >               Δ%
> > CPU2006
> > 410.bwaves   -6%
> > 416.gamess
> > 433.milc
> > 434.zeusmp
> > 435.gromacs
> > 436.cactusADM
> > 437.leslie3d -26%
> > 444.namd
> > 447.dealII
> > 450.soplex
> > 453.povray
> > 454.calculix
> > 459.GemsFDTD -12%
> > 465.tonto
> > 470.lbm
> > 481.wrf
> > 482.sphinx3
> >
> > Though not as pronounced, definitely significant.
> >
> 
> It is close to what we see on both x86 and x86-64. Are you going to
> track it down?

Just in case people are cherry-picking the gcc mailing list and not
reading all of the threads: this is also discussed in the thread below,
where it was felt that the PPRE patches added on November 13th were the
cause of the slowdown:
http://gcc.gnu.org/ml/gcc/2006-12/msg00023.html

Has anybody tried doing a run with just PPRE disabled?




RE: RFC: Add BID as a configure time option for DFP

2007-01-12 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> H. J. Lu
> Sent: Wednesday, January 10, 2007 5:35 PM
> To: Janis Johnson
> Cc: gcc@gcc.gnu.org; [EMAIL PROTECTED]; Menezes, Evandro;
> [EMAIL PROTECTED]
> Subject: Re: RFC: Add BID as a configure time option for DFP
> 
> On Wed, Jan 10, 2007 at 02:10:58PM -0800, Janis Johnson wrote:
> > On Wed, Jan 10, 2007 at 11:40:46AM -0800, H. J. Lu wrote:
> > > Both AMD and Intel would like to have BID as a configure time
> > > option for DFP. Intel is planning to contribute a complete BID
> > > runtime library, which can be used by executables generated by gcc.
> > >
> > > As the first step, we'd like to contribute a BID<->DPD library so
> > > that BID can be used with libdecnumber by executables generated by
> > > gcc before the complete BID runtime library is ready.
> > >
> > > Any comments?
> >
> > libdecnumber doesn't use DPD (densely packed decimal), it uses the
> > decNumber format.  Functions in libgcc convert from DPD to decNumber,
> > call into libdecnumber to do computations, and then convert the
> > result back to DPD.  It's all parameterized in dfp-bit.[ch], so
> > replacing conversions between decNumber structs and DPD with
> > conversions between decNumber structs and BID (binary integer
> > decimal) should be straightforward; I don't think there's any need to
> > convert between BID and DPD to use libdecnumber.
> 
> libdecnumber is used by both gcc and DFP executables.  We only want
> to use BID for DFP executables.  That means we will need
> BID<->decNumber for gcc to generate DFP executables which use the BID
> library.
> 
> Since the real BID library won't be ready for a while, in the meantime
> we'd like to enable BID in gcc now; that is why we propose the
> BID<->DPD<->libdecnumber approach as a stopgap measure.  We can plug
> in the real BID library later.
> 
> >
> > If all x86* targets will use BID then there's no need for a
> > configure option.  Initial support using DPD for x86* was a proof of
> > concept; I doubt that anyone would care if you replace it with BID
> > support.
> 
> Glad to hear that. We can make it BID only for x86.

I've been looking into doing this, and now have some time cleared up on
my schedule to look at implementing it.  What I would like to do is:

1) Update --enable-decimal-float to take an option selecting which
format to use.
2) Set the default for x86_64/i386 to use BID.
3) Add converter functions to libdecnumber that convert between BID and
the internal libdecnumber format.
4) I'm of two minds about what to call the functions.  On one hand, it
is convenient to use the same names, but over the years I have seen many
problems caused by using the same names for things that take differently
formatted inputs, so I will likely give the BID functions different
names.  This will also allow us to build test compilers that use the
alternate format.
5) Hopefully the Intel library will use the same names as the BID
functions in libdecnumber, so that it can be linked in.  We obviously
need names for each of the standard math support functions (i.e.,
add/sub/mul/div for the _Decimal32/_Decimal64/_Decimal128 types).
6) Add a macro to libdec.h that says which format is being used.

--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719




RE: Autoconf manual's coverage of signed integer overflow & portability

2007-01-12 Thread Meissner, Michael
> -Original Message-
> I would like to say the one thing I have not heard through this
> discussion is the real reason why the C standards committee decided
> signed overflow should be undefined.  All I can think of is they were
> thinking of targets that do saturation for plus/minus but wrapping for
> multiplication/division, or even targets that trap for some overflow
> cases (like x86) but not others.

I was on the original C standards committee from its inception through
the ANSI standard in 1989 and the ISO standard in 1990, representing
first Data General, and then the Open Software Foundation.  When the
standard was being produced, we had vendors with one's complement
machines (Univac, and possibly CDC), signed magnitude machines
(Burroughs), word based machines (Univac, Burroughs, Data General,
PR1ME, and a university doing a DEC-10 port).  While these machines are
uncommon now, we did have to keep them in mind while writing the
standard.  Because of the diversity of actual hardware, the only thing
we could say was "don't do that", just like with shifts where the shift
value is not in the proper range (and this bit gcc when I was doing the
early 88k port).




RE: char should be signed by default

2007-01-24 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of
> [EMAIL PROTECTED]
> Sent: Wednesday, January 24, 2007 12:19 AM
> To: gcc@gcc.gnu.org
> Subject: char should be signed by default
> 
> GCC should treat plain char in the same fashion on all types of
> machines (by default).

No.  GCC should fit in with the environment it is running in.  That's
the whole point of ABIs.  Even in the case of GNU/Linux, where you had a
clean slate at the beginning, there are now existing ABIs that you need
to adhere to.
 
> The ISO C standard leaves it up to the implementation whether a char
> declared plain char is signed or not. This in effect creates two
> alternative dialects of C.

During the standards process we called those "don't chars".  But there
are other places where the standard explicitly doesn't say which
alternative an implementation should choose (whether plain bitfields
sign extend or not, whether ints are 32 or 64 bits, etc.).

> The GNU C compiler supports both dialects; you can specify the signed
> dialect with -fsigned-char and the unsigned dialect with
> -funsigned-char. However, this leaves open the question of which
> dialect to use by default.

You use the ABI, which specifies whether chars and plain bitfields sign
extend or not.
 
> The preferred dialect makes plain char signed, because this is
> simplest.  Since int is the same as signed int, short is the same as
> signed short, etc., it is cleanest for char to be the same.

However, I've worked on machines that did not have a signed character
instruction and you had to generate about 3 instructions to sign extend
it.

During the standards process for the original C standard (ANSI C89),
Dennis Ritchie expressed the opinion that, in hindsight, making chars
signed was a bad idea and that logically chars should be unsigned.  This
is because outside of the USA people use 8-bit character sets, and you
often want to index into arrays with character values, which requires
them to be non-negative.
 
> Some computer manufacturers have published Application Binary
> Interface standards which specify that plain char should be unsigned.
> It is a mistake, however, to say anything about this issue in an ABI.
> This is because the handling of plain char distinguishes two dialects
> of C.  Both dialects are meaningful on every type of machine.  Whether
> a particular object file was compiled using signed char or unsigned is
> of no concern to other object files, even if they access the same
> chars in the same data structures.

No, this is the whole purpose of an ABI, to nail down all of these
niggling details.  If you use either -fsigned-char or -funsigned-char,
you are essentially breaking the ABI.  Now in the case of chars, usually
it won't bite you, but it can if you include header files with structure
fields written for the ABI.
 
> A given program is written in one or the other of these two dialects.
> The program stands a chance to work on most any machine if it is
> compiled with the proper dialect. It is unlikely to work at all if
> compiled with the wrong dialect.

It depends on the program, and on whether or not chars in the user's
character set are sign extended (i.e., in the USA you likely won't
notice a difference between the two if chars just hold character
values).
 
> Many users appreciate the GNU C compiler because it provides an
> environment that is uniform across machines. These users would be
> inconvenienced if the compiler treated plain char differently on
> certain machines.

And many users appreciate that GNU C fits in with the accepted practices
on their machine.
 
> Occasionally users write programs intended only for a particular
> machine type. On these occasions, the users would benefit if the GNU C
> compiler were to support by default the same dialect as the other
> compilers on that machine. But such applications are rare. And users
> writing a program to run on more than one type of machine cannot
> possibly benefit from this kind of compatibility.
> 
> There are some arguments for making char unsigned by default on all
> machines. If, for example, this becomes a universal de facto standard,
> it would make sense for GCC to go along with it. This is something to
> be considered in the future.

Unfortunately you are usually limited by the choices you made at the
original implementation.  Any change involves a massive flag day.
 
> (Of course, users strongly concerned about portability should indicate
> explicitly whether each char is signed or not. In this way, they write
> programs which have the same meaning in both C dialects.)
> 
> 


--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719





RE: variable-sized array fields in structure?

2007-01-24 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> Basile STARYNKEVITCH
> Sent: Wednesday, January 24, 2007 10:30 AM
> To: gcc@gcc.gnu.org
> Subject: variable-sized array fields in structure?
> 
> Hello all,
> 
> It is common to have structures which end with an "undefined"
> variable-length array like
> 
> struct foo_st {
>   struct bar_st* someptr;
>   int len;
>   struct biz_st *tab[] /* actual size is len */;
> };
> 
> I'm sorry to be unable to get the exact wording of this construct,
> which I am sure is in some recent (C99? maybe) standard, unfortunately
> I don't have these standards at hand.

It is discussed in section 6.7.2.1 of the C99 standard: in the Semantics
section, paragraph 15 explicitly allows the last element of a structure
to be an array with no bound, called a flexible array member.

I don't have an online version of C90, but it may have been in there as
well.

> There is even a length attribute in  GTY to help support this
> http://gcc.gnu.org/onlinedocs/gccint/GTY-Options.html
> 
> I believe the correct idiom in GCC source is to put an explicit
> dimension to 1 (probably because 0 would make some old compilers
> unhappy), ie to code instead
> 
> struct foo_st {
>   struct bar_st* someptr;
>   int len;
>   struct biz_st *tab[1] /* 1 is dummy, actual size is len */;
> };

Pre-ANSI/ISO compilers did not allow this, and 1 was used quite heavily in the 
community for things like char name[1];.
 
> Unfortunately, when debugging (or taking sizeof), this makes a little
> difference.
> 
> My small suggestion would be
> 
> 1. To explicitly document that such undefined variable-sized array
> fields should be declared of dimension VARYING_SIZE (or some other
> word), i.e. to code
> 
>   struct foo_st {
> struct bar_st* someptr;
> int len;
> struct biz_st *tab[VARYING_SIZE] /* actual size is len */;
>   };
> 
> 2. To have a definition of VARYING_SIZE is some of our header files
> (config.h, or system.h or others) which is 1 for old compilers and
> empty for new ones (including gcc itself), maybe
> 
>   #if (defined __STDC_VERSION__ && __STDC_VERSION__ >= 199901L)
>   #define VARYING_SIZE /*empty*/
>   #else
>   #define VARYING_SIZE 1
>   #endif

Probably reasonable.
 
> 
> 
> Is there some point that I forgot? Probably yes, since my suggestion
> is quite obvious but not yet in GCC?
> 
> Thanks for reading.
> 
> --
> Basile STARYNKEVITCH http://starynkevitch.net/Basile/
> email: basilestarynkevitchnet mobile: +33 6 8501 2359
> 8, rue de la Faïencerie, 92340 Bourg La Reine, France
> *** opinions {are only mines, sont seulement les miennes} ***
> 


--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719





RE: [OT] char should be signed by default

2007-01-25 Thread Meissner, Michael
> -Original Message-
> From: Gabriel Paubert [mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 25, 2007 5:43 AM
> To: Paolo Bonzini
> Cc: Meissner, Michael; [EMAIL PROTECTED]; gcc@gcc.gnu.org
> Subject: Re: [OT] char should be signed by default
> 
> On Thu, Jan 25, 2007 at 10:29:29AM +0100, Paolo Bonzini wrote:
> >
> > >>A given program is written in one or the other of these two
> > >>dialects.  The program stands a chance to work on most any machine
> > >>if it is compiled with the proper dialect. It is unlikely to work
> > >>at all if compiled with the wrong dialect.
> > >
> > >It depends on the program, and whether or not chars in the user's
> > >character set are sign extended (i.e., in the USA, you likely won't
> > >notice a difference between the two if chars just hold character
> > >values).
> >
> > You might notice if a -1 (EOF) becomes a 255 and you get an infinite
> > loop in return (it did bite me).  Of course, this is a bug in that
> > outside the US a 255 character might become an EOF.
> 
> That's a common bug with getchar() and similar functions, because
> people put the result into a char before testing it, like:
> 
>   char c;
>   while ((c=getchar())!=EOF) {
>   ...
>   }
> 
> while the specification of getchar is that it returns an unsigned char
> cast to an int, or EOF, and therefore this code is incorrect
> independently of whether char is signed or not:
> - infinite loop when char is unsigned
> - incomplete processing of a file because of early detection of EOF
>   when char is signed and you hit a 0xFF char.

Yep.  This was discussed in the ANSI X3J11 committee in the 80's, and it
is a problem (the program is broken because getchar reserves exactly one
out-of-band return value).  Another logical problem occurs on a system
where char and int are the same size: there is no out-of-band value that
can be returned at all, and in theory the only correct way to detect the
end of input is to use feof and ferror, which few people do.

> I've been bitten by both (although the second one is less frequent now
> since 0xff is invalid in UTF-8).
> 
> BTW, I'm of the very strong opinion that char should have been
> unsigned by default, because the name itself implies that it is used
> as an enumeration of symbols, specialized to represent text. When you
> step from one enum value to the following one (staying within the
> range of valid values), you don't expect the new value to become lower
> than the preceding one.

And then there is EBCDIC, where there are 10 characters between 'I' and
'J' if memory serves.  Plus the usual problem in ASCII that the national
characters that are alphabetic aren't grouped with the A-Z, a-z
characters.
 
> Things would be very different if it had been called "byte" or
> "short short int" instead.
> 
>   Gabriel
> 


--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719





RE: RE: char should be signed by default

2007-01-25 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]
> Sent: Thursday, January 25, 2007 1:54 PM
> To: Meissner, Michael
> Cc: gcc@gcc.gnu.org
> Subject: Re: RE: char should be signed by default
> 
> > - Original Message -
> > From: "Meissner, Michael" <[EMAIL PROTECTED]>
> > Date: Wednesday, January 24, 2007 12:49 pm
> >
> > > -Original Message-
> > > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> > > [EMAIL PROTECTED]
> > > Sent: Wednesday, January 24, 2007 12:19 AM
> > > To: gcc@gcc.gnu.org
> > > Subject: char should be signed by default
> > >
> > > The GNU C compiler supports both dialects; you can specify the
> > > signed dialect with -fsigned-char and the unsigned dialect with
> > > -funsigned-char. However, this leaves open the question of which
> > > dialect to use by default.
> >
> > You use the ABI, which specifies whether chars and plain bitfields
> > sign extend or not.
> 
> GCC ignores the ABI w.r.t. bit-fields:
> 
> http://gcc.gnu.org/onlinedocs/gcc-4.1.1/gcc/Non_002dbugs.html
> 
> s/bit-fields/char/  ;-)

Yes, and in 1989, I fought that issue with RMS and lost then.  It still
doesn't change my opinion that GCC should adhere to the local ABI.

--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719





RE: Has insn-attrtab.c been growing in size recently?

2007-03-19 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> François-Xavier Coudert
> Sent: Monday, March 19, 2007 5:54 AM
> To: GCC Development
> Subject: Has insn-attrtab.c been growing in size recently?
> 
> Hi all,
> 
> A bootstrap attempt of GCC mainline on an i386-unknown-netbsdelf2.0.2 with:
> 
> > Memory: 378M Act, 264M Inact, 3520K Wired, 4664K Exec, 633M File, 151M
> Free
> > Swap: 129M Total, 129M Free
> 
> failed due to a compilation error in stage1:
> 
> cc1: out of memory allocating 138677280 bytes after a total of 31484356
> bytes
> make: *** [insn-attrtab.o] Error 1
> 
> The system compiler is gcc version 3.3.3 (NetBSD nb3 20040520). Last
> time I tried on this same computer was on 2006-12-03, and it passed
> stage1 OK. So I wonder what recent changes could have affected
> insn-attrtab.c on this target, and whether there could be a way to get
> it down in size.
> 
> Thanks,
> FX

Well, the AMD AMDFAM10, Intel Core2, and AMD Geode machine descriptions
went in.  Unfortunately, without either redoing how insn-attrtab.c is
built or reducing the number of machine variants that are supported, the
only solution is likely to raise the amount of virtual memory available
on the system.

--
Michael Meissner, AMD
90 Central Street, MS 83-29, 
Boxborough, MA 01719
[EMAIL PROTECTED]




RE: Building mainline and 4.2 on Debian/amd64

2007-03-30 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
> Joe Buck
> Sent: Monday, March 19, 2007 2:02 PM
> To: Andrew Pinski
> Cc: Florian Weimer; Steven Bosscher; gcc@gcc.gnu.org
> Subject: Re: Building mainline and 4.2 on Debian/amd64
> 
> On Mon, Mar 19, 2007 at 10:35:15AM -0700, Andrew Pinski wrote:
> > On 3/19/07, Joe Buck <[EMAIL PROTECTED]> wrote:
> > >This brings up a point: the build procedure doesn't work by default
> > >on Debian-like amd64 distros, because they lack 32-bit support
> > >(which is present on Red Hat/Fedora/SuSE/etc distros).  Ideally
> > >this would be detected when configuring.
> >
> > Actually it brings up an even more important thing: distros that
> > don't include a 32-bit user land are really just broken.  Why do
> > these distros even try to get away with this?  They are useless to
> > 99.9% of the people, and the 0.1% of the people who find them
> > interesting can just compile with --disable-multilib.
> 
> Unfortunately, such distros are in common use, and the GCC project
> jumps through hoops to support many things that get far less use.
> 
> A platform that contains only 64-bit libraries is fundamentally a
> different platform than one that has both 32- and 64-.  Ideally there
> would be a different target triplet for such platforms.  Maybe
> x86_64only?  But that would have to be handled upstream.
> 
> In the meantime, the installation instructions should tell people to
> use --disable-multilib.

For a hosted configuration, we should probably have the configure
support determine whether both 32-bit and 64-bit libraries are installed
by trying to compile and link a program with -m32.




RE: Bootstrap is broken on i[345]86-linux

2007-04-05 Thread Meissner, Michael
> -Original Message-
> From: FX Coudert [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, April 03, 2007 6:01 PM
> To: gcc@gcc.gnu.org
> Cc: Meissner, Michael; [EMAIL PROTECTED]
> Subject: Bootstrap is broken on i[345]86-linux
> 
> Bootstrap has been broken since 2007-03-25 on i[345]86-linux. This is
a
> decimal float issue reported as PR31344, and is due to a decimal float
> patch, probably the following:
> 
> 2007-03-23  Michael Meissner  <[EMAIL PROTECTED]>
> H.J. Lu  <[EMAIL PROTECTED]>
> 
> I've asked a few times already, but nothing seems to be done: can this
be
> fixed? A simple workaround is to disable decimal float for
i[345]86-linux,
> and it would be nice if people who commit patches acted as if they
felt
> responsible for the consequences of their commits.
> 
> FX

I'm starting to look into it now.




RE: How can I create a const rtx other than 0, 1, 2

2005-07-22 Thread Meissner, Michael
Use the GEN_INT macro to create the appropriate (const_int) RTX:

operands[1] = GEN_INT (111);
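In a machine description this typically appears in a pattern's
preparation statement; a hypothetical sketch (the pattern name and
operand layout are invented for illustration):

```lisp
;; Hypothetical pattern: force operand 1 to the constant 111 before
;; the insn is emitted.
(define_expand "load111si"
  [(set (match_operand:SI 0 "register_operand" "")
        (match_operand:SI 1 "const_int_operand" ""))]
  ""
{
  operands[1] = GEN_INT (111);
})
```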

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Liu Haibin
Sent: Friday, July 22, 2005 3:23 AM
To: gcc@gcc.gnu.org
Subject: How can I create a const rtx other than 0, 1, 2

Hi,

There's const0_rtx, const1_rtx and const2_rtx. How can I create a
const rtx other than 0, 1, 2? I want to use it in an md file, like

operand[1] = 111.

I know I must use a const rtx here. How can I do it? A simple question,
but I have no idea where to find the answer.


Regards,
Timothy




RE: var_args for rs6000 backend

2005-09-06 Thread Meissner, Michael
And note, Yao qi, that there are different ABIs on the rs6000, each of
which has different conventions (i.e., you will need to study the AIX
ABI as well as the System V/eabi ABIs, and possibly other ABIs that are
now in use).

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Ian Lance Taylor
Sent: Tuesday, September 06, 2005 2:06 PM
To: Yao qi
Cc: gcc@gcc.gnu.org
Subject: Re: var_args for rs6000 backend

"Yao qi" <[EMAIL PROTECTED]> writes:

> I am working on variable arguments on the rs6000 backend and I have
> browsed gcc/config/rs6000/rs6000.c several times.  I found there are
> some functions relevant to this issue: setup_incoming_varargs,
> rs6000_build_builtin_va_list, rs6000_va_start, and
> rs6000_gimplify_va_arg.
> 
> I could not work out what they do just from the source code.  Could
> anybody tell me the relationship among these routines?  I think it is
> important for me to understand the mechanism of the GCC backend.  Any
> comments or advice are highly appreciated.

These are partially documented in gcc/doc/tm.texi.  Unfortunately the
documentation is not particularly good.

These functions are all oriented around C stdarg.h functions.  In
brief, setup_incoming_varargs is called for a function which takes a
variable number of arguments, and does any required preparation
statements.  build_builtin_va_list builds the va_list data type.
va_start is called to implement the va_start macro.  gimplify_va_arg
is called to implement the va_arg macro.

> I do not know what the preconditions are if I want to do it.  Do I
> need to know the architecture of PowerPC and the ABI for it?

You do indeed need to know the PowerPC ABIs to understand what these
routines are doing and why.

Ian




RE: var_args for rs6000 backend

2005-09-07 Thread Meissner, Michael
There was also a PowerPC NT ABI at one point, but since Windows NT on
PowerPC was stillborn, it was removed.

My point was that if you are working on the ABI functions, you need to
make sure that the other ABIs (AIX, Darwin) don't get broken by any
changes you make (presumably you will make sure that you don't break the
ABI you are working on).  There are some subtle differences between the
System V (aka Linux) ABI and the eABI as well (stack alignment, number
of registers for the small data area), but most of those don't show up
in the ABI functions you are looking at.

-Original Message-
From: Yao Qi qi [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 06, 2005 11:14 PM
To: Meissner, Michael
Cc: gcc@gcc.gnu.org
Subject: RE: var_args for rs6000 backend


>From: "Meissner, Michael" <[EMAIL PROTECTED]>
>To: "Yao qi" <[EMAIL PROTECTED]>
>CC: gcc@gcc.gnu.org
>Subject: RE: var_args for rs6000 backend
>Date: Tue, 6 Sep 2005 14:13:56 -0400
>
>And note Yao qi, that there are different ABIs on the rs6000, each of
>which has different conventions (ie, you will need to study the AIX ABI
>as well as the System V/eabi ABIs, and possibly other ABIs that are now
>used).

First, thanks for you suggestions.

Yes, I found there are at least *three* ABIs in
gcc/config/rs6000/rs6000.c,

205 /* ABI enumeration available for subtarget to use.  */
206 enum rs6000_abi rs6000_current_abi;

And in gcc/config/rs6000/rs6000.h, I found the definition,

   1223 /* Enumeration to give which calling sequence to use.  */
   1224 enum rs6000_abi {
   1225   ABI_NONE,
   1226   ABI_AIX,  /* IBM's AIX */
   1227   ABI_V4,   /* System V.4/eabi */
   1228   ABI_DARWIN/* Apple's Darwin (OS X kernel)
*/
   1229 };

I just have to concentrate on ABI_V4 if I work on gcc development on
powerpc-linux, am I right?
I have traced cc1 and found that DEFAULT_ABI in setup_incoming_varargs()
is ABI_V4.

Best Regards

Yao Qi
Bejing Institute of Technology






RE: var_args for rs6000 backend

2005-09-08 Thread Meissner, Michael
Yes, the eABI is a modification of the System V ABI.  IIRC (but it has
been several years since I worked on PowerPC), the differences between
eABI and System V were:

1) eABI used r2 as a secondary small data pointer (System V used just
r13), and r0 was used for data centered around location 0;

2) there were some relocations in eABI not in System V (support for 3
small data pointers, section relative relocations) and some relocations
in System V not in eABI (shared library support);

3) System V had 16-byte stack alignment and eABI had 8-byte stack
alignment.

I suspect there are more differences that I'm forgetting, and the
64-bit support probably changes things as well.

-Original Message-
From: Yao qi [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 08, 2005 9:10 PM
To: Meissner, Michael
Cc: gcc@gcc.gnu.org
Subject: RE: var_args for rs6000 backend

>From: "Meissner, Michael" <[EMAIL PROTECTED]>
>To: "Yao Qi qi" <[EMAIL PROTECTED]>
>CC: gcc@gcc.gnu.org
>Subject: RE: var_args for rs6000 backend
>Date: Wed, 7 Sep 2005 13:11:50 -0400
>
>There was also a PowerPC NT ABI at one point, but since Windows NT on 
>PowerPC was stillborn, it was removed.
>
>My point was if you are working on the ABI functions, you need to make 
>sure that the other ABIs (AIX, Darwin) don't get broken by any changes 
>you make (presumably you will make sure that you don't break the ABI 
>you are working on).  There are some subtle differences by the way 
>between the System V (aka Linux) and eABI as well (stack alignment, 
>number of registers for small data area), but most of those don't show 
>in the ABI functions you are looking at.

Do you mean that the System V ABI and the eABI are the same with
respect to variable argument passing?

Thanks for reminding me.  I will take care of it and try to avoid
breaking other ABIs.


Best Regards

Yao Qi
Bejing Institute of Technology






RE: var_args for rs6000 backend

2005-09-09 Thread Meissner, Michael
As I said, I haven't looked at the code in a while (before GIMPLE), but
the TREE code is the symbol table that allows you to look up the types
of arguments and the function return type.  The RTX code is the
instructions you produce for va_arg, etc.  For example, I believe the
eabi/System V va_list was a structure with a few elements: one was the
argument number, and then there were pointers to the save areas for the
gpr and fpr registers and to the stack frame.  The va_arg code would
have to produce code that tested the argument number; if it was one of
the first 8 arguments it would use the pointer to the gpr/fpr save
areas, otherwise it would use the stack pointer, and finally it would
bump the argument number.

I may be somewhat wrong on the details.  That is the trouble with
working on quite a few different ports -- after a while all of the
details blend together.

-Original Message-
From: Yao qi [mailto:[EMAIL PROTECTED] 
Sent: Friday, September 09, 2005 6:43 AM
To: Meissner, Michael
Cc: gcc@gcc.gnu.org
Subject: RE: var_args for rs6000 backend


>From: "Meissner, Michael" <[EMAIL PROTECTED]>
>To: "Yao qi" <[EMAIL PROTECTED]>
>CC: gcc@gcc.gnu.org
>Subject: RE: var_args for rs6000 backend
>Date: Thu, 8 Sep 2005 21:19:25 -0400
>
>Yes, the eABI is a modification of the System V ABI.  IIRC (but it has
>been several years since I worked on PowerPC), the differences between
>eABI and System V were:
>
>1) eABI used r2 as a secondary small data pointer (System V used just
>r13), and r0 was used for data centered around location 0;
>
>2) there were some relocations in eABI not in System V (support for 3
>small data pointers, section relative relocations) and some relocations
>in System V not in eABI (shared library support);
>
>3) System V had 16-byte stack alignment and eABI had 8-byte stack
>alignment.
>
>I suspect there may be more changes that I'm forgetting about, and also
>the 64-bit support probably changes things also.
>

Thanks very much for your explanation; I will keep it in mind.

Now I can understand the ideas about the ABI clearly and can partially
map those ideas onto the source code.

However, I am confused by the combination of TREE operations, RTX and
GIMPLE in the functions that handle variable arguments.  Do I have to
master them before I hack on the variable-argument code in GCC?  My
focus now is the variable-argument routines, so do you think it is
necessary to understand the TREE structure, RTX and GIMPLE first, or is
there any workaround to bypass them partially?  If there is no such
shortcut, could you tell me how to start learning these areas?
Thank you very much again.

Best Regards

Yao Qi
Beijing Institute of Technology






RE: bitmaps in gcc

2005-10-14 Thread Meissner, Michael
One of the classic places that sparse bitmaps were used in GCC in the
past is the register allocation phase, where you essentially have a 2D
sparse matrix with the number of basic blocks on one axis and the pseudo
register number on the other axis.  When you are compiling very large
functions, the number of basic blocks and the number of pseudo registers
are very large, and if the table weren't compressed (most registers
aren't live past a single basic block) the memory use would be very
significant.  I haven't looked at this area in the last two years or so,
when I wasn't working on GCC, so it might have changed.  Unfortunately I
don't recall whether we were using compressed bitmaps before I wrote the
original versions of the compressed bit vectors, but the idea was to
encapsulate everything within macros so it could be changed in the
future.

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Brian Makin
Sent: Friday, October 14, 2005 1:27 PM
To: gcc@gcc.gnu.org
Subject: bitmaps in gcc


In reference to this on the wiki.

Bitmaps, also called sparse bit sets, are implemented using a linked
list with a cache. This is probably not the most time-efficient
representation, and it is not unusual for bitmap functions to show up
high on the execution profile. Bitmaps are used for many things, such as
for live register sets at the entry and exit of basic blocks in RTL, or
for a great number of data flow problems. See bitmap.c (and sbitmap.c
for GCC's simple bitmap implementation).

Can someone point me to a testcase where bitmap functions show up high
on the profile?


Can anyone give me some background on the use of bitmaps in gcc?

Are they assumed to be sparse?
How critical is the memory consumption of bitsets?
What operations are the most speed critical?
Would it be desirable to merge bitmap and sbitmap into one
datastructure?
Anyone have good ideas for improvements?

Anything else anyone would want to add?

I think I may take a look at this.  Once I figure out the requirements
maybe we can speed it up a bit.


Brian N. Makin











RE: porting gcc/binutils

2005-12-20 Thread Meissner, Michael
When I used to work for Cygnus Solutions (and then Red Hat after they
bought Cygnus in 1999), the general port to an embedded target was
typically done in parallel by 3 people (or 3 groups for large ports).
Before starting out, somebody would design the ABI (either customer
paying for the port, the person doing the compiler port, or some
combination of the two).  Also, the object file and debug formats were
chosen, ELF and Dwarf was the default choice, unless there was some
overriding reason to use something else.

The first parallel part of the port was porting gas and then ld.  You
need to look at the machine and determine what object file relocations
are needed.  If you were using ELF, in theory you would reserve the
appropriate magic numbers with SCO so that there would be no conflicts,
but a lot of ports didn't do that.  I may have been one of the last
people to get an official E_xxx number (E_SEP) before SCO started on its
current self-destructive path.

As the assembler/linker work is started, the compiler person/team would
begin work.  Most ports are done by cloning another port, and often
times you can find places where the original comments were not modified.
When I did my second port from scratch, I set out to write a generic
backend that had all of the options as comments.  Unfortunately, the
port had decayed since it was not kept up to date, and is probably less
than useful, even if it had been released outside of Cygnus/Red Hat.
Obviously, after the initial definition, you will need the assembler and
linker to complete the compiler.

After the assembler/linker is done, the person/team doing that usually
would work on the simulator, and the simulator will be needed for the
second stage of compiler debugging (once everything builds).  If you
have the machine available in silicon form, then you can skip this step.

If your target is a regular target like a RISC platform, the CGEN system
can be used to simplify building the instruction tables:
http://sourceware.org/cgen/

The compiler team does all of the initial work.  By the time the
compiler can build stuff, either the compiler or binutils team will
create the system specific parts of newlib to handle I/O, etc. on the
simulator or hardware.

Once programs can be built and linked, the debugger team does whatever
is needed to bring up gdb.  Often times, the debugger would trail the
compiler, and the initial part of debugging the compiler was done via
simulator traces.

Running hello world is a milestone.  First you write it using the write
system call, and in the second iteration you rewrite it using printf.

The next milestone is running a full make check on each of the tools,
adding specific machine support for the assembler, linker, etc.

Usually after the compiler system is up and running, you start looking
into adding shared library support, new optimizations, etc.

For a lot of embedded chips, the next step is porting Linux or BSD to
the chip.

By then you need to start porting the applications that will be run on
the target, and fixing bugs, adding new optimizations, etc.

If you are the only person doing the project, I would do the assembler,
linker, simulator, compiler, newlib, and debugger, then finish the
compiler port, and finally port Linux.
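For a concrete flavor of that build order, a single-person port might run
roughly the command sequence below.  The target triple
`myrisc-unknown-elf`, the source directory names, and the install prefix
are all made up for illustration; the option names are the standard
GNU-toolchain configure options:

```
# Assembler and linker first
mkdir build-binutils && cd build-binutils
../binutils-src/configure --target=myrisc-unknown-elf --prefix=/opt/cross
make all install
cd ..

# Then a C-only compiler, built against the new binutils; --with-newlib
# and --without-headers let it build before a C library exists
mkdir build-gcc && cd build-gcc
../gcc-src/configure --target=myrisc-unknown-elf --prefix=/opt/cross \
    --enable-languages=c --with-newlib --without-headers
make all-gcc install-gcc
cd ..

# newlib for I/O on the simulator or hardware, then the rest of gcc
mkdir build-newlib && cd build-newlib
../newlib-src/configure --target=myrisc-unknown-elf --prefix=/opt/cross
make all install
```

The simulator, gdb, and the Linux port then build on top of this base in
the order described above.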

If you intend to have the port contributed to the FSF, be sure you start
your paperwork early.  If you are being paid for the work, you will need
signatures from the appropriate corporate officers to verify that you
are legally allowed to contribute the code.  If you are doing this in
your spare time, make sure you know what your legal status is for code
that you write.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Andrija Radicevic
Sent: Wednesday, December 14, 2005 6:31 PM
To: [EMAIL PROTECTED]
Subject: porting gcc/binutils

Hi,

I'm trying to port gcc and binutils to a new target and I hoped to find
a brief procedure on that matter on the net, but was unsuccessful.  OK,
the GCC Internals manual is quite a resourceful document and one can
learn a lot by examining the source tree, but it would be very helpful
if there were a brief procedure description (HOWTO): what to do first
(port gas, I guess) and what to do next (create your directories, write
ISA files, machine descriptions, COFF/ELF generation, etc.).
I'd be really grateful if someone could help me out.

best regards

Andrija Radicevic





RE: porting gcc/binutils

2005-12-20 Thread Meissner, Michael
The original intention was that CGEN would eventually be able to generate the 
MD file for GCC.  When I last used CGEN 2 years ago, it was not able to do that 
at the time, and I suspect the problem is very complex for real machines, 
because often times you have to have various tweaks that don't necessarily fit 
in the CGEN framework (errata, timing changes, etc.).

In terms of paperwork, if a company does not distribute GNU code, it does not 
have to make the changes available (and if it does distribute the compiler, it 
only has to make the changes available to the people it distributes the 
compiler binaries to).  Obviously it is best if the code is contributed 
back to the FSF, but there are machine ports out there that haven't been 
contributed for various reasons.

-Original Message-
From: Andrija Radičević [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 20, 2005 5:43 PM
To: Meissner, Michael
Cc: [EMAIL PROTECTED]
Subject: RE: porting gcc/binutils

Hi Michael,

first, thanks for your detailed instructions



> If your target is a regular target like a RISC platform, the CGEN system
> can be used to simplify building the instruction tables:
> http://sourceware.org/cgen/
> 


I have already stumbled over cgen on the net and skimmed the manual.  I
have noticed that it uses RTL CPU descriptions; I hope this code can be
reused for the gcc machine description file.



> If you intend to have the port contributed to the FSF, be sure you start
> your paperwork early.  If you are being paid for the work, you will need
> signatures from the appropriate corporate officers to verify that you
> are legally allowed to contribute the code.  If you are doing this in
> your spare time, make sure you know what your legal status is for code
> that you write.
> 


I'd be happy to contribute to the FSF, so thanks for reminding me about
the legal stuff.  But, since all the tools are under the GPL, shouldn't
the company be obliged to make the code public, i.e. fall under the GPL
automatically?


Andrija





RE: GCC 4.1.0 Released

2006-03-07 Thread Meissner, Michael
When -mtune=generic was added, it was expected that it would go into the
GCC 4.2 release, since it clearly missed the 4.1 window for new
features.  As desirable as the new behavior is for both AMD and Intel, I
feel, like Mark, that it should wait for GCC 4.2, since it clearly is a
new feature.  However, if it does go in, it will be for the good, but
I'm not pressing for it.

Note, in this case, I am not officially speaking for AMD (though I have
taken part in discussions on our side about adding the generic tuning
feature).

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Mark Mitchell
Sent: Wednesday, March 01, 2006 8:22 PM
To: H. J. Lu
Cc: Steven Bosscher; [EMAIL PROTECTED]; Richard Guenther;
gcc@gcc.gnu.org
Subject: Re: GCC 4.1.0 Released

H. J. Lu wrote:

> You are comparing apples with oranges. If a user uses -O2, he/she will
> see much more than that.

We can argue about that, but I don't think so.  I'm comparing the
performance a user can achieve without the patch with the performance
they can achieve with the patch.  On all chips, for all time, users have
been expected to specify
their target CPU in order to get good performance.  It's swell that GCC
4.2 will work better by default on IA32, but that's not a compelling
argument for a backport.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713




RE: Crazy ICE from gcc 4.1.0

2006-03-10 Thread Meissner, Michael
I suspect it isn't matching pattern #2, because it couldn't get a QI
register, and instead it falls back to the general case of moving to a
normal register.  I believe the gcc_assert should contain a check for
CONST_INT as well as a QI register or memory.
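Michael's suggested adjustment might look like the fragment below.  This
is an untested sketch against the pattern quoted further down, not a
verified patch; whether letting a CONST_INT reach the TYPE_IMOVX arm is
actually correct would still need checking against how the type
attribute is computed:

```
case TYPE_IMOVX:
  /* Sketch only: also accept immediates, so (const_int 128)
     does not trip the assertion.  */
  gcc_assert (ANY_QI_REG_P (operands[1])
              || GET_CODE (operands[1]) == CONST_INT
              || GET_CODE (operands[1]) == MEM);
  return "movz{bl|x}\t{%1, %k0|%k0, %1}";
```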

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Alan Lehotsky
Sent: Thursday, March 09, 2006 1:05 PM
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Subject: Crazy ICE from gcc 4.1.0

I've built a generic 4.1.0 for RH7.3 x86 linux (I did a make bootstrap)

Compiling a rather large file, I get 

tmp.f_00.cxx:26432: error: unrecognizable insn:
(insn 173 172 174 9 (set (reg:QI 122)
(const_int 128 [0x80])) -1 (nil)
(nil))
tmp.f_00.cxx:26432: internal compiler error: in extract_insn, at
recog.c:2020


Which looks insane, because there's a perfectly good define_insn (cf.
"*movqi_1" in i386.md).  I'm trying to reduce this to a reasonably sized
test case (and I'm going to try debugging this in the recognizer), but I
can't see why this instruction isn't matching the 2nd constraint
alternative and just producing a "movb r,#128".


(define_insn "*movqi_1"
  [(set (match_operand:QI 0 "nonimmediate_operand" "=q,q ,q ,r,r ,?r,m")
        (match_operand:QI 1 "general_operand"      "q,qn,qm,q,rn,qm,qn"))]
  "GET_CODE (operands[0]) != MEM || GET_CODE (operands[1]) != MEM"
{
  switch (get_attr_type (insn))
    {
    case TYPE_IMOVX:
      gcc_assert (ANY_QI_REG_P (operands[1])
                  || GET_CODE (operands[1]) == MEM);
      return "movz{bl|x}\t{%1, %k0|%k0, %1}";
    default:
      if (get_attr_mode (insn) == MODE_SI)
        return "mov{l}\t{%k1, %k0|%k0, %k1}";
      else
        return "mov{b}\t{%1, %0|%0, %1}";
    }
}





RE: Intermixing powerpc-eabi and powerpc-linux C code

2006-06-23 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of
> Ron McCall
> Sent: Thursday, June 01, 2006 2:33 PM
> To: gcc@gcc.gnu.org
> Subject: Intermixing powerpc-eabi and powerpc-linux C code
> 
> Hi!
> 
> Does anyone happen to know if it is possible to link
> (and run) C code compiled with a powerpc-eabi targeted
> gcc with C code compiled with a powerpc-linux targeted
> gcc?  The resulting program would be run on a PowerPC
> Linux system (ELDK 4.0).

When I last played with the PowerPC many years ago, the main differences
between Linux and eabi were some details that you may or may not run
into (note these are from memory, so you probably need to check what the
current reality is):
1) eabi had different stack alignments than Linux;
2) eabi uses 2 small data registers (r2, r13) and Linux only 1 (r13?);
3) there are eabi relocations not officially in Linux and vice versa,
but the GNU linker should support any relocations the compiler uses;
4) eabi can be little endian, Linux is only big endian;
5) different system libraries were linked in by default.

--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719





RE: Fortran Compiler

2006-06-26 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On 
> Behalf Of hector riojas roldan
> Sent: Friday, June 23, 2006 5:40 PM
> To: gcc@gcc.gnu.org
> Subject: Fortran Compiler
> 
> Hello, I would like to know if there is a Fortran compiler
> that runs on AMD 64 bits.  I have installed SUSE 10.1 Linux on
> my computer; I would really appreciate all your help.  I heard
> yours also has C and C++.
> Thank you very much, I write you from Argentina, héctor Riojas Roldan

The GNU compiler has a Fortran compiler.  I believe under SUSE Linux this is in 
the gcc-fortran package that you can install with YaST.




RE: does gcc support multiple sizes, or not?

2006-08-15 Thread Meissner, Michael
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf
Of
> Mark Mitchell
> Sent: Monday, August 14, 2006 12:50 PM
> To: DJ Delorie
> Cc: [EMAIL PROTECTED]; gcc@gcc.gnu.org
> Subject: Re: does gcc support multiple sizes, or not?
> 
> DJ Delorie wrote:
> >> And back to my original answer: it's up to each language to decide
> >> that.
> >
> > Hence my original question: is it legal or not?  What did the C++
> > developers decide?
> 
> The C++ standard implies that for all pointer-to-object types have the
> same size and that all pointer-to-function types have the same size.
> (Technically, it doesn't say that that; it says that you can convert
T*
> -> U* -> T* and get the original value.)  However, nothing in the
> standard says that pointer-to-object types must have the same size as
> pointer-to-function types.

The C standard says that all pointers to structures and unions must have
the same size and format as each other, since otherwise declaring
pointers to structure tags that aren't declared in this module would not
be compatible with the same declaration in another module where you do
declare the structure tag.  When the ANSI C-89 (and later ISO C-90)
standard came out, I was working on a C compiler for a machine with
different flavors of pointers, and so I was very aware of the ins and
outs.  Pointers to functions can be a different size than pointers to
data, which is one of the reasons that in C-89 you can't assign a
function pointer to a void *.  Because of the rule for functions with no
prototypes (since deprecated in C-99), all function pointers must be the
same size as each other.
 
> In theory, I believe that G++ should permit the sizes to be different.
> However, as far as I know, none of the G++ developers considered that
> possibility, which probably means that we have made the assumption
that
> they are all the same size at some points.  I would consider places
> where that assumption is made to be bugs in the front end.

I think having pointers be the same size is ingrained in the whole
compiler, not just the front ends.  I did a port to a machine (Mitsubishi
D10V) that had different flavors of pointers, though thankfully they
were the same size (pointers to functions were pointers to 32-bit
instruction words, while pointers to data were byte pointers to 8-bit
bytes).  When I did that compiler (GCC 2-3 time frame), there were many
limitations caused by this, including the prohibition against
trampolines.

--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719




RE: GCC 4.3.0 Status Report (2007-09-04)

2007-09-13 Thread Meissner, Michael
> -Original Message-
> From: Mark Mitchell [mailto:[EMAIL PROTECTED]
> Sent: Thursday, September 13, 2007 2:37 PM
> To: Meissner, Michael; Mark Mitchell; GCC
> Subject: Re: GCC 4.3.0 Status Report (2007-09-04)
> 
> Michael Meissner wrote:
> 
> > One patch that got dropped on the floor was my patch to remove the
> > dependency in the back ends of the way arguments are encoded, so
> > that eventually for LTO we can switch to using a vector instead of
> > a linked list.
> 
> I think that could still go into 4.3, since it's already largely been
> reviewed.  But, of course, we do need to make sure all the targets
> work.

I didn't hear back from you, so I checked in the machine-independent and
i386 parts of my SSE5 patch.  Now, on to making the various ports still
work with the change.

--
Michael Meissner
AMD, MS 83-29
90 Central Street
Boxborough, MA 01719