Re: i370 port - constructing compile script

2009-11-13 Thread Paul Edwards

Ok, now I have some results from the auto-compile-script-generation.

I got it to work, but it required some manual corrections.

First of all, I got link errors, because sched-ebb etc were trying
to call various functions, but those functions were not being
compiled in because INSN_SCHEDULING was not defined
(that's my quick analysis, anyway).  So I just grepped those
files out of the "source list".

Next, a stack of libiberty files were not compiled - strcasecmp,
vasprintf, asprintf, getpagesize, strdup.  I don't know why this
would be the case, because e.g. HAVE_STRCASECMP is
not defined.  Anyway, I added them to the source list manually,
and with a script, awk and m4, I was able to produce my
traditional compile script (which is a stepping stone for doing
the same thing on MVS).

Oh, one other change I made - I normally define PREFIX in a
common header file.  However, this caused a conflict between
prefix.c and regex.c which both try to use this keyword.  It
would be good if this define was made unique within the
source base.  I realise there are different ways around this,
but it would still be good to be unique.  For now I just updated
prefix.c to use "" as a default prefix if none is provided.  That's
neater than any immediate alternative I can think of.
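
To illustrate, the change amounts to something like this (a rough
sketch of the fallback, not the exact patch):

/* Sketch: default PREFIX to an empty string when no definition is
   supplied on the compile line or in a common header.  */
#ifndef PREFIX
#define PREFIX ""
#endif

static const char *std_prefix = PREFIX;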

But anyway, the short story is that things are looking great,
and it is looking like I have managed to slot into the existing
build process with fairly minimal intrusive code, which bodes
well for a future GCC 4 port attempt.  :-)  The remaining work
I know of doesn't require any more intrusive code.

BFN.  Paul.



Re: i370 port - constructing compile script

2009-11-13 Thread Ulrich Weigand
Paul Edwards wrote:

> First of all, I got link errors, because sched-ebb etc were trying
> to call various functions, but those functions were not being
> compiled in because INSN_SCHEDULING was not defined
> (that's my quick analysis, anyway).  So I just grepped those
> files out of the "source list".

This is apparently a bug in the 3.4 version of sched-ebb.c.  This
whole file should be inside #ifdef INSN_SCHEDULING, just like the
other sched-*.c files.  This is fixed in current GCC.
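
In other words, the fix amounts to a guard along these lines (an
illustrative sketch, not the actual upstream patch):

/* sched-ebb.c, schematically: with INSN_SCHEDULING undefined the file
   compiles to an empty translation unit, so nothing references the
   missing scheduler support routines.  */
#ifdef INSN_SCHEDULING

void
schedule_ebbs (void)
{
  /* extended-basic-block scheduler implementation lives here */
}

#endif /* INSN_SCHEDULING */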

> Next, a stack of libiberty files were not compiled - strcasecmp,
> vasprintf, asprintf, getpagesize, strdup.  I don't know why this
> would be the case, because e.g. HAVE_STRCASECMP is
> not defined.  Anyway, I added them to the source list manually,
> and with a script, awk and m4, I was able to produce my
> traditional compile script (which is a stepping stone for doing
> the same thing on MVS).

The libiberty configure process attempts to detect which functions
need to be built via link tests by default.  As you don't have a
cross-linker, something may be going wrong here.  As an alternative,
you can hard-code which functions to use in libiberty's configure.ac.

> Oh, one other change I made - I normally define PREFIX in a
> common header file.  However, this caused a conflict between
> prefix.c and regex.c which both try to use this keyword.  It
> would be good if this define was made unique within the
> source base.  I realise there are different ways around this,
> but it would still be good to be unique.  For now I just updated
> prefix.c to use "" as a default prefix if none is provided.  That's
> neater than any immediate alternative I can think of.

Why would you define this by hand?  The usual make process will
define PREFIX while building prefix.c, using the appropriate
value determined at configure time ...

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  ulrich.weig...@de.ibm.com


Re: i370 port - constructing compile script

2009-11-13 Thread Ulrich Weigand
Paul Edwards:

> 1. I think my unixio.h, which has a stack of POSIX functions
> that need to be there (mkdir, pwait, open, fileno etc), needs to
> be considered "honorary ansi" (after all, so much code assumes
> that they exist) and get included in ansidecl, with unixio.h
> living in include, and unixio.c living in libiberty.  Does that
> sound reasonable?

Well, it's sort of the whole point of libiberty to provide
functions that are not available on certain hosts, so that
the rest of GCC can be simplified by assuming they're always
there.  So in principle I guess this should be fine.
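
Something like the following is presumably what you have in mind (a
hypothetical sketch only - the names are the POSIX calls you mention,
the prototypes are illustrative and not taken from your patch):

/* unixio.h-style compatibility header, hypothetical sketch.  */
#ifndef UNIXIO_H
#define UNIXIO_H

#include <stdio.h>

extern int mkdir (const char *path, int mode);
extern int open (const char *path, int flags, ...);
extern int fileno (FILE *stream);
extern int pwait (int pid, int *status, int flags);

#endif /* UNIXIO_H */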

> What would be really good is if flags.h and toplev.c had a 
> consecutive block of flags, so that even if my few lines of
> intrusive code aren't accepted, it's at least easy for me to
> mask out an entire block.  At the moment I have to look
> for a few largish chunks of flags to mask out.

Note that with current GCC versions, all these flag global
variables are defined by C source code that is automatically
generated from various option parameter files.  This should
make it simpler to change this e.g. to use a struct instead
of many global variables ...
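
Purely as an illustration (this is not what any GCC version actually
generates), the generated definitions could group everything so that a
port can mask out or replace the whole block at once:

/* Illustrative only: option flags collected in one struct instead of
   many separate globals.  */
struct gcc_flags
{
  int flag_unroll_loops;
  int flag_schedule_insns;
  int flag_pic;
};

struct gcc_flags flags = { 0, 0, 0 };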

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  ulrich.weig...@de.ibm.com


Re: i370 port - constructing compile script

2009-11-13 Thread Paul Edwards

> > Next, a stack of libiberty files were not compiled - strcasecmp,
> > vasprintf, asprintf, getpagesize, strdup.  I don't know why this
> > would be the case, because e.g. HAVE_STRCASECMP is
> > not defined.  Anyway, I added them to the source list manually,
> > and with a script, awk and m4, I was able to produce my
> > traditional compile script (which is a stepping stone for doing
> > the same thing on MVS).


> The libiberty configure process attempts to detect which functions
> need to be built via link tests by default.  As you don't have a
> cross-linker, something may be going wrong here.  As an alternative,
> you can hard-code which functions to use in libiberty's configure.ac.


The thing is, I already know it has detected that I don't have
strcasecmp.  That's why it doesn't have HAVE_STRCASECMP
defined in the config.h.  You are right that I don't have a linker,
but the compile with error-on-no-prototype is working fine - I
can see the result in config.h.


> > Oh, one other change I made - I normally define PREFIX in a
> > common header file.  However, this caused a conflict between
> > prefix.c and regex.c which both try to use this keyword.  It
> > would be good if this define was made unique within the
> > source base.  I realise there are different ways around this,
> > but it would still be good to be unique.  For now I just updated
> > prefix.c to use "" as a default prefix if none is provided.  That's
> > neater than any immediate alternative I can think of.


> Why would you define this by hand?  The usual make process will
> define PREFIX while building prefix.c, using the appropriate
> value determined at configure time ...


Because when my assemble and compile jobs start running on
MVS, I would first of all need to put in a special define for that
in the compile step for prefix - the only exception in fact.  Secondly,
I am running close to the 100-character limit of the PARM
statement already, with the things I was forced to add:

//ST2CMP   PROC GCCPREF='GCC',MEMBER='',
// PDPPREF='PDPCLIB',
// COS1='-Os -S -ansi -pedantic-errors -remap -DHAVE_CONFIG_H',
// COS2='-DIN_GCC -DPUREISO -o dd:out -'

Having another define, just to define an empty string, seems very
ugly indeed, even assuming it comes in under 100 characters.

By the way - that previous discussion we had about the potential
for the MVS version to one day be able to do a system().  Even
if it works for MVS eventually, which it probably will, it won't
work for MUSIC/SP in batch.  It's tragic, because I wanted to
use exactly that to issue a "/file" for dynamic file allocation
similar to how the CMS port does.  I only have one other
option - maybe the DYNALLOC call will work under MUSIC/SP,
which would be nicer than doing a "/file" anyway.  I will be trying
that in the days ahead, but regardless, I need gcc to be a
single executable on that environment if I want to run in batch.
And yes, I want to run my compiles in batch!  :-)

BFN.  Paul.



Re: i370 port - constructing compile script

2009-11-13 Thread Ulrich Weigand
Paul Edwards wrote:

> The thing is, I already know it has detected that I don't have
> strcasecmp.  That's why it doesn't have HAVE_STRCASECMP
> defined in the config.h.  You are right that I don't have a linker,
> but the compile with error-on-no-prototype is working fine - I
> can see the result in config.h.

Well, the configure process should result in the variable LIBOBJS
in the generated libiberty Makefile being set to a list of objects
containing implementations of the replacement system routines.

This gets set during the macro call
  AC_REPLACE_FUNCS($funcs)
in configure.ac, which gets replaced by the following code
in configure (GCC 3.4):

for ac_func in $funcs
do
as_ac_var=`echo "ac_cv_func_$ac_func" | $as_tr_sh`
echo "$as_me:$LINENO: checking for $ac_func" >&5
echo $ECHO_N "checking for $ac_func... $ECHO_C" >&6
[...]
if test `eval echo '${'$as_ac_var'}'` = yes; then
  cat >>confdefs.h <<_ACEOF
#define `echo "HAVE_$ac_func" | $as_tr_cpp` 1
_ACEOF

else
  LIBOBJS="$LIBOBJS $ac_func.$ac_objext"
fi
done

So if you do not have HAVE_STRCASECMP in config.h, you should
have been getting strcasecmp.o in LIBOBJS ...
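
The replacement object itself is just a portable fallback along these
lines (a sketch, not the actual libiberty source):

/* Portable strcasecmp fallback, schematic version of what libiberty
   builds when the host C library lacks one.  */
#include <ctype.h>

int
strcasecmp (const char *s1, const char *s2)
{
  while (*s1 && *s2
         && tolower ((unsigned char) *s1) == tolower ((unsigned char) *s2))
    {
      ++s1;
      ++s2;
    }
  return tolower ((unsigned char) *s1) - tolower ((unsigned char) *s2);
}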

> > Why would you define this by hand?  The usual make process will
> > define PREFIX while building prefix.c, using the appropriate
> > value determined at configure time ...
> 
> Because when my assemble and compile jobs start running on
> MVS, I would first of all need to put in a special define for that
> in the compile step for prefix - the only exception in fact.  Secondly,
> I am running close to the 100-character limit of the PARM
> statement already, with the things I was forced to add:
> 
> //ST2CMP   PROC GCCPREF='GCC',MEMBER='',
> // PDPPREF='PDPCLIB',
> // COS1='-Os -S -ansi -pedantic-errors -remap -DHAVE_CONFIG_H',
> // COS2='-DIN_GCC -DPUREISO -o dd:out -'
> 
> Having another define, just to define an empty string, seems very
> ugly indeed, even assuming it comes in under 100 characters.

Ah, OK.

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  ulrich.weig...@de.ibm.com


Re: more graphite merges before gcc 4.5 branch?

2009-11-13 Thread Jack Howarth
On Thu, Nov 12, 2009 at 01:23:30PM -0500, David Edelsohn wrote:
> 
> Yes, more Graphite merges are planned.  The VTA merge broke Graphite
> and we are waiting for Alexandre's recent VTA fixes for Graphite to be
> updated based on the initial feedback from Sebastian and merged into
> the trunk.  Then the patches from Graphite can be merged.
> 
> Please keep in mind that Graphite is experimental and not a panacea.
> If you and your colleagues want Graphite to be able to apply more
> high-level loop transformations and want it to be more effective with
> better tuning, please help implement the optimizations.
> 
> Thanks, David

David,
Thanks for the information. Since FSF gcc on darwin is unlikely
to gain LTO any time soon (unless it comes in the form of DragonEgg
http://dragonegg.llvm.org/), I was hoping that we might realize
some performance improvements via graphite (or at least see that
none of the Polyhedron 2005 benchmarks degrade under its
optimizations) in gcc 4.5.
  Jack


Re: (C++) mangling vector types

2009-11-13 Thread Gabriel Dos Reis
On Thu, Nov 12, 2009 at 5:57 PM, Mark Mitchell  wrote:
> Jason Merrill wrote:
>
>> It isn't such a corner case, unfortunately; any code that tries to
>> overload on multiple vector sizes (i.e. MMX and SSE vectors) will break.
>>  See bug 12909 and its duplicates.  This is becoming more of a problem
>> with the advent of the Intel AVX extension.
>
> This still seems a lot of complexity to me, and I still think inserting
> a new version between 2 and 3 is odd.  If we need the complexity, I
> think we have to introduce a new orthogonal option for vector mangling,
> independent of the ABI version, but implied by ABI version > 4.

How is mangling orthogonal to the ABI?

-- Gaby


Re: (C++) mangling vector types

2009-11-13 Thread Mark Mitchell
Gabriel Dos Reis wrote:

>> This still seems a lot of complexity to me, and I still think inserting
>> a new version between 2 and 3 is odd.  If we need the complexity, I
>> think we have to introduce a new orthogonal option for vector mangling,
>> independent of the ABI version, but implied by ABI version > 4.
> 
> How is mangling orthogonal to the ABI?

It's certainly possible to have ABIv2-with-vector-change and
ABIv2-without.  I never claimed that they were the same ABI.

-- 
Mark Mitchell
CodeSourcery
m...@codesourcery.com
(650) 331-3385 x713


-Warray-bounds false negative

2009-11-13 Thread Matt

Hello,

I recently came across a false negative in GCC's detection of array bounds 
violation. At first, I thought the other tool (PC-Lint) was having false 
positive, but it turns out to be correct. The false negative occurs in GCC 
4.3, 4.4.1, and latest trunk (4.5). I'm curious to understand how exactly 
the detection breaks down, as I think it may affect if/how the loop in 
question is optimized.


Here is the code:

#include <stdbool.h>   /* for false; needed for this to compile as C */

int main(int argc, char** argv)
{
    unsigned char data[8];
    int hyphen = 0, i = 0;
    char *option = *argv;

    for(i = 19; i < 36; ++i) {
        if(option[i] == '-') {
            if(hyphen) return false;
            ++hyphen;
            continue;
        }

        if(!(option[i] >= '0' && option[i] <= '9')
           && !(option[i] >= 'A' && option[i] <= 'F')
           && !(option[i] >= 'a' && option[i] <= 'f')) {
            return false;
        }

        /* (i - hyphen)/2 is at least 9 here, so every store is past
           the end of data[8] */
        data[(i-hyphen)/2] = 0;
    }

    return 0;
}

When i is 35 and hyphen is 0 (and in many other cases), data[] will be 
overflowed by quite a bit. Where does the breakdown in array bounds 
detection occur, and why? Once I understand, and if the fix is simple 
enough, I can try to fix the bug and supply a patch.


Thanks!

--
tangled strands of DNA explain the way that I behave.
http://www.clock.org/~matt


Re: -Warray-bounds false negative

2009-11-13 Thread Andrew Pinski
On Fri, Nov 13, 2009 at 1:09 PM, Matt  wrote:
> Hello,
>
> I recently came across a false negative in GCC's detection of array bounds
> violation. At first, I thought the other tool (PC-Lint) was having false
> positive, but it turns out to be correct. The false negative occurs in GCC
> 4.3, 4.4.1, and latest trunk (4.5). I'm curious to understand how exactly
> the detection breaks down, as I think it may affect if/how the loop in
> question is optimized.

Well, in this case all of the code that is considered dead is removed
before the warning would have been emitted.
If I change it so that data is read from (instead of just written to),
the trunk warns about this code:
t.c:21:20: warning: array subscript is above array bounds

I changed the last return to be:
   return data[2];

Thanks,
Andrew Pinski


Re: -Warray-bounds false negative

2009-11-13 Thread Matt

On Fri, 13 Nov 2009, Andrew Pinski wrote:


> On Fri, Nov 13, 2009 at 1:09 PM, Matt  wrote:

> > Hello,
> >
> > I recently came across a false negative in GCC's detection of array bounds
> > violation. At first, I thought the other tool (PC-Lint) was having false
> > positive, but it turns out to be correct. The false negative occurs in GCC
> > 4.3, 4.4.1, and latest trunk (4.5). I'm curious to understand how exactly
> > the detection breaks down, as I think it may affect if/how the loop in
> > question is optimized.


> Well, in this case all of the code that is considered dead is removed
> before the warning would have been emitted.
> If I change it so that data is read from (instead of just written to),
> the trunk warns about this code:
> t.c:21:20: warning: array subscript is above array bounds
>
> I changed the last return to be:
>   return data[2];


d'oh! Next time I'll look at the objdump output first.

Thanks for the quick explanation!

--
tangled strands of DNA explain the way that I behave.
http://www.clock.org/~matt