Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-01-17 Thread Paul Eggert
Thorsten Glaser <[EMAIL PROTECTED]> writes:

> Paul Eggert dixit:
>
>>  […] gcc -O2 makes no promises without
>>  -fwrapv.
>
> gcc -O1 -fwrapv doesn't even work correctly for gcc3,
> and gcc2 and gcc <3.3(?) don't even have -fwrapv:
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=30477

The latter would not be a problem, since Autoconf would try -fwrapv on
a test module before using it to compile a real module.

Do you have a test case illustrating the gcc3 bug?  If so,
perhaps Autoconf could include that in its test of -fwrapv.


cross mode -fstack-protector ?

2007-01-17 Thread Balint Cristian
 Is it a bug?

 Maybe I still don't understand how this is emitted, but does anyone know why a
cross-compiler and a native compiler will act differently when using the
-fstack-protector option?

e.g. nm on the same object compiled with:
 cross:
  U __stack_chk_fail
  U __stack_chk_guard
 native:
 U __stack_chk_guard

 Somehow the cross one still emits an external reference to __stack_chk_fail ...
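
For illustration, a minimal test case of the kind that shows the difference
(the file name, function name and buffer size here are invented, not taken
from the original report):

/* ssp-test.c: compile with each compiler using -fstack-protector -c,
   then compare "nm ssp-test.o".  A local char buffer of this size is
   normally enough for -fstack-protector to instrument the function.  */
#include <string.h>

void fill (const char *src)
{
  char buf[64];        /* large enough to trigger the guard */
  strcpy (buf, src);   /* deliberately unchecked copy */
}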

Compilers are built with:

 Using built-in specs.
 Target: sparc-redhat-linux

 Configured cross compiler:

 ../configure --prefix=/usr --enable-shared
--enable-threads=posix --enable-tls --disable-libunwind-exceptions
--enable-languages=c,c++ --disable-libgomp --enable-libssp
--with-system-zlib --enable-nls --disable-checking
--enable-__cxa_atexit --disable-libunwind-exceptions
--with-long-double-128 --with-cpu=v7 --host=x86_64-redhat-linux
--build=x86_64-redhat-linux --target=sparc-redhat-linux
Thread model: posix

 Configured native compiler:

Target: sparc64-redhat-linux
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--enable-checking=release --with-system-zlib --enable-__cxa_atexit
--disable-libunwind-exceptions --enable-libgcj-multifile
--enable-languages=c,c++,objc,obj-c++,java,fortran
--enable-java-awt=gtk --disable-dssi --enable-plugin
--with-java-home=/usr/lib/jvm/java-1.4.2-gcj-1.4.2.0/jre
--host=sparc64-redhat-linux
Thread model: posix

 This somehow causes weird things; basically I am unable to use
-fstack-protector in a distcc setup over cross-compilers: compiles
will fail like this at the final link of the objects:

src/util.c:229: undefined reference to `__stack_chk_guard'


A solution is to drop -fstack-protector, but can a cross compiler really not do it?






Dataflow branch successfully bootstrapped on s390 and s390x

2007-01-17 Thread Andreas Krebbel
Hi,

I could successfully bootstrap the dataflow branch on s390 and s390x.

Configure options:
--enable-shared --with-system-zlib --enable-threads=posix 
--enable-__cxa_atexit --enable-checking 
--enable-languages=c,c++,fortran,java,objc

No testsuite regressions occurred comparing dataflow branch rev. 120826 on s390
and s390x with mainline rev. 120219.

Bye,

-Andreas-


Re: cross mode -fstack-protector ?

2007-01-17 Thread Jakub Jelinek
On Wed, Jan 17, 2007 at 11:15:55AM +0200, Balint Cristian wrote:
>  Is it a bug?
> 
>  Maybe I still don't understand how this is emitted, but does anyone know why a
> cross-compiler and a native compiler will act differently when using the
> -fstack-protector option?
> 
> e.g. nm on the same object compiled with:
>  cross:
>   U __stack_chk_fail
>   U __stack_chk_guard
>  native:
>  U __stack_chk_guard
> 
>  Somehow the cross one still emits an external reference to __stack_chk_fail ...

Badly configured cross?
The configure check in question is:

case "$target" in
  *-*-linux*)
AC_CACHE_CHECK(__stack_chk_fail in target GNU C library,
  gcc_cv_libc_provides_ssp,
  [gcc_cv_libc_provides_ssp=no
  if test x$host != x$target || test "x$TARGET_SYSTEM_ROOT" != x; then
if test "x$with_sysroot" = x; then
  glibc_header_dir="${exec_prefix}/${target_noncanonical}/sys-include"
elif test "x$with_sysroot" = xyes; then
  
glibc_header_dir="${exec_prefix}/${target_noncanonical}/sys-root/usr/include"
else
  glibc_header_dir="${with_sysroot}/usr/include"
fi
  else
glibc_header_dir=/usr/include
  fi
  # glibc 2.4 and later provides __stack_chk_fail and
  # either __stack_chk_guard, or TLS access to stack guard canary.
  if test -f $glibc_header_dir/features.h \
     && $EGREP '^@<:@ @:>@*#[ ]*define[ ]+__GNU_LIBRARY__[ ]+([1-9][0-9]|[6-9])' \
        $glibc_header_dir/features.h > /dev/null; then
    if $EGREP '^@<:@ @:>@*#[ ]*define[ ]+__GLIBC__[ ]+([1-9][0-9]|[3-9])' \
       $glibc_header_dir/features.h > /dev/null; then
      gcc_cv_libc_provides_ssp=yes
    elif $EGREP '^@<:@ @:>@*#[ ]*define[ ]+__GLIBC__[ ]+2' \
         $glibc_header_dir/features.h > /dev/null \
         && $EGREP '^@<:@ @:>@*#[ ]*define[ ]+__GLIBC_MINOR__[ ]+([1-9][0-9]|[4-9])' \
         $glibc_header_dir/features.h > /dev/null; then
      gcc_cv_libc_provides_ssp=yes
    fi
  fi]) ;;
  *) gcc_cv_libc_provides_ssp=no ;;
esac

so if your cross compiler is not configured --with-sysroot, you should have your
target glibc headers in /usr/sparc-redhat-linux/sys-include
before running configure.

Jakub


Re: Creating a variable declaration of custom type.

2007-01-17 Thread Revital1 Eres


[EMAIL PROTECTED] wrote on 16/01/2007 17:45:59:

> I succeeded to do it as follows:
>
> tree type_decl = lookup_name(get_identifier("MyType"));
> tree type_ptr = build_pointer_type(TREE_TYPE(type_decl));
> tree var_decl = build(VAR_DECL, get_identifier("t"), type_ptr);
> pushdecl(var_decl);
>
> It may not be a perfect solution but for now it works.
>
> On 1/16/07, Ferad Zyulkyarov <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > > Best way to figure this out is to write a simple 5 line testcase that
> > > defines a structure type and also defines a pointer to that type, and
> > > then step through gcc to see what it does.  Try putting breakpoints in
> > > finish_struct and build_pointer_type.
> >
> > I tried with the advised test case but again I could not find how to
> > reference to the already declared type "MyType".
> >
> > As it logically should be, there should be a way to get a reference to
> > the declared type i.e.
> > tree type_decl = lookup_name("MyType");
> > tree type_ptr = build_pointer_type(type_decl->type_node);
> > tree var_decl = build(VAR_DECL, get_identifier("t"), type_ptr);
> >
> > I tried similar code to the above, but I don't know how to retrieve
> > the "type" from the type declaration. Any help, ideas are highly
> > appreciated.
> >

BTW - I think you can retrieve a reference to an existing type by
traversing the
type_hash hash in tree.c.

Revital


> > Ferad Zyulkyarov
> >
>
>
> --
> Ferad Zyulkyarov
> Barcelona Supercomputing Center



char alignment on ARM

2007-01-17 Thread Inder

Hi All
I have similar question as for arm
http://gcc.gnu.org/ml/gcc/2007-01/msg00691.html
consider the following program.
e.g..
- align.c -
int main()
{
int bc;
char a[6];
int ac;

bc = 0x;
/* fill with zeros.  */
a[0] = 0x00;
a[1] = 0x01;
a[2] = 0x02;
a[3] = 0x03;
a[4] = 0x04;
a[5] = 0x05;

ac=0x;
make(a);
}
void make(char* a)
{
*(unsigned long*)a = 0x12345678;
}
--- End ---

the local variables for function main are pushed on the stack
as ac (4 bytes) + bc (4 bytes) + a[6] (6 bytes) + 2 bytes padding.
The starting address of the char array is now an
unaligned address and is accessed by the instruction
  strb    r3, [fp, #-26]

which gives a very wrong result when passed to the function make.

Previously we were using an older version of gcc
which actually aligned the address of the char array as well.

Older version of gcc: v2.94
New version: v4.0.1

Is the compiler doing the right thing or is it a bug?
--
Thanks,
Inder


Re: CSE not combining equivalent expressions.

2007-01-17 Thread Mircea Namolaru
> Thanks. Another question I have is that, in this case, will the following
>
> http://gcc.gnu.org/wiki/Sign_Extension_Removal
>
> help in removal of the sign / zero extension ?

First, it seems to me that in your case:

(1) a = a | 1 /* a |= 1 */
(2) a = a | 1 /* a |= 1 */

the expressions "a | 1" in (1) and (2) are different as the "a"
is not the same. So there is nothing to do for CSE.

If the architecture has an instruction that does both the 
store and the zero extension, the zero extension instructions 
become redundant.

The sign extension algorithm is supposed to catch such cases, but
I suspect that in this simple case the regular combine is enough. 

Mircea


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Paubert
On Wed, Jan 17, 2007 at 12:43:40AM +0100, Vincent Lefevre wrote:
> On 2007-01-16 21:27:42 +, Andrew Haley wrote:
> > Ian Lance Taylor writes:
> >  > I suspect that the best fix, in the sense of generating the best
> >  > code, would be to do this at the tree level.  That will give loop
> >  > and VRP optimizations the best chance to eliminate the test for -1.
> >  > Doing it during gimplification would be easy, if perhaps rather
> >  > ugly.  If there are indeed several processors with this oddity,
> >  > then it would even make a certain degree of sense as a
> >  > target-independent option.
> > 
> > x86, x86-64, S/390, as far as I'm aware.
> 
> and PowerPC G4 and G5, where I don't get a crash, but an incorrect
> result (as said on PR#30484).
> 

On PPC, the solution is to use "divo." [1] followed by an unlikely 
conditional branch to out of line code to handle the corner cases.

The question is: what do we do in the case of a divide by zero on PPC?
Are there other architectures that do not trap?

Gabriel

[1] Sadly gcc does not know about the overflow flag and (unless it has
improved greatly since the last time I checked on a small Ada program)
generates bloated and slow code when checking for overflow.  This is not 
specific to the rs6000 backend.


Re: CSE not combining equivalent expressions.

2007-01-17 Thread Richard Kenner
> First, it seems to me that in your case:
> 
> (1) a = a | 1 /* a |= 1 */
> (2) a = a | 1 /* a |= 1 */
> 
> the expressions "a | 1" in (1) and (2) are different as the "a"
> is not the same. So there is nothing to do for CSE.

It's not a CSE issue, but after (1), you know that the low-order bit of
"a" is a one, so that (2) is a no-op.  Note that the similar
a &= ~1;
a &= ~1;

we do catch in combine.

It could also be caught by converting

a = ((a | 1) | 1);

into

a = (a | (1 | 1));



Re: CSE not combining equivalent expressions.

2007-01-17 Thread Mircea Namolaru
[EMAIL PROTECTED] (Richard Kenner) wrote on 17/01/2007 18:04:20:

> > First, it seems to me that in your case:
> > 
> > (1) a = a | 1 /* a |= 1 */
> > (2) a = a | 1 /* a |= 1 */
> > 
> > the expressions "a | 1" in (1) and (2) are different as the "a"
> > is not the same. So there is nothing to do for CSE.
> 
> It's not a CSE issue, but after (1), you know that the low-order bit of
> "a" is a one, so that (2) is a no-op.  Note that the similar
>a &= ~1;
>a &= ~1;
> 
> we do catch in combine.
> 
> It could also be caught by converting
> 
>a = ((a | 1) | 1);
> 
> into
> 
>a = (a | (1 | 1));
> 

Yes, you are right if (1) and (2) are in the same basic block.
But the initial example that started this thread looks like:

a |= 1;
if (*x) ...
a |= 1;

so (1) and (2) are in two separate basic blocks. I think that
in this case combine doesn't work.

Mircea 


Re: CSE not combining equivalent expressions.

2007-01-17 Thread Ramana Radhakrishnan

On 1/17/07, Mircea Namolaru <[EMAIL PROTECTED]> wrote:

[EMAIL PROTECTED] (Richard Kenner) wrote on 17/01/2007 18:04:20:

> > First, it seems to me that in your case:
> >
> > (1) a = a | 1 /* a |= 1 */
> > (2) a = a | 1 /* a |= 1 */
> >
> > the expressions "a | 1" in (1) and (2) are different as the "a"
> > is not the same. So there is nothing to do for CSE.
>
> It's not a CSE issue, but after (1), you know that the low-order bit of
> "a" is a one, so that (2) is a no-op.  Note that the similar
>a &= ~1;
>a &= ~1;
>
> we do catch in combine.
>
> It could also be caught by converting
>
>a = ((a | 1) | 1);
>
> into
>
>a = (a | (1 | 1));
>

Yes, you are right if (1) and (2) are in the same basic block.
But the initial example that started this thread looks like:

a |= 1;
if (*x) ...
a |= 1;

so (1) and (2) are in two separate basic blocks. I think that
in this case combine doesn't work.


combine wouldn't work in this case because it's going to work only
within a basic block.  IIRC, in this case the CSE pass in 3.4.x was
removing (2) and retaining just (1); alas, I don't have the logs handy
now.  Also, this is removed for the case of integers by the CSE pass,
IIRC.  The problem arises only when the type is a char or a short.
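
For reference, a self-contained version of the example under discussion (a
sketch only: the function signature and the branch body are assumptions,
since the original fragment elided them):

unsigned char
f (unsigned char a, int *x)
{
  a |= 1;      /* (1) sets the low bit */
  if (*x)
    *x = 0;    /* the branch puts (1) and (2) in separate basic blocks */
  a |= 1;      /* (2) redundant: the low bit is already known to be set */
  return a;
}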

~ramana



Mircea




--
Ramana Radhakrishnan


Re: Miscompilation of remainder expressions

2007-01-17 Thread Andrew Haley
Ian Lance Taylor writes:
 > Gabriel Dos Reis <[EMAIL PROTECTED]> writes:
 > 
 > > Ian, do you believe something along the line of
 > > 
 > >  # > I mean, could not we generate the following for "%": 
 > >  # >
 > >  # > rem a b := 
 > >  # >   if abs(b) == 1
 > >  # >  return 0
 > >  # >   return  a b
 > >  #
 > >  # On x86 processors that have conditional moves, why not do the equivalent
 > >  # of
 > >  #
 > >  # neg_b = -b;
 > >  # cmov(last result is negative,neg_b,b)
 > >  # __machine_rem(a,b)
 > >  #
 > >  # Then there's no disruption of the pipeline.
 > > 
 > > is workable for the affected targets?
 > 
 > Sure, I think the only real issue is where the code should be
 > inserted.

From a performance/convenience angle, the best place to handle this is
either libc or the kernel.  Either of these can quite easily fix up
the operands when a trap happens, with zero performance degradation of
existing code.  I don't think there's any need for gcc to be altered
to handle this.

Andrew.


Re: char alignment on ARM

2007-01-17 Thread Mike Stump

On Jan 17, 2007, at 5:23 AM, Inder wrote:

void make(char* a) { *(unsigned long*)a = 0x12345678; }


starting address of the char array is now an unaligned
address and is accessed by the instruction

  strb    r3, [fp, #-26]

which gives a very wrong result



Is the compiler doing a right thing or is it a bug?


You asked for char alignment, but your program requires long
alignment; your program is now, and always has been, buggy.  The
manual covers how to fix this if you want to use __attribute__;
otherwise, you can use a union to force any alignment you want.  So,
in your case, yes, the compiler is doing the right thing.
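
For example, either fix might look like this (a minimal sketch; the names are
invented, and the memcpy variant is an extra portable alternative, not
something proposed in this thread):

#include <string.h>

/* 1. Ask for the alignment explicitly (GCC extension).  */
char a_attr[6] __attribute__ ((aligned (4)));

/* 2. Or use a union so the array shares an unsigned long's alignment.  */
union aligned_a
{
  unsigned long force_align;
  char a[6];
};

/* 3. Or sidestep the requirement inside make() by copying byte-wise
      instead of casting the char pointer to unsigned long *.  */
void
make (char *a)
{
  unsigned long v = 0x12345678;
  memcpy (a, &v, sizeof v);
}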


If you were on a processor that handled misaligned data slowly, and  
you saw a general performance drop because of this, I'd encourage you  
to file a bug report as it might be a bug, if the results you see  
apply generally.


You're asking about behavioral differences in compilers that are  
really old.  You increase the odds that you can have these types of  
questions answered here, if you track and test mainline and ask the  
week the behavior changes, if it isn't obvious from glancing at the  
list for the past week.  :-)


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Andrew Haley wrote:

| Ian Lance Taylor writes:
|  > Gabriel Dos Reis <[EMAIL PROTECTED]> writes:
|  >
|  > > Ian, do you believe something along the line of
|  > >
|  > >  # > I mean, could not we generate the following for "%":
|  > >  # >
|  > >  # > rem a b :=
|  > >  # >   if abs(b) == 1
|  > >  # >  return 0
|  > >  # >   return  a b
|  > >  #
|  > >  # On x86 processors that have conditional moves, why not do the equivalent
|  > >  # of
|  > >  #
|  > >  # neg_b = -b;
|  > >  # cmov(last result is negative,neg_b,b)
|  > >  # __machine_rem(a,b)
|  > >  #
|  > >  # Then there's no disruption of the pipeline.
|  > >
|  > > is workable for the affected targets?
|  >
|  > Sure, I think the only real issue is where the code should be
|  > inserted.
|
| From a performance/convenience angle, the best place to handle this is
| either libc or the kernel.

Hmm, that is predicated on assumptions not convenient to users
on targets that are not glibc-based or GNU/Linux-based.

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread David Daney

Andrew Haley wrote:

Ian Lance Taylor writes:
 > Gabriel Dos Reis <[EMAIL PROTECTED]> writes:
 > 
 > > Ian, do you believe something along the line of
 > > 
 > >  # > I mean, could not we generate the following for "%": 
 > >  # >
 > >  # > rem a b := 
 > >  # >   if abs(b) == 1

 > >  # >  return 0
 > >  # >   return  a b
 > >  #
 > >  # On x86 processors that have conditional moves, why not do the equivalent
 > >  # of
 > >  #
 > >  # neg_b = -b;
 > >  # cmov(last result is negative,neg_b,b)
 > >  # __machine_rem(a,b)
 > >  #
 > >  # Then there's no disruption of the pipeline.
 > > 
 > > is workable for the affected targets?
 > 
 > Sure, I think the only real issue is where the code should be

 > inserted.

From a performance/convenience angle, the best place to handle this is
either libc or the kernel.  Either of these can quite easily fix up
the operands when a trap happens, with zero performance degradation of
existing code.  I don't think there's any need for gcc to be altered
to handle this.


That only works if the operation causes a trap.  On x86 this is the 
case, but Andrew Pinski told me on IM that this was not the case for PPC.


David Daney


Re: Miscompilation of remainder expressions

2007-01-17 Thread Andrew Haley
Gabriel Dos Reis writes:
 > On Wed, 17 Jan 2007, Andrew Haley wrote:
 > 
 > |
 > | From a performance/convenience angle, the best place to handle this is
 > | either libc or the kernel.
 > 
 > Hmm, that is predicated on assumptions not convenient to users
 > on targets that are not glibc-based or GNU/Linux-based.

Well, if GNU libc/Linux/whatever can fix this bug in libc or the
kernel, so can anyone else.

"To a man with a hammer, all things look like a nail."  It's very
tempting for us in gcc-land always to fix things in gcc, not because
it's technically the right place but because it's what we control
ourselves.

Andrew.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Andrew Haley
David Daney writes:
 > 
 > That only works if the operation causes a trap.

Which it does in almost all cases.  Let PPC do something different, if
that's what really PPC needs.

Andrew.



Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Andrew Haley wrote:

| Gabriel Dos Reis writes:
|  > On Wed, 17 Jan 2007, Andrew Haley wrote:
|  >
|  > |
|  > | From a performance/convenience angle, the best place to handle this is
|  > | either libc or the kernel.
|  >
|  > Hmm, that is predicated on assumptions not convenient to users
|  > on targets that are not glibc-based or GNU/Linux-based.
|
| Well, if GNU libc/Linux/whatever can fix this bug in libc or the
| kernel, so can anyone else.

(1) people upgrade their OS less frequently than their compilers.
(2) it is not at all obvious that the problem is in the libc or in the
kernel.

| "To a man with a hammer, all things look like a nail."  It's very
| tempting for us in gcc-land always to fix things in gcc, not because
| it's technically the right place but because it's what we control
| ourselves.

well, I'm unclear what your point is here, but certainly GCC is
at fault for generating trapping instructions.
So, we fix the problem in GCC, not because that is what we control
ourselves, but because we failed to generate proper code.

As for the earlier point, GCC certainly provides the necessary support
routines for targets lacking the proper instructions.  This is no
different.

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Andrew Haley
Gabriel Dos Reis writes:
 > On Wed, 17 Jan 2007, Andrew Haley wrote:
 > 
 > | Gabriel Dos Reis writes:
 > |  > On Wed, 17 Jan 2007, Andrew Haley wrote:
 > |  >
 > |  > |
 > |  > | From a performance/convenience angle, the best place to handle this is
 > |  > | either libc or the kernel.
 > |  >
 > |  > Hmm, that is predicated on assumptions not convenient to users
 > |  > on targets that are not glibc-based or GNU/Linux-based.
 > |
 > | Well, if GNU libc/Linux/whatever can fix this bug in libc or the
 > | kernel, so can anyone else.
 > 
 > (1) people upgrade their OS less frequently than their compilers.
 > (2) it is not at all obvious that the problem is in the libc or in the kernel.
 >
 > | "To a man with a hammer, all things look like a nail."  It's very
 > | tempting for us in gcc-land always to fix things in gcc, not because
 > | it's technically the right place but because it's what we control
 > | ourselves.
 > 
 > well, I'm unclear what your point is here, but certainly GCC is
 > at fault for generating trapping instructions.
 > So, we fix the problem in GCC, not because that is what we control
 > ourselves, but because we failed to generate proper code.

It's not a matter of whose fault it is; trying to apportion blame
makes no sense.  You could blame the library for passing on the
signal, or the kernel for passing the signal to the library, or the
compiler for generating the instruction in the first place.

If we decide that the current behaviour is wrong, then we want to
change the behaviour.  If we want to do that, then we want do to so in
the way that has the least cost for most programs and most users.

Andrew.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Michael Veksler

Andrew Haley wrote:

From a performance/convenience angle, the best place to handle this is
either libc or the kernel.  Either of these can quite easily fix up
the operands when a trap happens, with zero performance degradation of
existing code.  I don't think there's any need for gcc to be altered
to handle this.
  
Unfortunately this is only partially correct.  As Vincent Lefèvre has noted
in PR30484, PPC generates no trap but gives an incorrect result:

   "-2147483648 % -1 -> -2147483648"

A signal handler will not help here.

Another problem is -ftrapv. You wouldn't want to kill traps on

   INT_MIN/-1

with -ftrapv, would you?

GCC should be modified such that libc/kernel can distinguish
INT_MIN/-1 from INT_MIN%-1.
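
For reference, a minimal reproducer for the corner case under discussion
(illustrative only; the volatile qualifiers are just to keep the front end
from folding the expression at compile time):

#include <limits.h>
#include <stdio.h>

int
main (void)
{
  volatile int a = INT_MIN;
  volatile int b = -1;
  /* Mathematically the remainder is 0.  On x86 the idiv instruction
     traps (SIGFPE); on PPC, as noted above, the divide does not trap
     and the result comes out as -2147483648.  */
  printf ("%d %% %d = %d\n", a, b, a % b);
  return 0;
}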

--
Michael Veksler
http:///tx.technion.ac.il/~mveksler



Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Andrew Haley wrote:

[...]

|  > | "To a man with a hammer, all things look like a nail."  It's very
|  > | tempting for us in gcc-land always to fix things in gcc, not because
|  > | it's technically the right place but because it's what we control
|  > | ourselves.
|  >
|  > well, I'm unclear what your point is here, but certainly GCC is
|  > at fault for generating trapping instructions.
|  > So, we fix the problem in GCC, not because that is what we control
|  > ourselves, but because we failed to generate proper code.
|
| It's not a matter of whose fault it is; trying to apportion blame
| makes no sense.

we have a communication problem here.  Nobody is trying to apportion
blame.  However, gcc is the tool that generates trapping instruction.
It is unclear why it would be the responsibility of the OS or libc
to fix what GCC has generated in the first place.

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Andrew Haley
Gabriel Dos Reis writes:
 > On Wed, 17 Jan 2007, Andrew Haley wrote:
 > 
 > [...]
 > 
 > |  > | "To a man with a hammer, all things look like a nail."  It's very
 > |  > | tempting for us in gcc-land always to fix things in gcc, not because
 > |  > | it's technically the right place but because it's what we control
 > |  > | ourselves.
 > |  >
 > |  > well, I'm unclear what your point is here, but certainly GCC is
 > |  > at fault for generating trapping instructions.
 > |  > So, we fix the problem in GCC, not because that is what we control
 > |  > |  > ourselves, but because we failed to generate proper code.
 > |
 > | It's not a matter of whose fault it is; trying to apportion blame
 > | makes no sense.
 > 
 > we have a communication problem here.  Nobody is trying to apportion
 > blame.  However, gcc is the tool that generates trapping instruction.
 > |  > It is unclear why it would be the responsibility of the OS or libc
 > to fix what GCC has generated in the first place.

That makes no sense either.

It's an engineering problem.  We have a widget that does the wrong
thing*.  We have several ways to make it do the right thing, only one
of which has no adverse impact on the existing users of the widget.

Andrew.
* (in some people's opinion)


Re: Miscompilation of remainder expressions

2007-01-17 Thread Joe Buck
On Wed, Jan 17, 2007 at 05:48:34PM +, Andrew Haley wrote:
> From a performance/convenience angle, the best place to handle this is
> either libc or the kernel.  Either of these can quite easily fix up
> the operands when a trap happens, with zero performance degradation of
> existing code.  I don't think there's any need for gcc to be altered
> to handle this.

How will the kernel know whether the overflow in the divide instruction
is because the user's source code has a '%' and not a '/'?  We generate
the exact same instruction for i / minus_one(), after all, and in that
case the trap really should be there.

I suppose that the trap handler could try to analyze the code following
the divide instruction; if the quotient result is never used and the
divisor is -1, it could replace the remainder result with zero and return.
But that would be rather hairy, if it is even feasible.  Alternatively,
the divide instruction could be marked somehow, but I have no idea how.




Re: Miscompilation of remainder expressions

2007-01-17 Thread Joe Buck
On Wed, Jan 17, 2007 at 06:03:08PM +, Andrew Haley wrote:
> Gabriel Dos Reis writes:
>  > On Wed, 17 Jan 2007, Andrew Haley wrote:
>  > 
>  > |
>  > | From a performance/convenience angle, the best place to handle this is
>  > | either libc or the kernel.
>  > 
>  > Hmm, that is predicated on assumptions not convenient to users
>  > on targets that are not glibc-based or GNU/Linux-based.
> 
> Well, if GNU libc/Linux/whatever can fix this bug in libc or the
> kernel, so can anyone else.

If GCC winds up having to fix the bug in the compiler itself for PPC,
then everyone could have the option of using a kernel fix or a compiler
fix.  But how are you going to do the kernel fix?  What if the user did
an integer divide and not a modulo?  I suppose you could just say the
result is undefined and patch up the quotient too.




RE: Miscompilation of remainder expressions

2007-01-17 Thread Dave Korn
On 17 January 2007 19:09, Joe Buck wrote:

> On Wed, Jan 17, 2007 at 05:48:34PM +, Andrew Haley wrote:
>> From a performance/convenience angle, the best place to handle this is
>> either libc or the kernel.  Either of these can quite easily fix up
>> the operands when a trap happens, with zero performance degradation of
>> existing code.  I don't think there's any need for gcc to be altered
>> to handle this.
> 
> How will the kernel know whether the overflow in the divide instruction
> is because the user's source code has a '%' and not a '/'?  We generate
> the exact same instruction for i / minus_one(), after all, and in that
> case the trap really should be there.
> 
> I suppose that the trap handler could try to analyze the code following
> the divide instruction; if the quotient result is never used and the
> divisor is -1, it could replace the remainder result with zero and return.
> But that would be rather hairy, if it is even feasible.  Alternatively,
> the divide instruction could be marked somehow, but I have no idea how.

  Didn't someone suggest a no-op prefix somewhere back up-thread?

cheers,
  DaveK
-- 
Can't think of a witty .sigline today



Re: Miscompilation of remainder expressions

2007-01-17 Thread Joe Buck
On Wed, Jan 17, 2007 at 07:03:43PM +, Andrew Haley wrote:
> It's an engineering problem.  We have a widget that does the wrong
> thing*.  We have several ways to make it do the right thing, only one
> of which has no adverse impact on the existing users of the widget.

> * (in some people's opinion)

Agreed, but the compiler is not just for GNU/Linux or glibc, and a
trap-catching fix won't work everywhere (e.g. ppc).



Re: Miscompilation of remainder expressions

2007-01-17 Thread Ian Lance Taylor
Joe Buck <[EMAIL PROTECTED]> writes:

> On Wed, Jan 17, 2007 at 05:48:34PM +, Andrew Haley wrote:
> > From a performance/convenience angle, the best place to handle this is
> > either libc or the kernel.  Either of these can quite easily fix up
> > the operands when a trap happens, with zero performance degradation of
> > existing code.  I don't think there's any need for gcc to be altered
> > to handle this.
> 
> How will the kernel know whether the overflow in the divide instruction
> is because the user's source code has a '%' and not a '/'?  We generate
> the exact same instruction for i / minus_one(), after all, and in that
> case the trap really should be there.

We don't need to generate a trap for INT_MIN / -1.  That is undefined
signed overflow.  We can legitimately set the quotient register to
INT_MIN while setting the remainder register to zero.  (Hmmm, what
should we do if -ftrapv is set?  Probably generate a different code
sequence in the compiler.)

We do want to generate a trap for x / 0, of course.
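
In source terms, the expansion being discussed would amount to something like
the following sketch (safe_rem is a made-up name, and a real change would live
in the compiler's lowering of %, not in user code):

int
safe_rem (int a, int b)
{
  /* Divide only when the divisor is not -1, so INT_MIN % -1 never
     reaches the hardware divide instruction.  */
  if (b == -1)
    return 0;        /* a % -1 is 0 for every a, including INT_MIN */
  return a % b;      /* b == 0 still traps, as before */
}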

Ian


Re: char alignment on ARM

2007-01-17 Thread Michael Eager

Inder wrote:

Hi All
I have similar question as for arm
http://gcc.gnu.org/ml/gcc/2007-01/msg00691.html
consider the following program.
e.g..
- align.c -
int main()
{
int bc;
char a[6];
int ac;

bc = 0x;
/* fill with zeros.  */
a[0] = 0x00;
a[1] = 0x01;
a[2] = 0x02;
a[3] = 0x03;
a[4] = 0x04;
a[5] = 0x05;

ac=0x;

make(a);
}
void make(char* a)
{
*(unsigned long*)a = 0x12345678;
}



Is the compiler doing a right thing or is it a bug??


Your code is not Standard C.

If you want to treat the same memory locations as
different data types, use a union:

  union {
long l;
char c[4];
  } a;

  a.l = 0x12345678;
  a.c[0] = 0x01;

--
Michael Eager[EMAIL PROTECTED]
1960 Park Blvd., Palo Alto, CA 94306  650-325-8077


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Andrew Haley wrote:

| Gabriel Dos Reis writes:
|  > On Wed, 17 Jan 2007, Andrew Haley wrote:
|  >
|  > [...]
|  >
|  > |  > | "To a man with a hammer, all things look like a nail."  It's very
|  > |  > | tempting for us in gcc-land always to fix things in gcc, not because
|  > |  > | it's technically the right place but because it's what we control
|  > |  > | ourselves.
|  > |  >
|  > |  > well, I'm unclear what your point is here, but certainly GCC is
|  > |  > at fault for generating trapping instructions.
|  > |  > So, we fix the problem in GCC, not because that is what we control
|  > |  > ourselves, but because we failed to generate proper code.
|  > |
|  > | It's not a matter of whose fault it is; trying to apportion blame
|  > | makes no sense.
|  >
|  > we have a communication problem here.  Nobody is trying to apportion
|  > blame.  However, gcc is the tool that generates trapping instruction.
|  > It is unclear why it would be the responsibility of the OS or libc
|  > to fix what GCC has generated in the first place.
|
| That makes no sense either.
|
| It's an engineering problem.  We have a widget that does the wrong
| thing*.  We have several ways to make it do the right thing, only one
| of which has no adverse impact on the existing users of the widget.

You believe there is one solution, except that it does not work for
the supported target.  But, I suppose that does not matter since you
have decided that anything else does not make sense.

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Michael Veksler

Ian Lance Taylor wrote:

Joe Buck <[EMAIL PROTECTED]> writes:

  

How will the kernel know whether the overflow in the divide instruction
is because the user's source code has a '%' and not a '/'?  We generate
the exact same instruction for i / minus_one(), after all, and in that
case the trap really should be there.



We don't need to generate a trap for INT_MIN / -1.  That is undefined
signed overflow.  We can legitimately set the quotient register to
INT_MIN while setting the remainder register to zero.  (Hmmm, what
should we do if -ftrapv is set?  Probably generate a different code
sequence in the compiler.)
  
Simply let the kernel/libc set the overflow flag in this case, and let
the compiler append an INTO instruction right after the idivl.

We do want to generate a trap for x / 0, of course.

Ian

  



--
Michael Veksler
http:///tx.technion.ac.il/~mveksler



Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Paubert
On Wed, Jan 17, 2007 at 11:17:36AM -0800, Ian Lance Taylor wrote:
> Joe Buck <[EMAIL PROTECTED]> writes:
> 
> > On Wed, Jan 17, 2007 at 05:48:34PM +, Andrew Haley wrote:
> > > From a performance/convenience angle, the best place to handle this is
> > > either libc or the kernel.  Either of these can quite easily fix up
> > > the operands when a trap happens, with zero performance degradation of
> > > existing code.  I don't think there's any need for gcc to be altered
> > > to handle this.
> > 
> > How will the kernel know whether the overflow in the divide instruction
> > is because the user's source code has a '%' and not a '/'?  We generate
> > the exact same instruction for i / minus_one(), after all, and in that
> > case the trap really should be there.
> 
> We don't need to generate a trap for INT_MIN / -1.  That is undefined
> signed overflow.  We can legitimately set the quotient register to
> INT_MIN while setting the remainder register to zero.  (Hmmm, what
> should we do if -ftrapv is set?  Probably generate a different code
> sequence in the compiler.)
> 
> We do want to generate a trap for x / 0, of course.
> 

Then you have to fix the code generation for PPC, which never traps.
All (?) 3-register arithmetic instructions have the option to
set an overflow flag that you can check later.

Gabriel


Re: Miscompilation of remainder expressions

2007-01-17 Thread Michael Veksler

Dave Korn wrote:

On 17 January 2007 19:09, Joe Buck wrote:
  

How will the kernel know whether the overflow in the divide instruction
is because the user's source code has a '%' and not a '/'?  We generate
the exact same instruction for i / minus_one(), after all, and in that
case the trap really should be there.
...

  Didn't someone suggest a no-op prefix somewhere back up-thread?

   

Yes, there are two ideas (also documented in the PR):
(1)
IIRC idivl (%ecx) will use the eds segment register by default, so
adding eds prefix will make no difference in semantics. To make
it even more explicit it is possible to add two eds prefixes, just
in case.

If some code depends on a segmented memory model (I think that wine does),
and it still wants to have GCC generate the new behavior, you'd have to
use redundant prefixes such that

   eds ess idivl (%ecx)

(I hope I got the order right and the first prefix is redundant.)

(2)
The second option is to mark it in the executable in a different ELF
section, like debug info or like C++ exception handling.
This solution would make it workable only with libc modifications rather
than kernel modifications.

--
Michael Veksler
http:///tx.technion.ac.il/~mveksler



Re: Miscompilation of remainder expressions

2007-01-17 Thread Andrew Haley
Gabriel Dos Reis writes:
 > On Wed, 17 Jan 2007, Andrew Haley wrote:
 > 
 > | Gabriel Dos Reis writes:
 > |  > On Wed, 17 Jan 2007, Andrew Haley wrote:
 > |  >
 > |  > [...]
 > |  >
 > |  > |  > | "To a man with a hammer, all things look like a nail."  It's very
 > |  > |  > | tempting for us in gcc-land always to fix things in gcc, not because
 > |  > |  > | it's technically the right place but because it's what we control
 > |  > |  > | ourselves.
 > |  > |  >
 > |  > |  > well, I'm unclear what your point is here, but certainly GCC is
 > |  > |  > at fault for generating trapping instructions.
 > |  > |  > So, we fix the problem in GCC, not because that is what we control
 > |  > |  > ourselves, but because we failed to generate proper code.
 > |  > |
 > |  > | It's not a matter of whose fault it is; trying to apportion blame
 > |  > | makes no sense.
 > |  >
 > |  > we have a communication problem here.  Nobody is trying to apportion
 > |  > blame.  However, gcc is the tool that generates trapping instruction.
 > |  > It is unclear why it would be the responsibility of the OS or libc
 > |  > to fix what GCC has generated in the first place.
 > |
 > | That makes no sense either.
 > |
 > | It's an engineering problem.  We have a widget that does the wrong
 > | thing*.  We have several ways to make it do the right thing, only one
 > | of which has no adverse impact on the existing users of the widget.
 > 
 > You believe there is one solution, except that it does not work for
 > the supported target.

Sorry, I don't understand what you mean by that.

I've been thinking about why we see this so very differently, and it's
dawned on me why that is.

I first came across this "architectural feature" of the 8086 in the
mid-1980s.  To begin with, there was no divide overflow handler at
all: the machine would simply crash.  It took me a little while to
figure out what was happening, but once I'd done so it was a simple
matter to write a few lines of assembly language that fix up the
operands and carry on.

Fast-forward ten years or so and for the first time I come across
unices running on an x86.  And I was surprised to see that rather than
fixing up the operands and continuing, the kernel punted the problem
to the user's program, which usually responded by core dumping.  "OK,"
I thought, "that must be what UNIX programmers want.  This trap must
be desired, because it is trivially easy to fixup and continue.
Perhaps it's because programmers want to be alerted that there is a
bug in their program."

And that's what I thought until last week.  :-)

Andrew.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Joe Buck
On Wed, Jan 17, 2007 at 07:42:38PM +, Andrew Haley wrote:
> Gabriel Dos Reis writes:
>  > You believe there is one solution, except that it does not work for
>  > the supported target.
> 
> Sorry, I don't understand what you mean by that.

I suspect that he meant to write "one supported target"; it won't work
for ppc.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Joe Buck wrote:

| On Wed, Jan 17, 2007 at 07:42:38PM +, Andrew Haley wrote:
| > Gabriel Dos Reis writes:
| >  > You believe there is one solution, except that it does not work for
| >  > the supported target.
| >
| > Sorry, I don't understand what you mean by that.
|
| I suspect that he meant to write "one supported target"; it won't work
| for ppc.

yes, thank you.  Sorry for mistyping.

-- Gaby


Preventing warnings (correction)

2007-01-17 Thread Richard Stallman
My suggestion is that

  (EMACS_INT)(int)(i) > MOST_POSITIVE_FIXNUM

would avoid the warning.  But we would not put the casts in the macro
FIXNUM_OVERFLOW_P itself, since that would negate the purpose of the
macro.  Instead we would put the cast in the argument, when the
argument is an `int' anyway.
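
Spelled out, the idea looks roughly like this (a sketch only; the type and
constant definitions are simplified stand-ins for the real ones in Emacs'
lisp.h):

typedef long EMACS_INT;                               /* stand-in */
#define MOST_POSITIVE_FIXNUM ((EMACS_INT) 0x1fffffff) /* stand-in value */
#define MOST_NEGATIVE_FIXNUM (-1 - MOST_POSITIVE_FIXNUM)

/* The macro itself stays generic...  */
#define FIXNUM_OVERFLOW_P(i) \
  ((i) > MOST_POSITIVE_FIXNUM || (i) < MOST_NEGATIVE_FIXNUM)

/* ...and the cast goes into the argument at call sites where the
   argument is an int anyway.  */
int
overflows (int i)
{
  return FIXNUM_OVERFLOW_P ((EMACS_INT) (int) i);
}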


Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
Paolo Carlini <[EMAIL PROTECTED]> writes:

| Joe Buck wrote:
| 
| >In the case of the containers, we are asserting/relying on the fact that
| >the pointer difference is zero or positive.  But this has become a
| >widespread idiom: people write their own code in the STL style.  If STL
| >code now has to be fixed to silence warnings, so will a lot of user code.
| >
| Good point. About it, we should also take into account the recent
| messages from Martin, pointing out that many C++ front-ends do not
| warn for signed -> unsigned.

I just built firefox (CVS) with GCC mainline.  The compiler spat out
avalanches of nonsensical warnings that conversions signed ->
unsigned may alter values, when in fact the compiler knows that
such things cannot happen.

First, let's recall that GCC supports only 2s complement targets.

Second, a conversion from T to U may alter value if a round trip is
not the identity function.  That is, there exists a value t in T
such that the assertion

   assert (T(U(t)) == t)

fails.

Now, given a signed integer type T, and U as its unsigned variant, there
is no way the above can happen, given the characteristics of GCC.
Hence much of this "may alter value" business is pure noise.

The function responsible for that diagnostic should be refined.

-- Gaby


Re: -Wconversion versus libstdc++

2007-01-17 Thread Richard Guenther

On 17 Jan 2007 16:36:04 -0600, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:

Paolo Carlini <[EMAIL PROTECTED]> writes:

| Joe Buck wrote:
|
| >In the case of the containers, we are asserting/relying on the fact that
| >the pointer difference is zero or positive.  But this has become a
| >widespread idiom: people write their own code in the STL style.  If STL
| >code now has to be fixed to silence warnings, so will a lot of user code.
| >
| Good point. About it, we should also take into account the recent
| messages from Martin, pointing out that many C++ front-ends do not
| warn for signed -> unsigned.

I just built firefox (CVS) with GCC mainline.  The compiler spat out
avalanches of nonsensical warnings that conversions signed ->
unsigned may alter values, when in fact the compiler knows that
such things cannot happen.

First, let's recall that GCC supports only 2s complement targets.

Second, a conversion from T to U may alter value if a round trip is
not the identity function.  That is, there exists a value t in T
such that the assertion

   assert (T(U(t)) == t)

fails.


I think it warns if U(t) != t in a mathematical sense (without promoting
to the same type for the comparison), so it warns as (unsigned)-1 is
not "-1".

I agree this warning is of questionable use and should not be enabled
with -Wall.

Richard.


gcc-4.2-20070117 is now available

2007-01-17 Thread gccadmin
Snapshot gcc-4.2-20070117 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.2-20070117/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.2 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_2-branch 
revision 120880

You'll find:

gcc-4.2-20070117.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.2-20070117.tar.bz2 C front end and core compiler

gcc-ada-4.2-20070117.tar.bz2  Ada front end and runtime

gcc-fortran-4.2-20070117.tar.bz2  Fortran front end and runtime

gcc-g++-4.2-20070117.tar.bz2  C++ front end and runtime

gcc-java-4.2-20070117.tar.bz2 Java front end and runtime

gcc-objc-4.2-20070117.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.2-20070117.tar.bz2  The GCC testsuite

Diffs from 4.2-20070110 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.2
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: -Wconversion versus libstdc++

2007-01-17 Thread Paolo Carlini
... thanks a lot Gaby both for your practical and theoretical 
investigations into this issue, both right to the point! Now, in my 
opinion, we should simply remove the bits about signed -> unsigned from 
-Wconversion.


Paolo.


Re: -Wconversion versus libstdc++

2007-01-17 Thread Joe Buck
On Wed, Jan 17, 2007 at 04:36:04PM -0600, Gabriel Dos Reis wrote:
> I just built firefox (CVS) with GCC mainline.  The compiler spat out
> avalanches of nonsensical warnings that conversions signed ->
> unsigned may alter values, when in fact the compiler knows that
> such things cannot happen.
> 
> First, let's recall that GCC supports only 2s complement targets.
> 
> Second, a conversion from T to U may alter value if a round trip is
> not the identity function.  That is, there exists a value t in T
> such that the assertion
> 
>assert (T(U(t)) == t)
> 
> fails.
> 
> Now, given a signed integer type T, and U as its unsigned variant, there
> is no way the above can happen, given the characteristics of GCC.
> Hence much of this "may alter value" business is pure noise.

Careful.  As you suggest, let's restrict ourselves to two's complement
platforms.  I would want the compiler to warn if the identity holds for an
ILP32 machine but not an LP64 machine, even if I'm running on an ILP32.
But if the two types are going to be the same size everywhere (because one
is the unsigned modifier of the other) then GCC should not complain.




Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Richard Guenther wrote:

| On 17 Jan 2007 16:36:04 -0600, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:
| > Paolo Carlini <[EMAIL PROTECTED]> writes:
| >
| > | Joe Buck wrote:
| > |
| > | >In the case of the containers, we are asserting/relying on the fact that
| > | >the pointer difference is zero or positive.  But this has become a
| > | >widespread idiom: people write their own code in the STL style.  If STL
| > | >code now has to be fixed to silence warnings, so will a lot of user code.
| > | >
| > | Good point. About it, we should also take into account the recent
| > | messages from Martin, pointing out that many C++ front-ends do not
| > | warn for signed -> unsigned.
| >
| > I just built firefox (CVS) with GCC mainline.  The compiler spat out
| > avalanches of nonsensical warnings that conversions signed ->
| > unsigned may alter values, when in fact the compiler knows that
| > such things cannot happen.
| >
| > First, let's recall that GCC supports only 2s complement targets.
| >
| > Second, a conversion from T to U may alter value if a round trip is
| > not the identity function.  That is, there exists a value t in T
| > such that the assertion
| >
| >assert (T(U(t)) == t)
| >
| > fails.
|
| I think it warns if U(t) != t in a mathematical sense (without promoting
| to the same type for the comparison), so it warns as (unsigned)-1 is
| not "-1".

Except that in the mathematical sense, it does not make sense.

The denotational domain for unsigned is not the set of natural numbers.
Rather, it is Z/nZ for appropriate n.  So, to compare an element in
Z/nZ with an element in a segment [M..N] does not make much sense without
further elaboration (which would reveal that the notion is flawed).
That elaboration needs injections and projections.  Those are precisely
denoted by the implicit conversions mentioned above.

-- Gaby


Re: -Wconversion versus libstdc++

2007-01-17 Thread Paolo Carlini

Joe Buck wrote:


Careful.  As you suggest, let's restrict ourselves to two's complement
platforms.  I would want the compiler to warn if the identity holds for an
ILP32 machine but not an LP64 machine, even if I'm running on an ILP32.
But if the two types are going to be the same size everywhere (because one
is the unsigned modifier of the other) then GCC should not complain.

Indeed, between Gaby's message to the audit trail and the one to gcc@, I 
had your very same doubt, then noticed that, in the -Wconversion 
documentation, the bits about signed -> unsigned are distinct from the 
bits about conversion to smaller type: we could simply leave the latter 
untouched.


Paolo.



Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Joe Buck wrote:

| On Wed, Jan 17, 2007 at 04:36:04PM -0600, Gabriel Dos Reis wrote:
| > I just built firefox (CVS) with GCC mainline.  The compiler spat out
| > avalanches of nonsensical warnings that conversions signed ->
| > unsigned may alter values, when in fact the compiler knows that
| > such things cannot happen.
| >
| > First, let's recall that GCC supports only 2s complement targets.
| >
| > Second, a conversion from T to U may alter value if a round trip is
| > not the identity function.  That is, there exists a value t in T
| > such that the assertion
| >
| >assert (T(U(t)) == t)
| >
| > fails.
| >
| > Now, given a signed integer type T, and U as its unsigned variant, there
| > is no way the above can happen, given the characteristics of GCC.
| > Hence much of this "may alter value" business is pure noise.
|
| Careful.  As you suggest, let's restrict ourselves to two's complement
| platforms.  I would want the compiler to warn if the identity holds for an
| ILP32 machine but not an LP64 machine, even if I'm running on an ILP32.
| But if the two types are going to be the same size everywhere (because one
| is the unsigned modifier of the other) then GCC should not complain.

The specific cases I'm concerned about here (and if you have a chance
to build firefox for example, you'll see) is when T and U differ only
in signedness, that is

   T = int, U = unsigned
   T = long, U = unsigned long
   T = long long, U = unsigned long long

those have the same value representation bits and there is no way GCC
can mess up -- except for bugs in the compiler itself.

Furthermore, elsewhere (in the overflow thread) it has been suggested
that people should convert to the unsigned variants, do computations there,
and convert back to the signed variants.  We have just promised an
invariant that we will hold.
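
As a concrete check of that invariant, a small illustrative test (note that
the unsigned -> signed direction is implementation-defined in ISO C, but GCC
documents it as modulo reduction, which is what makes the round trip the
identity):

#include <assert.h>
#include <limits.h>

int
main (void)
{
  int samples[] = { 0, -1, -10, INT_MIN, INT_MAX };
  unsigned i;

  for (i = 0; i < sizeof samples / sizeof samples[0]; i++)
    {
      int t = samples[i];
      unsigned u = (unsigned) t;  /* value reduced modulo 2^N; well defined */
      assert ((int) u == t);      /* GCC converts back by the inverse modulo
                                     reduction, so this holds for every t */
    }
  return 0;
}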

-- Gaby


Re: -Wconversion versus libstdc++

2007-01-17 Thread Joe Buck

I wrote:
> | Careful.  As you suggest, let's restrict ourselves to two's complement
> | platforms.  I would want the compiler to warn if the identity holds for an
> | ILP32 machine but not an LP64 machine, even if I'm running on an ILP32.
> | But if the two types are going to be the same size everywhere (because one
> | is the unsigned modifier of the other) then GCC should not complain.

On Wed, Jan 17, 2007 at 04:59:02PM -0600, Gabriel Dos Reis wrote:
> The specific cases I'm concerned about here (and if you have a chance
> to build firefox for example, you'll see) is when T and U differ only
> in signedness, that is
> 
>T = int, U = unsigned
>T = long, U = unsigned long
>T = long long, U = unsigned long long
> 
> those have the same value representation bits and there is no way GCC
> can mess up -- except for bugs in the compiler itself.

> Furthermore, elsewhere (in the overflow thread) it has been suggested
> that people should convert to the unsigned variants, do computations there,
> and convert back to the signed variants.  We have just promised an
> invariant that we will hold.

Fully agreed.


lib{gomp,decnumber}/autom4te.cache

2007-01-17 Thread FX Coudert
Is there any reason why libgomp and libdecnumber don't have a  
svn:ignore property containing autom4te.cache? I noticed the  
following always showing up in my "svn status" after a
maintainer-mode build:

?  libdecnumber/autom4te.cache
?  libgomp/autom4te.cache

Thanks,
FX


Re: Miscompilation of remainder expressions

2007-01-17 Thread Robert Dewar

Hmm .. I wish some of the more important bugs in gcc received
the attention that this very unimportant issue is receiving :-)

I guess the difference is that lots of people can understand
this issue.

Reminds me of the hullabaloo over the Pentium division problem.
The fact of the matter was that the Pentium had many more serious
problems, but they were not well known, and often much more
complex to understand!


Re: Miscompilation of remainder expressions

2007-01-17 Thread Robert Dewar

Joe Buck wrote:


If GCC winds up having to fix the bug in the compiler itself for PPC,
then everyone could have the option of using a kernel fix or a compiler
fix.  But how are you going to do the kernel fix?  What if the user did
an integer divide and not a modulo?  I suppose you could just say the
result is undefined and patch up the quotient too.


exactly!
And if you want this undefined operation to do something defined, you
can start another one of those long threads :-)






Re: Miscompilation of remainder expressions

2007-01-17 Thread Robert Dewar

Ian Lance Taylor wrote:


We do want to generate a trap for x / 0, of course.


Really? Is this really defined to generate a trap in C?
I would be surprised if so ...


Re: -Wconversion versus libstdc++

2007-01-17 Thread Manuel López-Ibáñez

On 17/01/07, Paolo Carlini <[EMAIL PROTECTED]> wrote:

... thanks a lot Gaby both for your practical and theoretical
investigations into this issue, both right to the point! Now, in my
opinion, we should simply remove the bits about signed -> unsigned from
-Wconversion.



I am not sure I am following the conversation.

So, are you saying that -Wconversion should not warn for unsigned int x = -10; ?
Or are you talking about a particular special case?

Cheers,

Manuel.


Re: -Wconversion versus libstdc++

2007-01-17 Thread Manuel López-Ibáñez

On 17/01/07, Richard Guenther <[EMAIL PROTECTED]> wrote:


I agree this warning is of questionable use and should not be enabled
with -Wall.



But... -Wconversion is not enabled by -Wall! It is not even enabled by
-Wextra! It is only enabled by -Wconversion.

Getting confused,

Manuel.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Ian Lance Taylor
Robert Dewar <[EMAIL PROTECTED]> writes:

> Ian Lance Taylor wrote:
> 
> > We do want to generate a trap for x / 0, of course.
> 
> Really? Is this really defined to generate a trap in C?
> I would be surprised if so ...

As far as I know, but I think it would be a surprising change for x /
0 to silently continue executing.

But perhaps not a very important one.

Ian


Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Manuel López-Ibáñez wrote:

| On 17/01/07, Paolo Carlini <[EMAIL PROTECTED]> wrote:
| > ... thanks a lot Gaby both for your practical and theoretical
| > investigations into this issue, both right to the point! Now, in my
| > opinion, we should simply remove the bits about signed -> unsigned from
| > -Wconversion.
| >
|
| I am not sure I am following the conversation.

here it is in pseudo-code.

   int x = some value;
   // ...
   unsigned y = x  // please don't spit noise here

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
Ian Lance Taylor <[EMAIL PROTECTED]> writes:

| Robert Dewar <[EMAIL PROTECTED]> writes:
| 
| > Ian Lance Taylor wrote:
| > 
| > > We do want to generate a trap for x / 0, of course.
| > 
| > Really? Is this really defined to generate a trap in C?
| > I would be surprised if so ...
| 
| As far as I know, but I think it would be a surprising change for x /
| 0 to silently continue executing.

furthermore, <limits> has been defined so that
numeric_limits<>::traps reports true when division by zero traps.

// GCC only intrinsicly supports modulo integral types.  The only remaining
// integral exceptional values is division by zero.  Only targets that do not
// signal division by zero in some "hard to ignore" way should use false.
#ifndef __glibcxx_integral_traps
# define __glibcxx_integral_traps true
#endif

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Paubert
On Wed, Jan 17, 2007 at 04:15:08PM -0800, Ian Lance Taylor wrote:
> Robert Dewar <[EMAIL PROTECTED]> writes:
> 
> > Ian Lance Taylor wrote:
> > 
> > > We do want to generate a trap for x / 0, of course.
> > 
> > Really? Is this really defined to generate a trap in C?
> > I would be surprised if so ...
> 
> As far as I know, but I think it would be a surprising change for x /
> 0 to silently continue executing.
> 

That's exactly what happens on PPC.

> But perhaps not a very important one.

Indeed.

Gabriel


Re: -Wconversion versus libstdc++

2007-01-17 Thread Manuel López-Ibáñez

On 18/01/07, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:

On Wed, 17 Jan 2007, Manuel López-Ibáñez wrote:

| On 17/01/07, Paolo Carlini <[EMAIL PROTECTED]> wrote:
| > ... thanks a lot Gaby both for your practical and theoretical
| > investigations into this issue, both right to the point! Now, in my
| > opinion, we should simply remove the bits about signed -> unsigned from
| > -Wconversion.
| >
|
| I am not sure I am following the conversation.

here it is in pseudo-code.

   int x = some value;
   // ...
   unsigned y = x  // please don't spit noise here



Does that apply also to:

unsigned int y = -10;


Re: Miscompilation of remainder expressions

2007-01-17 Thread David Daney

Ian Lance Taylor wrote:

Robert Dewar <[EMAIL PROTECTED]> writes:



Ian Lance Taylor wrote:



We do want to generate a trap for x / 0, of course.


Really? Is this really defined to generate a trap in C?
I would be surprised if so ...



As far as I know, but I think it would be a surprising change for x /
0 to silently continue executing.

But perhaps not a very important one.

It depends on the front-end language.  For C, perhaps it would not 
matter.  For Java, the language specification requires an 
ArithmeticException to be thrown.  In libgcj this is done by having the 
operation trap; the trap handler then generates the exception.


Because libgcj already handles all of this, it was brought up that a 
similar runtime trap handler could easily be used for C.  However as 
others have noted, the logistics of universally using a trap handler in 
C might be difficult.


David Daney


Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
On Thu, 18 Jan 2007, Manuel López-Ibáñez wrote:

| On 18/01/07, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:
| > On Wed, 17 Jan 2007, Manuel López-Ibáñez wrote:
| >
| > | On 17/01/07, Paolo Carlini <[EMAIL PROTECTED]> wrote:
| > | > ... thanks a lot Gaby both for your practical and theoretical
| > | > investigations into this issue, both right to the point! Now, in my
| > | > opinion, we should simply remove the bits about signed -> unsigned from
| > | > -Wconversion.
| > | >
| > |
| > | I am not sure I am following the conversation.
| >
| > here it is in pseudo-code.
| >
| >int x = some value;
| >// ...
| >unsigned y = x  // please don't spit noise here
| >
|
| Does that apply also to:
|
| unsigned int y = -10;

Yes.

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
Gabriel Paubert <[EMAIL PROTECTED]> writes:

| On Wed, Jan 17, 2007 at 04:15:08PM -0800, Ian Lance Taylor wrote:
| > Robert Dewar <[EMAIL PROTECTED]> writes:
| > 
| > > Ian Lance Taylor wrote:
| > > 
| > > > We do want to generate a trap for x / 0, of course.
| > > 
| > > Really? Is this really defined to generate a trap in C?
| > > I would be surprised if so ...
| > 
| > As far as I know, but I think it would be a surprising change for x /
| > 0 to silently continue executing.
| > 
| 
| That's exactly what happens on PPC.

Indeed, and Andrew Pinski corrected numeric_limits to reflect
the reality on PPC for that specific case.

C++ forces compilers to reveal their semantics for built-in types
through numeric_limits<>.  Every time you change the behaviour,
you also implicitly break an ABI.

-- Gaby


Re: -Wconversion versus libstdc++

2007-01-17 Thread Joseph S. Myers
On Wed, 17 Jan 2007, Gabriel Dos Reis wrote:

> The specific cases I'm concerned about here (and if you have a chance
> to build firefox for example, you'll see) is when T and U differ only
> in signedness, that is
> 
>T = int, U = unsigned
>T = long, U = unsigned long
>T = long long, U = unsigned long long
> 
> those have the same value representation bits and there is no way, GCC
> can mess up -- except bugs in the compiler itself.

The point of such warnings is to detect security holes such as

void foo(void *s, int len);
void bar(void *s, unsigned len) { if (len < sizeof(S)) abort(); foo(s, len); }

where a large unsigned value gets implicitly converted to signed after a 
check and this leads to a hole in foo() with a negative value.
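
Spelled out a little more (the struct, the function bodies and the
constant are illustrative; a 32-bit int and two's-complement
wrap-around are assumed):

  #include <cstdio>
  #include <cstdlib>

  struct S { char buf[16]; };

  void foo(void *s, int len)
  {
    // A follow-up such as memcpy(..., len) would convert a negative
    // len back to an enormous size_t and overrun the buffer.
    std::printf("foo sees len = %d\n", len);
    (void) s;
  }

  void bar(void *s, unsigned len)
  {
    if (len < sizeof(struct S)) std::abort();  // huge values pass this check
    foo(s, len);                               // implicit unsigned -> int
  }

  int main()
  {
    struct S s = { };
    bar(&s, 0x80000000u);   // foo typically sees a large negative len
  }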

> Furthermore, elsewhere (in the overflow thread) it has been suggested
> that people should convert to the unsigned variants, do computations there,
> and convert back to the signed variants.  We have just promised an
> invariant that we will hold.

The suggestion is for *explicit* conversions (casts); the warnings (should 
be) for implicit conversions.

-- 
Joseph S. Myers
[EMAIL PROTECTED]


Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
On Thu, 18 Jan 2007, Joseph S. Myers wrote:

[...]

| > Furthermore, elsewhere (in the overflow thread) it has been suggested
| > that people should convert to the unsigned variants, do computations there,
| > and convert back to the signed variants.  We have just promised an
| > invariant that we will hold.
|
| The suggestion is for *explicit* conversions (casts), the warnings (should
| be) for implicit conversions.

A cast being explicit does not magically make the code more correct
or safer.

In essence, what this warning does is implicitly suggest adding
casts, and that is that.  If the code is wrong before, it will
remain wrong after the cast.
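
A hypothetical illustration of that point (the function and variable
names are invented for the sketch):

  // The explicit cast removes the diagnostic, not the bug.
  unsigned bytes_left(int used, int capacity)
  {
    int diff = capacity - used;   // can be negative if used > capacity
    return (unsigned) diff;       // no warning now, yet a negative diff
                                  // still silently becomes a huge value
  }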

Please do run the compiler on applications out there.  libstdc++
is a small-scale library (though an important one) that shows
the inappropriateness of the warnings; firefox is one I just built.
People should run more.

-- Gaby


Re: -Wconversion versus libstdc++

2007-01-17 Thread Manuel López-Ibáñez

On 18/01/07, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:

On Thu, 18 Jan 2007, Manuel López-Ibáñez wrote:

| Does that apply also to:
|
| unsigned int y = -10;

Yes.



Then why has Wconversion warned about it at least since
http://gcc.gnu.org/onlinedocs/gcc-3.0.4/gcc_3.html#SEC11 ?

Moreover, most people who use Wconversion nowadays use it just for
that warning, despite all the noise produced by the prototype warnings,
which they continually claim to hate. For example:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=9072
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=13945

And my favourite, Gabriel dos Reis willing to implement a warning for
int->unsigned int for g++:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26167#c3

:-) I hope you understand why I think the issue is not as clear to me
as it seems to be for you.

Finally, why is libstdc++ using Wconversion at all? I don't think it
is to get warnings about adding prototypes to code, because those
don't appear in C++, so it can only be to get a warning about unsigned
int y = -10, since that is the only other documented thing that
Wconversion did up to GCC 4.3 (which is not even in stage2).

I am still confused, sorry.

Manuel.

PS: I have included Joseph Myers in the CC list, since he has been
thinking about this for some time and listening to the repeated
complaints from security people about the useless old Wconversion and
how badly a new one was needed, mentioning as an example
signed->unsigned conversion:

http://gcc.gnu.org/ml/gcc-bugs/2000-11/msg00140.html
http://archives.neohapsis.com/archives/linux/lsap/2000-q4/0152.html

I wish we could consult those people about when such conversions
could produce a security risk and when not, so we can fine-tune the
warning.


incorrect symlink on Darwin

2007-01-17 Thread Jack Howarth
 I noticed today that the gcc 4.2 branch seems to create a bogus symlink
on Darwin PPC. A symlink for libgcc_s_x86_64.1.dylib is created that
points at libgcc_s.1.dylib. However, libgcc_s.1.dylib is not a quad
binary...

file libgcc_s.1.dylib
libgcc_s.1.dylib: Mach-O fat file with 2 architectures
libgcc_s.1.dylib (for architecture ppc):Mach-O dynamically linked 
shared library ppc
libgcc_s.1.dylib (for architecture ppc64):  Mach-O 64-bit dynamically 
linked shared library ppc64

Is anyone else seeing this?
  Jack


Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
On Thu, 18 Jan 2007, Manuel López-Ibáñez wrote:

| On 18/01/07, Gabriel Dos Reis <[EMAIL PROTECTED]> wrote:
| > On Thu, 18 Jan 2007, Manuel López-Ibáñez wrote:
| >
| > | Does that apply also to:
| > |
| > | unsigned int y = -10;
| >
| > Yes.
| >
|
| Then, why Wconversion has warned about it at least since
| http://gcc.gnu.org/onlinedocs/gcc-3.0.4/gcc_3.html#SEC11 ?

The description on the page there is:

Warn if a prototype causes a type conversion that is different
from what would happen to the same argument in the absence of a
prototype. This includes conversions of fixed point to floating
and vice versa, and conversions changing the width or signedness
of a fixed point argument except when the same as the default
promotion.

Also, warn if a negative integer constant expression is implicitly
converted to an unsigned type. For example, warn about the
assignment x = -1 if x is unsigned. But do not warn about explicit
casts like (unsigned) -1.
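
Restated as code (illustrative; this is just the documented rule above):

  unsigned x = -1;             // warned: negative constant implicitly
                               // converted to an unsigned type
  unsigned y = (unsigned) -1;  // explicit cast: not warned about
  unsigned z = -10;            // the case asked about above falls here too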


As the PR you noted, it wasn't part of C++.

[...]

| And my favourite, Gabriel dos Reis willing to implement a warning for
^
Please use capital "D" if you must write my full name; thanks.

| int->unsigned int for g++:
| http://gcc.gnu.org/bugzilla/show_bug.cgi?id=26167#c3
|
| :-) I hope you understand why I think the issue is not as clear to me
| as it seems to be for you.

You never re-evaluate based on data collected from experimenting with
applications out there?

Look at the code at issue in libstdc++.  What is wrong with it?
As noted by Joe, such constructs are now likely commonplace, as they
fall out of the STL-style view of sequences.  You have to take that into
account.
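
One such construct, sketched (any standard container makes the same
point; the function is invented for the example):

  #include <cstddef>
  #include <vector>

  std::size_t count_elements(const std::vector<int>& v)
  {
    std::vector<int>::const_iterator first = v.begin(), last = v.end();
    std::size_t n = last - first;   // ptrdiff_t -> size_t: exactly the
                                    // signed -> unsigned conversion at
                                    // issue, yet perfectly well defined
    return n;
  }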

| Finally, why libstdc++ is using Wconversion at all?

Please go and read the PR submitted by Gerald.

| I don't think it
| is to get warnings about adding prototypes to code, because those
| don't appear in C++, so it can only be to get a warning about unsigned
| int y = -10, since that is the only other documented thing that
| Wconversion did up to GCC 4.3 (which is not even in stage2).
|
| I am still confused, sorry.

I see you're confused :-).

I don't believe in the "only" part of your reasoning.

One use of -Wconversion is to draw attention to

   int x = 2.3;   // warning: be careful, is this what you want?
  // this is a potential bug as it is value altering.

and in an upcoming revision to C++, it is very likely that implicit
conversions that may lose information will just be banned outright; see

   http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2006/n2100.pdf

section "7.1 Can we ban narrowing for T{v}?" on page 27, which
was welcomed at the last C++ committee meeting (to my own surprise, I
must confess, as the committee tends to be conservative).
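
A sketch of the distinction (the braced forms are left in comments,
since under the proposal they are meant to be rejected):

  int    a = 2.3;        // accepted today; -Wconversion flags it as
                         // potentially value-altering (2.3 becomes 2)
  double d = 3;          // accepted silently; no information can be lost
  char   c = 2879;       // accepted as an implicit narrowing conversion
  // Under the N2100 proposal the braced initializations reject narrowing:
  //   int  b { 2.3 };   // error: double -> int narrows
  //   char e { 2879 };  // error: 2879 does not fit in char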

I suggest moving "int -> unsigned" into a separate category, out of
-Wconversion, if you must keep it.

-- Gaby


Re: -Wconversion versus libstdc++

2007-01-17 Thread Andrew Pinski
> 
> One use of -Wconversion is to draw attention to
> 
>int x = 2.3;   // warning: be careful, is this what you want?
>   // this is a potential bug as it is value altering.
> 
> and in an upcoming revision to C++, it is very likely that implicit
> conversion that may lose information are just banned outright, see
> 
>http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2006/n2100.pdf
> 
> section "7.1 Can we ban narrowing for T{v}?" on page 27, which
> was welcomed at the last C++ committee meeting (at my own surprise, I
> must confess, as the committee tends to be conservative).

The union between C and C++ just became smaller, and I don't think C++
should be named C++ anymore then.  It really needs a rename if this
banning of narrowing goes through.

-- Pinski


Re: -Wconversion versus libstdc++

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Andrew Pinski wrote:

| >
| > One use of -Wconversion is to draw attention to
| >
| >int x = 2.3;   // warning: be careful, is this what you want?
| >   // this is a potential bug as it is value altering.
| >
| > and in an upcoming revision to C++, it is very likely that implicit
| > conversion that may lose information are just banned outright, see
| >
| >http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2006/n2100.pdf
| >
| > section "7.1 Can we ban narrowing for T{v}?" on page 27, which
| > was welcomed at the last C++ committee meeting (at my own surprise, I
| > must confess, as the committee tends to be conservative).
|
| The union between C and C++,

well, as C++ is adding more stuff, the union can only grow, not shrink :-)
I suspect you meant the intersection.

Did you read the reference?  You'll see that the ban is only for
aggregate-style initialization, as in

   char x = { 2879 };

| just became smaller and I don't think C++ should be named C++ anymore then.

you seem to know what C++ is better than its designer does.  Good for you.

In case you want to know how and why implicit conversions came into
C, you probably want to read section "7.1.2 Why do we have the
narrowing problem?"

  # Some implicit casts, such as double->int and int->char, have
  # traditionally been considered a significant, even invaluable,
  # notational convenience. Others, such as double->char and int*->bool,
  # are widely considered embarrassments. When Bjarne once asked around
  # in the Unix room why implicit narrowing had actually been allowed,
  # nobody argued that there was a fundamental technical reason; someone
  # pointed out the obvious potential for errors and all agreed that the
  # reason was simply historical: Dennis Ritchie added floating point
  # before Steve Johnson added casts. Thus, the use of implicit narrowing
  # was well established before explicit casting became an option.
  #
  # Bjarne tried to ban implicit narrowing in C with Classes but found
  # that a combination of existing practice (especially relating to the
  # use of chars) and existing code made that infeasible. Cfront,
  # however, stamped out the double->int conversions for early
  # generations of C++ programmers by providing long, ugly, and
  # non-suppressible warnings.
  #
  # Please note that the suggestion to ban narrowing does not actually
  # touch these common examples. It relies on explicit use of { }.

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Joe Buck
On Wed, Jan 17, 2007 at 06:40:21PM -0500, Robert Dewar wrote:
> H .. I wish some of the more important bugs in gcc received
> the attention that this very unimportant issue is receiving :-)
> 
> I guess the difference is that lots of people can understand
> this issue.

Yes, this phenomenon has been given a name, by Parkinson of Parkinson's
law fame: "bike shed".

Someone even made a web site dedicated to the phenomenon:

http://www.bikeshed.org/

"Parkinson shows how you can go in to the board of directors and
get approval for building a multi-million or even billion dollar
atomic power plant, but if you want to build a bike shed you will
be tangled up in endless discussions."

As for the issue at hand, we've basically exhausted it; every reasonable
combination of solutions has been proposed, as well as "don't fix it".


Re: Miscompilation of remainder expressions

2007-01-17 Thread Mike Stump

On Jan 17, 2007, at 4:44 PM, Gabriel Dos Reis wrote:

C++ forces compilers to reveal their semantics for built-in types
through numeric_limits<>.  Every time you change the behaviour,
you also implicitly break an ABI.


No, the ABI does not document that the answer never changes between
translation units, only what the answer happens to be for this
translation unit.  If it said what you thought it said, you'd be
able to quote it.  If you think I'm wrong, I look forward to the quote.


Consider the ABI document that says that the size of int is 4.  One  
cannot meaningfully use a compiler flag to change the size of an int  
to something other than 4 because then, that flag breaks the ABI.  An  
ABI document _can_ state that the answer to the question must be true  
for float, but, absent it stating that it is true, there isn't a  
document that says that it is true.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Gabriel Dos Reis
On Wed, 17 Jan 2007, Mike Stump wrote:

| On Jan 17, 2007, at 4:44 PM, Gabriel Dos Reis wrote:
| > C++ forces compilers to reveal their semantics for built-in types
| > through numeric_limits<>.  Every time you change the behaviour,
| you also implicitly break an ABI.
|
| No, the ABI does not document that the answer never changes between
| translation units, only that for this translation unit, what the
| answer happens to be.  If it said what you thought it said, you'd be
| able to quote it.  If you think I'm wrong, I look forward to the quote.

(1) the ABI I was talking about is that of libstdc++, not that of
the processor.  Sorry if that wasn't clear (I switched to the
library developer perspective, and should have made that explicit).

Each time we make changes to the libstdc++ ABI, people get very nervous.

(2) numeric_limits<> cannot change from translation unit to translation
unit within the same program; otherwise you break the ODR.  I guess
we all agree on that.

-- Gaby


Re: Miscompilation of remainder expressions

2007-01-17 Thread Mike Stump

On Jan 17, 2007, at 6:46 PM, Gabriel Dos Reis wrote:

(1) the ABI I was talking about is that of libstdc++


(2) numeric_limits<> cannot change from translation unit to translation
unit within the same program; otherwise you break the ODR.  I guess
we all agree on that.


Doh!  Did I ever say that I hate ABI issues?  This truly is plainly
obvious... and yet I still missed it.  Thanks.


Anyway, that would just mean that any ABI document that attempts a
C++ ABI must specify these answers.  The issue is that if one
implemented a new C++ compiler, attempting to match the ABI of the
previous one, I think it'd be bad form to cheat off the actual
implementation for the answer; rather, the document should specify the
answer.


The issue reminds me of Sun's attempt to do a C++ ABI that didn't talk
about templates or EH: nice, but not as useful as one that does.


Re: Miscompilation of remainder expressions

2007-01-17 Thread Robert Dewar

Joe Buck wrote:

(off topic!)


On Wed, Jan 17, 2007 at 06:40:21PM -0500, Robert Dewar wrote:

H .. I wish some of the more important bugs in gcc received
the attention that this very unimportant issue is receiving :-)

I guess the difference is that lots of people can understand
this issue.


Yes, this phenomenon has been given a name, by Parkinson of Parkinson's
law fame: "bike shed".


Actually I don't think Parkinson uses this term. He states a similar
principle as:

* THE LAW OF TRIVIALITY: The time spent on any item of a committee's 
agenda will be in inverse proportion to the sum of money involved.


Re: CSE not combining equivalent expressions.

2007-01-17 Thread pranav bhandarkar

Also this is removed for the case of integers by the CSE pass
IIRC . The problem arises only for the type being a char or a short.


Yes, that is true. With gcc 4.1 one of the 'or's gets eliminated for
'int'. I am putting below two sets of logs: the first from just before
cse_main, and the second from just after cse_main has returned but
before the trivially dead insns have been deleted.

Set 1: Before cse_main

(note 9 6 11 0 [bb 0] NOTE_INSN_BASIC_BLOCK)

(insn 11 9 12 0 (set (reg:SI 1 $c1)
   (const_int 0 [0x0])) 43 {*movsi} (nil)
   (nil))

(call_insn 12 11 13 0 (parallel [
   (set (reg:SI 1 $c1)
   (call (mem:SI (symbol_ref:SI ("gen_T") [flags 0x41]
) [0 S4 A32])
   (const_int 0 [0x0])))
   (use (const_int 0 [0x0]))
   (clobber (reg:SI 31 $link))
   ]) 39 {*call_value_direct} (nil)
   (nil)
   (expr_list:REG_DEP_TRUE (use (reg:SI 1 $c1))
   (nil)))

(insn 13 12 15 0 (set (reg:SI 134 [ D.1214 ])
   (reg:SI 1 $c1)) 43 {*movsi} (nil)
   (nil))

(insn 15 13 16 0 (set (reg:SI 133 [ D.1216 ])
   (ior:SI (reg:SI 134 [ D.1214 ])
   (const_int 1 [0x1]))) 64 {iorsi3} (nil)
   (nil))

(insn 16 15 17 0 (set (reg/f:SI 136)
   (symbol_ref:SI ("a") [flags 0x2] )) 43
{*movsi} (nil)
   (nil))

(insn 17 16 19 0 (set (mem/c/i:SI (reg/f:SI 136) [2 a+0 S4 A32])
   (reg:SI 133 [ D.1216 ])) 43 {*movsi} (nil)
   (nil))

<< Expansion of the If condition >>

(note 23 21 25 1 [bb 1] NOTE_INSN_BASIC_BLOCK)

(insn 25 23 26 1 (set (reg/f:SI 139)
   (symbol_ref:SI ("a") [flags 0x2] )) 43
{*movsi} (nil)
   (nil))

(insn 26 25 27 1 (set (reg:SI 140)
   (ior:SI (reg:SI 133 [ D.1216 ])
   (const_int 1 [0x1]))) 64 {iorsi3} (nil)
   (nil))

(insn 27 26 29 1 (set (mem/c/i:SI (reg/f:SI 139) [2 a+0 S4 A32])
   (reg:SI 140)) 43 {*movsi} (nil)
   (nil))

(code_label 29 27 30 2 2 ("end") [1 uses])
.. to function end


Set 2: After cse_main
(note 9 6 11 0 [bb 0] NOTE_INSN_BASIC_BLOCK)

(insn 11 9 12 0 (set (reg:SI 1 $c1)
   (const_int 0 [0x0])) 43 {*movsi} (nil)
   (nil))

(call_insn 12 11 13 0 (parallel [
   (set (reg:SI 1 $c1)
   (call (mem:SI (symbol_ref:SI ("gen_T") [flags 0x41]
) [0 S4 A32])
   (const_int 0 [0x0])))
   (use (const_int 0 [0x0]))
   (clobber (reg:SI 31 $link))
   ]) 39 {*call_value_direct} (nil)
   (nil)
   (expr_list:REG_DEP_TRUE (use (reg:SI 1 $c1))
   (nil)))

(insn 13 12 15 0 (set (reg:SI 134 [ D.1214 ])
   (reg:SI 1 $c1)) 43 {*movsi} (nil)
   (nil))

(insn 15 13 16 0 (set (reg:SI 133 [ D.1216 ])
   (ior:SI (reg:SI 134 [ D.1214 ])
   (const_int 1 [0x1]))) 64 {iorsi3} (nil)
   (nil))

(insn 16 15 17 0 (set (reg/f:SI 136)
   (symbol_ref:SI ("a") [flags 0x2] )) 43
{*movsi} (nil)
   (nil))

(insn 17 16 19 0 (set (mem/c/i:SI (reg/f:SI 136) [2 a+0 S4 A32])
   (reg:SI 133 [ D.1216 ])) 43 {*movsi} (nil)
   (nil))

<< Expansion of the If condition >>

(note 23 21 25 1 [bb 1] NOTE_INSN_BASIC_BLOCK)

(insn 25 23 26 1 (set (reg/f:SI 139)
   (reg/f:SI 136)) 43 {*movsi} (nil)
   (expr_list:REG_EQUAL (symbol_ref:SI ("a") [flags 0x2] )
   (nil)))

(insn 26 25 27 1 (set (reg:SI 140)
   (ior:SI (reg:SI 134 [ D.1214 ])
   (const_int 1 [0x1]))) 64 {iorsi3} (nil)
   (nil))

(insn 27 26 29 1 (set (mem/c/i:SI (reg/f:SI 136) [2 a+0 S4 A32])
   (reg:SI 133 [ D.1216 ])) 43 {*movsi} (nil)
   (nil))

(code_label 29 27 30 2 2 ("end") [1 uses])
.. to function end

Therefore, as I see it, cse_main has followed these steps:
1) it found that the source of the set in insn 26 is equivalent to the
source of the set in insn 15 and replaced the source in insn 26 with
that from insn 15;
2) it found the lhs of insn 15 to be equal to that of insn 26 and stored
that instead in insn 27, thus making the result of insn 26 (reg 140)
unused ever again (and insn 26 subsequently gets deleted).

However, for the case of char or short, zero/sign extends are
generated after the 'ior' operations, and as a result the source of the
second 'ior' is not equal to the source of the first 'ior'.
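
A reduced source consistent with the dumps above might look like this
(the guard condition and the declarations are guesses, not the
original testcase from earlier in the thread):

  extern int gen_T (void);

  int a;                    /* with 'int', CSE reuses the first result */

  void
  f (int cond)
  {
    a = gen_T () | 1;       /* insn 15: reg = call result | 1 */
    if (cond)
      a = a | 1;            /* insn 26: same '| 1'; CSE rewrites insn 27
                               to store the first result instead */
  }

  /* Declaring 'a' as char or short inserts zero/sign extensions after
     each 'ior', so the two sources no longer look equal to CSE.  */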


Re: CSE not combining equivalent expressions.

2007-01-17 Thread pranav bhandarkar

On 1/17/07, Mircea Namolaru <[EMAIL PROTECTED]> wrote:

> Thanks. Another question I have is that, in this case, will the
following
>
> http://gcc.gnu.org/wiki/Sign_Extension_Removal
>
> help in removal of the sign / zero extension ?

First, it seems to me that in your case:

(1) a = a | 1 /* a |= 1 */
(2) a = a | 1 /* a |= 1 */

the expressions "a | 1" in (1) and (2) are different as the "a"
is not the same. So there is nothing to do for CSE.

If the architecture has an instruction that does both the
store and the zero extension, the zero extension instructions
become redundant.

The sign extension algorithm is supposed to catch such cases, but
I suspect that in this simple case the regular combine is enough.

Mircea


Thanks for the info. I went through the documentation you provided
in see.c, which I must add is very comprehensive indeed, and realised
that we need an instruction that does a zero extend before a store so
that the extension instructions become redundant and can be
removed.
Thank you,
Pranav