Re: Can't build gcc version 4.2.0 20060311 (experimental) on sparc/sparc64 linux

2006-03-24 Thread Christian Joensson
On 3/24/06, David S. Miller <[EMAIL PROTECTED]> wrote:
> From: Eric Botcazou <[EMAIL PROTECTED]>
> Date: Fri, 24 Mar 2006 00:04:41 +0100
>
> > Or maybe ELF 64-bit MSB since I'm seeing -m64 on the command line?
>
> He might have shown the wrong command line, since this
> library is multilibbed.

Ehrm, well, I don't know for sure anymore; I've started from scratch
with 20060318 instead, and strictly on sparc-linux, not using the
multilibbed variant...

> > In any cases, do not drag me into this mess, please, I've already
> > said what I think about this 32-bit sparc64-*-* compiler. :-)
>
> There is no fundamental reason why it shouldn't work correctly.
> I think this configuration choice is very reasonable from
> a distribution makers' viewpoint, so we should not discount
> it so readily. :-)

Right...

--
Cheers,

/ChJ


Re: for getting millisecond resolution in profiling with -pg option

2006-03-24 Thread Mike Stump

On Mar 18, 2006, at 6:47 AM, jayaraj wrote:

I want to profile an application on Linux. I used the -pg option and
profiled the data with gprof. Here I am getting the resolution in
seconds only, but I want it in terms of milliseconds and microseconds.
Can anybody help me, or suggest any other options and tools available?


Wrong list, please use gcc-help instead.

http://en.wikipedia.org/wiki/RDTSC

Do it twice, once before and once after, then subtract.  It will give
it to you in clock cycles; clock cycles are best.  You can also use
gettimeofday.
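
A minimal standalone sketch (not part of the original reply; the inline-asm
TSC reader assumes an x86 target) that times an interval both in cycles and,
via gettimeofday, in microseconds:

/* Time a placeholder loop in TSC cycles and in microseconds.  */
#include <stdio.h>
#include <sys/time.h>

static inline unsigned long long rdtsc (void)
{
  unsigned int lo, hi;
  __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
  return ((unsigned long long) hi << 32) | lo;
}

int main (void)
{
  struct timeval t0, t1;
  unsigned long long c0, c1;
  volatile double x = 0.0;
  int i;

  gettimeofday (&t0, NULL);
  c0 = rdtsc ();

  for (i = 0; i < 1000000; i++)   /* the code you want to measure */
    x += i * 0.5;

  c1 = rdtsc ();
  gettimeofday (&t1, NULL);

  printf ("cycles: %llu\n", c1 - c0);
  printf ("usec:   %ld\n",
          (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec));
  return 0;
}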


Building glibc 4.0.2 with gcc 3.2.2

2006-03-24 Thread Piyush Garyali
Hi,

Building glibc 4.0.2 with gcc 3.2.2 on a Red Hat machine causes a
compile error about CFI assembler support.  Any ideas would be
welcome.

Cheers
Piyush


Migration of mangled names

2006-03-24 Thread Piyush Garyali
Is there any solution to fix the troubles which result from the change
to the name mangling algorithm between 3.2 and 2.95 ?

thanks
Piyush

--
My blog http://verypondycherry.blogspot.com


Re: Migration of mangled names

2006-03-24 Thread Piyush Garyali
I meant other than recompiling the code, of course. I have some
binaries without the source code. Does 3.2 support the old mangling
algorithm?

thanks
Piyush

--
My blog http://verypondycherry.blogspot.com

On 3/24/06, Piyush Garyali <[EMAIL PROTECTED]> wrote:
> Is there any solution to fix the troubles which result from the change
> to the name mangling algorithm between 3.2 and 2.95 ?
>
> thanks
> Piyush
>
> --
> My blog http://verypondycherry.blogspot.com
>


Forward port darwin/i386 support 4.0 => 4.2 problems

2006-03-24 Thread Sandro Tolaini
I have successfully forward ported libffi changes from 4.0 to 4.2,  
but I have some problems running the testsuite. Here are the results:


# of expected passes            1052
# of unexpected failures        8
# of unsupported tests          8

Looking at the failures, I have:

FAIL: libffi.call/return_fl2.c -O0 -W -Wall execution test
FAIL: libffi.call/return_fl2.c -O2 execution test
FAIL: libffi.call/return_fl2.c -O3 execution test
FAIL: libffi.call/return_fl2.c -Os execution test

These all fail for a dummy FP rounding problem: the output is always  
"1022.800049 vs 1022.800018", so I'm leaving these alone (should the  
tests be fixed somehow?).


The other 4 failures are here:

FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test
FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test
FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test
FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test


These all fail because of this:

dyld: Symbol not found: ___dso_handle
  Referenced from: /Users/sandro/t/i386-apple-darwin8.5.2/./libstdc++-v3/src/.libs/libstdc++.6.dylib

  Expected in: flat namespace

The libjava testsuite has a similar problem when test programs are
being run:


dyld: Symbol not found: _GC_gcj_debug_kind
  Referenced from: /Users/sandro/t/i386-apple-darwin8.5.2/./libjava/.libs/libgcj.7.dylib

  Expected in: flat namespace

I'm attaching the patch against a current svn checkout.

Does anyone have ideas on how to fix these problems?

Cheers,
  Sandro



gcc42-libffi.diff
Description: Binary data


gcc42-libjava.diff
Description: Binary data


gcc42-top.diff
Description: Binary data


Re: Working with c

2006-03-24 Thread Mike Stump

On Mar 23, 2006, at 2:02 PM, Laura Tardivo wrote:
Hello, my name is Laura and I need to know where I could find or
download the oldest version of the "C" compiler. I look forward to
hearing from you.


You can find what we have in svn; see our web site.  -r1 would be the
oldest bits we have, though they aren't going to be very complete.  As
you get into higher numbers, the completeness fills out.


Re: Migration of mangled names

2006-03-24 Thread Mike Stump

On Mar 24, 2006, at 1:49 AM, Piyush Garyali wrote:

I meant other than recompiling the code, of course. I have some
binaries without the source code. Does 3.2 support the old mangling
algorithm?


Sure, just re-implement the 2.95 ABI in 3.2 if you think that is
better than re-implementing code that you don't have source for.  No,
3.2 doesn't support the 2.95 ABI.  Mangling is just one tiny part of it.


Re: Forward port darwin/i386 support 4.0 => 4.2 problems

2006-03-24 Thread Sandro Tolaini


On 24/mar/2006, at 10:50, Sandro Tolaini wrote:


I'm attaching the patch against a current svn checkout.


Forgot to say that boehm-gc needs to be upgraded to 6.7. I've not
included patches for this; simply import it from the original
distribution.


Cheers,
  Sandro



smime.p7s
Description: S/MIME cryptographic signature


failed gcc-4.0.3-1(Debian) bootstrap with ARM VFP and binutils-2.16.1cvs20060117-1

2006-03-24 Thread peter.kourzanov
Dear gcc/binutils maintainers,

  During the bootstrap of gcc-4.0 (4.0.3-1 Debian) on ARM
with VFP (--with-float=soft, --with-fpu=vfp) and binutils
2.16.1cvs20060117-1.my, I stumbled upon the following issue: the
linking of libgcc_s.so.1 fails with multiple errors such as:

"ld: *_s.o uses VFP instructions, whereas ./libgcc_s.so.1.tmp does not"

(Complete log in attachment).

  I found this message quite strange, as libgcc_s.so.1.tmp was the
output of the linker, so it should have been created with the same
modes as the objects it should contain.

  Moreover, -msoft-float and -mfpu=vfp were explicitly passed to
xgcc. However, looking at the log I see that collect2 does not get
the -msoft-float and -mfpu options. Could this be a problem? I tried
to modify build/gcc/specs, to no avail...

  All of this makes me suspect some deep shortcoming in either ld,
or its interface to collect2...

Regards,

Pjotr Kourzanov

Reading specs from /usr/src/Debian/gcc-4.0-4.0.3-1/build/gcc/specs
Target: arm-vfp-linux-gnu
Configured with: ../src/configure -v --enable-languages=c,c++ --prefix=/usr 
--enable-shared --with-system-zlib --libexecdir=/usr/lib 
--without-included-gettext --enable-threads=posix --enable-nls 
--with-gxx-include-dir=/usr/arm-vfp-linux-gnu/include/c++/4.0.3 
--program-suffix=-4.0 --enable-__cxa_atexit --enable-clocale=gnu 
--enable-libstdcxx-debug --with-float=soft --with-fpu=vfp 
--enable-checking=release --program-prefix=arm-vfp-linux-gnu- 
--includedir=/usr/arm-vfp-linux-gnu/include --build=i486-linux-gnu 
--host=i486-linux-gnu --target=arm-vfp-linux-gnu
Thread model: posix
gcc version 4.0.3 (Debian 4.0.3-1)
 /usr/src/Debian/gcc-4.0-4.0.3-1/build/gcc/collect2 --eh-frame-hdr -shared 
-dynamic-linker /lib/ld-linux.so.2 -X -m armelf_linux -p -o ./libgcc_s.so.1.tmp 
/usr/arm-vfp-linux-gnu/lib/crti.o 
/usr/src/Debian/gcc-4.0-4.0.3-1/build/gcc/crtbeginS.o 
-L/usr/src/Debian/gcc-4.0-4.0.3-1/build/gcc -L/usr/arm-vfp-linux-gnu/bin 
-L/usr/arm-vfp-linux-gnu/lib -L/usr/lib/gcc/../../arm-vfp-linux-gnu/lib 
--soname=libgcc_s.so.1 --version-script=libgcc/./libgcc.map -O1 
libgcc/./_udivsi3_s.o libgcc/./_divsi3_s.o libgcc/./_umodsi3_s.o 
libgcc/./_modsi3_s.o libgcc/./_dvmd_lnx_s.o libgcc/./_muldi3_s.o 
libgcc/./_negdi2_s.o libgcc/./_lshrdi3_s.o libgcc/./_ashldi3_s.o 
libgcc/./_ashrdi3_s.o libgcc/./_cmpdi2_s.o libgcc/./_ucmpdi2_s.o 
libgcc/./_floatdidf_s.o libgcc/./_floatdisf_s.o libgcc/./_fixunsdfsi_s.o 
libgcc/./_fixunssfsi_s.o libgcc/./_fixunsdfdi_s.o libgcc/./_fixdfdi_s.o 
libgcc/./_fixunssfdi_s.o libgcc/./_fixsfdi_s.o libgcc/./_fixxfdi_s.o 
libgcc/./_fixunsxfdi_s.o libgcc/./_floatdixf_s.o libgcc/./_fixunsxfsi_s.o 
libgcc/./_fixtfdi_s.o libgcc/./_fixunstfdi_s.o libgcc/./_floatditf_s.o 
libgcc/./_clear_cache_s.o libgcc/./_enable_execute_stack_s.o 
libgcc/./_trampoline_s.o libgcc/./__main_s.o libgcc/./_absvsi2_s.o 
libgcc/./_absvdi2_s.o libgcc/./_addvsi3_s.o libgcc/./_addvdi3_s.o 
libgcc/./_subvsi3_s.o libgcc/./_subvdi3_s.o libgcc/./_mulvsi3_s.o 
libgcc/./_mulvdi3_s.o libgcc/./_negvsi2_s.o libgcc/./_negvdi2_s.o 
libgcc/./_ctors_s.o libgcc/./_ffssi2_s.o libgcc/./_ffsdi2_s.o libgcc/./_clz_s.o 
libgcc/./_clzsi2_s.o libgcc/./_clzdi2_s.o libgcc/./_ctzsi2_s.o 
libgcc/./_ctzdi2_s.o libgcc/./_popcount_tab_s.o libgcc/./_popcountsi2_s.o 
libgcc/./_popcountdi2_s.o libgcc/./_paritysi2_s.o libgcc/./_paritydi2_s.o 
libgcc/./_powisf2_s.o libgcc/./_powidf2_s.o libgcc/./_powixf2_s.o 
libgcc/./_powitf2_s.o libgcc/./_mulsc3_s.o libgcc/./_muldc3_s.o 
libgcc/./_mulxc3_s.o libgcc/./_multc3_s.o libgcc/./_divsc3_s.o 
libgcc/./_divdc3_s.o libgcc/./_divxc3_s.o libgcc/./_divtc3_s.o 
libgcc/./_divdi3_s.o libgcc/./_moddi3_s.o libgcc/./_udivdi3_s.o 
libgcc/./_umoddi3_s.o libgcc/./_udiv_w_sdiv_s.o libgcc/./_udivmoddi4_s.o 
libgcc/./unwind-dw2_s.o libgcc/./unwind-dw2-fde-glibc_s.o 
libgcc/./unwind-sjlj_s.o libgcc/./gthr-gnat_s.o libgcc/./unwind-c_s.o -lc 
/usr/src/Debian/gcc-4.0-4.0.3-1/build/gcc/crtendS.o 
/usr/arm-vfp-linux-gnu/lib/crtn.o
/usr/arm-vfp-linux-gnu/bin/ld: ERROR: 
/usr/src/Debian/gcc-4.0-4.0.3-1/build/gcc/crtbeginS.o uses VFP instructions, 
whereas ./libgcc_s.so.1.tmp does not
/usr/arm-vfp-linux-gnu/bin/ld: failed to merge target specific data of file 
/usr/src/Debian/gcc-4.0-4.0.3-1/build/gcc/crtbeginS.o
/usr/arm-vfp-linux-gnu/bin/ld: ERROR: libgcc/./_udivsi3_s.o uses VFP 
instructions, whereas ./libgcc_s.so.1.tmp does not
/usr/arm-vfp-linux-gnu/bin/ld: failed to merge target specific data of file 
libgcc/./_udivsi3_s.o
/usr/arm-vfp-linux-gnu/bin/ld: ERROR: libgcc/./_divsi3_s.o uses VFP 
instructions, whereas ./libgcc_s.so.1.tmp does not
/usr/arm-vfp-linux-gnu/bin/ld: failed to merge target specific data of file 
libgcc/./_divsi3_s.o
/usr/arm-vfp-linux-gnu/bin/ld: ERROR: libgcc/./_umodsi3_s.o uses VFP 
instructions, whereas ./libgcc_s.so.1.tmp does not
/usr/arm-vfp-linux-gnu/bin/ld: failed to merge target specific data of file 
libgcc/./_umodsi3_s.o
/usr/arm-vfp-linux-gnu/bin/ld: ERROR: libgcc/./_modsi3_s.o uses VFP 
instruction

Copyright assignment for GCC Hello World

2006-03-24 Thread Gustavo Sverzut Barbieri
Hello,

I was one of the developers of the GCC Hello World, together with
Rafael Espíndola, and I want to assign my copyright to GNU so it can be
included in GCC trunk.

What do I have to do?

Thanks,

--
Gustavo Sverzut Barbieri
--
Jabber: [EMAIL PROTECTED]
   MSN: [EMAIL PROTECTED]
  ICQ#: 17249123
  Skype: gsbarbieri
Mobile: +55 (81) 9927 0010
 Phone:  +1 (347) 624 6296; [EMAIL PROTECTED]
   GPG: 0xB640E1A2 @ wwwkeys.pgp.net


Re: Forward port darwin/i386 support 4.0 => 4.2 problems

2006-03-24 Thread Andrew Pinski


On Mar 24, 2006, at 4:50 AM, Sandro Tolaini wrote:

I have successfully forward ported libffi changes from 4.0 to 4.2,  
but I have some problems running the testsuite. Here are the results:


# of expected passes            1052
# of unexpected failures        8
# of unsupported tests          8

Looking at the failures, I have:

FAIL: libffi.call/return_fl2.c -O0 -W -Wall execution test
FAIL: libffi.call/return_fl2.c -O2 execution test
FAIL: libffi.call/return_fl2.c -O3 execution test
FAIL: libffi.call/return_fl2.c -Os execution test

These all fail for a dummy FP rounding problem: the output is  
always "1022.800049 vs 1022.800018", so I'm leaving these alone  
(should the tests be fixed somehow?).


These also fail on x86-linux-gnu (and x86_64-linux-gnu in 32bit mode).



The other 4 failures are here:

FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test
FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test
FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test
FAIL: libffi.special/unwindtest.cc  -shared-libgcc -lstdc++ execution test


These all fail because of this:

dyld: Symbol not found: ___dso_handle
  Referenced from: /Users/sandro/t/i386-apple-darwin8.5.2/./libstdc++-v3/src/.libs/libstdc++.6.dylib

  Expected in: flat namespace


This looks like you did not update your cctools to the newest one that
was posted, which includes this symbol.

-- Pinski


Type conversion and addition

2006-03-24 Thread Eric Botcazou
Hi,

We have a problem with Kazu's following patch:

2005-12-26  Kazu Hirata  <[EMAIL PROTECTED]>

PR tree-optimization/25125
* convert.c (convert_to_integer): Don't narrow the type of a
PLUS_EXPR or MINUS_EXPR if !flag_wrapv and the unwidened type
is signed.

that introduced regressions in Ada (for example PR middle-end/26635).

Note that the ChangeLog entry is misleading, it should have been

PR tree-optimization/25125
* convert.c (convert_to_integer): Use an intermediate unsigned
type to narrow the type of a PLUS_EXPR or MINUS_EXPR if !flag_wrapv
and the unwidened type is signed.

The change was made in order not to "introduce signed-overflow undefinedness".


The typical expression is (int)((long int)j + o), where j is an int, o is a
long int, and int and long int don't have the same size.  Before Kazu's
change, it was simplified to j + (int)o.  Now take j=1 and o=INT_MIN-1.
There is indeed no signed-overflow undefinedness in the first expression,
while there is in the second when INT_MIN-1 is cast to int.

After the change it is simplified to (int)((unsigned int)j + (unsigned int)o).
Now take j=0 and o=-1.  There is signed-overflow undefinedness neither in the 
original expression nor in the original simplified expression while there is 
in the new simplified expression when UINT_MAX is cast to int.

So we have traded signed-overflow undefinedness in relatively rare cases for
signed-overflow undefinedness in much more frequent cases, which hurts Ada in
particular quite a lot.
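
For concreteness, a small standalone source-level illustration (not part of
the original message; it assumes a target where int is 32 bits and long int
is 64 bits, such as the AMD64 target mentioned below) of where the
out-of-range conversions show up in each simplification:

#include <limits.h>
#include <stdio.h>

int main (void)
{
  /* First pair of values: j = 1, o = (long int) INT_MIN - 1.  */
  int j = 1;
  long int o = (long int) INT_MIN - 1;
  long int orig = (long int) j + o;      /* original form: fits in long int */
  /* Old simplification j + (int)o: here (int)o is out of int's range,
     so the simplified form is not equivalent to the original.  */

  /* Second pair of values: j = 0, o = -1.  */
  int j2 = 0;
  long int o2 = -1;
  long int orig2 = (long int) j2 + o2;   /* original form: simply -1 */
  unsigned int u = (unsigned int) j2 + (unsigned int) o2;  /* == UINT_MAX */
  int narrowed = (int) u;                /* out-of-range conversion; GCC
                                            reduces it modulo 2^32, giving -1 */

  printf ("%ld %ld %d\n", orig, orig2, narrowed);
  return 0;
}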


Now, the change installed by Kazu was not his original proposal either; the
latter was to really disable the transformation, hence the misleading
ChangeLog entry.  So you'd assume that reverting to the original proposal
would be the way to go.

Then GOMP comes into play: if you disable the simplification, the following 
testcase (extracted from gcc.dg/gomp/for-17.c) fails to compile on AMD64:

void
foo (void)
{
  int j, k = 1, l = 30;
  long int o = 4;

#pragma omp for
  for (j = k; j <= l; j = j + o)
    ;
}

[EMAIL PROTECTED]:~/build/gcc/native> gcc/xgcc -Bgcc for-17.c -fopenmp
for-17.c: In function 'foo':
for-17.c:11: error: invalid increment expression

The GOMP parser chokes on the CONVERT_EXPR that has not been simplified.


How do we get away from that?  By enhancing the GOMP parser to handle 
CONVERT_EXPR like NOP_EXPR (in check_omp_for_incr_expr)?  Would other parts 
of the compiler not be silently affected by similar problems that could 
disable optimizations?  By building a NOP_EXPR in convert.c instead of a 
CONVERT_EXPR for the long int to int conversion?

Thanks in advance.

-- 
Eric Botcazou


Re: Type conversion and addition

2006-03-24 Thread Joseph S. Myers
On Fri, 24 Mar 2006, Eric Botcazou wrote:

> After the change it is simplified to (int)((unsigned int)j + (unsigned int)o).
> Now take j=0 and o=-1.  There is signed-overflow undefinedness neither in the 
> original expression nor in the original simplified expression while there is 
> in the new simplified expression when UINT_MAX is cast to int.

Conversion of unsigned to int does not involve undefined behavior; it 
involves implementation-defined behavior, which GCC defines in 
implement-c.texi:

@item
@cite{The result of, or the signal raised by, converting an integer to a
signed integer type when the value cannot be represented in an object of
that type (C90 6.2.1.2, C99 6.3.1.3).}

For conversion to a type of width @math{N}, the value is reduced
modulo @math{2^N} to be within range of the type; no signal is raised.

This is of course a definition of C semantics rather than tree semantics, 
but I believe our trees follow the C semantics here.
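
As a concrete illustration of the reduction-modulo-2^N rule quoted above
(an added example, not part of the original message; it assumes a 32-bit
int):

#include <limits.h>
#include <stdio.h>

int main (void)
{
  unsigned int u = UINT_MAX;   /* 4294967295 with a 32-bit int */
  int i = (int) u;             /* value out of int's range: GCC reduces it
                                  modulo 2^32, so i becomes -1 */
  printf ("%d\n", i);          /* prints -1; no signal is raised */
  return 0;
}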

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: Type conversion and addition

2006-03-24 Thread Eric Botcazou
> This is of course a definition of C semantics rather than tree semantics,
> but I believe our trees follow the C semantics here.

Not quite: TREE_OVERFLOW is set on the result, and the C front end has
explicit code to unset it:

build_c_cast:
  /* Ignore any integer overflow caused by the cast.  */
  if (TREE_CODE (value) == INTEGER_CST)
    {
      if (CONSTANT_CLASS_P (ovalue)
          && (TREE_OVERFLOW (ovalue) || TREE_CONSTANT_OVERFLOW (ovalue)))
        {
          /* Avoid clobbering a shared constant.  */
          value = copy_node (value);
          TREE_OVERFLOW (value) = TREE_OVERFLOW (ovalue);
          TREE_CONSTANT_OVERFLOW (value) = TREE_CONSTANT_OVERFLOW (ovalue);
        }
      else if (TREE_OVERFLOW (value) || TREE_CONSTANT_OVERFLOW (value))
        /* Reset VALUE's overflow flags, ensuring constant sharing.  */
        value = build_int_cst_wide (TREE_TYPE (value),
                                    TREE_INT_CST_LOW (value),
                                    TREE_INT_CST_HIGH (value));
    }

void foo ()
{
  int x = (int) (unsigned int) (int) (-1);
}

-- 
Eric Botcazou


Re: Ada subtypes and base types

2006-03-24 Thread Duncan Sands
On Tuesday 21 March 2006 21:59, Jeffrey A Law wrote:
> On Tue, 2006-03-21 at 10:14 +0100, Duncan Sands wrote:
> 
> > Hi Jeff, on the subject of seeing through typecasts, I was playing around
> > with VRP and noticed that the following "if" statement is not eliminated:
> > 
> > int u (unsigned char c) {
> > int i = c;
> > 
> > if (i < 0 || i > 255)
> > return -1; /* never taken */
> > else
> > return 0;
> > }
> > 
> > Is it supposed to be?
> Fixed thusly.  Bootstrapped and regression tested on i686-pc-linux-gnu.

Hi Jeff, while your patch catches many cases, the logic seems a bit wonky
for types with TYPE_MIN/TYPE_MAX different to those that can be deduced
from TYPE_PRECISION.  For example, there is nothing stopping inner_type
having a bigger TYPE_PRECISION than outer_type, but a smaller
[TYPE_MIN,TYPE_MAX] range.  For example, outer_type could be a byte with
range 0 .. 255, and inner_type could be an integer with range 10 .. 20.
I think this logic:

!   if (vr0.type == VR_RANGE
! || (vr0.type == VR_VARYING
! && TYPE_PRECISION (outer_type) > TYPE_PRECISION (inner_type)))
!   {
! tree new_min, new_max, orig_min, orig_max;

should really test whether
  TYPE_MAX (inner_type) < TYPE_MAX (outer_type) || TYPE_MIN (inner_type) > TYPE_MIN (outer_type)
and take the appropriate action if so.

By the way, I hacked tree-vrp to start all value ranges for INTEGRAL_TYPE_P
variables to [TYPE_MIN, TYPE_MAX].  It certainly helps with eliminating many
Ada range checks.  Maybe the compiler will even bootstrap :)
This approach has advantages, even if TYPE_MIN/MAX are those given by
TYPE_PRECISION.  For example, after incrementing a variable you automatically
get the range [TYPE_MIN+1,INF].  The approach also has disadvantages: for
example, the notion of an "interesting" range has to be adjusted, since now
VR_RANGE can be pretty uninteresting; and how interesting is [TYPE_MIN+1,INF]
in practice?  Also, there's likely to be extra overhead, due to pointless
computations and comparisons on [TYPE_MIN, TYPE_MAX] ranges, especially if
they are symbolic.  Furthermore, in the case of symbolic TYPE_MIN and/or 
TYPE_MAX,
it may be possible to deduce better ranges using the max/min range given by
TYPE_PRECISION, since these are compile time constants, rather than using
[TYPE_MIN,TYPE_MAX].
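
A rough source-level sketch of the point about incrementing (an added
illustration, not from the original mail): if every variable starts out
with the range [TYPE_MIN, TYPE_MAX] rather than VARYING, the increment
below already yields a usable range:

#include <limits.h>

/* With signed overflow undefined, x + 1 cannot produce INT_MIN, so after
   the increment x has the range [INT_MIN + 1, +INF] and a range-aware
   pass could fold the comparison to 0.  */
int after_increment (int x)
{
  x = x + 1;             /* range becomes [INT_MIN + 1, +INF] */
  return x == INT_MIN;   /* provably false when overflow is undefined */
}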

As a final thought, it struck me that it might make sense to associate the
range [TYPE_MIN, TYPE_MAX] with the type (rather than with variables of the
type), and extend the notion of equivalence so that a variable's range can be
equivalent to the range of its type.  This is probably nonsensical, but
I can't tell given my current minuscule understanding of VRP ;)

All the best,

Duncan.


Re: Type conversion and addition

2006-03-24 Thread Joseph S. Myers
On Fri, 24 Mar 2006, Eric Botcazou wrote:

> > This is of course a definition of C semantics rather than tree semantics,
> > but I believe our trees follow the C semantics here.
> 
> Not quite, TREE_OVERFLOW is set on the result.  And the C front-end has 
> explicit code to unset it:

Setting TREE_OVERFLOW here sounds like the bug.  Or, if some front ends 
require it for some test cases then it should be made language-specific 
whether conversions of constants set TREE_OVERFLOW in this case.  (For C, 
gcc.dg/overflow-warn-*.c should adequately cover the diagnostics in this 
area.)  In the long run, TREE_OVERFLOW should go away and fold should 
provide front ends with information about the sorts of overflow happening 
so front ends can track their own information about what counts as 
overflow in each language.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: Ada subtypes and base types

2006-03-24 Thread Jeffrey A Law
On Fri, 2006-03-24 at 18:50 +0100, Duncan Sands wrote:

> Hi Jeff, while your patch catches many cases, the logic seems a bit wonky
> for types with TYPE_MIN/TYPE_MAX different to those that can be deduced
> from TYPE_PRECISION.  For example, there is nothing stopping inner_type
> having a bigger TYPE_PRECISION than outer_type, but a smaller
> [TYPE_MIN,TYPE_MAX] range.  For example, outer_type could be a byte with
> range 0 .. 255, and inner_type could be an integer with range 10 .. 20.
> I think this logic:
I really wouldn't expect that to happen terribly often.  If you think
it's worth handling, then feel free to cobble some code together and
submit it.  It shouldn't be terribly complex.


> By the way, I hacked tree-vrp to start all value ranges for INTEGRAL_TYPE_P
> variables to [TYPE_MIN, TYPE_MAX].  It certainly helps with eliminating many
> Ada range checks.  Maybe the compiler will even bootstrap :)
The thing to check will be compile-time performance -- in general
with propagators such as VRP, CCP and CPROP it's compile-time
advantageous to drop to a VARYING state in the lattice as quickly
as possible.  The flip side is that you can sometimes miss transformations
in cases where you can use a VARYING object in an expression
and still get a useful value/range.

Basically the two extremes we need to look at are:

  1. Give everything a range, even if it's just TYPE_MIN/TYPE_MAX.  In
 this case VR_VARYING disappears completely as it's meaningless.

  2. Anytime we're going to record a range TYPE_MIN/TYPE_MAX, drop to
 VARYING.  The trick then becomes to find all those cases where we
 have an expression involving a VARYING which produces a useful
 range (such as widening typecasts, IOR with nonzero, etc etc).


Jeff



Re: Migration of mangled names

2006-03-24 Thread Joe Buck
On Fri, Mar 24, 2006 at 03:15:58PM +0530, Piyush Garyali wrote:
> Is there any solution to fix the troubles which result from the change
> to the name mangling algorithm between 3.2 and 2.95 ?

The troubles do not result from the name mangling algorithm.  If anything,
the incompatible name mangling protects users, by making clear that
the code is incompatible.

The layout of C++ objects changed in a major way between 2.95 and 3.x.
If the name mangling had not changed, people would link code that would
then crash.


gcc-4.1-20060324 is now available

2006-03-24 Thread gccadmin
Snapshot gcc-4.1-20060324 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.1-20060324/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.1 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-4_1-branch 
revision 112362

You'll find:

gcc-4.1-20060324.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.1-20060324.tar.bz2 C front end and core compiler

gcc-ada-4.1-20060324.tar.bz2  Ada front end and runtime

gcc-fortran-4.1-20060324.tar.bz2  Fortran front end and runtime

gcc-g++-4.1-20060324.tar.bz2  C++ front end and runtime

gcc-java-4.1-20060324.tar.bz2 Java front end and runtime

gcc-objc-4.1-20060324.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.1-20060324.tar.bz2  The GCC testsuite

Diffs from 4.1-20060317 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.1
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


the loss of SET_TYPE

2006-03-24 Thread Gaius Mulley

Hi,

I'm trying to bring the GNU Modula-2 front end up to gcc-4.1.0
and have found out that SET_TYPE has been removed :-)

I see the patch from Dec 2004:

http://gcc.gnu.org/ml/gcc-patches/2004-12/msg00670.html

and understand the reason (no front ends in the gcc tree use SET_TYPE
etc.). I suppose I'd just like to flag that Modula-2 utilises this type
during the generation of BITSET types and also large sets (built
internally from multiple int-length SET_TYPEs). But I totally
understand it might be difficult to justify code which is never run
(and difficult to test) in the main gcc tree...

Pragmatically I guess it is best for me to maintain a reversed patch
which can be applied to a gcc-4.1.0 tar ball which reintroduces this
TYPE. Any thoughts?

Many thanks,
Gaius


Re: the loss of SET_TYPE

2006-03-24 Thread Eric Botcazou
> Pragmatically I guess it is best for me to maintain a reversed patch
> which can be applied to a gcc-4.1.0 tar ball which reintroduces this
> TYPE. Any thoughts?

Integrating the Modula-2 front-end?

-- 
Eric Botcazou


categorize_ctor_elements/nc_elts vs initializer_constant_valid_p

2006-03-24 Thread Olivier Hainque
Hello all,

From a call like

categorize_ctor_elements (ctor, &num_nonzero_elements,
  &num_nonconstant_elements,

is 'num_nonconstant_elements == 0' expected to convey the same as
initializer_constant_valid_p (ctor, TREE_TYPE (ctor))?

The code in gimplify_init_constructor apparently assumes so, and this is
currently not true for a number of cases exposed in Ada.

This results in the promotion/copy of !valid_p constructors into
static storage, which triggers spurious 'invalid initial value for
member' errors out of output_constructor.


Thanks in advance for your help,

Olivier