Dead link http://gcc.gnu.org/install/build.html on http://gcc.gnu.org/install/

2006-02-21 Thread Thomas Boehne
Hi gcc-developers, 

I was trying to look up some information about the installation of gcc
on http://gcc.gnu.org/install/ and found out that the "Building" link
(http://gcc.gnu.org/install/build.html) was dead. I checked out
wwwdocs from CVS (as suggested) but I could not find the appropriate
files anywhere... Am I missing something in the CVS tree, or did the
website source move to the SVN?

Cheers,
Thomas

-- 
Jäger Computergesteuerte Messtechnik GmbH
Thomas Böhne
Rheinstraße 2-4
64653 Lorsch
Germany
Phone: +49-6251-9632-0





do -fprofile-arcs and -fbranch-probabilities help to set bb->count?

2006-02-21 Thread Liu Haibin
Hi,

I wanted to use bb->count, so I expected that -fprofile-arcs and
-fbranch-probabilities would help. I added printf just before
peephole2 optimization and ran the following.

$gcc -O3 -fprofile-arcs test.c -o test
$./test (which produced test.gcno only, but no test.gcda)
$gcc -O3 -fprofile-arcs -fbranch-probabilities test.c -o test

But it turned out that all the basic block counts were 0. Any idea how
I can see bb->count set?


Regards,
Haibin


Re: Dead link http://gcc.gnu.org/install/build.html on http://gcc.gnu.org/install/

2006-02-21 Thread Ben Elliston
Hi.

> I was trying to look up some information about the installation of
> gcc on http://gcc.gnu.org/install/ and found out that the "Building"
> link (http://gcc.gnu.org/install/build.html) was dead. I checked out
> wwwdocs from CVS (as suggested) but I could not find the appropriate
> files anywhere... Am I missing something in the CVS tree, or did the
> website source move to the SVN?

Good find, thanks.  For install/build.html, for instance:

revision 1.26
date: 2001/05/23 06:02:05;  author: gerald;  state: dead;  lines: +0 -0
Remove all install documentation in HTML format, as this now resides in
gcc/doc/install.texi.

Gerald removed these files for the reason shown above.  However I'm
not sure what we should do about it.  Perhaps provide a small page
explaining where to get installation docs.

Cheers, Ben


Re: GCC 4.1.0 RC1

2006-02-21 Thread Grigory Zagorodnev

Mark Mitchell wrote:

GCC 4.1.0 RC1 is here:

ftp://gcc.gnu.org/pub/gcc/prerelease-4.1.0-20060219

Please download, build, and test.  Please use these tarballs, rather
than the current SVN branch, so that we test packaging, and other
similar issues.

If you find problems, please do not send me email directly; instead,
file a bug in Bugzilla, and add me to the CC: list.

Enjoy!


Hi!
My spec cpu2000 run shows 252.eon miscompared with i686-redhat-linux 
4.1.0 20060221 (prerelease) compiler. Optimization level is -O2. Spec 
reported "miscompare of pixels_out.kajiya".


Has anybody seen this before?

- Grigory


Re: Dead link http://gcc.gnu.org/install/build.html on http://gcc.gnu.org/install/

2006-02-21 Thread Joseph S. Myers
On Tue, 21 Feb 2006, Ben Elliston wrote:

> Good find, thanks.  For install/build.html, for instance:
> 
> revision 1.26
> date: 2001/05/23 06:02:05;  author: gerald;  state: dead;  lines: +0 -0
> Remove all install documentation in HTML format, as this now resides in
> gcc/doc/install.texi.
> 
> Gerald removed these files for the reason shown above.  However I'm
> not sure what we should do about it.  Perhaps provide a small page
> explaining where to get installation docs.

This is not the relevant reference; the files should be generated by 
install.texi2html.  See, instead, 
:

/tmp/gcc-doc-update.4743/gcc/gcc/doc/install.texi:1724: Footnote defined 
without parent node.
makeinfo: Removing output file 
`/www/gcc/htdocs-preformatted/install/build.html' due to errors; use --force to 
preserve.

[...]
Removing obsolete file ./install/build.html

I suspect

r111295 | bonzini | 2006-02-20 08:29:17 + (Mon, 20 Feb 2006) | 59 lines

of being the responsible patch.

I've installed the following patch to ensure that install.texi2html halts 
with error status (and so update_web_docs_svn does so) when such an error 
occurs.

Index: doc/install.texi2html
===================================================================
--- doc/install.texi2html   (revision 111331)
+++ doc/install.texi2html   (working copy)
@@ -5,13 +5,15 @@
 # $SOURCEDIR and $DESTDIR, resp., refer to the directory containing
 # the texinfo source and the directory to put the HTML version in.
 #
-# (C) 2001 Free Software Foundation
+# (C) 2001, 2003, 2006 Free Software Foundation
 # Originally by Gerald Pfeifer <[EMAIL PROTECTED]>, June 2001.
 #
 # This script is Free Software, and it can be copied, distributed and
 # modified as defined in the GNU General Public License.  A copy of
 # its license can be downloaded from http://www.gnu.org/copyleft/gpl.html
 
+set -e
+
 SOURCEDIR=${SOURCEDIR-.}
 DESTDIR=${DESTDIR-HTML}
 
Index: ChangeLog
===================================================================
--- ChangeLog   (revision 111331)
+++ ChangeLog   (working copy)
@@ -1,3 +1,7 @@
+2006-02-21  Joseph S. Myers  <[EMAIL PROTECTED]>
+
+   * doc/install.texi2html: Use set -e.
+
 2006-02-21  Richard Sandiford  <[EMAIL PROTECTED]>
 
* doc/tm.texi (ASM_OUTPUT_SHARED_COMMON, ASM_OUTPUT_SHARED_BSS)

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: GCC 4.1.0 RC1

2006-02-21 Thread Andrew Pinski


On Feb 21, 2006, at 6:09 AM, Grigory Zagorodnev wrote:

Hi!
My spec cpu2000 run shows 252.eon miscompared with i686-redhat-linux 
4.1.0 20060221 (prerelease) compiler. Optimization level is -O2. Spec 
reported "miscompare of pixels_out.kajiya".


Has anybody seen this before?


Yes, this is PR 323, the usual floating point precision bug.

-- Pinski



Re: do -fprofile-arcs and -fbranch-probabilities help to set bb->count?

2006-02-21 Thread Paolo Bonzini

Liu Haibin wrote:

Hi,

I wanted to use bb->count, so I expected that -fprofile-arcs and
-fbranch-probabilities would help. I added printf just before
peephole2 optimization and ran the following.

$gcc -O3 -fprofile-arcs test.c -o test
$./test (which produced test.gcno only, but no test.gcda)
$gcc -O3 -fprofile-arcs -fbranch-probabilities test.c -o test


Easier to use -fprofile-generate and -fprofile-use:

gcc -O3 -fprofile-generate test.c -o test (produces test.gcno)
./test (now should have as well test.gcda)
gcc -O3 -fprofile-use test.c -o test

Paolo


Re: GCC 4.1.0 RC1

2006-02-21 Thread Paolo Bonzini



Hi!
My spec cpu2000 run shows 252.eon miscompared with i686-redhat-linux 
4.1.0 20060221 (prerelease) compiler. Optimization level is -O2. Spec 
reported "miscompare of pixels_out.kajiya".


Has anybody seen this before?


You should use -ffast-math for eon.

Paolo


SPEC cpu2000 ia64 tester

2006-02-21 Thread Michael Matz
Hi,

some people already noticed it seems, so this may be a little too late, 
but still.  We now have a nightly SPEC tester running which posts the 
results to

  http://www.suse.de/~gcctest/SPEC/CINT/sb-terbium-head-64/
  http://www.suse.de/~gcctest/SPEC/CFP/sb-terbium-head-64/

The machine is a double Itanium 2, 1.6 GHz with 4GB RAM.  Currently I'm 
using -O3 for base and -O3 -funroll-loops -fpeel-loops for peak.  These 
are non-FDO runs.


Ciao,
Michael.


Re: GCC 4.1.0 RC1

2006-02-21 Thread H. J. Lu
On Tue, Feb 21, 2006 at 02:09:43PM +0300, Grigory Zagorodnev wrote:
> Mark Mitchell wrote:
> >GCC 4.1.0 RC1 is here:
> >
> >ftp://gcc.gnu.org/pub/gcc/prerelease-4.1.0-20060219
> >
> >Please download, build, and test.  Please use these tarballs, rather
> >than the current SVN branch, so that we test packaging, and other
> >similar issues.
> >
> >If you find problems, please do not send me email directly; instead,
> >file a bug in Bugzilla, and add me to the CC: list.
> >
> >Enjoy!
> >
> Hi!
> My spec cpu2000 run shows 252.eon miscompared with i686-redhat-linux 
> 4.1.0 20060221 (prerelease) compiler. Optimization level is -O2. Spec 
> reported "miscompare of pixels_out.kajiya".
> 
> Has anybody seen this before?
> 

Yes, Grigory:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=25303

Please use my SPEC config files.


H.J.


Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Jeffrey A Law
On Mon, 2006-02-20 at 16:49 -0500, Richard Kenner wrote:
>  Which leaves us with a very fundamental issue.  Namely that we can not
>  use TYPE_MIN_VALUE or TYPE_MAX_VALUE for ranges.  
> 
> The point is that it *is* supposed to be usable in general.  If it can't
> be used in a specific case, let's address that specific case and understand
> what needs fixing.  The intent is for it to be useful and usable information..
But if the values in there do not reflect the reality of what values
are valid for the type, then I don't see how they can be generally
useful -- that's my point.  We have two fields that are inaccurate,
apparently on purpose, and as a result they are basically unusable.

Jeff



broken link!

2006-02-21 Thread Boris Pereira


this link to
http://gcc.gnu.org/install/build.html

in page
http://gcc.gnu.org/install/

is broken



Re: gcc 4.1 RC1 and SPEC CPU 2000

2006-02-21 Thread Janis Johnson
On Mon, Feb 20, 2006 at 08:54:49PM -0800, [EMAIL PROTECTED] wrote:
> 
>  I've been testing gcc-4.1 RC1 on x86-linux-gnu with SPEC CPU 2000.

Have you verified that you can build and run all of the tests without
profile-directed optimizations?  Several of these programs require
special options to compile.  In addition, several of them require
patches to work correctly, most of which are available from SPEC.

I regularly run SPEC CPU2000 (using the short test input) with several
sets of compilation options including profile-directed optimizations,
on powerpc64-linux for both -m32 and -m64.  I did this last week for
the 4.1 branch and saw no problems.

Janis


Re: Bootstrap failure on trunk: x86_64-linux-gnu

2006-02-21 Thread Jeffrey A Law
On Sun, 2006-02-19 at 20:15 +0100, Eric Botcazou wrote:

> >"Now for the first "oddity".  If we look at the underlying type
> > for last we have a type "natural___XDLU_0__2147483647".  What's
> >interesting about it is that it has a 32bit type precision, but
> > the min/max values only specify 31 bits.  ie, the min/max values
> > are 0, 0x7fff."
> 
> That's an old problem, which has already been discussed IIRC: should 
> TYPE_MAX_VALUE/TYPE_MIN_VALUE be constrained by TYPE_PRECISION and 
> TYPE_UNSIGNED?
My feeling?  Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
represent the set of values that an object of the type may hold.
Any other definition effectively renders those values useless.

ie, if an object can have the values 0..128 at runtime, then
TYPE_MIN_VALUE/TYPE_MAX_VALUE must cover that entire range.  
0..128.  If TYPE_MIN_VALUE/TYPE_MAX_VALUE only cover 0..127,
then that's a bug.

I suspect we get this behavior from the Ada front-end as a
side effect of the language and possibly the need to do 
runtime bounds checking on object values.   But that's no
excuse for a front-end to lie about the bounds of an object.

It'll be tedious, but I guess possible to have VRP look one
level deeper into the type to get a max/min value.  But that's
really a bandaid IMHO.



> 
> Then:
> 
> "Second, for a given integer type (such as natural___XDLU_0_2147483647),
> the type for the nodes in TYPE_MIN_VALUE and TYPE_MAX_VALUE really
> should be a natural___XDLU_0_2147483647.  ie, the type of an integer
> constant should be the same as the type of its min/max values."
> 
> I think that one is new and again pertains to TYPE_MAX_VALUE/TYPE_MIN_VALUE.
This is probably a secondary issue.  We end up ping-ponging both
on values for the range *and* on the type of the range.  I suspect
but cannot verify yet that ping-ponging on the type of the range
will result in infinite iteration as well.  I'm willing to table
this until we get the TYPE_MIN/TYPE_MAX stuff resolved at which
point we can verify VRP's behavior when the types ping-pong.
jeff




Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Jeffrey A Law
On Mon, 2006-02-20 at 22:00 +0100, Richard Guenther wrote:
> On 2/20/06, Jeffrey A Law <[EMAIL PROTECTED]> wrote:
> > On Sun, 2006-02-19 at 20:43 +0100, Laurent GUERBY wrote:
> > > On Sun, 2006-02-19 at 14:23 -0500, Richard Kenner wrote:
> > > > "Second, for a given integer type (such as
> > > > natural___XDLU_0_2147483647), the type for the nodes in 
> > > > TYPE_MIN_VALUE
> > > > and TYPE_MAX_VALUE really should be a natural___XDLU_0_2147483647.
> > > > ie, the type of an integer constant should be the same as the type 
> > > > of
> > > > its min/max values."
> > > >
> > > > No, the type of the bounds of a subtype should be the *base type*.  
> > > > That's
> > > > how the tree has always looked, as far back as  I can remember.
> > >
> > > This is because intermediate computations can produce results
> > > outside the subtype range but within the base type range (RM 3.5(6)),
> > > right?
> > >
> > >  type T1 is range 0 .. 127;
> > >  -- Compiler will choose some type for T'Base, likely to be -128..127
> > >  -- but could be Integer (implementation dependant)
> > >  subtype T is T1 range 0 .. 100;
> > >  R : T := 100+X-X;
> > >  -- guaranteed work as long 100+X<=T'Base'Last and 100-X>=T'Base'First
> > Which leaves us with a very fundamental issue.  Namely that we can not
> > use TYPE_MIN_VALUE or TYPE_MAX_VALUE for ranges.  That's lame,
> > incredibly lame.  This nonsense really should be isolated within the
> > Ada front-end.
> 
> Indeed.  Ada should in this case generate
> 
>   R = (T)( (basetype)100 + (basetype)X - (basetype)X )
> 
> i.e. carry out all arithmetic explicitly in the basetype and only for stores
> and loads use the subtype.
I'd tend to agree, furthermore, if a pass starts wiping out those
type conversions, then we've got a bug.  I could believe that
such bugs exist as those conversions might be seen as useless
(particularly if the basetype and the real type differ only in
their TYPE_MIN_VALUE/TYPE_MAX_VALUE -- ie, they have the same
signedness and precision).  That case ought to be easy enough to
detect though.

Jeff



Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Richard Kenner
 But if the values in there do not reflect the reality of what values
 are valid for the type, then I don't see how they can be generally
 useful -- that's my point.  We have two fields that are inaccurate,
 apparently on purpose, and as a result they are basically unusable.

No, they *do* reflect the "reality of what values are valid for the type".
The only glitch, which is not what we're talking about here, is that you have
to have a way to implement the language-defined test to see if the value is
valid or not.  However, the need to implement that relatively uncommon test
shouldn't drive the basic methodology used to represent types.

With checking enabled, "normal" Ada usage should not be able to generate an
invalid value, so the assumption by VRP that the values are in the valid
range is a correct one (of course, you do have to be able to generate those
checks!).  With checking disabled, an Ada program is erroneous if it
generates a value outside that range, so again VRP can assume the values are
in the specified range.

There's only one exception here.  It's valid to do a "unchecked conversion"
from arbitrary data into a value of a subtype with a restricted range.  You
are then allowed to test (using 'Valid) whether or not the value is in range.
But any use of an out-of-range value (other than 'Valid) is also erroneous.

So the Ada rules and what VRP are assuming are precisely consistent.  We
unfortunately have bugs in two areas.  The first are not having a good way of
making sure the validity check isn't suppressed and the second is that there
apparently places where trees are being generated that do operations
improperly.  These simply have to be found and fixed.  I found one quite a
while ago (when folding ranges) and I'm sure there are more (somebody
speculated that perhaps fold is removing conversions that are, in fact,
needed).


Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Jeffrey A Law
On Tue, 2006-02-21 at 12:46 -0500, Richard Kenner wrote:
>  But if the values in there do not reflect the reality of what values
>  are valid for the type, then I don't see how they can be generally
>  useful -- that's my point.  We have two fields that are inaccurate,
>  apparently on purpose, and as a result they are basically unusable.
> 
> No, they *do* reflect the "reality of what values are valid for the type".
Err, no they don't.  Clearly an object of the type can hold a value 
outside TYPE_MIN_VALUE/TYPE_MAX_VALUE at runtime.  That IMHO means
that TYPE_MIN_VALUE/TYPE_MAX_VALUE do not reflect reality.


> The only glitch, which is not what we're talking about here, is that you have
> to have a way to implement the language-defined test to see if the value is
> valid or not.  However, the need to implement that relative-uncommon test
> should't drive the basic methodology used to represent types.
Having a consistent TYPE_MIN_VALUE and TYPE_MAX_VALUE in no way
prohibits you from implementing these tests.  As I've told you before,
if you've got a case where the tests are being incorrectly removed
or casts are being incorrectly removed I'll happily investigate.

But I'll repeat again, a consistent TYPE_MIN_VALUE/TYPE_MAX_VALUE in no
way changes your ability to emit those tests in the Ada front-end.


> With checking enabled, "normal" Ada usage should not be able to generate an
> invalid value, so the assumption by VRP that the values are in the valid
> range is a correct one (of course, you do have to be able to generate those
> checks!).  With checking disabled, an Ada program is erroneous if it
> generates a value outside that range, so again VRP can assume the values are
> in the specififed range.
OK, so then we can claim the Ada code in question is bogus?  Or at least
put the burden of proving the code is correct back in your court?  I've
clearly showed that VRP is "miscompiling" the code because of the
bogus values for TYPE_MAX_VALUE.  But if you claim that having a value 
outside TYPE_MAX_VALUE is invalid according to the language, then the
source code must be incorrect.

> 
> There's only one exception here.  It's valid to do a "unchecked conversion"
> from arbitrary data into a value of a subtype with a restricted range.  You
> are then allowed to test (using 'Valid) whether or not the value is in range.
> But any use of an out-of-range value (other than 'Valid) is also erroneous.
Which implies that TYPE_MIN_VALUE/TYPE_MAX_VALUE need to reflect the set
of values which can be placed into the object by way of an unchecked
conversion.  ANything else is simply lying to the language independent
parts of the compiler.

> 
> So the Ada rules and what VRP are assuming are precisely consistent.
No they are not.  The Ada front-end says that an object of a particular
type has a set of values [x .. y].  However, at runtime the object is
allowed to have values outside that range.  That is _not_ consistent.


> improperly.  These simply have to be found and fixed.  I found one quite a
> while ago (when folding ranges) and I'm sure there are more (somebody
> speculated that perhaps fold is removing conversions that are, in fact,
> needed).
I'm open to this possibility as well, but this is completely and totally
disjoint from the inconsistent TYPE_MIN_VALUE/TYPE_MAX_VALUE presented
by the Ada front-end.

jeff



Re: [RFH] Fixing -fsection-anchors on powerpc-darwin

2006-02-21 Thread Eric Christopher


On Feb 19, 2006, at 12:13 PM, Andrew Pinski wrote:



On Feb 19, 2006, at 2:39 PM, Andrew Pinski wrote:

Now I run into another problem:
/var/tmp//ccBWaqmT.s:130:Fixup of 1073745640 too large for field  
width of 26 bits


We have a 1GB decl here.  So the section that this decl goes into
is between the TEXT section and the stub section which causes this
error to happen.

I am starting to think the Darwin back-end needs to be changed (or
even linker too, I think Eric C. might be dealing with this but I don't
know).


I'm not dealing with it directly outside of getting rid of the stubs  
being emitted by the compiler since ld64 can fix them up itself.


-eric


Re: [RFH] Fixing -fsection-anchors on powerpc-darwin

2006-02-21 Thread Eric Christopher


On Feb 18, 2006, at 10:09 PM, Andrew Pinski wrote:





+  if (name[0] == '.' && name[1] == ' ')
+return 0;

Urr?

Comment here and then I think it's probably good to go. Need to get a  
darwin maintainer to ack it though.


-eric


Toplevel bootstrap patches cause bootstrap breakage

2006-02-21 Thread David Edelsohn
The latest toplevel bootstrap patches have broken bootstrap on
AIX.  Executables are being re-linked when installed in prev-gcc.  In
stage2, it relinks with the system gcc, which works, but in stage3 it
relinks with prev-gcc/xgcc which does not exist because the bootstrap is
in the process of installing it.

David


Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Richard Kenner
 Err, no they don't.  Clearly an object of the type can hold a value 
 outside TYPE_MIN_VALUE/TYPE_MAX_VALUE at runtime.  That IMHO means
 that TYPE_MIN_VALUE/TYPE_MAX_VALUE do not reflect reality.

What does "can" mean here?  If it means "is physically capable of", then
TYPE_MIN_VALUE and TYPE_MAX_VALUE have no meaning in any context.

I interpret "can" as meaning "is allowed to according to the semantics
of the language", meaning that if the value is outside that range, it's
invalid (technically "erroneous" in Ada terminology).

 Having a consistent TYPE_MIN_VALUE and TYPE_MAX_VALUE in no way
 prohibits you from implementing these tests.  

Of course it doesn't!  I was agreeing with you.  I was also separating
the issue of those tests from the issue at hand.

 OK, so then we can claim the Ada code in question in bogus?  Or at
 least put the burden of proving the code is correct back in your
 court?  I've clearly showed that VRP is "miscompiling" the code
 because of the bogus values for TYPE_MAX_VALUE.  But if you claim that
 having a value outside TYPE_MAX_VALUE is invalid according to the
 language, then the source code must be incorrect.

No, I don't think the Ada code in question is bogus.  My understanding
of this thread is that somebody (either the Ada front end, gimplification,
or the optimizer) is breaking the type-correctness of the tree.

Which implies that TYPE_MIN_VALUE/TYPE_MAX_VALUE need to reflect the
set of values which can be placed into the object by way of an
unchecked conversion.  

No, because in the absence of a 'Valid, the unchecked conversion must
*also* produce a value in the range to be non-erroneous.  The *only*
case we need to worry about is an object that is *both* the target of
an unchecked conversion *and* the operand of a 'Valid.  That applies
to an exceedingly tiny number of objects and they should not drive the
entire type system, as I said.  Instead, what we need to do is define
some tree node (or flag) that tells VRP "don't deduce any value
through this node".  That's enough to handle this rare case.

 No they are not.  The ADa front-end says that an object of a particular
 type as a set of values [x .. y].  However, at runtime the object is
 allowed to have values outside that range. 

Not for the normal meaning of "allowed".  With the single exception above,
a program is erroneous if, at run time, the values are outside that range.


Re: Toplevel bootstrap patches cause bootstrap breakage

2006-02-21 Thread Daniel Jacobowitz
On Tue, Feb 21, 2006 at 01:15:47PM -0500, David Edelsohn wrote:
>   The latest toplevel bootstrap patches have broken bootstrap on
> AIX.  Executables are being re-linked when installed in prev-gcc.  In
> stage2, it relinks with the system gcc, which works, but in stage3 it
> relinks with prev-gcc/xgcc which does not exist because the bootstrap is
> in the process of installing it.

I'm not quite sure what you mean by "installing in prev-gcc"; could you
show me the tail end of a log?

-- 
Daniel Jacobowitz
CodeSourcery


Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Jeffrey A Law
On Tue, 2006-02-21 at 13:31 -0500, Richard Kenner wrote:
>  Err, no they don't.  Clearly an object of the type can hold a value 
>  outside TYPE_MIN_VALUE/TYPE_MAX_VALUE at runtime.  That IMHO means
>  that TYPE_MIN_VALUE/TYPE_MAX_VALUE do not reflect reality.
> 
> What does "can" mean here?  If it means "is physically capable of", then
> TYPE_MIN_VALUE and TYPE_MAX_VALUE have no meaning an any context.
Can a conforming program set the object to a value outside of
TYPE_MIN_VALUE/TYPE_MAX_VALUE.  Clearly the answer for Ada today
is yes, and that is horribly bad as it means TYPE_MIN_VALUE and
TYPE_MAX_VALUE are effectively useless.


> I interpret "can" as meaning "is allowed to according to the semantics
> of the language", meaning that if the value is outside that range, it's
> an invalid (technicallly "erroneous" in Ada terminology).
According to a previous message from you, it's not invalid.  For
example, you can get such a value from an unchecked conversion and
you can use that value in a meaningful way (runtime bounds 
checking).


> No, I don't think the Ada code in question is bogus.  My understanding
> of this thread is that somebody (either the Ada front end, gimplification,
> or the optimizer) is breaking the type-correctness of the tree.
Type conversions aren't being lost.  The fundamental problem is 
with the setting of TYPE_MIN_VALUE/TYPE_MAX_VALUE.  Please read
the data in the PR.  I've detailed pretty well what's happening and
the fundamental problem is the bogus value for TYPE_MAX_VALUE.

> No, because in the absence of a 'Valid, the unchecked conversion must
> *also* produce a value in the range to be non-erroneous.  The *only*
> case we need to worry about is an object that is *both* the target of
> an unchecked conversion *and* the operand of a 'Valid.  That applies
> to an exceedingly tiny number of objects and they should not drive the
> entire type system, as I said.  Instead, what we need to do is define
> some tree node (or flag) that tells VRP "don't deduce any value
> through this node".  That's enough to handle this rare case.
I disagree strongly -- the Ada front-end is lying to the rest of the
compiler about the range of types.  That's a problem that needs to
be fixed in the Ada front-end, not VRP.  VRP shouldn't need to know
about this kind of braindamage.


> 
>  No they are not.  The Ada front-end says that an object of a particular
>  type has a set of values [x .. y].  However, at runtime the object is
>  allowed to have values outside that range. 
> 
> Not for the normal meaning of "allowed".  With the single exception above,
> a program is erroneous if, at run time, the values are otuside that range.
But this exception is *VERY* important.  The exception fundamentally
allows you to set the object to a value outside the range and the
exception then allows the program to detect such an assignment at
runtime.  That effectively makes TYPE_MIN_VALUE/TYPE_MAX_VALUE useless.

jeff



Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Richard Kenner
 Can a conforming program set the object to a value outside of
 TYPE_MIN_VALUE/TYPE_MAX_VALUE.

Let's forget about the obscure unchecked conversion -> 'Valid case
because we're going to handle that in whatever way we need to.

So the answer is "no".


Re: Toplevel bootstrap patches cause bootstrap breakage

2006-02-21 Thread David Edelsohn
The relevant build lines, starting with stage2 build of cc1.

ranlib  libbackend.a
/tmp/20060221/./prev-gcc/xgcc -B/tmp/20060221/./prev-gcc/ ... -o cc1-dummy
build/genchecksum cc1-dummy > cc1-checksum.c
/tmp/20060221/./prev-gcc/xgcc -B/tmp/20060221/./prev-gcc/ ... -o cc1-checksum.o
/tmp/20060221/./prev-gcc/xgcc -B/tmp/20060221/./prev-gcc/ ... -o cc1
...
make[4]: Entering directory `/tmp/20060221/prev-gcc'
gcc ... -o xgcc
./xgcc -B./ ... -dumpspecs > tmp-specs
mv tmp-specs specs
/usr/gnu/bin/bash /farm/dje/src/src/gcc/../mkinstalldirs 
/tmp/20060221/prev-gcc/../gcc/.
/usr/gnu/bin/bash /farm/dje/src/src/gcc/../mkinstalldirs 
/tmp/20060221/prev-gcc/../gcc//farm/dje/install/powerpc-ibm-aix5.2.0.0-20060221/libexec/gcc/powerpc-ibm-aix5.2.0.0/4.2.0
mkdir /tmp/20060221/prev-gcc/../gcc/farm
mkdir /tmp/20060221/prev-gcc/../gcc/farm/dje
...
gcc ... -o cc1-dummy
build/genchecksum cc1-dummy > cc1-checksum.c
gcc ... -o cc1-checksum.o
gcc ... -o cc1
echo timestamp > s-macro_list
(cd `${PWDCMD-pwd}`/include ; \
 tar -cf - .; exit 0) | (cd /tmp/20060221/prev-gcc/../gcc/./include; tar xpf - 
)make[4]: Leaving directory `/tmp/20060221/prev-gcc'
chmod a+r include/syslimits.h
echo timestamp > stmp-fixinc
if [ -d include ] ; then true; else mkdir include; chmod a+rx include; fi
for file in .. /farm/dje/src/src/gcc/ginclude/decfloat.h /farm/dje/src/src/gcc/g
include/float.h /farm/dje/src/src/gcc/ginclude/iso646.h /farm/dje/src/src/gcc/gi
nclude/stdarg.h /farm/dje/src/src/gcc/ginclude/stdbool.h /farm/dje/src/src/gcc/g
include/stddef.h /farm/dje/src/src/gcc/ginclude/varargs.h ; do \
  if [ X$file != X.. ]; then \
realfile=`echo $file | sed -e 's|.*/\([^/]*\)$|\1|'`; \
echo timestamp > include/$realfile; \
rm -f include/$realfile; \
cp $file include; \
chmod a+r include/$realfile; \
  fi; \
done
rm -f include/limits.h
cp xlimits.h include/limits.h
cp /farm/dje/src/src/gcc/unwind-generic.h include/unwind.h
chmod a+r include/limits.h
rm -f include/README
cp /farm/dje/src/src/gcc/../fixincludes/README-fixinc include/README
chmod a+r include/README
echo timestamp > stmp-int-hdrs
make \
  CFLAGS="-g -O2 -W -Wall -Wwrite-strings -Wstrict-prototypes 
-Wmissing-prototypes -pedantic -Wno-long-long -Wno-variadic-macros 
-Wno-overlength-strings -Wold-style-definition -Wmissing-format-attribute 
-fno-common " \
  CONFIG_H="config.h  auto-host.h /farm/dje/src/src/gcc/../include/ansidecl.h" \
  MAKEOVERRIDES= \
  -f libgcc.mk all
make[4]: Entering directory `/tmp/20060221/gcc'
for d in libgcc pthread libgcc/pthread ppc64 libgcc/ppc64 pthread/ppc64 
libgcc/pthread/ppc64; do \
  if [ -d $d ]; then true; else /usr/gnu/bin/bash 
/farm/dje/src/src/gcc/../mkinstalldirs $d; fi; \
done
mkdir libgcc
mkdir pthread
mkdir libgcc/pthread
mkdir ppc64
mkdir libgcc/ppc64
mkdir pthread/ppc64
mkdir libgcc/pthread/ppc64

... build all libgcc again ...

make[4]: Leaving directory `/tmp/20060221/gcc'
echo timestamp > stmp-multilib
rm gfdl.pod gcov.pod cpp.pod gpl.pod gcc.pod fsf-funding.pod gfortran.pod
make[3]: Leaving directory `/tmp/20060221/gcc'
make[2]: Leaving directory `/tmp/20060221'
make[2]: Entering directory `/tmp/20060221'
make[3]: Entering directory `/tmp/20060221'
rm -f stage_current
make[3]: Leaving directory `/tmp/20060221'
make[2]: Leaving directory `/tmp/20060221'
make[2]: Entering directory `/tmp/20060221'
Configuring stage 3 in ./intl
...
/tmp/20060221/./prev-gcc/xgcc -B/tmp/20060221/./prev-gcc/ ... -o cc1-dummy
build/genchecksum cc1-dummy > cc1-checksum.c
/tmp/20060221/./prev-gcc/xgcc -B/tmp/20060221/./prev-gcc/ ... -o cc1-checksum.o
/tmp/20060221/./prev-gcc/xgcc -B/tmp/20060221/./prev-gcc/ ... -o cc1
echo | /tmp/20060221/./gcc/xgcc -B/tmp/20060221/./gcc/ 
-B/farm/dje/install/powerpc-ibm-aix5.2.0.0-20060221/powerpc-ibm-aix5.2.0.0/bin/ 
-B/farm/dje/install/powerpc-ibm-aix5.2.0.0-20060221/powerpc-ibm-aix5.2.0.0/lib/ 
-isystem 
/farm/dje/install/powerpc-ibm-aix5.2.0.0-20060221/powerpc-ibm-aix5.2.0.0/include
 -isystem 
/farm/dje/install/powerpc-ibm-aix5.2.0.0-20060221/powerpc-ibm-aix5.2.0.0/sys-include
 -E -dM - | \
  sed -n 's/^#define \([^_][a-zA-Z0-9_]*\).*/\1/p ; \
s/^#define \(_[^_A-Z][a-zA-Z0-9_]*\).*/\1/p' | \
  sort -u > tmp-macro_list
/usr/gnu/bin/bash /farm/dje/src/src/gcc/../move-if-change tmp-macro_list 
macro_list
echo timestamp > s-macro_list
rm -rf include; mkdir include
chmod a+rx include
if [ -d ../prev-gcc ]; then \
  cd ../prev-gcc && \
  make install-headers-tar DESTDIR=`pwd`/../gcc/ \
libsubdir=. ; \
else \
  (TARGET_MACHINE='powerpc-ibm-aix5.2.0.0'; srcdir=`cd /farm/dje/src/src/gcc; 
${PWDCMD-pwd}`; \
SHELL='/usr/gnu/bin/bash'; MACRO_LIST=`${PWDCMD-pwd}`/macro_list ; \
export TARGET_MACHINE srcdir SHELL MACRO_LIST && \
cd ../build-powerpc-ibm-aix5.2.0.0/fixincludes && \
/usr/gnu/bin/bash ./fixinc.sh ../../gcc

Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Jeffrey A Law
On Tue, 2006-02-21 at 13:57 -0500, Richard Kenner wrote:
>  Can a conforming program set the object to a value outside of
>  TYPE_MIN_VALUE/TYPE_MAX_VALUE.
> 
> Let's forget about the obscure unchecked conversion -> 'Valid case
> because we're going to handle that in whatever way we need to.
> 
> So the answer is "no".
OK.  So if a program sets an object to a value outside 
TYPE_MIN_VALUE/TYPE_MAX_VALUE, then that program is
invalid for the purposes of this discussion?

Jeff



Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Richard Kenner
 OK.  So if a program sets an object to a value outside 
 TYPE_MIN_VALUE/TYPE_MAX_VALUE, then that program is
 invalid for the purposes of this discussion?

Correct.  Of course, it has to be the *program* that's doing the set
(meaning setting a user-defined variable).  If the compiler is messing
up (either front- or middle-end), then this discussion becomes quite
relevant.


Re: Toplevel bootstrap patches cause bootstrap breakage

2006-02-21 Thread Daniel Jacobowitz
On Tue, Feb 21, 2006 at 01:50:47PM -0500, David Edelsohn wrote:
> if [ -d ../prev-gcc ]; then \
>   cd ../prev-gcc && \
>   make install-headers-tar DESTDIR=`pwd`/../gcc/ \
> libsubdir=. ; \
> else \

That's the problem.

Paolo, we can't run make targets inside prev-gcc.  install-headers-tar
has dependencies; one of them must go all the way back to xgcc (not
surprising), and moving gcc and prev-gcc around means that xgcc
will need to be rebuilt, probably because it is now older than the
headers in prev-gcc.

I think that either we need a variant of install-headers-tar with no
dependencies to do this, or find some other way entirely.

-- 
Daniel Jacobowitz
CodeSourcery


Re: Ada subtypes and base types

2006-02-21 Thread Robert Dewar



Indeed.  Ada should in this case generate

  R = (T)( (basetype)100 + (basetype)X - (basetype)X )


It does!


i.e. carry out all arithmetic explicitly in the basetype and only for stores
and loads use the subtype.

I'd tend to agree; furthermore, if a pass starts wiping out those
type conversions, then we've got a bug.


Right, the type conversions must not be wiped out, that's a
real issue!



Re: Ada subtypes and base types

2006-02-21 Thread Robert Dewar

Jeffrey A Law wrote:

On Tue, 2006-02-21 at 13:57 -0500, Richard Kenner wrote:

 Can a conforming program set the object to a value outside of
 TYPE_MIN_VALUE/TYPE_MAX_VALUE.

Let's forget about the obscure unchecked conversion -> 'Valid case
because we're going to handle that in whatever way we need to.

So the answer is "no".
OK.  So if a program sets an object to a value outside
TYPE_MIN_VALUE/TYPE_MAX_VALUE, then that program is
invalid for the purposes of this discussion?


The program is either erroneous, in which case we don't have
to discuss it, or the invalid value is the result of a bounded
error, and we have to do something vaguely reasonable (use
the out of range value, for example, and possibly blow up
as a result).


Jeff





Re: Toplevel bootstrap patches cause bootstrap breakage

2006-02-21 Thread Daniel Jacobowitz
On Tue, Feb 21, 2006 at 02:16:27PM -0500, Daniel Jacobowitz wrote:
> On Tue, Feb 21, 2006 at 01:50:47PM -0500, David Edelsohn wrote:
> > if [ -d ../prev-gcc ]; then \
> >   cd ../prev-gcc && \
> >   make install-headers-tar DESTDIR=`pwd`/../gcc/ \
> > libsubdir=. ; \
> > else \
> 
> That's the problem.
> 
> Paolo, we can't run make targets inside prev-gcc.  install-headers-tar
> has dependencies; one of them must go all the way back to xgcc (not
> surprising), and moving gcc and prev-gcc around means that xgcc
> will need to be rebuilt, probably because it is now older than the
> headers in prev-gcc.
> 
> I think that either we need a variant of install-headers-tar with no
> dependencies to do this, or find some other way entirely.

Want to try this?

-- 
Daniel Jacobowitz
CodeSourcery

Index: Makefile.in
===
--- Makefile.in (revision 111338)
+++ Makefile.in (working copy)
@@ -3195,7 +3195,7 @@
-chmod a+rx include
if [ -d ../prev-gcc ]; then \
  cd ../prev-gcc && \
- $(MAKE) $(INSTALL_HEADERS_DIR) DESTDIR=`pwd`/../gcc/ \
+ $(MAKE) real-$(INSTALL_HEADERS_DIR) DESTDIR=`pwd`/../gcc/ \
libsubdir=. ; \
else \
  (TARGET_MACHINE='$(target)'; srcdir=`cd $(srcdir); ${PWD_COMMAND}`; \
@@ -3789,6 +3789,18 @@
 install-headers-cp: stmp-int-hdrs $(STMP_FIXPROTO) install-include-dir
cp -p -r include $(DESTDIR)$(libsubdir)
 
+# Targets without dependencies, for use in prev-gcc during bootstrap.
+real-install-headers-tar:
+   (cd `${PWD_COMMAND}`/include ; \
+tar -cf - .; exit 0) | (cd $(DESTDIR)$(libsubdir)/include; tar xpf - )
+
+real-install-headers-cpio:
+   cd `${PWD_COMMAND}`/include ; \
+   find . -print | cpio -pdum $(DESTDIR)$(libsubdir)/include
+
+real-install-headers-cp:
+   cp -p -r include $(DESTDIR)$(libsubdir)
+
 # Install supporting files for fixincludes to be run later.
 install-mkheaders: stmp-int-hdrs $(STMP_FIXPROTO) install-itoolsdirs \
   macro_list xlimits.h


Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Jeffrey A Law
On Tue, 2006-02-21 at 14:14 -0500, Richard Kenner wrote:
>  OK.  So if a program sets an object to a value outside 
>  TYPE_MIN_VALUE/TYPE_MAX_VALUE, then that program is
>  invalid for the purposes of this discussion?
> 
> Correct.  Of course, it has to be the *program* that's doing the set
> (meaning setting a user-defined variable).  If the compiler is messing
> up (either front- or middle-end), then this discussion becomes quite
> relevant.
In this specific case it is a user variable.  However, we should
probably clarify the compiler-temporary case as well, since VRP really
does not and should not care whether an object is a user variable or
a compiler-generated temporary.

So, if we have an object with the range based on its type of
[0, 0x7fff] and we add 1 to that object, the resulting range
should be [1, 0x7fff].   ie, 0x8000 is not a valid value
for the type.  Right?


jeff



Re: Ada subtypes and base types

2006-02-21 Thread Robert Dewar

Jeffrey A Law wrote:


So, if we have an object with the range based on its type of
[0, 0x7fff] and we add 1 to that object, the resulting range
should be [1, 0x7fff].   ie, 0x8000 is not a valid value
for the type.  Right?


The actual rule in Ada works like this:

type X is range 0 .. 16#7fff_#;

Y : X;

Y := (Y + 1) - 1;

If Y is X'Last, then the addition of 1 must either raise an
overflow exception or "work".  "Work" means give its proper
mathematical value.

So this assignment can either raise CE, or leave Y unchanged.
Either is OK.

If checks are off, then it is probably better to just let
the addition wrap (in fact it seems pretty clear that Ada
would be better off enabling -fwrapv).
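Robert's "checks off" outcome can be sketched in C++ with explicit unsigned wrap (the behavior -fwrapv would guarantee for signed types); this is an illustrative sketch, not the Ada runtime's actual code:

```cpp
#include <cstdint>

// Hedged sketch: computing (Y + 1) - 1 with two's-complement wrap.
// Even when Y is X'Last and the +1 wraps past the upper bound, the
// subsequent -1 wraps back, so Y ends up unchanged -- Robert's "work"
// outcome, with no check needed on the intermediate value.
int32_t y_plus1_minus1_wrapping(int32_t y) {
  uint32_t t = static_cast<uint32_t>(y) + 1u;  // may wrap past X'Last
  return static_cast<int32_t>(t - 1u);         // wraps back to y
}
```

With checks enabled, the same expression would instead have to raise Constraint_Error at the +1 when Y is X'Last.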



Re: .cvsignore in libjava/classpath

2006-02-21 Thread Volker Reichelt
On 20 Feb, Andrew Haley wrote:
> Andrew Pinski writes:
>  > > 
>  > > In libjava/classpath there are two .cvsignore files which haven't been
>  > > deleted yet:
>  > > 
>  > >   native/jni/midi-alsa/.cvsignore
>  > >   native/jni/midi-dssi/.cvsignore
>  > > 
>  > > Should they go, too?
>  > > They are also in GCC 4.1.0 RC1.
>  > 
>  > They are imported from upstream :).
> 
> Yeah.  Don't delete *anything* in libjava/classpath: instead go to
> :ext:cvs.savannah.gnu.org:/sources/classpath.
> 
> Andrew.

I didn't want to delete it myself, since I suspected something like this.
Would somebody of the libjava maintainers take care of this?

Thanks,
Volker




Re: GCC 4.1.0 RC1

2006-02-21 Thread Jim Wilson

Rainer Emrich wrote:

/SCRATCH/gcc-build/Linux/ia64-unknown-linux-gnu/install/bin/ld: unrecognized
option '-Wl,-rpath'


This looks like PR 21206.  See my explanation at the end.  I see this on 
some of our FreeBSD machines, but I've never seen it on an IA-64 linux 
machine.

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: Ada subtypes and base types

2006-02-21 Thread Jeffrey A Law
On Tue, 2006-02-21 at 14:34 -0500, Robert Dewar wrote:
> Jeffrey A Law wrote:
> 
> > So, if we have an object with the range based on its type of
> > [0, 0x7fff] and we add 1 to that object, the resulting range
> > should be [1, 0x7fff].   ie, 0x8000 is not a valid value
> > for the type.  Right?
> 
> The actual rule in Ada works like this:
> 
>  type X is range 0 .. 16#7fff_#;
> 
>  Y : X;
> 
>  Y := (Y + 1) - 1;
> 
> If Y is X'Last, then the addition of 1 must either raise an
> overflow exception or "work". Works means give its proper
> mathematical value.
> 
> So this assignment can either raise CE, or leave Y unchanged.
> Either is OK.
So in the case above, the set of permissible values is
[1, 0x7fff] after the addition, right?   It's also valid
to raise a CE.

Jeff



Re: Fwd: trees: function declaration

2006-02-21 Thread Jim Wilson

[EMAIL PROTECTED] wrote:

I need some assistance. I am trying to substitute a function name during
compilation.


The only way to tell what is wrong is to debug the patch.  And since it 
is your patch, you are the one that should be trying to debug it.


Try setting breakpoints at every place that calls 
SET_DECL_ASSEMBLER_NAME.  Perhaps someone is calling it after you are.


Try looking at the implementation of the _asm_ extension, which provides 
the same feature, i.e. changing the assembler name of a function.  See 
the set_user_assembler_name function in varasm.c.
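The asm extension Jim points to can be seen in a minimal form below; the symbol name `renamed_get_answer` is made up for illustration, and this is only a sketch of the feature that `set_user_assembler_name` implements:

```cpp
// GNU asm-label extension: the declaration renames the function's
// assembler-level symbol without changing its source-level name.
extern "C" int get_answer() asm("renamed_get_answer");

// The definition is emitted under the renamed symbol.
extern "C" int get_answer() { return 42; }
```

Compiling with `g++ -c` and inspecting the object file with `nm` would show `renamed_get_answer` rather than `get_answer`.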

--
Jim Wilson, GNU Tools Support, http://www.specifix.com


Re: Ada subtypes and base types (was: Bootstrap failure on trunk: x86_64-linux-gnu)

2006-02-21 Thread Richard Kenner
 In this specific case it is a user variable.  However, we should
 probably clarify the compiler-temporary case as well as VRP really
 does not and should not care if an object is a user variable or
 a compiler generated temporary.

Right.  The only distinction is that if it's a user variable then it's
a "bogus" program while if it's a compiler-generated temporary, it's either
bogus *or* a bug in the definition of that temporary in the compiler.

 So, if we have an object with the range based on its type of [0,
 0x7fff] and we add 1 to that object, the resulting range should be
 [1, 0x7fff].  ie, 0x8000 is not a valid value for the type.
 Right?

Correct.


Re: Ada subtypes and base types

2006-02-21 Thread Richard Kenner
 So in the case above, the set of permissible values is
 [1, 0x7fff] after the addition, right?   

Well, not quite.  The addition isn't done in type X, but in type X'Base,
which does not have the restricted TYPE_{MIN,MAX}_VALUES.  But, as we've all
said, there are conversions in there so VRP can use its normal logic.

If you have a different case:

type X is range 0 .. 16#7fff_#;

Y, Z : X;

Z : = Y + 1;

you can then conclude that [1, 0x7fff] is the permissible values
for Z.  But what the last statement actually becomes (in pseudo-C) is:

Z = (X) ((X'Base) Y + (X'Base) 1);

So the above is not the range for the *addition* (which has the type
unrestricted by the bounds), just for Z.  The reason this distinction is
relevant is that if checking is enabled, the conversion to X will involve
a bounds check.  VRP should be able to deduce that the bounds check can
be replaced by "/= 0x8000" and that is indeed a correct deduction.


Re: Ada subtypes and base types

2006-02-21 Thread Jeffrey A Law
On Tue, 2006-02-21 at 15:40 -0500, Richard Kenner wrote:
>  So in the case above, the set of permissible values is
>  [1, 0x7fff] after the addition, right?   
> 
> Well, not quite.  The addition isn't done in type X, but in type X'Base,
> which does not have the restricted TYPE_{MIN,MAX}_VALUES.  But, as we've all
> said, there are conversions in there so VRP can use its normal logic.
Umm, why bring up the basetype nonsense at all?  The arithmetic
is done in whatever type is associated with the expression, not
the base type.  Nothing else makes sense, i.e., conversions are
explicit.

> 
> If you have a different case:
> 
>   type X is range 0 .. 16#7fff_#;
> 
>   Y, Z : X;
> 
>   Z : = Y + 1;
> 
> you can then conclude that [1, 0x7fff] is the permissible values
> for Z.  But what the last statement actually becomes (in pseudo-C) is:
> 
>   Z = (X) ((X'Base) Y + (X'Base) 1);
> 
> So the above is not the range for the *addition* (which has the type
> unrestricted by the bounds), just for Z.  The reason this distinction is
> relevant is that if checking is enabled, the conversion to X will involve
> a bounds check.  VRP should be able to deduce that the bounds check can
> be replaced by "/= 0x8000" and that is indeed a correct deduction.
You're just making things more complicated -- we don't have to 
worry about base types, any base type stuff should be explicit,
I don't think there's any argument about that.

So, back to my example.  If I have an object with a range
[0, 0x7fff] based on the type of the object and I add
one to that object, then I can safely conclude that the
result of the addition has the range [1, 0x7fff].  Right?

Jeff




Re: Ada subtypes and base types

2006-02-21 Thread Richard Kenner
 Umm, why bring up the basetype nonsense at all?  The arithmetic
 is done in whatever type is associated with the expression, not
 the base type.  Nothing else makes sense, i.e., conversions are
 explicit.

The conversions are explicit, but are to the base type, which is also
the type associated with the expression.  By mentioning the base type,
we're just saying what the type of the expression will be.

 So, back to my example.  If I have an object with a range
 [0, 0x7fff] based on the type of the object and I add one to that
 object, then I can safely conclude that the result of the addition has
 the range [1, 0x7fff].  Right?

If the addition were in the type of the object, yes.  But it's not supposed
to be.  It's supposed to be in the *base type* of the object which won't
have the TYPE_MAX_VALUE restriction so that nobody would try to conclude
that there was an upper-bound limit.


type layout bug, or not?

2006-02-21 Thread Chris Lattner
Consider this C++ example (I've annotated each class decl with the  
unit size of each structure):


struct A { virtual ~A(); }; // 4
struct B { virtual ~B(); }; // 4

struct X : virtual public A,
virtual public B {  // 8
};

struct Y : virtual public B { // 4
  virtual ~Y();
};

struct Z : virtual public X, public Y {   // 8
  Z();
};

Z::Z() {}


In this example, the DECL_SIZE_UNIT of "Z" is 8 bytes.  Here, the  
FIELD_DECL corresponding to it's Y superclass has an offset of 0  
bytes and size 4 bytes.


Confusingly (to me at least), the FIELD_DECL for the X superclass has
an offset of 4 bytes and a size of 8 bytes, which means that the
end of the object is 12 bytes, despite the fact that Z has a
DECL_SIZE_UNIT of 8 bytes.


Is this the intended layout of this structure?  What does it mean  
when a field runs off the end of the structure?  In this case, should  
I just ignore the type size and assume that the 8 bytes are  
dynamically there?


Thanks,

-Chris
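A self-contained version of Chris's hierarchy can be used to poke at the layout with `sizeof`; note the trivial destructor bodies are an addition here so the example links stand-alone:

```cpp
// Hedged reconstruction of the posted hierarchy.  With virtual bases,
// a base subobject's nominal size (its sizeof as a complete object)
// can extend past sizeof of the derived class, because nearly-empty
// virtual bases share vptr storage; so a FIELD_DECL whose offset plus
// size runs past DECL_SIZE_UNIT is plausibly overlap, not 8 extra
// dynamically-allocated bytes.  Concrete sizes are ABI-dependent.
struct A { virtual ~A() {} };
struct B { virtual ~B() {} };

struct X : virtual public A, virtual public B {};

struct Y : virtual public B {
  virtual ~Y() {}
};

struct Z : virtual public X, public Y {
  Z() {}
};
```

Printing `sizeof(A)`, `sizeof(X)`, `sizeof(Y)`, and `sizeof(Z)` on the target in question would make the overlap visible.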


Re: type layout bug, or not?

2006-02-21 Thread Andrew Pinski


On Feb 21, 2006, at 4:44 PM, Chris Lattner wrote:


Is this the intended layout of this structure?  What does it mean when 
a field runs off the end of the structure?  In this case, should I 
just ignore the type size and assume that the 8 bytes are dynamically 
there?


I wonder if this is the same problem as recorded in PR 22488.  To me it
sounds like the same issue.

Thanks,
Andrew Pinski




Re: Ada subtypes and base types

2006-02-21 Thread Jeffrey A Law
On Tue, 2006-02-21 at 16:24 -0500, Richard Kenner wrote:

>  So, back to my example.  If I have an object with a range [0,
>  0x7ff  f] based on the type of the object and I add one to that
>  object, then I can safely conclude that the result of the addition has
>  the range [1, 0x7fff].  Right?
> 
> If the addition were in the type of the object, yes.  But it's not supposed
> to be.  It's supposed to be in the *base type* of the object which won't
> have the TYPE_MAX_VALUE restriction so that nobody would try to conclude
> that there was an upper-bound limit.
?!?  WTF

Given an expression, we have to do computations in some other type than
the type of the expression? Now that's just silly.  If the expression
has some type X, then we should be doing our computations in type X.
Not the basetype X'.  If Ada really expects this throughout GCC, then
we've got some major underlying problems.



Jeff



Re: Ada subtypes and base types

2006-02-21 Thread Richard Kenner
 Given an expression, we have to do computations in some other type than
 the type of the expression? Now that's just silly.  

Sure, but that's not what I said.

 If the expression has some type X, then we should be doing our
 computations in type X.  

Right.

Let me try again and take a simpler example.  If we have

subtype T is Integer range 20..50;

Y: T;

   ... Y + 1 ...

What the tree looks like is a PLUS_EXPR of type "Integer" (the base type of
T), not T, whose first operand is a NOP_EXPR converting Y to Integer and
whose second operand is the constant 1 also of type Integer, not T.

So the expression is of type Integer and that's what we do the
computation in.

If the context of that expression is

Y := Y + 1;

then there'll be a conversion to type T (and possibly a bounds check)
of the RHS of the MODIFY_EXPR.  VRP will know that the results of the
PLUS_EXPR are [21,51] (not [21,50]!).  The bounds check will be against
[20,50], so VRP could convert it to a test of "!= 51" if it wanted.
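Richard's walk-through can be sketched in C++ along the lines of the pseudo-C he describes; this is an illustrative sketch of the check placement, not GCC's actual generated code:

```cpp
#include <stdexcept>

// Hedged sketch of "Y := Y + 1;" for: subtype T is Integer range 20..50.
// The PLUS_EXPR happens in the base type (plain int), so its result
// range is [21,51]; the range check guards only the conversion back to
// T at the store.
int add_one_checked(int y /* assumed in [20,50] */) {
  int tmp = y + 1;                   // base-type arithmetic: [21,51]
  if (tmp < 20 || tmp > 50)          // bounds check for the store into T
    throw std::range_error("Constraint_Error");
  return tmp;                        // VRP: only tmp == 51 can fail above
}
```

Since VRP knows `tmp` is in [21,51], it could legitimately reduce the two comparisons to the single test `tmp == 51`.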




Re: Ada subtypes and base types

2006-02-21 Thread Laurent GUERBY
On Tue, 2006-02-21 at 15:02 -0700, Jeffrey A Law wrote:
> ?!?  WTF
> 
> Given an expression, we have to do computations in some other type than
> the type of the expression? Now that's just silly.  If the expression
> has some type X, then we should be doing our computations in type X.

That would obviously lead to a very inefficient implementation if you
put that in a language with user range types and bounds checking, since
it would force a dynamic bounds check after each operation.

I don't see many other definitions that allow for an efficient
implementation other than the choice the Ada language designers made here:
- type bound check on user variable assignment
- intermediate computations made in a type chosen by the compiler with
equal or larger bounds, in practice a convenient machine word with
no checks at all on most intermediate computations

You keep saying "brain damage", but please if you see a better design
(other than "forget about user range types" :), let us all know!

Laurent
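The two-point design Laurent describes can be sketched in C++; the `Ranged` type, its bounds, and the `base` helper are all made up for illustration, standing in for an Ada ranged type and its base type:

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical sketch: arithmetic runs unchecked in a compiler-chosen
// wider "base type" (int64_t here); the single bounds check happens
// only when a result is stored back into the user's ranged variable.
struct Ranged {                        // models: type X is range 0 .. Hi
  static constexpr int64_t Lo = 0, Hi = 0x7fffffff;
  int32_t value;
  Ranged &operator=(int64_t v) {       // the only place a check happens
    if (v < Lo || v > Hi) throw std::range_error("Constraint_Error");
    value = static_cast<int32_t>(v);
    return *this;
  }
};

inline int64_t base(const Ranged &r) { return r.value; }  // widen, no check
```

With this scheme, `y = base(y) + 1 - 1;` performs the intermediate `+1` in int64_t without any check, so even `y == Hi` completes without raising, and only the final store is validated.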



gcc-3.4-20060221 is now available

2006-02-21 Thread gccadmin
Snapshot gcc-3.4-20060221 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/3.4-20060221/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 3.4 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/branches/gcc-3_4-branch 
revision 111347

You'll find:

gcc-3.4-20060221.tar.bz2  Complete GCC (includes all of below)

gcc-core-3.4-20060221.tar.bz2 C front end and core compiler

gcc-ada-3.4-20060221.tar.bz2  Ada front end and runtime

gcc-g++-3.4-20060221.tar.bz2  C++ front end and runtime

gcc-g77-3.4-20060221.tar.bz2  Fortran 77 front end and runtime

gcc-java-3.4-20060221.tar.bz2 Java front end and runtime

gcc-objc-3.4-20060221.tar.bz2 Objective-C front end and runtime

gcc-testsuite-3.4-20060221.tar.bz2The GCC testsuite

Diffs from 3.4-20060214 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-3.4
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


[RFC] libffi testsuite gcc flags?

2006-02-21 Thread Andreas Tobler

Hello all,

I recently noticed that we run the libffi testsuite without any gcc
flags like -Ox or so. This had the effect that a test case was failing,
since it fell into the area of PR 323: return_fl2.c was compiled with -O0.


Now my question: would it make sense to run the testsuite with different
optimization flags like -O0, -O2, ...?
The ABI should stay the same across all optimization levels, yes, but
real-life apps usually use -O2, right?


It could be done easily by just running the suite again with a different 
set.
(I used -W -Wall to clean up the test cases from warnings, a patch which 
will follow soon)


Comments?

Thank you.

Andreas


Re: Bootstrap failure on trunk: x86_64-linux-gnu

2006-02-21 Thread Mark Mitchell
Jeffrey A Law wrote:

> My feeling?  Absolutely, TYPE_MIN_VALUE and TYPE_MAX_VALUE should
> represent the set of values that an object of the type may hold.
> Any other definition effectively renders those values useless.

I agree -- with the obvious caveat that it need not be the case that the
object actually have that value if the program has invoked undefined
behavior.  So, if you have an 5-bit type, stored in a byte, and you
manage to get 255 in that byte, and you read the value, you might see
255 at runtime -- but only because your program was busted anyhow.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Ada subtypes and base types

2006-02-21 Thread Robert Dewar

Richard Kenner wrote:


Let me try again and take a simpler example.  If we have

subtype T is Integer range 20..50;

Y: T;

   ... Y + 1 ...

What the tree looks like is a PLUS_EXPR of type "Integer" (the base type of
T), not T, whose first operand is a NOP_EXPR converting Y to Integer and
whose second operand is the constant 1 also of type Integer, not T.


Note that this *exactly* reflects the formal Ada semantics ...



Re: Ada subtypes and base types

2006-02-21 Thread Robert Dewar

Laurent GUERBY wrote:


You keep saying "brain damage", but please if you see a better design
(other than "forget about user range types" :), let us all know!


Actually I think everyone agrees on what is appropriate here.  It is
a matter of working out a clear view.  I don't think there are any
real disagreements, just some communication and documentation issues
(as well as making sure we don't lose conversions we need!).



Possible accept wrong code in C++?

2006-02-21 Thread Ed Smith-Rowland

All,

I have a template class:

template < class Datum, unsigned long Dim >
class DataGrid
{
   ...
public:
   ...
   std::ostream &  write( std::ostream & output ) const;
   ...
};

I have a non-member operator overload:

On GCC-3.4 Cygwin I needed:

template < class Datum, unsigned long Dim >
inline std::ostream &  operator<<( std::ostream & output,
                                   const DataGrid< Datum, Dim > & data_grid )
{
   return  data_grid.write( output );
}

On GCC-4.0 MacOSX I was able to get by with this:

inline std::ostream &  operator<<( std::ostream & output,
                                    const DataGrid & data_grid )
{
   return  data_grid.write( output );
}

I actually think that 3.4 is right!!!
Am I wrong?

I'll try mainline and 4.1 when I get back home.

Ed Smith-Rowland
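For reference, the template form that GCC 3.4 insists on can be demonstrated self-contained; the `DataGrid` body below is a minimal stand-in, not Ed's real class:

```cpp
#include <iostream>
#include <sstream>

// Hedged sketch: a non-member operator<< for a class template must
// itself be a template, and its parameter must name the specialization
// DataGrid<Datum, Dim>.  A bare "DataGrid" is only a valid type name
// inside the class's own scope (the injected-class-name), which is why
// the non-template overload looks suspect outside the class.
template <class Datum, unsigned long Dim>
class DataGrid {
public:
  std::ostream &write(std::ostream &output) const {
    return output << "grid dim=" << Dim;
  }
};

template <class Datum, unsigned long Dim>
inline std::ostream &operator<<(std::ostream &output,
                                const DataGrid<Datum, Dim> &data_grid) {
  return data_grid.write(output);
}
```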



another question about branch island

2006-02-21 Thread Eric Fisher
Hi,
Thanks for giving directions. I've read the code about this and got
some understanding of it, but I may have made a mistake. For the
linker, ppc64elf.em and elf64-ppc.c have the solution. For gcc,
rs6000.c also has a solution. There is a mail, [tree-ssa] -longcall
branch islands for Darwin/PPC, from Stuart Hastings on the mailing
list, that talks about it.
* rs6000.c (output_call, macho_branch_islands,
add_compiler_branch_island, no_previous_def, get_previous_label)
Revisions of xx_stub functions for branch islands,
add -fPIC support for Darwin.
* rs6000-protos.h (output_call) Prototype.
* rs6000.md Use output_call.
* invoke.texi Explain Darwin semantics of -longcall.
* testsuite/gcc.dg/darwin-abi-1.c Revise testcase for -longcall/jbsr.
 Are they the same thing? If gcc just emits the instruction 'jbsr',
then the assembler must be modified to handle this instruction. Then I
need to modify all of gcc, as, ld, and bfd. Do I understand it right?

Best regards.
Eric.


Re: Toplevel bootstrap patches cause bootstrap breakage

2006-02-21 Thread David Edelsohn
> Daniel Jacobowitz writes:

Daniel> Want to try this?

Much better with the patch.  Hopefully it can be committed to
mainline soon. 

Thanks, David



Re: another question about branch island

2006-02-21 Thread Eric Christopher



 Are they the same thing? If gcc just emits the instruction 'jbsr',
then the assembler must be modified to handle this instruction. Then I
need to modify all of gcc, as, ld, and bfd. Do I understand it right?


Close enough. I really suggest that you do what everyone else has
suggested and modify binutils though. The problem with the solution in
gcc is that it cannot handle assembly language files that can cross the
32M boundary (or whatever boundary you've got in your processor).

-eric


Request For Installation Package of Bison V1.875b

2006-02-21 Thread Amarnath
I am in need of the following version of Bison tool's installation
package available with CYGWIN.

Version - 1.875b

Please help me in letting me know the site address / link from which I
can get that package.

Thank you.

With Regards,
Amarnath M



Invalid gen_rtx_INSN_LIST usage?

2006-02-21 Thread Marcin Dalecki
Looking at the reorg.c code I came across the function
try_merge_delay_insns().

There around the line 1488 we will find the following code:

merged_insns = gen_rtx_INSN_LIST (SImode, dtrial,
  merged_insns);

Please note that in literally *all* other cases the gen_rtx_INSN_LIST
function is supposed to take an enum reg_note as first argument and
not the somehow arbitrary value SImode. All subsequent rtx tree
processing assumes that in the case of instruction lists we have a
reg_note present in this field.

I assume therefore that the value REG_DEP_TRUE should be passed as
first argument to gen_rtx_INSN_LIST there. I think only the fact that
the code in question isn't likely to trigger kept this from showing up
as a bug immediately.

Is this analysis correct?

Marcin Dalecki




Re: Invalid gen_rtx_INSN_LIST usage?

2006-02-21 Thread Ian Lance Taylor
Marcin Dalecki <[EMAIL PROTECTED]> writes:

> Looking at the reorg.c code I came across the function
> try_merge_delay_insns().
> There around the line 1488 we will find the following code:
> 
>   merged_insns = gen_rtx_INSN_LIST (SImode, dtrial,
> merged_insns);
> 
> Please note that in literally *all* other cases the gen_rtx_INSN_LIST
> function is supposed to take an enum reg_note as first argument and
> not the somehow arbitrary value SImode. All subsequent rtx tree
> processing
> is assuming that in case of instruction lists we have a reg_note
> present in
> this field.
> 
> I assume therefore that the value REG_DEP_TRUE should be passed as
> first argument
> to gen_rtx_INSN_LIST there. I think only the fact that the code in
> question
> isn't likely to trigger didn't make this occur immediately as a bug.
> 
> Is this analysis correct?

No.

An INSN_LIST which goes into the REG_NOTES field must use a register
note such as REG_DEP_TRUE in the mode field.  But merged_insns does
not go into the REG_NOTES field.  It is only used within the function
try_merge_delay_insns.  Look at the loop at the end of the insns to
see how the mode is tested.  The value of SImode is indeed arbitrary.
It can be anything distinct from VOIDmode.

Ian


Re: Request For Installation Package of Bison V1.875b

2006-02-21 Thread Mike Stump

On Feb 21, 2006, at 7:40 PM, Amarnath wrote:

I am in need of the following version of Bison tool's installation
package available with CYGWIN.


We are not Cygwin.  You can go over to the Cygwin site and install it,
and it will let you grab and install this.  Try Google if you can't
find the Cygwin site.


Re: Invalid gen_rtx_INSN_LIST usage?

2006-02-21 Thread Marcin Dalecki


On 2006-02-22, at 05:41, Ian Lance Taylor wrote:


Marcin Dalecki <[EMAIL PROTECTED]> writes:


Looking at the regor.c code I came across the function
try_merge_delay_insns().
There around the line 1488 we will find the following code:

merged_insns = gen_rtx_INSN_LIST (SImode, dtrial,
  merged_insns);


Is this analysis correct?


No.

An INSN_LIST which goes into the REG_NOTES field must use a register
note such as REG_DEP_TRUE in the mode field.  But merged_insns does
not go into the REG_NOTES field.  It is only used within the function
try_merge_delay_insns.  Look at the loop at the end of the insns to
see how the mode is tested.  The value of SImode is indeed arbitrary.
It can be anything distinct from VOIDmode.


Thank you for explaining. However, in this case just reusing the
following enumeration value:

REG_SAVE_NOTE

seems more pleasant?

Marcin Dalecki




Re: Invalid gen_rtx_INSN_LIST usage?

2006-02-21 Thread Ian Lance Taylor
Marcin Dalecki <[EMAIL PROTECTED]> writes:

> > An INSN_LIST which goes into the REG_NOTES field must use a register
> > note such as REG_DEP_TRUE in the mode field.  But merged_insns does
> > not go into the REG_NOTES field.  It is only used within the function
> > try_merge_delay_insns.  Look at the loop at the end of the insns to
> > see how the mode is tested.  The value of SImode is indeed arbitrary.
> > It can be anything distinct from VOIDmode.
> 
> Thank you for explaining. However, in this case just reusing the
> following
> enumeration value:
> 
> REG_SAVE_NOTE
> 
> seems to be more pleasant?

Speaking personally, I don't agree.  I think that is precisely as
confusing as the current code.

I think it would be more pleasant to define an enum in reorg.c with
meaningful names, and use that.

Ian
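Ian's suggestion might look like the following; the enumerator names are purely illustrative and not from the actual GCC source:

```cpp
// Hypothetical sketch of a reorg.c-local enum with meaningful names for
// tagging entries of the merged_insns list, replacing the arbitrary
// SImode-vs-VOIDmode convention with self-documenting values.
enum merged_insn_tag {
  MERGED_PLAIN = 0,     // the role currently played by VOIDmode
  MERGED_IN_DELAY_SLOT  // the role currently played by the arbitrary SImode
};
```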


Re: Toplevel bootstrap patches cause bootstrap breakage

2006-02-21 Thread Ranjit Mathew

David Edelsohn wrote:
> make[4]: Entering directory `/tmp/20060221/prev-gcc'
> /tmp/20060221/./prev-gcc/xgcc -B/tmp/20060221/./prev-gcc/ ... -o xgcc
> collect2: error trying to exec '/tmp/20060221/./prev-gcc/xgcc': execvp: A 
>   file or directory in the path name does not exist.

By the way, I get a similar (but not the same) error in a
parallel bootstrap - the execvp failed because "xgcc" didn't
have execute permission yet. A serial bubblestrap let me
proceed further. (Revision 111359.)

Thanks,
Ranjit.

--
Ranjit Mathew  Email: rmathew AT gmail DOT com

Bangalore, INDIA.Web: http://rmathew.com/





Re: Dead link http://gcc.gnu.org/install/build.html on http://gcc.gnu.org/install/

2006-02-21 Thread Paolo Bonzini
There's a requirement to not use footnotes in install.texi, apparently.
Also, I did not know about install.texi2html, so I added a note on it.


Ok to install?

Paolo

2006-02-22  Paolo Bonzini  <[EMAIL PROTECTED]>

* install.texi: Add notes on install.texi2html.
(Building in parallel): Do not use footnotes.

Index: install.texi
===
--- install.texi(revision 111328)
+++ install.texi(working copy)
@@ -46,6 +46,11 @@
 @c 1999, 2000, 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
 @c *** Converted to texinfo by Dean Wakerley, [EMAIL PROTECTED]

+@c IMPORTANT: whenever you modify this file, run `install.texi2html' to
+@c test the generation of HTML documents for the gcc.gnu.org web pages.
+@c
+@c Do not use @footnote{} in this file as it breaks install.texi2html!
+
 @c Include everything if we're not making html
 @ifnothtml
 @set indexhtml
@@ -1720,9 +1725,9 @@ compilation options.  Check your target'

 @section Building in parallel

-You can use @samp{make -j [EMAIL PROTECTED] supported by GNU Make 3.79
-  and above, which is anyway necessary to build GCC.}, instead of @samp{make},
-to build GCC in parallel.  You can also specify a bigger number, and
+GNU Make 3.79 and above, which is necessary to build GCC, support
+building in parallel.  To activate this, you can use @samp{make -j 2}
+instead of @samp{make}.  You can also specify a bigger number, and
 in most cases using a value greater than the number of processors in
 your machine will result in fewer and shorter I/O latency hits, thus
 improving overall throughput; this is especially true for slow drives

