Problems with compiling svn trunk

2007-05-16 Thread mark
This also occurred with the latest snapshot:

Configure syntax was:

./configure --prefix=/opt/gcc-4.3

Configure completed fine. Make is getting stuck at:

/part/build/mark/gcc/host-x86_64-unknown-linux-gnu/gcc/xgcc 
-B/part/build/mark/gcc/host-x86_64-unknown-linux-gnu/gcc/ 
-B/opt/gcc-4.3/x86_64-unknown-linux-gnu/bin/ 
-B/opt/gcc-4.3/x86_64-unknown-linux-gnu/lib/ -isystem 
/opt/gcc-4.3/x86_64-unknown-linux-gnu/include -isystem 
/opt/gcc-4.3/x86_64-unknown-linux-gnu/sys-include -g -fkeep-inline-functions 
-O2  -O2 -g -O2  -DIN_GCC -W -Wall -Wwrite-strings -Wstrict-prototypes 
-Wmissing-prototypes -Wold-style-definition  -isystem ./include  -fPIC -g 
-DHAVE_GTHR_DEFAULT -DIN_LIBGCC2 -D__GCC_FLOAT_NOT_NEEDED   -I. -I. 
-I../../host-x86_64-unknown-linux-gnu/gcc -I../.././libgcc -I../.././libgcc/. 
-I../.././libgcc/../gcc -I../.././libgcc/../include 
-I../.././libgcc/../libdecnumber/bid -I../.././libgcc/../libdecnumber 
-I../../libdecnumber -o decContext.o -MT decContext.o -MD -MP -MF 
decContext.dep -c ../.././libgcc/../libdecnumber/decContext.c
In file included from ../.././libgcc/../libdecnumber/decContext.c:36:
../.././libgcc/../libdecnumber/decContext.h:52:50: error: gstdint.h: No such 
file or directory
... lots of other errors ...


gstdint.h seems to be at:
/part/build/mark/gcc/host-x86_64-unknown-linux-gnu/libdecnumber/gstdint.h


I'm a bit lost at the moment as I've never had to figure out how the
gcc make/bootstrap works until now, nor has it ever failed me in the
past.

Any ideas?

Thanks,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/



Re: Problems with compiling svn trunk [libdecnumber]

2007-05-17 Thread mark
> >>You are compiling in the source directory which is not a well
> >>supported way of compiling GCC.

To confirm this theory - yes, make completed with most targets when
built from a different directory than the source directory. I should
have read the latest install instructions more carefully.
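
For anyone else who hits this: a minimal sketch of the separate build
directory sequence (paths here are illustrative, not the exact ones from
this thread):

  mkdir /part/build/mark/gcc-objdir
  cd /part/build/mark/gcc-objdir
  /path/to/gcc-trunk/configure --prefix=/opt/gcc-4.3
  make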

On Thu, May 17, 2007 at 09:20:55AM +0200, Paolo Bonzini wrote:
> >Yeah, but I wasn't and still ran into that (or similar) problem. :)
> Anyway, this patch solves it:

Thanks for the patch.

Unfortunately, due to my unfamiliarity with GCC (or my incompetence? :-) ),
I don't know how to re-generate libgcc/configure from the altered
libgcc/configure.ac. I checked the install instructions and couldn't
find a reference to this. I tried both autoconf and automake. Neither
did the 'right' thing. :-)

Cheers,
mark




Re: 4.3 release plan

2007-05-20 Thread mark
On Sun, May 20, 2007 at 10:39:43PM -0700, Brooks Moses wrote:
> Bernardo Innocenti wrote:
> >(the next proposal is likely to cause some dissent)
> >What about moving 4.3 to stage 3 *now* and moving everything
> >else in 4.4 instead?  Hopefully, it will be a matter of just
> >a few months.  From http://gcc.gnu.org/gcc-4.3/changes.html,
> >it looks like it would already be quite a juicy release.
> Why?
> I mean, I suppose there could be advantages to doing this, but you 
> haven't mentioned even one.

I think a few people (me!) are waiting for GCJ to have official
support for Java 5 syntax and class libraries. Not that I would like
to rush you - or skip any valuable merges - but if the code that is in
right now is in a near ready state, waiting up to a year before
releasing seems unfortunate. :-(

Cheers,
mark




Re: Type-punning

2007-06-26 Thread mark
On Tue, Jun 26, 2007 at 11:42:27PM +0200, Herman Geza wrote:
> struct Point {
>   float x, y, z;
> };
> struct Vector {
>   float x, y, z;
> 
>   Point &asPoint() {
>   return reinterpret_cast<Point &>(*this);
>   }
> };

> Point and Vector have the same layout, but GCC treats them differently when 
> it does aliasing analysis.  I have problems when I use Vector::asPoint.  
> I use asPoint very trivially, it is very easy to detect (for a human) 
> that references point to the same address.  Like

As a "human", I don't see how they are the same. Other than having three
fields with the same type, and same name, what makes them the same?

What happens when Point or Vector have a fourth field added? What if somebody
decides to re-order the fields to "float z, x, y;"? Would you expect the
optimization to be silently disabled? Or would you expect it to guess based
on the variable names?
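
For what it's worth, a sketch of a conversion that stays within the
aliasing rules (my own example, not Herman's code): copy the object
representation instead of reinterpreting the reference:

  #include <cstring>

  struct Point  { float x, y, z; };
  struct Vector { float x, y, z; };

  // Copying the bytes is well-defined even though Point and Vector are
  // distinct types for aliasing purposes; for small structs like these
  // the optimizer can usually elide the copy.
  Point asPoint(const Vector &v)
  {
      Point p;
      std::memcpy(&p, &v, sizeof p);
      return p;
  }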

Cheers,
mark




Re: I'm sorry, but this is unacceptable (union members and ctors)

2007-06-27 Thread mark
On Wed, Jun 27, 2007 at 10:14:18PM -0700, michael.a wrote:
> >> For instance, say you need to implement a GUI, so you have yourself a
> >> rectangle struct which consists of four floating point values (the origin
> >> and difference between the opposite corner) ...Now you want those four
> >> values, but you also have a 2D vector struct.
> ...
> I pointed this out as the obvious portable solution somewhere in the thread.
> I just firmly believe this is an unnecessarily back breaking way of going
> about it (and physically backbreaking for whoever would have to change all
> of the code) 
> It would be a blessing were intelligible code somewhat higher up on the
> rungs of c++ priorities (being the ever ubiquitous mainstay systems
> programming language it has become and will likely remain)

Mind reading has always been considered a blessing when it comes
to programming languages. It is also an impossibility.

I don't understand what is being requested. Have one structure with
four fields, and another with two, and allow them to be used
automatically interchangeably? How is this a good thing? How will
this prevent the implementor from making a stupid mistake?

Cheers,
mark




Re: I'm sorry, but this is unacceptable (union members and ctors)

2007-06-28 Thread mark
On Wed, Jun 27, 2007 at 11:36:23PM -0700, michael.a wrote:
> mark-28 wrote:
> > I don't understand what is being requested. Have one structure with
> > four fields, and another with two, and allow them to be used
> > automatically interchangeably? How is this a good thing? How will
> > this prevent the implementor from making a stupid mistake?
> It's less a question of making a stupid mistake, as code being intelligible
> for whoever must work with it / pick it up quickly out of the blue. The more
> intelligible (less convoluted) it is, the easier it is to quickly grasp what
> is going on at the macroscopic as well as the microscopic levels. The way I
> personally program, typically a comment generally only adds obfuscation
> to code which already more efficiently betrays its own function.

I agree with the sentiment, but not with the relevance. Having a four-field
structure automatically appear as a completely different two-field structure,
based only upon a match between field types and names, seems more complicated
and magical to me, not less.

> Also syntactically, I think it's bad form for a function to simply access a
> data member directly / fudge its type. A function should imply that
> something functional is happening (or could be happening -- in the case of
> protected data / functionality)

This would disagree with much of the modern programming world. Black box
programming implies that you do not need to know whether a field is a real
field, or whether it is derived from other fields. Syntactically, it sounds
like you are asking for operator overloading on fields, which maps to
properties in some other languages. This is not simpler, as it only conceals
that a function may be performed underneath. It may look prettier, or use
fewer symbol characters.

Even so - I don't see why the following simplification of the example
provided would not suffice for your listed requirement:

class Rectangle {
Vector2d position;
Vector2d size;
};

... rectangle.position.x = ... ...

To have these automatically be treated as compatible types seems very
wrong:

class Rectangle { int x, y, w, h; };
class Vector2d  { int dx, dy; };

Imagine the fields were re-arranged:

class Rectangle { int y, x, h, w; };
class Vector2d  { int dx, dy; };

Now, what should it do?
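
If the goal is only Vector2d-style access to a Rectangle's fields, a
sketch with explicit accessors (the names are mine, not from the thread)
avoids any punning:

  struct Vector2d { int dx, dy; };

  class Rectangle {
  public:
      Vector2d &position() { return position_; }
      Vector2d &size()     { return size_; }
  private:
      Vector2d position_;
      Vector2d size_;
  };

  // rectangle.position().dx = ... reads much like the punned version,
  // with no assumptions about field order or layout.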

> Granted, often intelligibility can demand too much semantic overhead from
> strict languages like c, but just as often perhaps, in just such a case, a
> simple accommodation is plainly obvious.

My best interpretation of this thread is that you are substituting
intelligibility with DWIM, where DWIM for you is not the same as DWIM
for me, and I don't believe you could write out what your DWIM
expectation is in a way that would not break.

Cheers,
mark




Re: I'm sorry, but this is unacceptable (union members and ctors)

2007-06-28 Thread mark
> Mark Mielke wrote "Why not This?":
> > class Rectangle {
> > Vector2d position;
> > Vector2d size;
> > };
> > ... rectangle.position.x = ... ...

On Thu, Jun 28, 2007 at 03:00:07AM -0700, michael.a wrote:
> My foremost personal requirement is that no code need change outside the
> object definition files. And besides it is ridiculous to jump through two
> hoops, when one would suffice.
> ...
> Have you read the thread? (not that you should -- but before making such an
> assessment)

You find it unacceptable that GCC implements the C++ spec, and fails to
compile your software, which was not written according to the C++ spec.

You find it unacceptable that you would need to change your code to
match the spec, but would instead rather modify GCC and have your patch
to GCC rushed out so that you can release your product on Linux.

You find it unacceptable that union members are not allowed to contain
structs with constructors, because you believe that the practice is safe
and valid, because the designer knows best. You believe the spec should
be changed and that Microsoft has led the way in this regard.

Do I have this right?

Is there a reason that this discovery did not occur until late in your
development cycle? It seems to me that the first mistake on your part
was not testing on Linux/GCC when writing the original code, if you
knew that this was an intended port.

Cheers,
mark

P.S. I apologize for getting confused on my last post. I was tired + ill
 and stupidly posting after midnight. Perhaps this one will be more relevant
 and effective?




Implicit conversions between vectors

2006-10-12 Thread Mark Shinwell

Currently we permit implicit conversions between vectors whose total
bitsizes are equal but which are divided into differing numbers of subparts.
It seems that in some circumstances this rule is overly lax.  For example
the following code, using vector types (whose definitions I have provided
from the intrinsics header file) defined for the ARM NEON instruction set,
is accepted without error or warning:

...
typedef __builtin_neon_qi int8x8_t  __attribute__ ((__vector_size__ (8)));
typedef __builtin_neon_hi int16x4_t __attribute__ ((__vector_size__ (8)));
...

int8x8_t f (int16x4_t a)
{
  return a;
}

Here, the compiler is not complaining about an attempt to implicitly
convert a vector of four 16-bit quantities to a vector of eight
8-bit quantities.

This lack of type safety is unsettling, and I wonder if it should be fixed
with a patch along the lines of the (not yet fully tested) one below.  Does
that sound reasonable?  It seems right to try to fix the generic code here,
even though the testcase in hand is target-specific.  If this approach
is unreasonable, I guess some target-specific hooks will be needed.
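
Note that an explicit conversion would presumably remain available for
code that really wants the reinterpretation; using the typedefs above:

  int8x8_t g (int16x4_t a)
  {
    return (int8x8_t) a;  /* explicit, so unaffected by the new check */
  }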

Mark


--


Index: gcc/c-common.c
===================================================================
--- gcc/c-common.c  (revision 117639)
+++ gcc/c-common.c  (working copy)
@@ -1014,7 +1014,8 @@ vector_types_convertible_p (tree t1, tre
 && (TREE_CODE (TREE_TYPE (t1)) != REAL_TYPE ||
 TYPE_PRECISION (t1) == TYPE_PRECISION (t2))
 && INTEGRAL_TYPE_P (TREE_TYPE (t1))
-   == INTEGRAL_TYPE_P (TREE_TYPE (t2)));
+   == INTEGRAL_TYPE_P (TREE_TYPE (t2))
+&& TYPE_VECTOR_SUBPARTS (t1) == TYPE_VECTOR_SUBPARTS (t2));
 }

 /* Convert EXPR to TYPE, warning about conversion problems with constants.


Re: Implicit conversions between vectors

2006-10-12 Thread Mark Shinwell

Andrew Pinski wrote:

On Thu, 2006-10-12 at 13:03 +0100, Mark Shinwell wrote:

typedef __builtin_neon_qi int8x8_t  __attribute__ ((__vector_size__ (8)));
typedef __builtin_neon_hi int16x4_t __attribute__ ((__vector_size__ (8)));
...

int8x8_t f (int16x4_t a)
{
   return a;
}


This should error out and it is a regression from previous versions (I
can check which ones but I think 3.4.0 rejected it).  The two targets
that I work on daily at work (including the language extension
specifications), both say this is invalid code and should be rejected.


Thanks, will test properly and post to gcc-patches.

Mark


Re: Implicit conversions between vectors

2006-10-12 Thread Mark Shinwell

Ian Lance Taylor wrote:

I believe that the problem with changing this unconditionally is that
the Altivec programming guidelines specify the rules which gcc
currently follows: you are permitted to assign one vector variable to
another, without an explicit cast, when the vectors are the same size.
So please check the Altivec programming manual.


Will do, thanks.

Mark


Re: aligned attribute and the new operator (pr/15795)

2006-10-12 Thread Mark Mitchell

[EMAIL PROTECTED] wrote:


If we are willing to consider an ABI change, I think an approach that
allows new to call some form of memalign would be better than having the
compiler force alignment after calling new.  


Are we open to making an ABI change?


Personally, I think an ABI change, at the compiler level should be off 
the table.  (I say "Personally" to make clear that this is just my 
opinion as a C++ maintainer and as a co-developer of the C++ ABI 
specification, but not an SC decision.  And, for those who may find 
these parentheticals tedious, they're there because some people have 
previously interpreted statements from me as dictates; I'm trying to be 
very careful to make sure it's clear what hat I'm wearing.)


The C++ ABI has actually been stable for years now, which is a huge 
achievement.  We've gotten binary interoperability to work for most 
programs between a lot of C++ compilers, which is a good thing for all. 
 In my opinion, the next change to the C++ ABI should come if (and only 
if) C++0x requires changes.  Even there, I would hope for 
backwards-compatible changes -- for example, mangling for variadic 
templates would ideally be an extension to the current mangling scheme. 
 In other words, we should strive to make it possible to link current 
C++ libraries with C++0x programs, which means that the sort of change 
you're considering would be off the table.


Adding a compiler command-line option to specify the alignment of memory 
returned by "operator new", or a GNU attribute that libstdc++ could add 
to the default declaration (with a system-dependent value, of course), 
etc. seems fine to me, but I'd be very hesitant to change the ABI proper.
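
Something along these lines is the sort of thing I have in mind (the
attribute name below is entirely hypothetical, purely to show the shape):

  #include <new>  // declares the replaceable operator new

  // A hypothetical redeclaration libstdc++ might provide, with a
  // system-dependent alignment value:
  void *operator new (std::size_t)
      __attribute__ ((hypothetical_returns_aligned (16)));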


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Proposed semantics for attributes in C++ (and in C?)

2006-10-15 Thread Mark Mitchell
typedef __attribute__((...)) S T;
  T v;

where T is some invented type name different from all others in the program.

For example given:

  __attribute__((packed)) S v;

the type of "&v" is "__attribute__((packed)) S *", and cannot be passed 
to a function expecting an "S*", but can of course be passed to a 
function expecting an "__attribute__((packed)) S *", or a typedef for 
such a type.
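
To make that concrete, a sketch of what would and would not be accepted
(hypothetical code illustrating the proposed semantics, not current GCC
behaviour):

  struct S { char c; int i; };

  void f(S *);
  void g(__attribute__((packed)) S *);

  __attribute__((packed)) S v;

  // f(&v);  // rejected under the proposal: &v does not convert to S*
  g(&v);     // OK: the types match exactly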


Thoughts?



Re: Proposed semantics for attributes in C++ (and in C?)

2006-10-15 Thread Mark Mitchell

Joseph S. Myers wrote:


On Sun, 15 Oct 2006, Mark Mitchell wrote:


We have a number of C++ PRs open around problems with code like this:

  struct S {
void f();
virtual void g();
  };

  typedef __attribute__((...)) struct S T;


I was happy with the state before r115086 (i.e. with it being documented 
that such attributes on typedefs are ignored).  But given that we are now 
attempting to honour them, the proposed semantics seem reasonable.


Yes, I would be happy to explicitly ignore semantic attributes in 
typedefs as well, with a warning (or even an error).  However, I had not 
realized that we ever did that; I'm surprised that the change that 
instituted this is so recent.  I suppose that explains why we're 
suddenly seeing a rash of such problems.  Jason, as you made this 
change, do you have any comments on the proposal?




Re: Proposed semantics for attributes in C++ (and in C?)

2006-10-16 Thread Mark Mitchell

Jason Merrill wrote:

I don't think my patch changed the handling of class typedefs; certainly 
my intent was only to change how we handle


  class __attribute ((foo)) C

Previously we rejected it, now we apply the attributes to the class.


OK, that certainly makes sense.  (That's one of the items in the 
proposal I wrote up: that you can apply attributes at the point of 
declaration of a class.)



Which PRs are you referring to?


One example is:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28558

However, this is a problem with non-semantic attributes, and not related 
to your patch.  So, I apologize for any aspersions cast.  But, it does 
motivate for writing something down about what we want semantics we 
want.  Here, what I think we want is (as per the proposal) to create a 
new anonymous typedef for "__attribute__((unused)) A", but consider that 
the same type as "A".


I was pretty sure there were other PRs, but I'm not able to find them 
now, so perhaps I was dreaming.  I thought there were also PRs about 
typeid and mangling failing (and/or doing the wrong thing) for types 
with attributes (including scalars with attributes).


I'd be inclined to prohibit semantic attributes on typedefs in general. 


That's certainly simpler.  I'm happy to be talked out of that idea. :-)

 Extending the type system to handle attribute types seems excessively 
complicated.  I think we should define a set of attributes which prevent 
us from taking the address of a variable with that attribute 
(packed...anything else?) and check for them at the same places we check 
for taking the address of a bitfield.


That seems like a good idea to me.  However, one consequence is that a 
packed class cannot have member functions (since you'd never be able to 
get a "this" pointer for them); do you think that's OK?




Re: g++ -print-prefix or -print-install-prefix

2006-10-16 Thread Mark Mitchell

Benjamin Kosnik wrote:

For testing outside of the build directory, it would be convenient to
have $SUBJECT. 


This could be used in conjunction with -dumpversion to create
on-the-fly include and library directory paths for C++ includes in a
sane manner, much like the following:


Why do you need this?  For installed-compiler testing, the compiler 
already searches the obvious places.  (I'm not trying to be cute: I'm 
genuinely curious.)


I agree that it would be nice if -print-search-dirs listed include 
directories.  It already lists the paths searched for programs and 
libraries, so that seems like a logical place to add header directories.




Re: [PATCH] Relocated compiler should not look in $prefix.

2006-10-16 Thread Mark Mitchell

Ian Lance Taylor wrote:

"Carlos O'Donell" <[EMAIL PROTECTED]> writes:


A relocated compiler should not look in $prefix.


I agree.

I can't approve your patches, though.


This patch is OK, once we reach Stage 1.



Re: Proposed semantics for attributes in C++ (and in C?)

2006-10-17 Thread Mark Mitchell

Geoffrey Keating wrote:


A typedef declaration which adds semantic attributes to a non-class
type is valid, but again creates an entirely new type.  It is invalid to
do anything that would require either type_info or a mangled name for
"Q", including using it as an argument to typeid, throwing an exception
of a type involving "Q", or declaring a template to take a parameter
of a type involving "Q".  (We could relax some of these restrictions
in future, if we add mangling support for attributes.)


Declaring a function which takes a 'Q' also requires the mangled name of 'Q'.


Good point!


where T is some invented type name different from all others in the program.

For example given:

   __attribute__((packed)) S v;

the type of "&v" is "__attribute__((packed)) S *", and cannot be
passed to a function expecting an "S*", but can of course be passed to
a function expecting an "__attribute__((packed)) S *", or a typedef
for such a type.


... except that there can't be any such functions.  You could assign
it to another variable of the same type, or a field of a class with
that type.


Right.  And, since there seems to be consensus that you shouldn't be 
able to apply semantic attributes to class types, "packed" is a bad 
example there too.  (If you applied "packed" at the point of declaration 
of "S", then "S" has a different layout than it otherwise would, but we 
don't need to do anything regarding mangling, etc.)


Thanks,



GCC 4.2/4.3 Status Report (2006-10-17)

2006-10-17 Thread Mark Mitchell
As Gerald noticed, there are now fewer than 100 serious regressions open 
against mainline, which means that we've met the criteria for creating 
the 4.2 release branch.  (We still have 17 P1s, so we've certainly got 
some work left to do before creating a 4.2 release, and I hope people 
will continue to work on them so that we can get 4.2 out the door in 
relatively short order.)


The SC has reviewed the primary/secondary platform list, and approved it 
unchanged, with the exception of adding S/390 GNU/Linux as a secondary 
platform.  I will reflect that in the GCC 4.3 criteria.html page when I 
create it.


In order to allow people to organize themselves for Stage 1, I'll create 
the branch, and open mainline as Stage 1, at some point on Friday, 
October 20th.  Between now and then, I would like to see folks negotiate 
amongst themselves to get a reasonable order for incorporating patches.


See:

  http://gcc.gnu.org/ml/gcc/2006-09/msg00454.html

I've also reviewed the projects listed here:

  http://gcc.gnu.org/wiki/GCC_4.3_Release_Planning

The variadic templates project is in limbo, I'm afraid.  The SC doesn't 
seem to have a clear opinion on even the general C++ policy discussed on 
the lists, which means that Jason, Nathan, and I have to talk about 
variadic templates and work out what to do.


IMA for C++ is another difficult case.  This is unambiguously useful, 
though duplicative of what we're trying to build with LTO.  That's not a 
bad thing, since LTO is clearly at least one more release cycle away, 
and IMA might be ready sooner.  On the other hand, if the IMA changes 
were disruptive to the C++ front end in general, then that might be a 
problem.  I guess we just have to evaluate the patch, when it's ready.




Re: GCC 4.2/4.3 Status Report (2006-10-17)

2006-10-18 Thread Mark Mitchell

Kaveh R. GHAZI wrote:


The configury bit was approved by DJ for stage1, but do you see any reason
to hold back?  Or is this posting sufficient warning that people may need
to upgrade?  (I.e.  people should start upgrading their libraries now.)


I don't see any reason to hold back.

Thanks,



Re: C++ name mangling for local entities

2006-10-19 Thread Mark Mitchell

Geoffrey Keating wrote:

For GCC, I've found it necessary to have a way to name local (that is,
namespace-scope 'static') variables and functions which allows more
than one such symbol to be present and have distinct mangled names.


With my GCC hat on, I don't think this is desirable.  For ELF at least, 
there's nothing that prevents us using the same name for multiple local 
symbols (since "ld -r" does it).  For the sake of both LTO and IMA, we 
should add a feature to the assembler like:


   .asm_alias x = y

that says references to "x" are emitted as references to a new "y", 
distinct from all other "y" references.  That would obviate the need for 
multiple statics with the same name, since in the case that you want to 
do this (IMA) you could instead emit them using whatever name was 
convenient for generating the assembly file, and then let the assembler 
emit a symbol with the correct name.  That would help to meet the 
objective that the output from IMA and/or LTO looks like the output from 
"ld -r", modulo optimization.  I think it would be great if you would 
help implement that, which would then make extending the C++ ABI change 
unnecessary.


Now, with my C++ ABI hat on, and assuming that the idea above is 
intractable, then: (a) as you note, this is out-of-scope for the C++ 
ABI, if we confine ourselves to pure ISO C++, but (b) if the other ABI 
stakeholders don't mind, I don't see any reason not to consider 
reserving a chunk of the namespace.



What I currently have implemented is

 <unqualified-name> ::= <operator-name>
                    ::= <ctor-dtor-name>
                    ::= <source-name>
                    ::= <local-source-name>    // new

 <local-source-name> ::= L <source-name> <number> _    // new

It's distinguishable from the other possibilities, because operator-name
starts with a lowercase letter, ctor-dtor-name starts with 'C' or 'D',
and source-name starts with a digit.  There is no semantic meaning
attached to the number in a local-source-name, it exists only to keep
different names distinct (so it is not like <discriminator> in a
<local-name>).


That's true, but is there a reason not to use the discriminator 
encoding?  There might well be an ambiguity, but I didn't see one at 
first blush.  If so, that would seem most natural to me.


I do think that your proposed encoding is unambiguous, though, so it 
certainly seems like a reasonable choice, especially if the 
discriminator approach doesn't work.
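
(For illustration, expanding the grammar above with made-up names: a
file-scope static "x" that needed a distinct mangled name might get the
unqualified-name "L1x0_", that is, "L", the source-name "1x", the
number "0", and the terminating "_".)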




Re: C++ name mangling for local entities

2006-10-20 Thread Mark Mitchell



we should add a feature to the assembler like:

   .asm_alias x = y

that says references to "x" are emitted as references to a new "y", 
distinct from all other "y" references. 


On Darwin, all the DWARF information in .o files is matched by name¹ 
with symbols in the executable, so this won't work. 


In that case, on Darwin, the assembler could leave the name "x" as "x", 
so that all the names in the object file were unique.  Since this is 
only for local symbols, there's no ABI impact, as you mentioned.  Then, 
we'd have better behavior on ELF platforms and would not have to make 
any change to the C++ ABI.  You could use your suggested encoding in GCC 
as "x", but it would only show up in object files on systems that don't 
support multiple local symbols with the same name.



Now, with my C++ ABI hat on


That's true, but is there a reason not to use the discriminator 
encoding? 



You mean

 ::= Z  

?


Yes, that's what I meant.  I think that would be best, partly because it 
avoids having to reserve "L", but:



 <local-source-name> ::= L <source-name> <discriminator>

will work and is more consistent, so consider the proposal amended to 
have that.


also seems OK, assuming that we need to do this at all.



GCC 4.2 branch created; mainline open in Stage 1

2006-10-20 Thread Mark Mitchell
I have created the GCC 4.2 branch.  The branch is open for 
regression-fixes only, as is customary for release branches.  I believe 
I have completed the steps in branching.html with two exceptions:


1. I have not created a mainline snapshot manually.  I don't quite 
understand how to do that, and if the only issue is incorrect "previous 
snapshot" references in the generated mail, it doesn't really seem worth 
the trouble.  If there's some more grievous problem, please let me know, 
and I will try to fix it tomorrow.


2. I have not regenerated {gcc,cpplib}.pot, or sent them off to the 
translation project.  Joseph, would you please do that, at your convenience?


The mainline is now in Stage 1.

Thanks to those who helped fix PRs to meet the 4.2 branch criteria!



Re: Question about LTO dwarf reader vs. artificial variables and formal arguments

2006-10-21 Thread Mark Mitchell

Diego Novillo wrote:

Ian Lance Taylor wrote on 10/21/06 14:59:


That is, we are not going to write out DWARF.  We can't, because DWARF
is not designed to represent all the details which the compiler needs
to represent.  What we are going to write out is a superset of DWARF.
And in fact, if it helps, I think that we shouldn't hesitate to write
out something which is similar to but incompatible with DWARF.

In general reading and writing trees is far from the hardest part of
the LTO effort.  I think it is a mistake for us to get too tied up in
the details of how to represent things in DWARF.  (I also think that
we could probably do better by defining our own bytecode language, one
optimized for our purposes, but it's not an issue worth fighting
over.)

Agreed.  I don't think we'll get far if we focus too much on DWARF, as 
it clearly cannot be used as a bytecode language for our purposes.


I think the bytecode issue is a red herring, because we are no longer 
talking about using DWARF for the bodies of functions.  DWARF is only 
being used for declarations and types.


There, yes, we will need some extensions to represent things.  However, 
DWARF is designed to be extended, so that's no problem.  I continue to 
think that using DWARF (with extensions) makes sense, since it makes this 
information accessible to other tools (including GDB).  There ought to be 
a compelling reason before we abandon a strategy based on DWARF.




Re: Question about LTO dwarf reader vs. artificial variables and formal arguments

2006-10-21 Thread Mark Mitchell

Steven Bosscher wrote:


contains
   subroutine sub(c)
   character*10 c
   end subroutine

end

produces as a GIMPLE dump:




sub (c, _c)
{
  (void) 0;
}

where _c is strlen("Hi World!").  From a user perspective, it would be better 
to hide _c from the debugger because it is not something that the user had in 
the original program. 


I think that _c should be emitted in DWARF, as an artificial parameter, 
both for the sake of the debugger and for LTO.  LTO is supposed to be 
language-independent, which means that the information it reads in needs 
to be sufficient to compute the types of things (as they will be at the 
level of GIMPLE) without language hooks.  It may be that this idea turns 
out to be too idealistic, and that some language hooks are necessary to 
interpret the DWARF, but I would hope to avoid that.
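
For reference, a rough C rendering of what that signature amounts to,
assuming the usual hidden string-length convention for character dummy
arguments (my translation, not part of the dump):

  /* subroutine sub(c) with character*10 c lowers to roughly: */
  void sub (char *c, int _c);  /* _c = length of c, here strlen("Hi World!") */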


Similarly, LTO has to somehow deal with DECL_VALUE_EXPR and the debug 
information that is produced from it.  Is there already some provision 
to handle this kind of trickery in LTO?


No, not yet.

but what would happen if LTO reads this in and re-constructs the type of "i" 
from this information?  I imagine it would lead to mis-matches of the GIMPLE 
code that you read in, where "i" is a 1x100 array, and the re-constructed 
variable "i" which would be a 10x10 2D array.


Has anyone working on LTO already thought of these challenges?


Yes, I've thought about these things -- but that doesn't mean I have 
ready answers.  I've been thinking first and foremost about C, and then 
about C and C++.


Some of the same issues apply, but some don't.  In C/C++, we don't 
linearize the array type.  I don't know if that's viable in gfortran or 
not; is there a way to get the same performance in the middle end that 
you currently get by doing this in the front end?


In the worst case, we will provide a separate type attribute in DWARF 
giving the "GIMPLE type" of the variable.  Then, that type would be the 
linearized array.  LTO would use the GIMPLE type attribute (if present) 
when reconstructing the type.




Re: GCC 4.2 branch created; mainline open in Stage 1

2006-10-23 Thread Mark Mitchell

Andrew Pinski wrote:

On Sun, 2006-10-22 at 12:58 +, Joseph S. Myers wrote:
All the bugs with "4.2" in their summaries ("[4.1/4.2 Regression]" etc.) 
need to have it changed to "4.2/4.3".  I don't know the procedure for 
this, but perhaps it needs adding to the branching checklist.


As I understand it, it involves editing the mysql database by hand (well
by a script) instead of doing it inside bugzilla.  Daniel Berlin has
done that the last couple of releases.


I have checked in the attached patch to add this step to the branching 
checklist.  I will now ask Daniel to help with the SQL bits.


Thanks,



Re: GCC 4.2 branch created; mainline open in Stage 1

2006-10-23 Thread Mark Mitchell

Mark Mitchell wrote:

Andrew Pinski wrote:

On Sun, 2006-10-22 at 12:58 +, Joseph S. Myers wrote:
All the bugs with "4.2" in their summaries ("[4.1/4.2 Regression]" 
etc.) need to have it changed to "4.2/4.3".  I don't know the 
procedure for this, but perhaps it needs adding to the branching 
checklist.


As I understand it, it involves editing the mysql database by hand (well
by a script) instead of doing it inside bugzilla.  Daniel Berlin has
done that the last couple of releases.


I have checked in the attached patch to add this step to the branching 
checklist.  I will now ask Daniel to help with the SQL bits.


Sorry, here's the patch.

Index: branching.html
===================================================================
RCS file: /cvs/gcc/wwwdocs/htdocs/branching.html,v
retrieving revision 1.24
diff -c -5 -p -r1.24 branching.html
*** branching.html  21 Sep 2006 14:17:36 -  1.24
--- branching.html  23 Oct 2006 19:49:16 -
*** milestone for 3.4.1 for PRs that can't b
*** 73,81 
--- 73,85 
  Update the email parsing script to handle bugs against the new versions.
  The script is in CVS at wwwdocs/bugzilla/contrib/bug_email.pl. 
  Search for an existing version (like "3.3"), and update both places
  it occurs to handle the new version through copy and paste.
  
+ Ask Daniel Berlin to adjust all PRs with the just-branched version
+ in their summary to also contain the new version corresponding to
+ mainline.
+ 
  
  
  
  


[Fwd: gcc-4.3-20061023 is now available]

2006-10-23 Thread Mark Mitchell
Here is the announcement mail for the special first-from-mainline 4.3 
snapshot.  The references to "Diffs from" below should say 
"4.2-20061014" rather than "4.3-".


I have reactivated the cronjob so that future snapshots for 4.3 should 
be generated automatically.


Thanks to Joseph for helping me with this.

--- Begin Message ---
Snapshot gcc-4.3-20061023 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20061023/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.3 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 117985

You'll find:

gcc-4.3-20061023.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.3-20061023.tar.bz2 C front end and core compiler

gcc-ada-4.3-20061023.tar.bz2  Ada front end and runtime

gcc-fortran-4.3-20061023.tar.bz2  Fortran front end and runtime

gcc-g++-4.3-20061023.tar.bz2  C++ front end and runtime

gcc-java-4.3-20061023.tar.bz2 Java front end and runtime

gcc-objc-4.3-20061023.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.3-20061023.tar.bz2The GCC testsuite

Diffs from 4.3- are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.3
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.
--- End Message ---


Re: [Fwd: gcc-4.3-20061023 is now available]

2006-10-23 Thread Mark Mitchell

Jack Howarth wrote:

Mark,
   What happened to the gcc 4.2 snapshot
tarball for this week?


It gets built on Tuesdays, or at least it does now according to crontab.



Re: GCC 4.2 branch created; mainline open in Stage 1

2006-10-23 Thread Mark Mitchell

Daniel Berlin wrote:

Anyway, i made 43changer.pl and ran it, so the bug summaries have been 
updated.


Thanks!



Re: [PATCH] Fix PR29519 Bad code on MIPS with -fnon-call-exceptions

2006-10-25 Thread Mark Mitchell

Eric Botcazou wrote:

Finally before I finish the retrospective part of this e-mail, I'll
point out this isn't a sudden recent unilateral policy decision, but
purely a crystallization of the prescribed GCC work-flow outlined in
contributing.html that has been refined over many years.


I've reviewed this thread, because there was some discussion about how 
to handle release branches.


In general, I'd prefer that all patches to fix regressions go on the 
release branch at the same time as they go to mainline.  However, I have 
myself failed to do that at times; I presently have a few C++ patches 
which need backporting to 4.1, and I have not yet done that.  At a 
minimum, in such a case, there should be a PR open for the release 
branch failure, and it should note the presence of the patch on 
mainline.  (I've done that for my C++ patches, in that the check-in 
messages on mainline are in the PRs.)  From my perspective, as RM, the 
critical thing is that we have a PR and a record of the patch, so that 
as we approach the release we know we have a bug, and we know we have an 
option available to fix it.


I also recognize that there may sometimes be patches that appear risky, 
and that we therefore want to apply them to mainline before applying 
them to release branches too.  I think that's perfectly appropriate.  In 
other words, I think this is a judgment call, and I think maintainers 
should be free to make it.  But, in general, please do try to put 
patches on release branches, especially if they fix P1 regressions. 
Sacrificing code quality for correctness is the right tradeoff for a 
release branch, if we have to pick, so if a patch is "only" going to 
pessimize code, it should be a very strong candidate for a release branch.




Re: memory benchmark of tuples branch

2006-10-27 Thread Mark Mitchell

Aldy Hernandez wrote:


I don't know if this merits merging into mainline, or if it's preferable to
keep plodding along and convert the rest of the tuples.  What do you guys
think?  Either way, I have my work cut out for me, though I believe the
hardest part is over (FLW).


I think merging as you go is fine, in principle.  Every little bit 
helps.  My only concern would be whether you'll disrupt other 
large-scale projects that might find global changes hard to handle.  I'd 
suggest posting your patch and seeing if anyone makes unhappy sounds. :-)




Re: memory benchmark of tuples branch

2006-10-27 Thread Mark Mitchell

Aldy Hernandez wrote:

Does the tuples branch include the CALL_EXPR reworking from the LTO branch?


No.


Though, that is a similar global-touch-everything project, so hopefully 
whatever consensus develops from tuples will carry over.




Re: build failure, GMP not available

2006-10-30 Thread Mark Mitchell

Ian Lance Taylor wrote:


I'm not sure I entirely agree with Mark's reasoning.  It's true that
we've always required a big set of tools to do development with gcc.
And it's true that we require GNU make to be installed and working in
order to build gcc.  But this is the first time that we've ever
required a non-standard library to be installed before J. Random User
can build gcc.  And plenty of people do try to build gcc themselves,
as can be seen on gcc-help.


I don't believe there's a serious problem with the concept, as long as 
"./configure; make; make install" for GMP DTRT.  If you can do it for 
GCC, you can do it for a library it uses too.


I would strongly oppose downloading stuff during the build process. 
We're not in the apt-get business; we can leave that to the GNU/Linux 
distributions, the Cygwin distributors, etc.  If you want to build a KDE 
application, you have to first build/download the KDE libraries; why 
should GCC be different?



I think that if we stick with our current approach, we will have a lot
of bug reports and dissatisfied users when gcc 4.3 is released.


I'd argue that the minority of people who are building from source 
should not be our primary concern.  Obviously, all other things being 
equal, we should try to make that easy -- but if we can deliver a better 
compiler (as Kaveh has already shown we can with his patch series), then 
we should prioritize that.  For those that want to build from source, we 
should provide good documentation, and clear instructions as to where to 
find what they need, but we should assume they can follow complicated 
instructions -- since the process is already complicated.


I do think it's important that we make sure there's a readily buildable 
GMP available, including one that works on OS X, in time for 4.3.  We 
should provide a tarball for it from gcc.gnu.org, if there isn't a 
suitable GMP release by then.




Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Steven Bosscher wrote:

On 30 Oct 2006 22:56:59 -0800, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:


I'm certainly not saying that we should pull out GMP and MPFR.  But I
am saying that we need to do much much better about making it easy for
people to build gcc.


Can't we just make it so that, if gmp/ and mpfr/ directories exist in
the toplevel, they are built along with GCC?  I don't mean actually
including gmp and mpfr in the gcc SVN repo, but just making it
possible to build them when someone unpacks gmp/mpfr tarballs in the
toplevel dir.


I wouldn't object to that.  It's a bit more build-system complexity, but 
if it makes it easier for people, then it's worth it.




Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Ian Lance Taylor wrote:

Mark Mitchell <[EMAIL PROTECTED]> writes:


I would strongly oppose downloading stuff during the build
process. We're not in the apt-get business; we can leave that to the
GNU/Linux distributions, the Cygwin distributors, etc.  If you want to
build a KDE application, you have to first build/download the KDE
libraries; why should GCC be different?


Because gcc is the first step in bringing up a new system. 


I don't find this as persuasive as I used to.  There aren't very many 
new host systems, and when there are, you get started with a cross compiler.



I disagree: the process of building gcc from a release (as opposed to
building the development version of gcc) really isn't complicated.
The only remotely non-standard thing that is required is GNU make.
Given that, all you need to do is "SRCDIR/configure; make".


OK, I agree: a native compiler, with no special options, isn't too hard. 
 I don't think typing that sequence twice would be too hard either, 
though. :-)



I'm certainly not saying that we should pull out GMP and MPFR.  But I
am saying that we need to do much much better about making it easy for
people to build gcc. 


I agree; I just don't think an external library is the problem.  For 
example, the unfortunate tendency of broken C++ compilers to manifest as 
autoconf errors about "run-time test after link-time failure" (that's 
not the right error) in libstdc++ builds confused me a bunch.  The fact 
that you can pass configure options that are silently ignored is a trap. 
 I'm sure we don't have good documentation for all of the configuration 
options we do have.  The way the libgcc Makefiles get constructed from 
shell scripts and the use of recursive make to invoke them confuses me, 
and the fact that "make" at the top level does things differently than 
"make" in the gcc/ directory also confuses me.  IIRC, --with-cpu= works 
on some systems, but not others.


In other words, the situation that you're on a GNU/Linux system, and 
have to  type "configure; make; make install" several times for several 
packages to GCC doesn't seem too bad to me.  What seems bad, and 
off-putting to newcomers interested in working on the source, is that as 
soon as you get past that point it all gets very tangled very quickly.


But, that's just me.  I wouldn't try to stop anybody from adding 
--with-gmp=http://www.gmp.org/gmp-7.3.tar.gz to the build system, even 
though I'd find it personally frightening. :-)




Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Geoffrey Keating wrote:

OK, I agree: a native compiler, with no special options, isn't too 
hard.  I don't think typing that sequence twice would be too hard 
either, though. :-)


For something that's not too hard, it's sure causing me a lot of trouble...


But, the trouble you're having is not because you have to build an 
external library; it's because the external library you're building 
doesn't work on your system, or, at least, doesn't work with obvious 
default build options.  So, we should fix the external library, or, in 
the worst case, declare that external library beyond salvage.


In contrast, as I understand it, Ian's perturbed about the idea of 
having an external library at all.




Even stricter implicit conversions between vectors

2006-10-31 Thread Mark Shinwell

Recently I proposed that implicit conversions between vectors with
differing numbers of elements yet the same total bitlength be disallowed.
It was agreed that this was reasonable, and I shall be submitting a
patch to vector_types_convertible_p and the testsuite in the near future.

I would now like to propose that the check in that function be made
even stronger such that it disallows conversions between vectors
whose element types differ -- even if an implicit conversion exists
between those element types.

There are three examples I know about where it seems that this decision
has been made:

- the C++ valarray class, whose assignment operators are declared to
be of the form (note the parameterization upon only one type):

valarray<T>& operator=(const valarray<T>&);

- the AltiVec programming interface manual, which reads in section
2.4.2 "If either the left hand side or right hand side of an expression
has a vector type, then both sides of the expression must be of the same
vector type.";

- the ARM NEON architecture, where there is a similar expectation.

To my mind it seems reasonable that the decisions made by
vector_types_convertible_p should match examples like this.  What do
others think?
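
Concretely, the kind of code that would change, using the generic vector
extensions (an illustrative example, not from a real testcase):

  typedef int          vsi __attribute__ ((__vector_size__ (16)));
  typedef unsigned int vui __attribute__ ((__vector_size__ (16)));

  vui f (vsi a)
  {
    return a;  /* currently accepted implicitly; would require a cast */
  }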

Mark


Re: Even stricter implicit conversions between vectors

2006-10-31 Thread Mark Shinwell

Ian Lance Taylor wrote:

Mark Shinwell <[EMAIL PROTECTED]> writes:


I would now like to propose that the check in that function be made
even stronger such that it disallows conversions between vectors
whose element types differ -- even if an implicit conversion exists
between those element types.


As far as I can see, what that amounts to is saying that there should
not be an implicit conversion between vector of int and vector of
unsigned int.  It seems to me that a cast would then be required to
call, e.g., vec_add with arguments of type __vector unsigned int.  I
don't see how that is useful.

But perhaps I am mistaken; can you give an example of the type of
thing which is currently permitted which you think should be
prohibited?


Things exactly like what you write above ("an implicit conversion
between vector of int and vector of unsigned int").  My main argument
is that the gcc behaviour seems to be at odds with the semantics
envisaged by the designers of these various vector-based instruction
sets we see around -- and indeed the designers of the STL -- although
perhaps that is a misconception and there are actually many more
examples where it is believed that these implicit conversions are
reasonable.

I don't see a reason _per se_ why having the implicit conversions
is a good thing -- for things such as vec_add, presumably more
alternatives can be added (there are various ones already I see for
differing orders of "vector bool short" / "vector signed short") etc.
I think it is open to debate whether it is a good thing to have to
make those alternatives explicit, or whether some should be filled in
by implicit conversions.

However, this...

Mike Stump wrote:
> My only concern is that we have tons of customers with tons of code and
> you don't have any

That isn't quite true :-)

> and that you break their code.

...is more of a concern, I agree, and is what I worry about most.

Mark


Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Geoffrey Keating wrote:


do you think this is likely to be:
1. some problem in gmp or mpfr,
2. some problem in my build of gmp and/or mpfr, that wouldn't occur if 
I'd built it in some other (unspecified) way,
3. some problem in my existing system configuration, maybe I already 
have a gmp installed that is somehow conflicting, or

4. a well-known but not serious bug in GCC's Darwin port?


In contrast, as I understand it, Ian's perturbed about the idea of 
having an external library at all.


I don't think Ian would object to an external library that users could 
always find easily, that always built cleanly, that didn't have bugs...  
but such a library doesn't exist.


But, neither does such an internal library exist.  Whether the library 
is part of the GCC source tree or not doesn't affect its quality, or 
even its buildability.  The issue isn't where the source code for the 
library lives, but whether it's any good or not.


I can think of one big advantage of an internal library, though: instead 
of (in addition to?) documenting its build process, you can automate it. 
 One would rather hope that the build process isn't complicated, 
though, in which case this doesn't matter.  After all, we're trying to 
cater to the users for whom "configure; make; make install" works to 
build GCC; as long as the same pattern works for the external libraries, 
I think we're OK.


We might have to improve GMP/MPFR in order to make them work that 
smoothly, but that would have to happen if we imported them too.  So, I 
think you could argue that these libraries are too immature for us to 
depend on in GCC.  But, I don't think that's what Ian was arguing. 
(And, I don't think they're too immature; the problems we're seeing 
don't seem particularly worse than the problems I would expect in early 
Stage 1 with any other kind of big infrastructure change.)




Re: Handling of extern inline in c99 mode

2006-11-01 Thread Mark Mitchell

Ian Lance Taylor wrote:


Here is a review followed by a proposal.


How does this proposal handle the current problematic situation that 
-std=c99 is broken on Linux?


We could either revert Geoff's patch, or conditionalize it on Linux. 
I'd argue against the latter approach (which is what I believe Geoff 
posted), in that it would break one of the key advantages of GCC: that 
the compiler behaves the same way on multiple systems.


I think the proposal is as good as we can do, given the box that we're 
in (and let this be a caution to us with respect to implementing 
extensions before standardization, especially without use of GNU 
keywords/syntax), but I think we should preserve both cross-system 
compatibility and Linux compilation in the meanwhile.
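
For readers following along, the crux in miniature (standard gnu89/C99
semantics, nothing specific to this proposal):

  /* gnu89: "extern inline" means inline-only; this translation unit
     never emits an out-of-line definition of twice.  */
  extern inline int twice (int x) { return 2 * x; }

  /* C99: the same tokens mean the opposite; this translation unit
     provides THE external definition of twice.  glibc headers written
     to the gnu89 meaning therefore break under -std=c99.  */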




Re: Handling of extern inline in c99 mode

2006-11-01 Thread Mark Mitchell

Ian Lance Taylor wrote:

Mark Mitchell <[EMAIL PROTECTED]> writes:


Ian Lance Taylor wrote:


Here is a review followed by a proposal.

How does this proposal handle the current problematic situation that
-std=c99 is broken on Linux?


According to the proposal, we will restore the GNU handling for
"extern inline" even when using -std=c99, which will fix the problem
when using glibc.


Sorry, I didn't pick up on that aspect.  FWIW, I'm completely happy with 
the proposal -- or at least as happy as one can be about changing the 
meaning of existing programs which conformed to our documentation to 
mean something else...


Thanks,



Re: Handling of extern inline in c99 mode

2006-11-01 Thread Mark Mitchell

Joseph S. Myers wrote:

On Wed, 1 Nov 2006, Ian Lance Taylor wrote:


According to the proposal, we will restore the GNU handling for
"extern inline" even when using -std=c99, which will fix the problem
when using glibc.


We definitely need to revert that until the fixincludes changes are 
available.  (The precise nature of the fix - whether we disable the 
inlines, change them to be C99-aware or add an attribute to give yet 
another form of inline function in gnu99 mode that mimics gnu89 extern 
inline - is less important.)


If we restore the previous behavior, then we don't need the fixincludes 
as immediately -- but since you're right that people will no doubt want 
to use GCC 4.4 with RHEL3, SLES8, etc., I think you're correct that when 
we do switch, we should be armed with fixincludes for GLIBC.  It's 
certainly not enough just to change the current GLIBC sourcebase to use 
C99 semantics going forward, as we must expect that people will want to 
install the software on older versions of the OS.



Thus, I hereby propose starting the 48 hour reversion timer.


I concur.

Once we have the fixincludes fixes, I don't think we need to wait for 4.4 
to switch the default in gnu99 mode back to C99 inline semantics, as long 
as we have those fixes during Stage 1.


I think it would be better to have GLIBC changed before changing the 
behavior of the compiler.  It might even be better to have a released 
version of GLIBC with the changes.  fixincludes causes sufficient 
problems for people that ensuring that only users putting new compilers 
on old systems suffer might be a good goal.


On the other hand, I agree that if we have fixincludes in place, then 
4.3 would not be in any way broken on GNU/Linux, so I think this is a 
judgment call.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Even stricter implicit conversions between vectors

2006-11-02 Thread Mark Shinwell

Paolo Bonzini wrote:

Ian Ollmann wrote:


Assuming I understand the proposal properly, this sounds to me like it
amounts to reversing the change we experienced in the Apple GCC from 3.3
-> 4.0.  Type checking became a lot more lax for us in 4.0.


This was a bug and has been fixed recently.  I cannot recall if the fix 
has been backported all the way up to 4.0.


Which fix are you thinking of here?  My previous one for rejecting
implicit conversions between vector types between differing numbers
of subparts hasn't been submitted as a patch yet, but has pretty much
been approved in principle.

Mark


Re: Even stricter implicit conversions between vectors

2006-11-02 Thread Mark Shinwell

Ian Ollmann wrote:

stronger type checking seems like a good idea to me in general.


I agree, but I don't really want to break lots of code all at once,
even if that code is being slightly more slack than it perhaps ought
to be :-)

Given that no-one has really objected to stronger type-checking here
_per se_, then I see two ways forward:

1. Treat this as a regression: fix it and cause errors upon bad
conversions, but risk breaking code.

2. Emit a warning in cases of converting "vector signed int" to
"vector unsigned int", etc., and state that the behaviour will change
to an error in a later version.

Thoughts?

Mark


Re: Even stricter implicit conversions between vectors

2006-11-02 Thread Mark Shinwell

Paolo Bonzini wrote:
Assuming I understand the proposal properly, this sounds to me like
it amounts to reversing the change we experienced in the Apple GCC
from 3.3 -> 4.0.  Type checking became a lot more lax for us in 4.0.


This was a bug and has been fixed recently.  I cannot recall if the 
fix has been backported all the way up to 4.0.


Which fix are you thinking of here?  My previous one for rejecting
implicit conversions between vector types between differing numbers
of subparts hasn't been submitted as a patch yet, but has pretty much
been approved in principle.

Oops, yes, that was it.  Does it have a PR number?


I don't believe so (but there will be a patch submitted soon).

Mark


Re: Even stricter implicit conversions between vectors

2006-11-02 Thread Mark Shinwell

Ian Lance Taylor wrote:

I would vote for: break the code, but provide an option to restore the
old behaviour, and mention the option in the error message.


I like this -- I shall prepare a patch and circulate it for review.

Mark


Why doesn't libgcc define _chkstk on MinGW?

2006-11-03 Thread Mark Mitchell
This may be a FAQ, but I was unable to find the answer on the web, so I 
hope people will forgive me asking it here.


I recently tried to use a MinGW GCC (built from FSF sources) to link 
with a .lib file that had been compiled with MSVC, and got link-time 
errors about _chkstk.  After some searching, I understand what this 
function is for (it's a stack-probing thing that MSVC generates when 
allocating big stack frames), and that GCC has an equivalent in libgcc 
(called _alloca).  There also seems to be widespread belief that in fact 
the libgcc routine is compatible with _chkstk.  And, there are lots of 
people that have reported link failures involving _chkstk.


So, my (perhaps naive) question is: why don't we define _chkstk as an 
alias for _alloca in MinGW, so that we can link with these MSVC libraries?


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Why doesn't libgcc define _chkstk on MinGW?

2006-11-03 Thread Mark Mitchell

Ross Ridge wrote:


There are other MSC library functions that MinGW doesn't provide, so
other libraries may not link even with a _chkstk alias.


Got a list?

Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: compiling very large functions.

2006-11-05 Thread Mark Mitchell

Paolo Bonzini wrote:

Kenneth Zadeck wrote:

I think that it is time that we in the GCC community took some time to
address the problem of compiling very large functions in a somewhat
systematic manner.


While I agree with you, I think that there are so many things we are 
already trying to address, that this one can wait. 


It certainly can, but I see no reason why it should.  This is a class of 
issues that users run into, and if someone is motivated to work on this 
class, then that's great!


I like Kenny's idea of having a uniform set of metrics for size (e.g., 
number of basic blocks, number of variables, etc.) and a limited set of 
gating functions because it will allow us to explain what's going on to 
users, and allow users to tune them.  For example, if the metric for 
disabling a pass (by default) is "# basic blocks > 10", then we can have 
a -foptimize-bigger=2 switch to change that to "20".  If the gating 
condition was instead some arbitrary computation, that would be harder 
to implement, and harder to explain.
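
For illustration, here is a minimal sketch of the kind of gating
function being proposed.  All names here are invented for the example;
they are not actual GCC internals:

  /* Hypothetical gate: run the pass only if the function is "small
     enough", with -foptimize-bigger=N scaling the default threshold.  */
  static int optimize_bigger_factor = 1;  /* set from the command line */

  static int
  pass_is_enabled (int num_basic_blocks)
  {
    return num_basic_blocks <= 10 * optimize_bigger_factor;
  }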


Certainly, setting the default thresholds reasonably will be 
non-trivial.  If we can agree on the basic mechanism, though, we could 
add thresholding on a pass-by-pass basis.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mark Mitchell

Dale Johannesen wrote:


On Nov 7, 2006, at 11:47 AM, Douglas Gregor wrote:

I just read Nathan's discussion [1] on changing GCC's type system to 
use canonical type nodes, where the comparison between two types 
requires only a pointer comparison. Right now, we use "comptypes", 
which typically needs to do deep structural checks to determine if two 
types are equivalent, because we often clone _TYPE nodes.


One difficulty is that compatibility of types in C is not transitive, 
especially when you're compiling more than one translation unit at a time.
See the thread "IMA vs tree-ssa" in Feb-Mar 2004.  Geoff Keating and 
Joseph Myers give good examples.


For example:

  http://gcc.gnu.org/ml/gcc/2004-02/msg01462.html

However, I still doubt that this is what the C committee actually 
intended.


Transitivity of type equivalence is fundamental in every type system 
(real and theoretical) with which I'm familiar.  In C++, these examples
are not valid because of the ODR, and, IIRC, in C you cannot produce them
in a single translation unit -- which is the case that most C 
programmers think about.  So, I'm of the opinion that we should discount 
this issue.


I do think that canonical types (with equivalence classes, as Doug 
suggests) would be a big win, for all of the reasons he suggests.  We 
have known for a long time that comptypes is a bottleneck in the C++ 
front end, and while some of that could be solved in other ways, making 
it a near-free operation would be a huge benefit.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mark Mitchell

Richard Kenner wrote:

Like when int and long have the same range on a platform?
The answer is they are different, even when they imply the same object
representation.

The notion of unified type nodes is closer to syntax than semantics.


I'm more than a little confused, then, as to what we are talking about
canonicalizing.  We already have only one pointer to each type, for example.


Yes, but to compare two types, you have to recur on them, because of 
typedefs.  In:


  typedef int I;

"int *" and "I *" are distinct types, and you have to drill down to "I" 
to figure that out.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: bootstrap on powerpc fails

2006-11-07 Thread Mark Mitchell

David Edelsohn wrote:

Kaveh R GHAZI writes:


Kaveh> I tried many years ago and Mark objected:
Kaveh> http://gcc.gnu.org/ml/gcc-patches/2000-10/msg00756.html

Kaveh> Perhaps we could take a second look at this decision?  The average system
Kaveh> has increased in speed many times since then.  (Although sometimes I feel
Kaveh> like bootstrapping time has increased at an even greater pace than chip
Kaveh> improvements over the years. :-)

I object.


Me too.

I'm a big proponent of testing, but I do think there should be some 
bang/buck tradeoff.  (For example, we have tests in the GCC testsuite 
that take several minutes to run -- but never fail.  I doubt these tests 
are actually buying us a factor of several hundred more quality quanta 
over the average test.)  Machine time is cheap, but human time is not, 
and I know that for me the testsuite-latency time is a factor in how 
many patches I can write, because I'm not great at keeping track of 
multiple patches at once.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Planned LTO driver work

2006-11-09 Thread Mark Mitchell
This message outlines a plan for modifying the GCC driver to support
compilation in LTO mode.  The goal is that:

  gcc --lto foo.c bar.o

will generate LTO information for foo.c, while compiling it, then invoke
the LTO front end for foo.o and bar.o, and then invoke the linker.

However, as a first step, the LTO front end will be invoked separately
for foo.o and bar.o -- meaning that the LTO front end will not actually
do any link-time optimization.  The reason for this first step is that
it's easier, and that it will allow us to run through the GCC testsuite
in LTO mode, eliminating failures in single-file mode, before we move on
to multi-file mode.

The key idea is to leverage the existing collect2 functionality for
reinvoking the compiler.  That's presently used for static
constructor/destructor handling and for instantiating templates in
-frepo mode.

So, the work plan is as follows:

1. Add a --lto option to collect2.  When collect2 sees this option,
treat all .o files as if they were .rpo files and recompile them.  We
will do this after all C++ template instantiation has been done, since
we want to optimize the .o files after the program can actually link.

2. Modify the driver so that --lto passes -flto to the C front-end and
--lto to collect2.

Any objections to this plan?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-09 Thread Mark Mitchell
Andrew Pinski wrote:
> On Thu, 2006-11-09 at 12:32 -0800, Mark Mitchell wrote:
>> 1. Add a --lto option to collect2.  When collect2 sees this option,
>> treat all .o files as if they were .rpo files and recompile them.  We
>> will do this after all C++ template instantiation has been done, since
>> we want to optimize the .o files after the program can actually link.
>>
>> 2. Modify the driver so that --lto passes -flto to the C front-end and
>> --lto to collect2.
>>
>> Any objections to this plan?
> 
> Maybe not an objection but a suggestion with respect of static
> libraries.  It might be useful to also to look into archives for files
> with LTO info in them and be able to read them inside the compiler also.

Definitely -- but not yet. :-)

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-09 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Mark Mitchell <[EMAIL PROTECTED]> writes:
> 
>> 1. Add a --lto option to collect2.  When collect2 sees this option,
>> treat all .o files as if they were .rpo files and recompile them.  We
>> will do this after all C++ template instantiation has been done, since
>> we want to optimize the .o files after the program can actually link.
>>
>> 2. Modify the driver so that --lto passes -flto to the C front-end and
>> --lto to collect2.
> 
> Sounds workable in general.  I note that in your example of
>   gcc --lto foo.c bar.o
> this presumably means that bar.o will be recompiled using the compiler
> options specified on that command line, rather than, say, the compiler
> options specified when bar.o was first compiled.  This is probably the
> correct way to handle -march= options.

I think so.  Of course, outright conflicting options (e.g., different
ABIs between the original and subsequent compilation) should be detected
and an error issued.

There has to be one set of options for LTO, so I don't see much benefit
in recording the original options and trying to reuse them.  We can't
generate code for two different CPUs, or optimize both for size and for
speed, for example.  (At least not without a lot more stuff that we
don't presently have.)

> I assume that in the long run, the gcc driver with --lto will invoke
> the LTO frontend rather than collect2.  And that the LTO frontend will
> then open all the .o files which it is passed.

Either that, or, at least, collect2 will invoke LTO once with all of the
.o files.  I'm not sure if it matters whether it's the driver or
collect2 that does the invocation.  What do you think?

In any case, for now, I'm just trying to move forward, and the collect2
route looks a bit easier.  If you're concerned about that, then I'll
take note to revisit and discuss before anything goes to mainline.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-09 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Mark Mitchell <[EMAIL PROTECTED]> writes:
> 
>>> I assume that in the long run, the gcc driver with --lto will invoke
>>> the LTO frontend rather than collect2.  And that the LTO frontend will
>>> then open all the .o files which it is passed.
>> Either that, or, at least, collect2 will invoke LTO once with all of the
>> .o files.  I'm not sure if it matters whether it's the driver or
>> collect2 that does the invocation.  What do you think?
> 
> I think in the long run the driver should invoke the LTO frontend
> directly.  

> That will save a process--if collect2 does the invocation, we have to
> run the driver twice.

Good point.  Probably not a huge deal in the context of optimizing the
whole program, but still, why be stupid?

Though, if we *are* doing the template-repository dance, we'll have to
do that for a while, declare victory, then invoke the LTO front end,
and, finally, the actual linker, which will be a bit complicated.  It
might be that we should move the invocation of the real linker back into
gcc.c, so that collect2's job just becomes generating the right pile of
object files via template instantiation and static
constructor/destructor generation?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-10 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Mark Mitchell <[EMAIL PROTECTED]> writes:
> 
>> Though, if we *are* doing the template-repository dance, we'll have to
>> do that for a while, declare victory, then invoke the LTO front end,
>> and, finally, the actual linker, which will be a bit complicated.  It
>> might be that we should move the invocation of the real linker back into
>> gcc.c, so that collect2's job just becomes generating the right pile of
>> object files via template instantiation and static
>> constructor/destructor generation?
> 
> For most targets we don't need to invoke collect2 at all anyhow,
> unless the user is using -frepo.  It's somewhat wasteful that we
> always run it.
> 
> Moving the invocation of the linker into the gcc driver makes sense to
> me, especially if it we can skip invoking collect2 entirely.  Note
> that on some targets, ones which do not use GNU ld, collect2 does
> provide the feature of demangling the ld error output.  That facility
> would have to be moved into the gcc driver as well.

I agree that this sounds like the best long-term plan.  I'll try to work
out whether it's actually a short-term win for me to do anything to
collect2 at all; if not, then I'll just put stuff straight into the
driver, since that's what we really want anyhow.

Thanks for the feedback!

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: subreg transformation causes incorrect post_inc

2006-11-10 Thread Mark Shinwell

[EMAIL PROTECTED] wrote:

My port, based on (GCC) 4.2.0 20061002 (experimental), is producing
incorrect code for the following test case:

[snip]

I've only had a very quick look at your code, but I have a feeling that
this is an instance of the kind of slip-up with GO_IF_MODE_DEPENDENT_ADDRESS
that my patch at http://gcc.gnu.org/ml/gcc-patches/2006-08/msg00858.html is
aimed at preventing.  (This patch is currently only applied to the
addrmodes branch.)

Mark



Re: How to create both -option-name-* and -option-name=* options?

2006-11-10 Thread Mark Mitchell
Dave Korn wrote:

>   It may seem a bit radical, but is there any reason not to modify the
> option-parsing machinery so that either '-' or '=' are treated interchangeably
> for /all/ options with joined arguments?  That is, whichever is specified in
> the .opt file, the parser accepts either?  

I like that idea.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-10 Thread Mark Mitchell
Ian Lance Taylor wrote:

> This assumes, of course, that we can build an equivalence set for
> types.  I think that we need to make that work in the middle-end, and
> force the front-ends to conform.  As someone else mentioned, there are
> horrific cases in C like a[] being compatible with both a[5] and a[10]
> but a[5] and a[10] not being compatible with each other, and similarly
> f() is compatible with f(int) and f(float) but the latter two are not
> compatible with each other. 

I don't think these cases are serious problems; they're compatible
types, not equivalent types.  You don't need to check compatibility as
often as equivalence.  Certainly, in the big C++ test cases, Mike is
right that templates are the killer, and there you're generally
testing equivalence.

So, if you separate same_type_p from compatible_type_p, and make
same_type_p fast, then that's still a big win.
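
For concreteness, the non-transitive compatibility cases look like this
in C (a sketch; the function-type case is analogous):

  /* int[] is compatible with both int[5] and int[10], but int[5] and
     int[10] are not compatible with each other, so compatibility is
     not an equivalence relation.  */
  extern int a[];     /* compatible with both b and c */
  extern int b[5];    /* compatible with a, not with c */
  extern int c[10];   /* compatible with a, not with b */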

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: How to create both -option-name-* and -option-name=* options?

2006-11-10 Thread Mark Mitchell
Dave Korn wrote:
> On 10 November 2006 20:06, Mark Mitchell wrote:
> 
>> Dave Korn wrote:
>>
>>>   It may seem a bit radical, but is there any reason not to modify the
>>> option-parsing machinery so that either '-' or '=' are treated
>>> interchangeably for /all/ options with joined arguments?  That is,
>>> whichever is specified in the .opt file, the parser accepts either?
>> I like that idea.
> 
> 
>   Would it be a suitable solution to just provide a specialised wrapper around
> the two strncmp invocations in find_opt? 

FWIW, that seems reasonable to me, but I've not looked hard at the code
to be sure that's technically 100% correct.  It certainly seems like the
right idea.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: C++: Implement code transformation in parser or tree

2006-11-10 Thread Mark Mitchell
Sohail Somani wrote:

> struct __some_random_name
> {
> void operator()(int & t){t++;}
> };
> 
> for_each(b,e,__some_random_name());
> 
> Would this require a new tree node like LAMBDA_FUNCTION or should the
> parser do the translation? In the latter case, no new nodes should be
> necessary (I think).

Do you need new class types, or just an anonymous FUNCTION_DECL?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: subreg transformation causes incorrect post_inc

2006-11-12 Thread Mark Shinwell

[EMAIL PROTECTED] wrote:

From: Mark Shinwell <[EMAIL PROTECTED]>

[EMAIL PROTECTED] wrote:

My port, based on (GCC) 4.2.0 20061002 (experimental), is producing
incorrect code for the following test case:

[snip]

I've only had a very quick look at your code, but I have a feeling that
this is an instance of the kind of slip-up with GO_IF_MODE_DEPENDENT_ADDRESS
that my patch at http://gcc.gnu.org/ml/gcc-patches/2006-08/msg00858.html is
aimed at preventing.  (This patch is currently only applied to the
addrmodes branch.)


Mark


Hhmm.  Is the intent of your patch simply to prevent the mistake of
backends not defining GO_IF_MODE_DEPENDENT_ADDRESS properly?


That's right.  Presumably something else is wrong, then :-)

Mark


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:
> GCC 4.1.1 for PowerPC generates a 162K executable for a
> minimal program  "int main() { return 0; }".  GCC 3.4.1
> generated a 7.2K executable.  Mark Mitchell mentioned the
> same problem for ARM and proposed a patch to remove the
> reference to malloc in atexit
> (http://sourceware.org/ml/newlib/2006/msg00181.html).
> 
> There are references to malloc in eh_alloc.c and
> unwind-dw2-fde.c.  It looks like these are being
> included even when there are no exception handlers.
> 
> Any suggestions on how to eliminate the references
> to these routines?

These aren't full implementation sketches, but, yes, we can do better.
Here are some ideas:

1. For the DWARF unwind stuff, have the linker work out how much space
is required and pre-allocate it.  The space required is a statically
knowable property of the executable, modulo dynamic linking, and on the
cases where we care most (bare-metal) we don't have to worry about
dynamic linking.  (If you can afford a dynamic linker, you can probably
afford malloc, and it's in a shared library.)

2. For the EH stuff, the maximum size of an exception is also statically
knowable, again assuming no dynamic linking.  The maximum exception
nesting depth (i.e., the number of simultaneously active exceptions) is
not, though.  So, here, what I think you want is a small, statically
allocated stack, at least as big as the biggest exception, out of which
you allocate exception objects.  Happily, we already have this, in the
form of "emergency_buffer" -- although it uses a compile-time estimate
of the biggest object, rather than having the linker fill it in, as
would be ideal.  But, in the no-malloc case, just fall back to the
emergency mode.

You could also declare malloc "weak" in that file, and just not call it
if its address is null.  That way, if malloc is around, you can use it --
but if it's not, you use the emergency buffer.  Put the emergency_buffer
in a separate file (rather than in eh_alloc.cc), so that users can
provide their own implementation to control the size, overriding the one
in the library.
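
A minimal sketch of that weak-reference idea follows; this is not the
actual eh_alloc.cc code, and the buffer size and function name are
invented:

  #include <cstddef>

  extern "C" void *malloc (std::size_t) __attribute__ ((weak));

  static char emergency_buffer[4096];   /* placeholder size */
  static std::size_t emergency_used;

  static void *
  eh_allocate (std::size_t size)
  {
    if (malloc)                 /* null only if malloc wasn't linked in */
      return malloc (size);
    if (emergency_used + size > sizeof emergency_buffer)
      return 0;                 /* out of emergency space */
    void *p = emergency_buffer + emergency_used;
    emergency_used += size;     /* trivial bump allocation; no free */
    return p;
  }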

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: C++: Implement code transformation in parser or tree

2006-11-12 Thread Mark Mitchell
Sohail Somani wrote:
> On Fri, 2006-11-10 at 19:46 -0800, Andrew Pinski wrote:
>> On Fri, 2006-11-10 at 15:23 -0800, Sohail Somani wrote:
>>>> Do you need new class types, or just an anonymous FUNCTION_DECL?
>>> Hi Mark, thanks for your reply.
>>>
>>> In general it would be a new class. If the lambda function looks like:
>>>
>>> void myfunc()
>>> {
>>>
>>> int a;
>>>
>>> ...<>(int i1,int i2) extern (a) {a=i1+i2}...
>>>
>>> }
>>>
>>> That would be a new class with an int reference (initialized to a) and
>>> operator()(int,int).
>>>
>>> Does that clarify?
>> Can lambda functions like this escape myfunc?  If not then using the
>> nested function mechanism that is already in GCC seems like a good
>> thing.  In fact I think of lambda functions as nested functions.
> 
> Yes they can in fact. So the object can outlive the scope.

As I understand the lambda proposal, the lambda function may not refer
to things that have gone out of scope.  It can use *references* that
have gone out of scope, but only if the referent is still in scope.
Since the way that something like:

  int i;
  void f() {
    int &a = i;
    ...<>() { return a; } ...
  }

should be implemented is with a lambda-local copy of "a" (rather than a
pointer to "a"), this is OK.
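
A hedged sketch of that implementation strategy (the class name is
invented): the closure object holds its own copy of the reference "a",
i.e. another reference bound to the referent "i", rather than a pointer
to the dead local "a":

  int i;

  struct __lambda_in_f          // what "...<>() { return a; }..." becomes
  {
    int &a;                     // copy of the reference: bound to i
    explicit __lambda_in_f (int &r) : a (r) {}
    int operator() () const { return a; }  // valid even after f() returns
  };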

So, I do think that nested functions would be a natural implementation
in GCC, since they already provide access to a containing function's
stack frame.  You could also use the anonymous-class approach that you
suggested, but, as the lambda proposal mentions, using a nested function
may result in better code.  I suspect that what you want is a class (to
meet the requirements on ret_type, etc.) whose operator() is marked as a
nested function for GCC, in the event -- and *only* in the event -- that the
lambda function refers to variables with non-static storage duration.

Also, it appears to me that there is something missing from N1958: there
is no discussion about what happens when you apply typeid to a lambda
function, or otherwise use it in a context that requires type_info.
(Can you throw it as an exception, for example?)  Can you capture its
type with typeof()?  Can you declare a function with a parameter of type
pointer-to-lambda-function?  Is this discussed, or am I missing something?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:

> Preallocating space is a good thing, particularly if the size
> can be computed at compile time.  It's a little bit more awkward
> if it has to be calculated at link time.

It's a bit awkward, but it's also one of the clever tricks ARM's
proprietary linker uses, and we should use it too!

> Generating __gxx_personality_v0 is suppressed with the -fno-exceptions
> flag, but it would seem better if this symbol were only generated
> when catch/throw was used.  This happens in cxx_init_decl_processing(),
> which is called before it's known whether or not EH is really needed.

I believe that you need the personality routine if you will be unwinding
through a function, which is why -fno-exceptions is the test.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:
> Mark Mitchell wrote:
>>> Generating __gxx_personality_v0 is suppressed with the -fno-exceptions
>>> flag, but it would seem better if this symbol were only generated
>>> when catch/throw was used.  This happens in cxx_init_decl_processing(),
>>> which is called before it's known whether or not EH is really needed.
>>
>> I believe that you need the personality routine if you will be unwinding
>> through a function, which is why -fno-exceptions is the test.
> 
> You mean unwinding stack frames to handle a thrown exception?
> 
> That's true, but shouldn't this only be included when exceptions
> are used?

No, it must be included if exceptions are enabled and there are any
objects which might require cleanups, which, in most C++ programs,
means any objects with destructors.
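
For example (a sketch), this translation unit never mentions exceptions,
yet with exceptions enabled it still needs unwind information and a
personality routine reference:

  struct Guard { ~Guard (); };  // destructor defined elsewhere
  void f ();                    // may throw

  void g ()
  {
    Guard guard;                // must be destroyed if f() throws through g
    f ();
  }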

> One of the C++ precepts is that there
> is no overhead for features which are not used.

That objective does not hold for space, especially in the presence of
exceptions.

> Why should the personality routine be included in all C++ programs?

Because all non-trivial, exceptions-enabled programs may need to do
stack unwinding.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:
> Mark Mitchell wrote:
>> Michael Eager wrote:
>>> Why should the personality routine be included in all C++ programs?
>>
>> Because all non-trivial, exceptions-enabled programs, may need to do
>> stack unwinding.
> 
> It would seem that the place to require the personality
> routine would be in the routine which causes the stack
> unwinding, not in every C++ object file, whether needed
> or not.

But, the way the ABI works requires a reference from each unit which may
cause unwinding.  Even if you lose the personality routine, you will
still have the exception tables, which themselves are a significant
cost.  If you don't want to pay for exceptions, you really have to
compile with -fno-exceptions.  In that case, certainly, we should be
able to avoid pulling in the personality routine.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Daniel Jacobowitz wrote:

> If you try what Michael's been saying, you'll notice that trivial
> C++ files get the personality routine reference even if they don't
> have anything with a destructor which would need cleaning up.  We ought
> to be able to emit (somewhat smaller) unwind information which doesn't
> reference the personality routine if it's going to have nothing to do,
> shouldn't we?

Certainly, there are at least some such cases.  I guess a function whose
only callees (if any) are no-throw functions, and which itself does not
use "throw", does not need frame information.

But, for something like:

  extern void f();
  void g() {
    f(); f();
  }

we do need unwind information, even though "g" has nothing to do with
exceptions.

However, I think you and Michael are right: we don't need to reference
the personality routine here.   Unless the entire program doesn't
contain anything that needs cleaning up, we'll still need it in the
final executable, but omitting it would make our object files smaller,
and unwinding a little faster, since we don't call personality routines
that aren't there.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Daniel Jacobowitz wrote:
> On Sun, Nov 12, 2006 at 05:11:39PM -0800, Mark Mitchell wrote:
>> Daniel Jacobowitz wrote:
>>
>>> If you try what Michael's been saying, you'll notice that trivial
>>> C++ files get the personality routine reference even if they don't
>>> have anything with a destructor which would need cleaning up.  We ought
>>> to be able to emit (somewhat smaller) unwind information which doesn't
>>> reference the personality routine if it's going to have nothing to do,
>>> shouldn't we?
>> Certainly, there are at least some such cases.  I guess a function whose
>> only callees (if any) are no-throw functions, and which itself does not
>> use "throw", does not need frame information.
> 
> You've talked right past me, since I wasn't saying that...

Well, for something like:

  int g() throw();
  int f(int a) {
    return g() + a;
  }

I don't think you ever have to unwind through "f".  Exceptions are not
allowed to leave "g", and nothing in "f" can throw, so as far as the EH
systems is concerned, "f" doesn't even exist.  I think we could just
drop its frame info on the floor.  This might be a relatively
significant size improvement.

>> Unless the entire program doesn't
>> contain anything that needs cleaning up, we'll still need it in the
>> final executable,
> 
> Right.  So if you use local variables with destructors, even though you
> don't use exceptions, you'll get the personality routine.  The linker
> could straighten that out if we taught it to, though.

Correct.  It could notice that, globally, no throw-exception routines
(i.e., __cxa_throw, and equivalents for other languages) were included
and then discard the personality routine -- and, maybe, all of
.eh_frame.  You'd still have the cleanup code in function bodies,
though, so if you really want minimum size, you still have to compile
with -fno-exceptions.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1.2 Status Report (2006-11-12)

2006-11-12 Thread Mark Mitchell
I realize that it's been a long time since a GCC 4.1.x release.

I'd like to put together a GCC 4.1.2 release in the relatively near
future.  (Then, there may or may not be a GCC 4.1.3 release at the same
time as 4.2.0, depending on where it seems like we are at that point.)

Since, in theory, the only changes on the 4.1 release branch were to fix
regressions, GCC 4.1.2 should be ready for release today, under the
primary condition that it be no worse than 4.1.1.  But, I recognize that
while theory and practice are, in theory, the same, they are, in
practice, different.

I also see that there are some 30 P1s open against 4.1.2.  Many of these
also apply to 4.2.0, which means that fixing them helps both releases.
So, I'd appreciate people working to knock down those PRs, in particular.

I would also like to know which PRs are regressions from 4.1.0 or 4.1.1.
 Please update the list here:

  http://gcc.gnu.org/wiki/GCC_4.1.2_Status

as you encounter such PRs.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: odd severities in bugzilla

2006-11-19 Thread Mark Mitchell
[EMAIL PROTECTED] wrote:

> So, are we using "P1" instead to mark release-blocking bugs?  Should
> we fix the severities of existing bugs?

I am using priorities to indicate how important it is to fix a bug
before the next release.  This is consistent with the meanings of the
terms "priority" and "severity".  In particular, the "severity"
indicates how severe the problem is, if you are affected by the bug.
The "priority" indicates how important it is to fix it.  In various
commercial environments I've worked in, customers set "severity" (e.g.,
help, this bug is really getting in my way!) and developers/support set
"priority (this bug is affecting only one customer, so it's medium
priority).

So, that's a long answer, but basically: "yes, we're using P1 to mark
release-critical bugs".  Also, in the paradigm described above,
"blocker" should mean "blocks the user from making progress, there is no
workaround", not "blocks the release".  (In my experience, severities
are normally things like "mild", "critical", "emergency".)

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Some clarifications regarding GIMPLE and LTO

2006-11-26 Thread Mark Mitchell
[EMAIL PROTECTED] wrote:

> Does the LTO branch aim to ensure that the complete information for a
> "Program" can be stored in a file?  If this is already possible, could
> anyone provide some pointers to the API?

Yes, on the LTO branch, we are working to store the entire translation
unit in a form that can then be read back into the compiler.  The global
symbol table is stored using DWARF3, so you can read it back with a
standard DWARF reader.  See lto/lto.c on the LTO branch for the code
that does this.

At present, there are a few things that are not yet written out to the
DWARF information (like GCC machine modes), but most things (types,
functions, variables) are present.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Canonical types (1/3)

2006-11-29 Thread Mark Mitchell
Doug Gregor wrote:
> This patch introduces canonical types into GCC, which allow us to
> compare two types very efficiently and results in an overall
> compile-time performance improvement. 

Thanks for working on this.  It's the sort of project I used to have
time to do. :-)

I will review these patches in the next couple of days.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [C/C++] same warning/error, different text

2006-12-03 Thread Mark Mitchell
Manuel López-Ibáñez wrote:
> The message for the following error:
> 
> enum e {  E3 = 1 / 0 };
> 
> is in C: error: enumerator value for 'E3' not integer constant
> and in C++: error: enumerator value for 'E3' is not an integer constant
> 
> Is there someone against fixing this? What would be the preferred message?

I slightly prefer the more-grammatical C++ version, but, if there's any
controversy at all, I'm perfectly happy with the C version too, and it's
certainly a good thing to use the same message in both languages.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Canonical types (1/3)

2006-12-04 Thread Mark Mitchell
Doug Gregor wrote:
> This patch introduces canonical types into GCC, which allow us to
> compare two types very efficiently and results in an overall
> compile-time performance improvement. I have been seeing 3-5%
> improvements in compile time on the G++ and libstdc++ test suites,
> 5-10% on template-heavy (but realistic) code in Boost, and up to 85%
> improvement for extremely template-heavy metaprogramming.

The new macros in tree.h (TYPE_CANONICAL and TYPE_STRUCTURAL_EQUALITY)
need documentation, at least in tree.h, and, ideally, in the ill-named
c-tree.texi as well.

I want to make sure I understand this idiom, in
build_pointer_type_for_mode, and elsewhere:

+  if (TYPE_CANONICAL (to_type) != to_type)
+    TYPE_CANONICAL (t) =
+      build_pointer_type_for_mode (TYPE_CANONICAL (to_type),
+                                   mode, can_alias_all);

If there was already a pointer type to the canonical type of to_type,
then the call build_pointer_type_for_mode will return it.  If there
wasn't, then we will build a new canonical type for that pointer type.
We can't use the pointer type we're building now (i.e., "T") as the
canonical pointer type because we would have no way to find it in the
future, when creating another pointer type for the canonical version of
to_type.

So, we are actually creating more type nodes in this case.  That seems
unfortunate, though I fully understand we're intentionally trading space
for speed just by adding the new type fields.  A more dramatic version
of your change would be to put the new pointer type on the
TYPE_POINTER_TO list for the canonical to_type, make it the canonical
pointer type, and then have the build_pointer_type_for_mode always go to
the canonical to_type to search TYPE_POINTER_TO, considering types to be
an exact match only if they had more fields in common (like, TYPE_NAME
and TYPE_CONTEXT, say).  Anyhow, your approach is fine, at least for now.

+  TYPE_STRUCTURAL_EQUALITY (t) = TYPE_STRUCTURAL_EQUALITY (to_type);

Does it ever make sense to have both TYPE_CANONICAL and
TYPE_STRUCTURAL_EQUALITY set?  If we have to do the structural equality
test, then it seems to me that the canonical type isn't useful, and we
might as well not construct it.

> +  type = build_variant_type_copy (orig_type);
>    TYPE_ALIGN (type) = boundary;
> +  TYPE_CANONICAL (type) = TYPE_CANONICAL (orig_type);

Eek.  So, despite having different alignments, we consider these types
"the same"?  If that's what we already do, then it's OK to preserve that
behavior, but it sure seems worrisome.

I'm going to review patch 2/3 here too, since I don't think we should
add the fields in patch 1 until we have something that can actually take
advantage of them; otherwise, we'd just be wasting (more) memory.

+  else if (strict == COMPARE_STRUCTURAL)
+    return structural_comptypes (t1, t2, COMPARE_STRICT);

Why do we ever want the explicit COMPARE_STRUCTURAL?

+static hashval_t
+cplus_array_hash (const void* k)
+{
+  hashval_t hash;
+  tree t = (tree) k;
+
+  hash = (htab_hash_pointer (TREE_TYPE (t))
+          ^ htab_hash_pointer (TYPE_DOMAIN (t)));
+
+  return hash;
+}

Since this hash function is dependent on pointer values, we'll get
different results on different hosts.  I was worried that will lead to
differences in generated debug information, perhaps due to different
TYPE_UIDs -- but it looks like there is only ever one matching entry in
the table, so we never have to worry about the compiler "randomly"
choosing between two equally good choices?

Have you tested with flag_check_canonical_types on, and verified that
you get no warnings?  (I agree that a --param for that would be better;
if a user ever has to turn this on, we broke the compiler.)

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Announce: MPFR 2.2.1 is released

2006-12-05 Thread Mark Mitchell
Richard Guenther wrote:

>> As far as I know both versions are released.  What I said was
>> "undistributed," by which I mean: the required version of MPFR is not
>> on my relatively up to date Fedora system.
> 
> It also missed the openSUSE 10.2 schedule (which has the old version
> with all patches).  So I don't like rejecting the old version at any point.

I think the issue of whether to reject the old version of the library is
at least partially orthogonal to the import issue.  Even if we import
the sources, we'll still want to support using an external MPFR, so that
people who do have it on their system can leverage that.  So, we still
have to decide whether to allow older versions.

On that point, I agree with previous posters who have suggested we
should be liberal; we can issue a warning saying that we recommend
2.2.1, but not require it.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Unwinding CFI gcc practice of assumed `same value' regs

2006-12-12 Thread Mark Kettenis
>  Jan Kratochvil writes:
>
>   > currently (on x86_64) the gdb backtrace does not properly stop at
>   > the outermost frame:
>   >
>   > #3  0x0036ddb0610a in start_thread () from
>  /lib64/tls/libpthread.so.0
>   > #4  0x0036dd0c68c3 in clone () from /lib64/tls/libc.so.6
>   > #5  0x in ?? ()
>   >
>   > Currently it relies only on clearing %rbp (0x above is
>   > unrelated to it, it got read from uninitialized memory).
>
>  That's how it's defined to work: %rbp is zero.
>
>   > http://sourceware.org/ml/gdb/2004-08/msg00060.html suggests frame
>   > pointer 0x0 should be enough for a debugger not finding CFI to stop
>   > unwinding, still it is a heuristic.
>
>  Not by my understanding it isn't.  It's set up by the runtime system,
>  and 0 (i.e. NULL on x86-64) marks the end of the stack.  Officially.
>
>  See page 28, AMD64 ABI Draft 0.98 -- September 27, 2006 -- 9:24.

Unfortunately whoever wrote that down didn't think it through.  In
Figure 3.4 on page 20, %rbp is listed as "callee-saved register;
optionally used as frame pointer".  So %rbp can be used for anything, as
long as you save its contents and restore it before you return.  Since it
may be used for anything, it may contain 0 at any point in the middle of
the call stack.  So it is unusable as a stack trace termination condition.
The only viable option is explicitly marking it as such in the CFI.
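
For reference, a sketch of what such explicit marking looks like in GNU
assembler CFI directives (glibc's start code does something similar; the
exit sequence here is only to keep the example self-contained):

          .text
          .globl  _start
  _start:
          .cfi_startproc
          .cfi_undefined rip    # no return address: unwinders stop here
          xorl    %ebp, %ebp    # still zero the frame pointer, per the ABI
          movl    $60, %eax     # SYS_exit on Linux/x86-64
          xorl    %edi, %edi
          syscall
          .cfi_endproc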

Initializing %rbp to 0 in the outermost frame is sort of pointless on amd64.



Re: Unwinding CFI gcc practice of assumed `same value' regs

2006-12-12 Thread Mark Kettenis
>  Ian Lance Taylor writes:
>   > Andrew Haley <[EMAIL PROTECTED]> writes:
>   >
>   > > In practice, %ebp either points to a call frame -- not necessarily the
>   > > most recent one -- or is null.  I don't think that having an optional
>   > > frame pointer means you can use %ebp for anything random at all, but
>   > > we need to make a clarification request of the ABI.
>   >
>   > I don't see that as feasible.  If %ebp/%rbp may be used as a general
>   > callee-saved register, then it can hold any value.
>
>  Sure, we already know that, as has been clear.  The question is *if*
>  %rbp may be used as a general callee-saved register that can hold any
>  value.

The amd64 ABI is specifically *designed* to allow this.

Mark



Re: [PATCH] Relocated compiler should not look in $prefix.

2006-12-12 Thread Mark Mitchell
Andrew Pinski wrote:
> On Fri, 2006-10-13 at 12:51 -0400, Carlos O'Donell wrote:
>> A relocated compiler should not look in $prefix.
>> Comments?
>> OK for Stage1?
> 
> I do have another issue with this set of patches which I did not notice
> until today.
> I can no longer, inside a just-built GCC build directory, do:
> ./cc1 t.c
> or
> ./xgcc -B. t.c
> if I use the same prefix as an already-installed GCC.
> This makes it impossible to debug driver issues without installing the
> driver again.

What are the contents of t.c?  What if you set GCC_EXEC_PREFIX?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Relocated compiler should not look in $prefix.

2006-12-12 Thread Mark Mitchell
Andrew Pinski wrote:
>> What are the contents of t.c?  What if you set GCC_EXEC_PREFIX?
> 
> t.c:
> 
> #include <stdio.h>
> int main(void)
> {
>  printf("Hello World\n");
>  return 0;
> }
> 
> --
> No I did not set GCC_EXEC_PREFIX as I did not know I have to set that now.
> Seems like a change like this should be mentioned on
> http://gcc.gnu.org/gcc-4.3/changes.html
> Because some people liked the old behavior when debugging the driver.

This not a user-visible change; it does not affect installed compilers.
 It only affects GCC developers who are working with the uninstalled driver.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Relocated compiler should not look in $prefix.

2006-12-12 Thread Mark Mitchell
Andrew Pinski wrote:

> But other non user-visible changes are mentioned on changes.html already.
> Forward prop in 4.3.
> Incompatible changes to the build system in 4.2 which seems very related to 
> stuff like
> this.

If you want to make a patch, and Gerald approves it, it's fine by me.
But, fwprop is described as a new feature (faster compiler, better
code), and the build system affects people building the compiler.  The
change we're talking about seems to affect only people debugging the
compiler.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


libjvm.la and problems with libtool relink mode

2006-12-14 Thread Mark Shinwell

I am currently involved in building GCC toolchains by configuring them with
the prefix that is to be used on the target system (somewhere in /opt)
and then installing them in a separate "working installation" directory
(say somewhere in my scratch space).  The installation step into this
"working installation" directory is performed by invoking a command of
the form "make prefix=/path/to/working/dir/install install".

Unfortunately it seems that, when using current mainline and building
for native x86_64, there is an awkward problem involving the building
of libjvm.la.  This libtool file is built from two others: one .lo file
and one .la file.  The difficulty arises at the libtool "install" step
for libjvm.la.  It seems that since the "link" command that constructed
libjvm.la built that library using another .la file (libgcj.la) then
libtool writes a relink command into the libjvm.la file.  Then, when
installing libjvm.la using libtool, it attempts to execute that relink
command.  This however fails because the installation location is not
a subdirectory of the configured prefix -- it is somewhere completely
different.  This piece of libtool code causes the error:

  # Don't allow the user to place us outside of our expected
  # location b/c this prevents finding dependent libraries that
  # are installed to the same prefix.
  # At present, this check doesn't affect windows .dll's that
  # are installed into $libdir/../bin (currently, that works fine)
  # but it's something to keep an eye on.
  if test "$inst_prefix_dir" = "$destdir"; then
$echo "$modename: error: cannot install \`$file' to a directory not 
ending in $libdir" 1>&2

exit 1
  fi

This problem does not arise with various other .la files created during
the build process of gcc (such as libstdc++.la for example) because those
are directly created from a bunch of .lo files; none of them were built
from another .la as libjvm.la is.

The writing of the relink command into libjvm.la comes as a result of
"need_relink=yes" on line 2095 of the ltmain.sh in the toplevel directory
of mainline gcc.  I wonder if this assignment ought to be guarded
by a test involving $hardcode_into_libs, which is currently set to "yes" in
my generated /x86_64-unknown-linux-gnu/libjava/libtool.
Perhaps I should be having $hardcode_into_libs set to "no" in addition to
changing the libtool code so that need_relink is only set to "yes" if
$hardcode_into_libs is set to "yes" also?  The exact behaviour of the
two modes of $hardcode_into_libs isn't clear to me, however, so I'm not
very sure.  With that "solution" I'm also assuming that libtool is doing
the right thing in forcing a relink in this situation (if $hardcode_into_libs
is "yes"); perhaps even that isn't the case.

If anyone could offer any advice as to how to proceed here, I'd be most
grateful.  I've copied this to Alexandre since a colleague suggested he
might know the answer :-)

Mark


Paolo Bonzini appointed build system maintainer

2006-12-18 Thread Mark Mitchell
Paolo --

The GCC SC has appointed you as a co-maintainer of the build machinery.

Please add an appropriate MAINTAINERS entry.

Congratulations, and thank you for accepting this position!

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-01-01 Thread Mark Mitchell
Daniel Berlin wrote:

>> Richard Guenther added -fwrapv to the December 30 run of SPEC at
>> <http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html>
>> and
>> <http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html>.
>> Daniel Berlin and Geert Bosch disagreed about how to interpret
>> these results; see <http://gcc.gnu.org/ml/gcc/2007-01/msg00034.html>.

Thank you for pointing that out.  I apologize for having missed it
previously.

As others have noted, one disturbing aspect of that data is that it
shows that there is sometimes an inverse correlation between the base
and peak flags.  On the FP benchmarks, the results are mostly negative
for both base and peak (with 168.wupwise the notable exception); on the
integer benchmarks it's more mixed.  It would be nice to have data for
some other architectures: anyone have data for ARM/Itanium/MIPS/PowerPC?

So, my feeling is similar to what Daniel expresses below, and what I
think Ian has also said: let's disable the assumption about signed
overflow not wrapping for VRP, but leave it in place for loop analysis.

Especially given:

>> We don't have an exhaustive survey, but of the few samples I've
>> sent in most of code is in explicit overflow tests.  However, this
>> could be an artifact of the way I searched for wrapv-dependence
>> (basically, I grep for "overflow" in the source code).  The
>> remaining code depended on -INT_MIN evaluating to INT_MIN.  The
>> troublesome case that started this thread was an explicit overflow
>> test that also acted as a loop bound (which is partly what caused
>> the problem).

it sounds like that would eliminate most of the problem.  Certainly,
making -INT_MIN evaluate to INT_MIN, when expressed like that, is an
easy thing to do; that's just a guarantee about constant folding.
There's no reason for us not to document that signed arithmetic wraps
when folding constants, since we're going to fold the constant to
*something*, and we may as well pick that answer.
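
For instance, a tiny illustration of the folding case in question,
assuming 32-bit int:

  #include <limits.h>

  int
  neg_int_min (void)
  {
    /* Overflows in C; the guarantee discussed here is simply that the
       compiler folds this to INT_MIN, two's-complement style.  */
    return -INT_MIN;
  }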

I don't even necessarily think we need to change our user documentation.
 We can just choose to make the compiler not make this assumption for
VRP, and to implement folding as two's-complement arithmetic, and go on
with life.  In practice, we probably won't "miscompile" many
non-conforming programs, and we probably won't miss too many useful
optimization opportunities.

Perhaps Richard G. would be so kind as to turn this off in VRP, and
rerun SPEC with that change?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-01-01 Thread Mark Mitchell
Richard Kenner wrote:

>> Where there are reasonable positions on both sides, nobody ever
>> accurately predicts what the majority of a hugely diverse population
>> of language users is going to want, and almost everyone believes
>> they are in that majority.
> 
> I agree.  That's why I support a middle-of-the-road position where we make
> very few "guarantees", but do the best we can anyway to avoid gratuitously
> (meaning without being sure we're gaining a lot of optimization) breaking
> legacy code.

Yes, I think that you, Danny, Ian, and I are all agreed on that point,
and, I think, that disabling the assumption about signed overflow not
occurring during VRP (perhaps leaving that available under control of a
command-line option, for those users who think it will help their code),
 is the right thing to try.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-01-02 Thread Mark Mitchell
Richard Guenther wrote:

>> Perhaps Richard G. would be so kind as to turn this off in VRP, and
>> rerun SPEC with that change?
> 
> I can do this.

Thank you very much!

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Top level libgcc checked in

2007-01-03 Thread Mark Mitchell
Ben Elliston wrote:

> So I take it that at this stage we've not commenced the process of
> having libgcc's configury run autoconf tests on the target compiler?
> (Rather than having to hardwire most target details into the t-* files?)
> Any objections to starting down this path?

We should also be very careful not to introduce differences between
native and cross compilers.  So, we should have no run-time tests, no
tests that look at /proc, headers in /usr/include, etc.  I consider it
important that a Canadian-native compiler (i.e., one where $host =
$target, but $build != $host) and a native compiler (i.e., one where
$host = $build = $target) behave identically, given the same
configuration options.

If we decide to go with autoconf, and we are building a Canadian cross,
we should of course test the $build->$target compiler (which is the one
we'll be using to build the libraries), rather than the $host->$target
compiler (which may be the one in the tree).

Given the constraints, I'm not sure that autoconf is a huge win.  I'm
not violently opposed, but I'm not sure there are big benefits.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1.2 Status Report [2007-01-04]

2007-01-04 Thread Mark Mitchell
I've decided to focus next on GCC 4.1.2.  After GCC 4.1.2, I will focus
on GCC 4.2.0.  At this point, I expect GCC 4.3 to remain in Stage 1 for
some time, while we work on GCC 4.1.2 and GCC 4.2.0.  So, I've been
looking at the GCC 4.1.2 Bugzilla entries.

(I'm sure one of your New Year's resolutions was "I shall fix more
regressions in 2007."  If not, it's not too late!)

Bugzilla has entries for 156 P1-P3 regressions.  Of course, some of the
P3s will in fact end up being P4 or P5, so that is not an entirely
accurate count.  There are 18 P1 regressions.  However, I am only aware
of two regressions relative to 4.1.0 or 4.1.1, as recorded here:

http://gcc.gnu.org/wiki/GCC_4.1.2_Status#preview

If you know of more, please let me know, and please update the Wiki page.

I'm not going to let bugs which existed in 4.1.[01] block 4.1.2 -- but I
am going to take a hard line on P1 regressions relative to the previous
4.1.x releases, and I'm going to grumble a lot about P2s.

So, I think we're relatively close to being able to do a 4.1.2 release.
 Let's tentatively plan on a first release candidate on or about January
19th, with a final release approximately a week later.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


libgcc-math

2007-01-11 Thread Mark Mitchell
Richard --

The GCC SC has been discussing libgcc-math.  RMS says that he will need
to consider the issue, and that he has other things he wants to do
first.  So, I'm afraid that we can't give you a good timeline for a
resolution of the question, but I can say that some progress is being made.

FYI,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: bug management: WAITING bugs that have timed out

2007-01-23 Thread Mark Mitchell
Mike Stump wrote:
> On Jan 11, 2007, at 10:47 PM, Joe Buck wrote:
>> The description of WORKSFORME sounds closest: we don't know how to
>> reproduce the bug.  Should that be used?
> 
> No, not generally. 

Of the states we have, WORKSFORME seems best to me, and I agree with Joe
that there's benefit in getting these closed out.  On the other hand, if
someone wants to create an UNREPRODUCIBLE state (which is a "terminal"
state, like WONTFIX), that's OK with me too.  But, let's not dither too
much over what state to use.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Andreas Bogk <[EMAIL PROTECTED]> writes:

> I think a better way to describe your argument is that the compiler
> can remove a redundant test which would otherwise be part of a defense
> in depth.  That is true.  The thing is, most people want the compiler
> to remove redundant comparisons; most people don't want their code to
> have defense in depth, they want it to have just one layer of defense,
> because that is what will run fastest.

Exactly.  I think that Ian's approach (giving us a warning to help track
down problems in real-world code, together with an option to disable the
optimizations) is correct.  Even if the LIA-1 behavior would make GCC
magically better as a compiler for applications that have
not-quite-right security checks, it wouldn't make it better as a
compiler for lots of other applications.

I would rather hope that secure applications would define a set of
library calls for some of these frequently-occurring checks (whether in
GLIBC, libiberty, or some new library) so that application
programmers can use them.
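
To make this concrete, here is a minimal sketch (my own illustration,
not code from any particular application) of the kind of check at
issue.  The first version relies on signed wraparound, which is
undefined behavior in C, so a compiler that assumes overflow cannot
happen may fold the test away; the second is the well-defined,
library-style replacement:

    #include <limits.h>

    /* Broken "defense in depth": comparing a + b against a to detect
       overflow relies on undefined behavior, so the test may be
       optimized away entirely.  */
    int checked_add (int a, int b, int *result)
    {
      if (b > 0 && a + b < a)   /* may be folded to "if (0)" */
        return -1;
      *result = a + b;
      return 0;
    }

    /* Well-defined version: test against the limits before adding.  */
    int checked_add_safe (int a, int b, int *result)
    {
      if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
        return -1;
      *result = a + b;
      return 0;
    }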

(I've also been known to claim that writing secure applications in C may
provide performance advantages, but makes the security part harder.  If
someone handed me a contract to write a secure application, with a
penalty clause for security bugs, I'd sure be looking for a language
that raised exceptions on overflow, bounds-checking failures, etc.)

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: raising minimum version of Flex

2007-01-23 Thread Mark Kettenis
Vaclav Haisman wrote:
> Gerald Pfeifer wrote:
> [...]
> > openSUSE 10.2 now comes with flex 2.5.33, but FreeBSD, for example, still 
> > is at flex 2.5.4.  Just some additional data points...
> FreeBSD has version 2.5.33 as textproc/flex port.

But that will not replace the system flex, so it will require tweaking
environment variables or passing configure options.

OpenBSD also still ships with flex 2.5.4.  That version has been the
de facto standard for years and is by far the most widespread version.
In my experience newer versions of flex are much less stable, and I
think requiring a newer version should not be done lightly.

Mark


Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Mark Mitchell
Diego Novillo wrote:

> So, I was doing some archeology on past releases and we seem to be
> getting into longer release cycles.  With 4.2 we have already crossed
> the 1 year barrier.

I think there are several factors here.

First, I haven't had as much time to put in as RM lately as in the past,
so I haven't been nagging people as much.  I also haven't had as much time to
put in as a developer.  For some previous releases, I was the bug-fixer
of last resort, fixing many of the critical bugs -- or at least
proposing broken patches that goaded others into fixing things. :-)
Holidays are over, CodeSourcery's annual meeting is behind us, and I'm
almost caught up on the mailing lists.  So, I expect do more goading --
but probably not much more coding.

Second, I think there are conflicting desires.  In reading this thread,
some people want/suggest more frequent releases.  But, I've also had a
number of people tell me that the 4.2 release cycle was too quick in its
early stages, and that we didn't allow enough time to get features in --
even though doing so would likely have left us even more bugs to fix.
RMS has recently suggested that any wrong code bug (whether a regression
or not) that applies to relatively common code is a severe embarrassment
in a release.  Some people want to put more features onto release
branches, while others think we're too lax about changes.  If there's
one thing I've learned from years of being RM, it's that I can't please
everyone. :-) In any case, I've intentionally been letting 4.3 stage 1
drag out, because it looks like there's a lot of important functionality
coming in, and I didn't want to leave those bits stranded until 4.4.

Some folks have suggested that we ought to try to line up FSF releases
to help the Linux distributors.  Certainly, in practice, the
distributors are naturally most focused at the points that make sense in
their own release cycles.  However, I think it would be odd for the FSF
to try to specifically align with (say) Red Hat and Novell releases
(which may not themselves always be well-synchronized) at the possible
expense of (say) MontaVista and Wind River.  And, there are certainly a
large number of non-Linux users -- even on free operating systems.

In practice, I think that the creation of release branches has been
reasonably useful.  It may be true that some of the big server Linux
distributors aren't planning on picking up 4.2, but I know of other
major users who will be using it.  Even without much TLC, the current
4.2 release branch represents a reasonably stable point, with some key
improvements over 4.1 (e.g., OpenMP).  Similarly, without much TLC, the
current 4.1 branch is pretty solid, and substantially better than 4.1.1.
 So, the existence of the branch, and the regression-only discipline
thereon, has produced a useful point for consumers, even though there's
not yet a 4.1.2.

I don't think that some of the ideas (like saying that you have to fix N
bugs for every patch you contribute) are very practical.  What we're
seeing is telling us something about "the market" for GCC; there's more
pressure for features, optimization, and ports than bug fixes.  If there
were enough people unhappy about bugs, there would be more people
contributing bug fixes.

It may be that not too many people pick up 4.2.0.  But, if 4.3 isn't
looking very stable, there will be a point when people decide that 4.2.0
is looking very attractive.  The worst outcome of trying to do a 4.2.0
release is that we'll fix some things that are also bugs in 4.3; most
4.2 bugs are also in 4.3.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Signed int overflow behaviour in the security context

2007-01-25 Thread Mark Mitchell
Robert Dewar wrote:

>> So basically you're saying gcc developers should compensate for other
>> people's sloppy engineering?  ;-)
> 
> Yes, I would say!  Where possible, this seems an excellent goal.

I agree: when it's possible to support non-standard legacy semantics
that do not conflict with the standard, without substantial negative
impact, then that's a good thing to do.

In this specific case, we know there is a significant performance
impact, and we know that performance is very important to both the
existing and potential GCC user base, so I think that making the
compiler more aggressive at -O2 is sensible.

And, Ian is working on -fno-strict-overflow, so that users have a
choice, which is also very sensible.  Perhaps the distribution vendors
will then add value by selectively compiling packages that need it with
-fno-strict-overflow so that security-critical packages are that much
less likely to do bad things, while making the rest of the system go
faster by not using the option.
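
As a hedged illustration of the performance side (my own example, not
from any particular package): under the default strict-overflow
assumption, the compiler may treat a signed induction variable as never
wrapping, which enables ordinary loop optimizations that
-fno-strict-overflow would inhibit.

    /* With strict overflow semantics the compiler may assume i + 1 > i
       and compile this as a simple counted loop; with
       -fno-strict-overflow it must allow for i wrapping around when
       n == INT_MAX.  */
    int sum_up_to (int n)
    {
      int i, sum = 0;
      for (i = 0; i <= n; i++)
        sum += i;
      return sum;
    }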

I think we've selected a very reasonable path here.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


LTO Status

2007-01-26 Thread Mark Mitchell
Several people have asked for an update on the status of the LTO
project, so Kenny and I have put together a summary of what we believe
the status and remaining issues to be.

The short story is that, unfortunately, we have not had as much time as
we would have liked to make progress on LTO.  Kenny has been working on
the dataflow project, and I have had a lot of other things on my plate
as well.  So -- as always! -- we would be delighted to have other people
helping out.  (One kind person asked me if contributing to LTO would
hurt CodeSourcery by potentially depriving us of contracts.  I doubt
that very much, and I certainly don't think that should stop anybody
from contributing to LTO!)

I still think that LTO is a very important project, and that the design
outline we have is sound.  I think that a relatively small amount of
work (measured in terms of person-months) is required to get us to the
point of being able to handle most of C.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
Introduction

  This document summarizes work remaining in the LTO front end to
  achieve the initial goal of correct operation on single-file C
  programs.

Changes to the DWARF Reader Required

  The known limitations of the DWARF reader are as follows:

  * Modify 'LTO_READ_TYPE' to byte-swap data, as required for cross
compiling for targets with different endianness (a sketch of the
byte-swapping follows this list).

  * The DWARF reader skips around in the DWARF tree to read types.
It's possible that this will not work in situations with complex
nesting, or that a fixup will be required later when the DIE is
encountered again, during the normal walk.

  * Function-scope static variables are not handled.

  * Once more information about types is saved (see below), references
to layout_type, etc., should be removed or modified, so that the
saved data is not overwritten.

  * Unprototyped functions are not handled.
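
  As an aside on the first item, here is a minimal sketch of the
  byte-swapping involved (hypothetical helper names; the actual
  'LTO_READ_TYPE' interface may differ):

      #include <stdint.h>
      #include <string.h>

      /* Swap the bytes of a 32-bit value.  */
      static uint32_t
      swap_u32 (uint32_t v)
      {
        return ((v >> 24) & 0xff)
               | ((v >> 8) & 0xff00)
               | ((v << 8) & 0xff0000)
               | ((v << 24) & 0xff000000u);
      }

      /* Read a 32-bit value from the object file, swapping if the
         file's byte order differs from the host's.  */
      static uint32_t
      read_u32 (const unsigned char *p, int file_big_endian,
                int host_big_endian)
      {
        uint32_t v;
        memcpy (&v, p, sizeof v);
        return file_big_endian != host_big_endian ? swap_u32 (v) : v;
      }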

DWARF Extensions Required
  
  The following sections summarize augmentations we must make to the
  DWARF generated by GCC.

  GNU Attributes

Semantic GNU attributes (e.g., dllimport) are not recorded in
DWARF.  Therefore, this information is lost.

  Type Information

At present, the LTO front end recomputes some type attributes (like
machine modes).  However, there is no guarantee that the type
attributes that are computed will match those in the original
program.  Because there is presently no method for encoding this
information in DWARF, we need to take advantage of DWARF's
extensibility features to add these representations.

The type attributes which require DWARF extensions are:

* Type alignment

* Machine mode

  Declaration Flags

There are some flags on 'FUNCTION_DECL' and 'VAR_DECL' nodes that
may need to be preserved.  Some may be implied by GNU attributes,
but others are not.  Here are the flags that should be preserved.

Functions and Variables: 

* 'DECL_SECTION_NAME'

* 'DECL_VISIBILITY'

* 'DECL_ONE_ONLY'

* 'DECL_COMDAT'

* 'DECL_WEAK'

* 'DECL_DLLIMPORT_P'

* 'DECL_ASSEMBLER_NAME'

Functions:

* 'DECL_UNINLINABLE'

* 'DECL_IS_MALLOC'

* 'DECL_IS_RETURNS_TWICE'

* 'DECL_IS_PURE'

* 'DECL_IS_NOVOPS'

* 'DECL_STATIC_CONSTRUCTOR'

* 'DECL_STATIC_DESTRUCTOR'

* 'DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT'

* 'DECL_NO_LIMIT_STACK'

* 'DECL_NO_STATIC_CHAIN'

* 'DECL_INLINE'

Variables:

* 'DECL_HARD_REGISTER'

* 'DECL_HAS_INIT_PRIORITY'

* 'DECL_INIT_PRIORITY'

* 'DECL_TLS_MODEL'

* 'DECL_THREAD_LOCAL_P'

* 'DECL_IN_TEXT_SECTION'

* 'DECL_COMMON'

Gimple Reader and Writer

  Current Status

All gimple forms except for those related to gomp are now handled.
It is believed that this code is mostly correct.

The lto reader and the writer logically work after the ipa_cp pass.
At this point, the program has been fully gimplified and is in fact
in "low gimple".  The reader is currently able to read in and
recreate gimple, and the control flow graph.  Much of the EH handling
code has been written but not tested.

The reader and writer can be compiled in a self checking mode so
that the writer writes a text log of what it is serializing into
the object file.  The lto reader uses the same logging library to
produce a log of what it is reading.  During reading, the process
aborts if the logs get out of sync.
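
    A minimal sketch of that self-checking scheme (hypothetical names;
    the real logging library differs): the writer records each event it
    serializes, and the reader regenerates the same records and aborts
    on the first mismatch.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Writer side: record one serialization event.  */
        static void
        log_event (FILE *log, const char *tag, unsigned index)
        {
          fprintf (log, "%s %u\n", tag, index);
        }

        /* Reader side: regenerate the event and compare it with the
           next line of the writer's log; abort on divergence.  */
        static void
        check_event (FILE *writer_log, const char *tag, unsigned index)
        {
          char expected[128], actual[128];
          snprintf (expected, sizeof expected, "%s %u\n", tag, index);
          if (fgets (actual, sizeof actual, writer_log) == NULL
              || strcmp (expected, actual) != 0)
            abort ();   /* logs out of sync */
        }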

The current state of the code is that much of the code to serialize
the cfun has not been written or tested.  Without this part of the
code, nothing can be executed downstream of the reader.

Re: G++ OpenMP implementation uses TREE_COMPLEXITY?!?!

2007-01-28 Thread Mark Mitchell
Steven Bosscher wrote:

> Can you explain what went through your mind when you picked the 
> tree_exp.complexity field for implementing something new...  :-(

Please don't take this tone.  I can't tell if you have your tongue
planted in your cheek, but if you do, it's not obvious.

It's entirely reasonable to look for a way to get rid of this use of
TREE_COMPLEXITY, but things like:

> You know (or so I assume) this was a very Very VERY BAD thing to do

are not helpful.  Of course, if RTH had thought it was a bad thing, he
wouldn't have done it.

Please just state the problem and ask for help (as you did) without
attacking anyone.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: remarks about g++ 4.3 and some comparison to msvc & icc on ia32

2007-01-28 Thread Mark Mitchell
tbp wrote:

> Second, while I very much appreciate the brand new string ops, it
> seems that on ia32 some array initialization cases were left out,
> hence I still see oodles of 'movl $0x0' when generating code for k8.
> Also those zeroings get coalesced at the top of functions on ia32, and
> I have a function where there's 3 pages of those right after the prologue.
> See the attached grep 'movl $0x0' dump.

It looks like Jan and Richard have answered some of your questions about
inlining (or are in the process of doing so), but I haven't seen a
response to this point.

Certainly, if we're generating zillions of zero-initializations to
contiguous memory, rather than using memset, or an inline loop, that
seems unfortunate.  Would you please file a bug report?

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: remarks about g++ 4.3 and some comparison to msvc & icc on ia32

2007-01-28 Thread Mark Mitchell
Jan Hubicka wrote:

> I thought the comment was more referring to the fact that we will happily
> generate
> movl $0x0,  place1
> movl $0x0,  place2
> ...
> movl $0x0,  placeMillion
> 
> rather than shorter
> xor %eax, %eax
> movl %eax, ...

Yes, that would be an improvement, but, as you say, at some point we
want to call memset.

> With the repeated mov issue unfortunately I don't know what would be the
> best place: we obviously don't want to constrain register allocation too
> much and after regalloc I guess only a machine-dependent pass

I would hope that we could notice this much earlier than that.  Wouldn't
this be evident even at the tree level or at least after
stack-allocation in the RTL layer?  I wouldn't expect the zeroing to be
coming from machine-dependent code.

One possibility is that we're doing something dumb with arrays.  Another
possibility is that we're SRA-ing a lot of small structures, which add
up to a ton of stack space.
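
For what it's worth, here is a guess at the kind of source that could
trigger it (purely illustrative; a real test case would be needed to
confirm):

    /* Zero-initializing a large local aggregate: a naive expansion
       emits one "movl $0x0, ..." store per word at the top of the
       function, where a memset call or an inline loop would be far
       more compact.  */
    struct big { int data[1024]; };

    int f (void)
    {
      struct big b = { { 0 } };
      return b.data[42];
    }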

I realize that we need a full bug report to be sure, though.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [c++] switch ( enum ) vs. default statment.

2007-01-28 Thread Mark Mitchell
Paweł Sikora wrote:

> typedef enum { X, Y } E;
> int f( E e )
> {
> switch ( e )
> {
> case X: return -1;
> case Y: return +1;
> }
> }
> 
> In this example g++ produces a warning:
> 
> e.cpp: In function ‘int f(E)’:
> e.cpp:9: warning: control reaches end of non-void function
> 
> Adding a `default' statement to `switch' removes the warning but
> in C++ out-of-range values in enums are undefined.

Not quite.  They are unspecified; see below.

> I see no reason to handle any kind of UB (especially this).
> IMHO this warning is a bug in the C++ frontend.

This is a tricky issue.  You are correct that the values of "E" are
exactly { 0, 1 } (or, equivalently, { X, Y }).  But, the underlying type
of the enumeration is at least "char".  And, the standard says that
assigning an integer value to an enumeration type has unspecified
behavior if it is outside the range of the enumeration.

So:

  E e;
  e = (E) 7;

has unspecified behavior, which is defined as:

"behavior,  for  a well-formed program construct and correct data, that
depends on the implementation.  The implementation is not required to
document which behavior occurs."

Because the behavior is unspecified, not undefined, the usual "this could
erase your disk" thinking does not apply.  Unspecified is meant to be
more like Ada's bounded errors (though not as closely specified), in
that something vaguely sensible is supposed to happen.

For GCC, what happens (though we need not document it) is that the value
is converted to the underlying type -- but not masked down to { 0, 1 },
because that masking would be costly.  So, "((int) e == 7)" may be true
after the assignment above.  (Perhaps, in some modes GCC may optimize
away the assignment because it will "know" that "e" cannot be 7, but it
does not do so at -O2.)

So, now, what should we do about the warning?  I think there are good
arguments in both directions.  On the one hand, portable programs cannot
assume that assigning out-of-range values to "e" does anything
predictable, so portable programs should never do that.  So, if you've
written the entire program and are sure that it's portable, you don't
want the warning.  On the other hand, if you are writing a portable
library designed to be used with other people's programs, you might
very well want the warning -- because you can't be sure that they're
not going to pass "7" in as the value of "e", and you may want to be
robust in the face of this *unspecified* behavior.

In practice, this warning from GCC is keyed off what it thinks the
effective range of "E" is.  If it starts assuming that "e" can only have
the values { 0, 1 }, it will also stop warning about the missing case.
It would be hard to stop emitting the warning without making that
assumption, and it may not be easy to make the assumption, but still
avoid the expensive masking operations.
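
If the warning is unwanted but robustness is, one defensive pattern (a
sketch, not a statement of project policy) is to keep the exhaustive
cases and handle out-of-range values explicitly, rather than falling
off the end of the function:

    typedef enum { X, Y } E;

    int f (E e)
    {
      switch (e)
        {
        case X: return -1;
        case Y: return +1;
        default: break;   /* only unspecified out-of-range values */
        }
      return 0;           /* or abort(), report an error, etc. */
    }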

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


  1   2   3   4   5   6   7   8   9   10   >