Re: aligned attribute and the new operator (pr/15795)

2006-10-12 Thread Mark Mitchell

[EMAIL PROTECTED] wrote:


If we are willing to consider an ABI change, I think an approach that
allows new to call some form of memalign would be better than having the
compiler force alignment after calling new.  


Are we open to making an ABI change?


Personally, I think an ABI change at the compiler level should be off 
the table.  (I say "Personally" to make clear that this is just my 
opinion as a C++ maintainer and as a co-developer of the C++ ABI 
specification, but not an SC decision.  And, for those who may find 
these parentheticals tedious, they're there because some people have 
previously interpreted statements from me as dictates; I'm trying to be 
very careful to make sure it's clear what hat I'm wearing.)


The C++ ABI has actually been stable for years now, which is a huge 
achievement.  We've gotten binary interoperability to work for most 
programs between a lot of C++ compilers, which is a good thing for all. 
 In my opinion, the next change to the C++ ABI should come if (and only 
if) C++0x requires changes.  Even there, I would hope for 
backwards-compatible changes -- for example, mangling for variadic 
templates would ideally be an extension to the current mangling scheme. 
 In other words, we should strive to make it possible to link current 
C++ libraries with C++0x programs, which means that the sort of change 
you're considering would be off the table.


Adding a compiler command-line option to specify the alignment of memory 
returned by "operator new", or a GNU attribute that libstdc++ could add 
to the default declaration (with a system-dependent value, of course), 
etc. seems fine to me, but I'd be very hesitant to change the ABI proper.
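
For concreteness, a minimal sketch of the replacement route (assuming a 
POSIX posix_memalign and an illustrative 16-byte alignment; this is not 
part of any ABI proposal):

  #include <cstdlib>
  #include <new>

  // Replacing the global allocator is already sanctioned by the
  // language, so no ABI change is involved.
  void *operator new (std::size_t n) throw (std::bad_alloc)
  {
    void *p = 0;
    if (posix_memalign (&p, 16, n ? n : 1) != 0)
      throw std::bad_alloc ();
    return p;
  }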


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Proposed semantics for attributes in C++ (and in C?)

2006-10-15 Thread Mark Mitchell
  typedef __attribute__((...)) S T;
  T v;

where T is some invented type name different from all others in the program.

For example given:

  __attribute__((packed)) S v;

the type of "&v" is "__attribute__((packed)) S *", and cannot be passed 
to a function expecting an "S*", but can of course be passed to a 
function expecting an "__attribute__((packed)) S *", or a typedef for 
such a type.
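
To illustrate the proposed rule (a sketch, not committed behavior):

  struct S { char c; int i; };
  typedef __attribute__ ((packed)) struct S PS;  /* new, distinct type */

  void f (struct S *);
  void g (PS *);

  __attribute__ ((packed)) struct S v;
  /* f (&v);  -- rejected: &v has type "__attribute__((packed)) S *"  */
  /* g (&v);  -- OK: same attributed type                             */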


Thoughts?

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Proposed semantics for attributes in C++ (and in C?)

2006-10-15 Thread Mark Mitchell

Joseph S. Myers wrote:


On Sun, 15 Oct 2006, Mark Mitchell wrote:


We have a number of C++ PRs open around problems with code like this:

  struct S {
void f();
virtual void g();
  };

  typedef __attribute__((...)) struct S T;


I was happy with the state before r115086 (i.e. with it being documented 
that such attributes on typedefs are ignored).  But given that we are now 
attempting to honour them, the proposed semantics seem reasonable.


Yes, I would be happy to explicitly ignore semantic attributes in 
typedefs as well, with a warning (or even an error).  However, I had not 
realized that we ever did that; I'm surprised that the change that 
instituted this is so recent.  I suppose that explains why we're 
suddenly seeing a rash of such problems.  Jason, as you made this 
change, do you have any comments on the proposal?


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Proposed semantics for attributes in C++ (and in C?)

2006-10-16 Thread Mark Mitchell

Jason Merrill wrote:

I don't think my patch changed the handling of class typedefs; certainly 
my intent was only to change how we handle


  class __attribute ((foo)) C

Previously we rejected it, now we apply the attributes to the class.


OK, that certainly makes sense.  (That's one of the items in the 
proposal I wrote up: that you can apply attributes at the point of 
declaration of a class.)



Which PRs are you referring to?


One example is:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=28558

However, this is a problem with non-semantic attributes, and not related 
to your patch.  So, I apologize for any aspersions cast.  But, it does 
motivate writing something down about what semantics we want.  Here, 
what I think we want is (as per the proposal) to create a 
new anonymous typedef for "__attribute__((unused)) A", but consider that 
the same type as "A".


I was pretty sure there were other PRs, but I'm not able to find them 
now, so perhaps I was dreaming.  I thought there were also PRs about 
typeid and mangling failing (and/or doing the wrong thing) for types 
with attributes (including scalars with attributes).


I'd be inclined to prohibit semantic attributes on typedefs in general. 


That's certainly simpler.  I'm happy to be talked out of that idea. :-)

 Extending the type system to handle attribute types seems excessively 
complicated.  I think we should define a set of attributes which prevent 
us from taking the address of a variable with that attribute 
(packed...anything else?) and check for them at the same places we check 
for taking the address of a bitfield.


That seems like a good idea to me.  However, one consequence is that a 
packed class cannot have member functions (since you'd never be able to 
get a "this" pointer for them); do you think that's OK?


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: g++ -print-prefix or -print-install-prefix

2006-10-16 Thread Mark Mitchell

Benjamin Kosnik wrote:

For testing outside of the build directory, it would be convenient to
have $SUBJECT. 


This could be used in conjunction with -dumpversion to create
on-the-fly include and library directory paths for C++ includes in a
sane manner, much like the following:


Why do you need this?  For installed-compiler testing, the compiler 
already searches the obvious places.  (I'm not trying to be cute: I'm 
genuinely curious.)


I agree that it would be nice if -print-search-dirs listed include 
directories.  It already lists the paths searched for programs and 
libraries, so that seems like a logical place to add header directories.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Relocated compiler should not look in $prefix.

2006-10-16 Thread Mark Mitchell

Ian Lance Taylor wrote:

"Carlos O'Donell" <[EMAIL PROTECTED]> writes:


A relocated compiler should not look in $prefix.


I agree.

I can't approve your patches, though.


This patch is OK, once we reach Stage 1.

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Proposed semantics for attributes in C++ (and in C?)

2006-10-17 Thread Mark Mitchell

Geoffrey Keating wrote:


A typedef declaration which adds semantic attributes to a non-class
type is valid, but again creates an entirely new type. 


It is invalid to
do anything that would require either type_info or a mangled name for
"Q", including using it as an argument to typeid, throwing an exception
of a type involving "Q", or declaring a template to take a parameter
of a type involving "Q".  (We could relax some of these restrictions
in future, if we add mangling support for attributes.)


Declaring a function which takes a 'Q' also requires the mangled name of 'Q'.


Good point!


where T is some invented type name different from all others in the program.

For example given:

   __attribute__((packed)) S v;

the type of "&v" is "__attribute__((packed)) S *", and cannot be
passed to a function expecting an "S*", but can of course be passed to
a function expecting an "__attribute__((packed)) S *", or a typedef
for such a type.


... except that there can't be any such functions.  You could assign
it to another variable of the same type, or a field of a class with
that type.


Right.  And, since there seems to be consensus that you shouldn't be 
able to apply semantic attributes to class types, "packed" is a bad 
example there too.  (If you applied "packed" at the point of declaration 
of "S", then "S" has a different layout than it otherwise would, but we 
don't need to do anything regarding mangling, etc.)
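
For example (ordinary GCC behavior, shown only to make the distinction 
concrete):

  struct __attribute__ ((packed)) S2 { char c; int i; };
  /* sizeof (struct S2) is 5 rather than 8 on typical 32-bit targets,
     but S2 is still just one type, so no mangling or type_info
     question arises.  */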


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.2/4.3 Status Report (2006-10-17)

2006-10-17 Thread Mark Mitchell
As Gerald noticed, there are now fewer than 100 serious regressions open 
against mainline, which means that we've met the criteria for creating 
the 4.2 release branch.  (We still have 17 P1s, so we've certainly got 
some work left to do before creating a 4.2 release, and I hope people 
will continue to work on them so that we can get 4.2 out the door in 
relatively short order.)


The SC has reviewed the primary/secondary platform list, and approved it 
unchanged, with the exception of adding S/390 GNU/Linux as a secondary 
platform.  I will reflect that in the GCC 4.3 criteria.html page when I 
create it.


In order to allow people to organize themselves for Stage 1, I'll create 
the branch, and open mainline as Stage 1, at some point on Friday, 
October 20th.  Between now and then, I would like to see folks negotiate 
amongst themselves to get a reasonable order for incorporating patches.


See:

  http://gcc.gnu.org/ml/gcc/2006-09/msg00454.html

I've also reviewed the projects listed here:

  http://gcc.gnu.org/wiki/GCC_4.3_Release_Planning

The variadic templates project is in limbo, I'm afraid.  The SC doesn't 
seem to have a clear opinion on even the general C++ policy discussed on 
the lists, which means that Jason, Nathan, and I have to talk about 
variadic templates and work out what to do.


IMA for C++ is another difficult case.  This is unambiguously useful, 
though duplicative of what we're trying to build with LTO.  That's not a 
bad thing, since LTO is clearly at least one more release cycle away, 
and IMA might be ready sooner.  On the other hand, if the IMA changes 
were disruptive to the C++ front end in general, then that might be a 
problem.  I guess we just have to evaluate the patch, when it's ready.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.2/4.3 Status Report (2006-10-17)

2006-10-18 Thread Mark Mitchell

Kaveh R. GHAZI wrote:


The configury bit was approved by DJ for stage1, but do you see any reason
to hold back?  Or is this posting sufficient warning that people may need
to upgrade?  (I.e.  people should start upgrading their libraries now.)


I don't see any reason to hold back.

Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: C++ name mangling for local entities

2006-10-19 Thread Mark Mitchell

Geoffrey Keating wrote:

For GCC, I've found it necessary to have a way to name local (that is,
namespace-scope 'static') variables and functions which allows more
than one such symbol to be present and have distinct mangled names.


With my GCC hat on, I don't think this is desirable.  For ELF at least, 
there's nothing that prevents us using the same name for multiple local 
symbols (since "ld -r" does it).  For the sake of both LTO and IMA, we 
should add a feature to the assembler like:


   .asm_alias x = y

that says references to "x" are emitted as references to a new "y", 
distinct from all other "y" references.  That would obviate the need for 
multiple statics with the same name, since in the case that you want to 
do this (IMA) you could instead emit them using whatever name was 
convenient for generating the assembly file, and then let the assembler 
emit a symbol with the correct name.  That would help to meet the 
objective that the output from IMA and/or LTO looks like the output from 
"ld -r", modulo optimization.  I think it would be great if you would 
help implement that, which would then make extending the C++ ABI change 
unnecessary.
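
To make the intended semantics concrete, a sketch using the 
hypothetical directive (two file-scope statics that were both named 
"counter" in their original translation units):

     .asm_alias __ima_counter_1 = counter
     .asm_alias __ima_counter_2 = counter
     .data
  __ima_counter_1:
     .long 0
  __ima_counter_2:
     .long 0

  # The compiler references __ima_counter_1 and __ima_counter_2
  # internally; the object file ends up containing two distinct local
  # symbols, both named "counter", just as "ld -r" would produce.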


Now, with my C++ ABI hat on, and assuming that the idea above is 
intractable, then: (a) as you note, this is out-of-scope for the C++ 
ABI, if we confine ourselves to pure ISO C++, but (b) if the other ABI 
stakeholders don't mind, I don't see any reason not to consider 
reserving a chunk of the namespace.



What I currently have implemented is

 <unqualified-name> ::= <operator-name>
                    ::= <ctor-dtor-name>
                    ::= <source-name>
                    ::= <local-source-name>    // new

 <local-source-name> ::= L <source-name> <number> _    // new

It's distinguishable from the other possibilities, because operator-name
starts with a lowercase letter, ctor-dtor-name starts with 'C' or 'D',
and source-name starts with a digit.  There is no semantic meaning
attached to the number in a local-source-name, it exists only to keep
different names distinct (so it is not like <discriminator> in a
local-name).


That's true, but is there a reason not to use the discriminator 
encoding?  There might well be an ambiguity, but I didn't see one at first 
blush.  If so, that would seem most natural to me.


I do think that your proposed encoding is unambiguous, though, so it 
certainly seems like a reasonable choice, especially if the 
discriminator approach doesn't work.
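
For concreteness: under the proposed encoding, a second file-scope 
"static int counter;" might be emitted as something like

  _ZL7counter1_    /* L + "7counter" + distinguishing number + _ */

while the discriminator variant would instead reuse the existing 
_ <number> suffix form.  (Both renderings are illustrative, not text 
from the proposal.)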


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: C++ name mangling for local entities

2006-10-20 Thread Mark Mitchell



we should add a feature to the assembler like:

   .asm_alias x = y

that says references to "x" are emitted as references to a new "y", 
distinct from all other "y" references. 


On Darwin, all the DWARF information in .o files is matched by name¹ 
with symbols in the executable, so this won't work. 


In that case, on Darwin, the assembler could leave the name "x" as "x", 
so that all the names in the object file were unique.  Since this is 
only for local symbols, there's no ABI impact, as you mentioned.  Then, 
we'd have better behavior on ELF platforms and would not have to make 
any change to the C++ ABI.  You could use your suggested encoding in GCC 
as "x", but it would only show up in object files on systems that don't 
support multiple local symbols with the same name.



Now, with my C++ ABI hat on


That's true, but is there a reason not to use the discriminator 
encoding? 



You mean

 <local-name> ::= Z <function encoding> E <entity name> [<discriminator>]

?


Yes, that's what I meant.  I think that would be best, partly because it 
avoids having to reserve "L", but:



 <local-source-name> ::= L <source-name> <discriminator>

will work and is more consistent, so consider the proposal amended to 
have that.


also seems OK, assuming that we need to do this at all.

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.2 branch created; mainline open in Stage 1

2006-10-20 Thread Mark Mitchell
I have created the GCC 4.2 branch.  The branch is open for 
regression-fixes only, as is customary for release branches.  I believe 
I have completed the steps in branching.html with two exceptions:


1. I have not created a mainline snapshot manually.  I don't quite 
understand how to do that, and if the only issue is incorrect "previous 
snapshot" references in the generated mail, it doesn't really seem worth 
the trouble.  If there's some more grievous problem, please let me know, 
and I will try to fix it tomorrow.


2. I have not regenerated {gcc,cpplib}.pot, or sent them off to the 
translation project.  Joseph, would you please do that, at your convenience?


The mainline is now in Stage 1.

Thanks to those who helped fix PRs to meet the 4.2 branch criteria!

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Question about LTO dwarf reader vs. artificial variables and formal arguments

2006-10-21 Thread Mark Mitchell

Diego Novillo wrote:

Ian Lance Taylor wrote on 10/21/06 14:59:


That is, we are not going to write out DWARF.  We can't, because DWARF
is not designed to represent all the details which the compiler needs
to represent.  What we are going to write out is a superset of DWARF.
And in fact, if it helps, I think that we shouldn't hesitate to write
out something which is similar to but incompatible with DWARF.

In general reading and writing trees is far from the hardest part of
the LTO effort.  I think it is a mistake for us to get too tied up in
the details of how to represent things in DWARF.  (I also think that
we could probably do better by defining our own bytecode language, one
optimized for our purposes, but it's not an issue worth fighting
over.)

Agreed.  I don't think we'll get far if we focus too much on DWARF, as 
it clearly cannot be used as a bytecode language for our purposes.


I think the bytecode issue is a red herring, because we are no longer 
talking about using DWARF for the bodies of functions.  DWARF is only 
being used for declarations and types.


There, yes, we will need some extensions to represent things.  However, 
DWARF is designed to be extended, so that's no problem.  I continue to 
think that using DWARF (with extensions) is the right approach, since it 
makes this information accessible to other tools (including GDB).  I 
think that there ought to be a compelling reason before we abandon a 
strategy based on DWARF.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Question about LTO dwarf reader vs. artificial variables and formal arguments

2006-10-21 Thread Mark Mitchell

Steven Bosscher wrote:


contains
   subroutine sub(c)
   character*10 c
   end subroutine

end

produces as a GIMPLE dump:




sub (c, _c)
{
  (void) 0;
}

where _c is strlen("Hi World!").  From a user perspective, it would be better 
to hide _c from the debugger because it is not something that the user had in 
the original program. 


I think that _c should be emitted in DWARF, as an artificial parameter, 
both for the sake of the debugger and for LTO.  LTO is supposed to be 
language-independent, which means that the information it reads in needs 
to be sufficient to compute the types of things (as they will be at the 
level of GIMPLE) without language hooks.  It may be that this idea turns 
out to be too idealistic, and that some language hooks are necessary to 
interpret the DWARF, but I would hope to avoid that.


Similarly, LTO has to somehow deal with DECL_VALUE_EXPR and the debug 
information that is produced from it.  Is there already some provision 
to handle this kind of trickery in LTO?


No, not yet.

but what would happen if LTO reads this in and re-constructs the type of "i" 
from this information?  I imagine it would lead to mis-matches of the GIMPLE 
code that you read in, where "i" is a 1x100 array, and the re-constructed 
variable "i" which would be a 10x10 2D array.


Has anyone working on LTO already thought of these challenges?


Yes, I've thought about these things -- but that doesn't mean I have 
ready answers.  I've been thinking first and foremost about C, and then 
about C and C++.


Some of the same issues apply, but some don't.  In C/C++, we don't 
linearize the array type.  I don't know if that's viable in gfortran or 
not; is there a way to get the same performance in the middle end that 
you currently get by doing this in the front end?


In the worst case, we will provide a separate type attribute in DWARF 
giving the "GIMPLE type" of the variable.  Then, that type would be the 
linearized array.  LTO would use the GIMPLE type attribute (if present) 
when reconstructing the type.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.2 branch created; mainline open in Stage 1

2006-10-23 Thread Mark Mitchell

Andrew Pinski wrote:

On Sun, 2006-10-22 at 12:58 +, Joseph S. Myers wrote:
All the bugs with "4.2" in their summaries ("[4.1/4.2 Regression]" etc.) 
need to have it changed to "4.2/4.3".  I don't know the procedure for 
this, but perhaps it needs adding to the branching checklist.


As I understand it, it involves editing the mysql database by hand (well
by a script) instead of doing it inside bugzilla.  Daniel Berlin has
done that the last couple of releases.


I have checked in the attached patch to add this step to the branching 
checklist.  I will now ask Daniel to help with the SQL bits.


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.2 branch created; mainline open in Stage 1

2006-10-23 Thread Mark Mitchell

Mark Mitchell wrote:

Andrew Pinski wrote:

On Sun, 2006-10-22 at 12:58 +, Joseph S. Myers wrote:
All the bugs with "4.2" in their summaries ("[4.1/4.2 Regression]" 
etc.) need to have it changed to "4.2/4.3".  I don't know the 
procedure for this, but perhaps it needs adding to the branching 
checklist.


As I understand it, it involves editing the mysql database by hand (well
by a script) instead of doing it inside bugzilla.  Daniel Berlin has
done that the last couple of releases.


I have checked in the attached patch to add this step to the branching 
checklist.  I will now ask Daniel to help with the SQL bits.


Sorry, here's the patch.

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
Index: branching.html
===
RCS file: /cvs/gcc/wwwdocs/htdocs/branching.html,v
retrieving revision 1.24
diff -c -5 -p -r1.24 branching.html
*** branching.html  21 Sep 2006 14:17:36 -  1.24
--- branching.html  23 Oct 2006 19:49:16 -
*** milestone for 3.4.1 for PRs that can't b
*** 73,81 
--- 73,85 
  Update the email parsing script to handle bugs against the new versions.
  The script is in CVS at wwwdocs/bugzilla/contrib/bug_email.pl. 
  Search for an existing version (like "3.3"), and update both places
  it occurs to handle the new version through copy and paste.
  
+ Ask Daniel Berlin to adjust all PRs with the just-branched version
+ in their summary to also contain the new version corresponding to
+ mainline.
+ 
  
  
  
  


[Fwd: gcc-4.3-20061023 is now available]

2006-10-23 Thread Mark Mitchell
Here is the announcement mail for the special first-from-mainline 4.3 
snapshot.  The references to "Diffs from" below should say 
"4.2-20061014" rather than "4.3-".


I have reactivated the cronjob so that future snapshots for 4.3 should 
be generated automatically.


Thanks to Joseph for helping me with this.

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
--- Begin Message ---
Snapshot gcc-4.3-20061023 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.3-20061023/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.3 SVN branch
with the following options: svn://gcc.gnu.org/svn/gcc/trunk revision 117985

You'll find:

gcc-4.3-20061023.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.3-20061023.tar.bz2 C front end and core compiler

gcc-ada-4.3-20061023.tar.bz2  Ada front end and runtime

gcc-fortran-4.3-20061023.tar.bz2  Fortran front end and runtime

gcc-g++-4.3-20061023.tar.bz2  C++ front end and runtime

gcc-java-4.3-20061023.tar.bz2 Java front end and runtime

gcc-objc-4.3-20061023.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.3-20061023.tar.bz2    The GCC testsuite

Diffs from 4.3- are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.3
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.
--- End Message ---


Re: [Fwd: gcc-4.3-20061023 is now available]

2006-10-23 Thread Mark Mitchell

Jack Howarth wrote:

Mark,
   What happened to the gcc 4.2 snapshot
tarball for this week?


It gets built on Tuesdays, or at least it does now according to crontab.

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.2 branch created; mainline open in Stage 1

2006-10-23 Thread Mark Mitchell

Daniel Berlin wrote:

Anyway, I made 43changer.pl and ran it, so the bug summaries have been 
updated.


Thanks!

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Fix PR29519 Bad code on MIPS with -fnon-call-exceptions

2006-10-25 Thread Mark Mitchell

Eric Botcazou wrote:

Finally before I finish the retrospective part of this e-mail, I'll
point out this isn't a sudden recent unilateral policy decision, but
purely a crystallization of the prescribed GCC work-flow outlined in
contributing.html that has been refined over many years.


I've reviewed this thread, because there was some discussion about how 
to handle release branches.


In general, I'd prefer that all patches to fix regressions go on the 
release branch at the same time as they go to mainline.  However, I have 
myself failed to do that at times; I presently have a few C++ patches 
which need backporting to 4.1, and I have not yet done that.  At a 
minimum, in such a case, there should be a PR open for the release 
branch failure, and it should note the presence of the patch on 
mainline.  (I've done that for my C++ patches, in that the check-in 
messages on mainline are in the PRs.)  From my perspective, as RM, the 
critical thing is that we have a PR and a record of the patch, so that 
as we approach the release we know we have a bug, and we know we have an 
option available to fix it.


I also recognize that there may sometimes be patches that appear risky, 
and that we therefore want to apply them to mainline before applying 
them to release branches too.  I think that's perfectly appropriate.  In 
other words, I think this is a judgment call, and I think maintainers 
should be free to make it.  But, in general, please do try to put 
patches on release branches, especially if they fix P1 regressions. 
Sacrificing code quality for correctness is the right tradeoff for a 
release branch, if we have to pick, so if a patch is "only" going to 
pessimize code, it should be a very strong candidate for a release branch.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: memory benchmark of tuples branch

2006-10-27 Thread Mark Mitchell

Aldy Hernandez wrote:


I don't know if this merits merging into mainline, or if it's preferable to
keep plodding along and convert the rest of the tuples.  What do you guys
think?  Either way, I have my work cut out for me, though I believe the
hardest part is over (FLW).


I think merging as you go is fine, in principle.  Every little bit 
helps.  My only concern would be whether you'll disrupt other 
large-scale projects that might find global changes hard to handle.  I'd 
suggest posting your patch and seeing if anyone makes unhappy sounds. :-)


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: memory benchmark of tuples branch

2006-10-27 Thread Mark Mitchell

Aldy Hernandez wrote:

Does the tuples branch include the CALL_EXPR reworking from the LTO branch?


No.


Though, that is a similar global-touch-everything project, so hopefully 
whatever consensus develops from tuples will carry over.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: build failure, GMP not available

2006-10-30 Thread Mark Mitchell

Ian Lance Taylor wrote:


I'm not sure I entirely agree with Mark's reasoning.  It's true that
we've always required a big set of tools to do development with gcc.
And it's true that we require GNU make to be installed and working in
order to build gcc.  But this is the first time that we've ever
required a non-standard library to be installed before J. Random User
can build gcc.  And plenty of people do try to build gcc themselves,
as can be seen on gcc-help.


I don't believe there's a serious problem with the concept, as long as 
"./configure; make; make install" for GMP DTRT.  If you can do it for 
GCC, you can do it for a library it uses too.


I would strongly oppose downloading stuff during the build process. 
We're not in the apt-get business; we can leave that to the GNU/Linux 
distributions, the Cygwin distributors, etc.  If you want to build a KDE 
application, you have to first build/download the KDE libraries; why 
should GCC be different?



I think that if we stick with our current approach, we will have a lot
of bug reports and dissatisfied users when gcc 4.3 is released.


I'd argue that the minority of people who are building from source 
should not be our primary concern.  Obviously, all other things being 
equal, we should try to make that easy -- but if we can deliver a better 
compiler (as Kaveh has already shown we can with his patch series), then 
we should prioritize that.  For those that want to build from source, we 
should provide good documentation, and clear instructions as to where to 
find what they need, but we should assume they can follow complicated 
instructions -- since the process is already complicated.


I do think it's important that we make sure there's a readily buildable 
GMP available, including one that works on OS X, in time for 4.3.  We 
should provide a tarball for it from gcc.gnu.org, if there isn't a 
suitable GMP release by then.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Steven Bosscher wrote:

On 30 Oct 2006 22:56:59 -0800, Ian Lance Taylor <[EMAIL PROTECTED]> wrote:


I'm certainly not saying that we should pull out GMP and MPFR.  But I
am saying that we need to do much much better about making it easy for
people to build gcc.


Can't we just make it so that, if gmp/ and mpfr/ directories exist in
the toplevel, they are built along with GCC?  I don't mean actually
including gmp and mpfr in the gcc SVN repo, but just making it
possible to build them when someone unpacks gmp/mpfr tarballs in the
toplevel dir.


I wouldn't object to that.  It's a bit more build-system complexity, but 
if it makes it easier for people, then it's worth it.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Ian Lance Taylor wrote:

Mark Mitchell <[EMAIL PROTECTED]> writes:


I would strongly oppose downloading stuff during the build
process. We're not in the apt-get business; we can leave that to the
GNU/Linux distributions, the Cygwin distributors, etc.  If you want to
build a KDE application, you have to first build/download the KDE
libraries; why should GCC be different?


Because gcc is the first step in bringing up a new system. 


I don't find this as persuasive as I used to.  There aren't very many 
new host systems, and when there are, you get started with a cross compiler.



I disagree: the process of building gcc from a release (as opposed to
building the development version of gcc) really isn't complicated.
The only remotely non-standard thing that is required is GNU make.
Given that, all you need to do is "SRCDIR/configure; make".


OK, I agree: a native compiler, with no special options, isn't too hard. 
 I don't think typing that sequence twice would be too hard either, 
though. :-)



I'm certainly not saying that we should pull out GMP and MPFR.  But I
am saying that we need to do much much better about making it easy for
people to build gcc. 


I agree; I just don't think an external library is the problem.  For 
example, the unfortunate tendency of broken C++ compilers to manifest as 
autoconf errors about "run-time test after link-time failure" (that's 
not the right error) in libstdc++ builds confused me a bunch.  The fact 
that you can pass configure options that are silently ignored is a trap. 
 I'm sure we don't have good documentation for all of the configuration 
options we do have.  The way the libgcc Makefiles get constructed from 
shell scripts and the use of recursive make to invoke them confuses me, 
and the fact that "make" at the top level does things differently than 
"make" in the gcc/ directory also confuses me.  IIRC, --with-cpu= works 
on some systems, but not others.


In other words, the situation that you're on a GNU/Linux system, and 
have to type "configure; make; make install" several times for several 
packages to build GCC doesn't seem too bad to me.  What seems bad, and 
off-putting to newcomers interested in working on the source, is that as 
soon as you get past that point it all gets very tangled very quickly.


But, that's just me.  I wouldn't try to stop anybody from adding 
--with-gmp=http://www.gmp.org/gmp-7.3.tar.gz to the build system, even 
though I'd find it personally frightening. :-)


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Geoffrey Keating wrote:

OK, I agree: a native compiler, with no special options, isn't too 
hard.  I don't think typing that sequence twice would be too hard 
either, though. :-)


For something that's not too hard, it's sure causing me a lot of trouble...


But, the trouble you're having is not because you have to build an 
external library; it's because the external library you're building 
doesn't work on your system, or, at least, doesn't work with obvious 
default build options.  So, we should fix the external library, or, in 
the worst case, declare that external library beyond salvage.


In contrast, as I understand it, Ian's perturbed about the idea of 
having an external library at all.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: build failure, GMP not available

2006-10-31 Thread Mark Mitchell

Geoffrey Keating wrote:


do you think this is likely to be:
1. some problem in gmp or mpfr,
2. some problem in my build of gmp and/or mpfr, that wouldn't occur if 
I'd built it in some other (unspecified) way,
3. some problem in my existing system configuration, maybe I already 
have a gmp installed that is somehow conflicting, or

4. a well-known but not serious bug in GCC's Darwin port?


In contrast, as I understand it, Ian's perturbed about the idea of 
having an external library at all.


I don't think Ian would object to an external library that users could 
always find easily, that always built cleanly, that didn't have bugs...  
but such a library doesn't exist.


But, neither does such an internal library exist.  Whether the library 
is part of the GCC source tree or not doesn't affect its quality, or 
even its buildability.  The issue isn't where the source code for the 
library lives, but whether it's any good or not.


I can think of one big advantage of an internal library, though: instead 
of (in addition to?) documenting its build process, you can automate it. 
 One would rather hope that the build process isn't complicated, 
though, in which case this doesn't matter.  After all, we're trying to 
cater to the users for whom "configure; make; make install" works to 
build GCC; as long as the same pattern works for the external libraries, 
I think we're OK.


We might have to improve GMP/MPFR in order to make them work that 
smoothly, but that would have to happen if we imported them too.  So, I 
think you could argue that these libraries are too immature for us to 
depend on in GCC.  But, I don't think that's what Ian was arguing. 
(And, I don't think they're too immature; the problems we're seeing 
don't seem particularly worse than the problems I would expect in early 
Stage 1 with any other kind of big infrastructure change.)


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Handling of extern inline in c99 mode

2006-11-01 Thread Mark Mitchell

Ian Lance Taylor wrote:


Here is a review followed by a proposal.


How does this proposal handle the current problematic situation that 
-std=c99 is broken on Linux?


We could either revert Geoff's patch, or conditionalize it on Linux. 
I'd argue against the latter approach (which is what I believe Geoff 
posted), in that it would break one of the key advantages of GCC: that 
the compiler behaves the same way on multiple systems.


I think the proposal is as good as we can do, given the box that we're 
in (and let this be a caution to us with respect to implementing 
extensions before standardization, especially without use of GNU 
keywords/syntax), but I think we should preserve both cross-system 
compatibility and Linux compilation in the meanwhile.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Handling of extern inline in c99 mode

2006-11-01 Thread Mark Mitchell

Ian Lance Taylor wrote:

Mark Mitchell <[EMAIL PROTECTED]> writes:


Ian Lance Taylor wrote:


Here is a review followed by a proposal.

How does this proposal handle the current problematic situation that
-std=c99 is broken on Linux?


According to the proposal, we will restore the GNU handling for
"extern inline" even when using -std=c99, which will fix the problem
when using glibc.


Sorry, I didn't pick up on that aspect.  FWIW, I'm completely happy with 
the proposal -- or at least as happy as one can be about changing the 
meaning of existing programs which conformed to our documentation to 
mean something else...


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Handling of extern inline in c99 mode

2006-11-01 Thread Mark Mitchell

Joseph S. Myers wrote:

On Wed, 1 Nov 2006, Ian Lance Taylor wrote:


According to the proposal, we will restore the GNU handling for
"extern inline" even when using -std=c99, which will fix the problem
when using glibc.


We definitely need to revert that until the fixincludes changes are 
available.  (The precise nature of the fix - whether we disable the 
inlines, change them to be C99-aware or add an attribute to give yet 
another form of inline function in gnu99 mode that mimics gnu89 extern 
inline - is less important.)


If we restore the previous behavior, then we don't need the fixincludes 
as immediately -- but since you're right that people will no doubt want 
to use GCC 4.4 with RHEL3, SLES8, etc., I think you're correct that when 
we do switch, we should be armed with fixincludes for GLIBC.  It's 
certainly not enough just to change the current GLIBC sourcebase to use 
C99 semantics going forward, as we must expect that people will want to 
install the software on older versions of the OS.



Thus, I hereby propose starting the 48 hour reversion timer.


I concur.

Once we have the fixincludes fixes, I don't think we need to wait for 4.4 
to switch the default in gnu99 mode back to C99 inline semantics, as long 
as we have those fixes during Stage 1.


I think it would be better to have GLIBC changed before changing the 
behavior of the compiler.  It might even be better to have a released 
version of GLIBC with the changes.  fixincludes causes sufficient 
problems for people that ensuring that only users putting new compilers 
on old systems suffer might be a good goal.


On the other hand, I agree that if we have fixincludes in place, then 
4.3 would not be in any way broken on GNU/Linux, so I think this is a 
judgment call.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Why doesn't libgcc define _chkstk on MinGW?

2006-11-03 Thread Mark Mitchell
This may be a FAQ, but I was unable to find the answer on the web, so I 
hope people will forgive me asking it here.


I recently tried to use a MinGW GCC (built from FSF sources) to link 
with a .lib file that had been compiled with MSVC, and got link-time 
errors about _chkstk.  After some searching, I understand what this 
function is for (it's a stack-probing thing that MSVC generates when 
allocating big stack frames), and that GCC has an equivalent in libgcc 
(called _alloca).  There also seems to be widespread belief that in fact 
the libgcc routine is compatible with _chkstk.  And, there are lots of 
people that have reported link failures involving _chkstk.


So, my (perhaps naive) question is: why don't we define _chkstk as an 
alias for _alloca in MinGW, so that we can link with these MSVC libraries?


Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Why doesn't libgcc define _chkstk on MinGW?

2006-11-03 Thread Mark Mitchell

Ross Ridge wrote:


There are other MSC library functions that MinGW doesn't provide, so
other libraries may not link even with a _chkstk alias.


Got a list?

Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: compiling very large functions.

2006-11-05 Thread Mark Mitchell

Paolo Bonzini wrote:

Kenneth Zadeck wrote:

I think that it is time that we in the GCC community took some time to
address the problem of compiling very large functions in a somewhat
systematic manner.


While I agree with you, I think that there are so many things we are 
already trying to address, that this one can wait. 


It certainly can, but I see no reason why it should.  This is a class of 
issues that users run into, and if someone is motivated to work on this 
class, then that's great!


I like Kenny's idea of having a uniform set of metrics for size (e.g., 
number of basic blocks, number of variables, etc.) and a limited set of 
gating functions because it will allow us to explain what's going on to 
users, and allow users to tune them.  For example, if the metric for 
disabling a pass (by default) is "# basic blocks > 10", then we can have 
a -foptimize-bigger=2 switch to change that to "20".  If the gating 
condition was instead some arbitrary computation, that would be harder 
to implement, and harder to explain.


Certainly, setting the default thresholds reasonably will be 
non-trivial.  If we can agree on the basic mechanism, though, we could 
add thresholding on a pass-by-pass basis.
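
A sketch of the kind of gate being described (optimize_bigger and 
n_basic_blocks_in are hypothetical names, not existing GCC interfaces):

  /* Disable this pass, by default, for functions with more than
     10 * optimize_bigger basic blocks; -foptimize-bigger=2 doubles
     the threshold, exactly as in the example above.  */
  static bool
  gate_expensive_pass (struct function *fn)
  {
    int limit = 10 * optimize_bigger;
    return n_basic_blocks_in (fn) <= limit;
  }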


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mark Mitchell

Dale Johannesen wrote:


On Nov 7, 2006, at 11:47 AM, Douglas Gregor wrote:

I just read Nathan's discussion [1] on changing GCC's type system to 
use canonical type nodes, where the comparison between two types 
requires only a pointer comparison. Right now, we use "comptypes", 
which typically needs to do deep structural checks to determine if two 
types are equivalent, because we often clone _TYPE nodes.


One difficulty is that compatibility of types in C is not transitive, 
especially when you're compiling more than one translation unit at a time.
See the thread "IMA vs tree-ssa" in Feb-Mar 2004.  Geoff Keating and 
Joseph Myers give good examples.


For example:

  http://gcc.gnu.org/ml/gcc/2004-02/msg01462.html

However, I still doubt that this is what the C committee actually 
intended.


Transitivity of type equivalence is fundamental in every type system 
(real and theoretical) with which I'm familiar.  In C++, these examples 
are not valid because of the ODR, and, IIRC, in C you cannot produce them 
in a single translation unit -- which is the case that most C 
programmers think about.  So, I'm of the opinion that we should discount 
this issue.


I do think that canonical types (with equivalence classes, as Doug 
suggests) would be a big win, for all of the reasons he suggests.  We 
have known for a long time that comptypes is a bottleneck in the C++ 
front end, and while some of that could be solved in other ways, making 
it a near-free operation would be a huge benefit.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-07 Thread Mark Mitchell

Richard Kenner wrote:

Like when int and long have the same range on a platform?
The answer is they are different, even when they imply the same object
representation.

The notion of unified type nodes is closer to syntax than semantics.


I'm more than a little confused, then, as to what we are talking about
canonicalizing.  We already have only one pointer to each type, for example.


Yes, but to compare two types, you have to recur on them, because of 
typedefs.  In:


  typedef int I;

"int *" and "I *" are distinct types, and you have to drill down to "I" 
to figure that out.
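
A sketch of what the canonical-type scheme buys (TYPE_CANONICAL here is 
an assumed accessor for the equivalence-class representative, not 
something the compiler has today):

  /* With every type pointing at a canonical representative computed
     once, equivalence is a pointer comparison rather than a recursive
     walk through typedefs.  */
  static bool
  same_type_p (tree a, tree b)
  {
    return TYPE_CANONICAL (a) == TYPE_CANONICAL (b);
  }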


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: bootstrap on powerpc fails

2006-11-07 Thread Mark Mitchell

David Edelsohn wrote:

Kaveh R GHAZI writes:


Kaveh> I tried many years ago and Mark objected:
Kaveh> http://gcc.gnu.org/ml/gcc-patches/2000-10/msg00756.html

Kaveh> Perhaps we could take a second look at this decision?  The average system
Kaveh> has increased in speed many times since then.  (Although sometimes I feel
Kaveh> like bootstrapping time has increased at an even greater pace than chip
Kaveh> improvements over the years. :-)

I object.


Me too.

I'm a big proponent of testing, but I do think there should be some 
bang/buck tradeoff.  (For example, we have tests in the GCC testsuite 
that take several minutes to run -- but never fail.  I doubt these tests 
are actually buying us a factor of several hundred more quality quanta 
over the average test.)  Machine time is cheap, but human time is not, 
and I know that for me the testsuite-latency time is a factor in how 
many patches I can write, because I'm not great at keeping track of 
multiple patches at once.


--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Planned LTO driver work

2006-11-09 Thread Mark Mitchell
This message outlines a plan for modifying the GCC driver to support
compilation in LTO mode.  The goal is that:

  gcc --lto foo.c bar.o

will generate LTO information for foo.c, while compiling it, then invoke
the LTO front end for foo.o and bar.o, and then invoke the linker.

However, as a first step, the LTO front end will be invoked separately
for foo.o and bar.o -- meaning that the LTO front end will not actually
do any link-time optimization.  The reason for this first step is that
it's easier, and that it will allow us to run through the GCC testsuite
in LTO mode, eliminating failures in single-file mode, before we move on
to multi-file mode.

The key idea is to leverage the existing collect2 functionality for
reinvoking the compiler.  That's presently used for static
constructor/destructor handling and for instantiating templates in
-frepo mode.

So, the work plan is as follows:

1. Add a --lto option to collect2.  When collect2 sees this option,
treat all .o files as if they were .rpo files and recompile them.  We
will do this after all C++ template instantiation has been done, since
we want to optimize the .o files after the program can actually link.

2. Modify the driver so that --lto passes -flto to the C front-end and
--lto to collect2.

Any objections to this plan?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-09 Thread Mark Mitchell
Andrew Pinski wrote:
> On Thu, 2006-11-09 at 12:32 -0800, Mark Mitchell wrote:
>> 1. Add a --lto option to collect2.  When collect2 sees this option,
>> treat all .o files as if they were .rpo files and recompile them.  We
>> will do this after all C++ template instantiation has been done, since
>> we want to optimize the .o files after the program can actually link.
>>
>> 2. Modify the driver so that --lto passes -flto to the C front-end and
>> --lto to collect2.
>>
>> Any objections to this plan?
> 
> Maybe not an objection but a suggestion with respect of static
> libraries.  It might be useful to also to look into archives for files
> with LTO info in them and be able to read them inside the compiler also.

Definitely -- but not yet. :-)

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-09 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Mark Mitchell <[EMAIL PROTECTED]> writes:
> 
>> 1. Add a --lto option to collect2.  When collect2 sees this option,
>> treat all .o files as if they were .rpo files and recompile them.  We
>> will do this after all C++ template instantiation has been done, since
>> we want to optimize the .o files after the program can actually link.
>>
>> 2. Modify the driver so that --lto passes -flto to the C front-end and
>> --lto to collect2.
> 
> Sounds workable in general.  I note that in your example of
>   gcc --lto foo.c bar.o
> this presumably means that bar.o will be recompiled using the compiler
> options specified on that command line, rather than, say, the compiler
> options specified when bar.o was first compiled.  This is probably the
> correct way to handle -march= options.

I think so.  Of course, outright conflicting options (e.g., different
ABIs between the original and subsequent compilation) should be detected
and an error issued.

There has to be one set of options for LTO, so I don't see much benefit
in recording the original options and trying to reuse them.  We can't
generate code for two different CPUs, or optimize both for size and for
speed, for example.  (At least not without a lot more stuff that we
don't presently have.)

> I assume that in the long run, the gcc driver with --lto will invoke
> the LTO frontend rather than collect2.  And that the LTO frontend will
> then open all the .o files which it is passed.

Either that, or, at least, collect2 will invoke LTO once with all of the
.o files.  I'm not sure if it matters whether it's the driver or
collect2 that does the invocation.  What do you think?

In any case, for now, I'm just trying to move forward, and the collect2
route looks a bit easier.  If you're concerned about that, then I'll
take note to revisit and discuss before anything goes to mainline.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-09 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Mark Mitchell <[EMAIL PROTECTED]> writes:
> 
>>> I assume that in the long run, the gcc driver with --lto will invoke
>>> the LTO frontend rather than collect2.  And that the LTO frontend will
>>> then open all the .o files which it is passed.
>> Either that, or, at least, collect2 will invoke LTO once with all of the
>> .o files.  I'm not sure if it matters whether it's the driver or
>> collect2 that does the invocation.  What do you think?
> 
> I think in the long run the driver should invoke the LTO frontend
> directly.  

> That will save a process--if collect2 does the invocation, we have to
> run the driver twice.

Good point.  Probably not a huge deal in the context of optimizing the
whole program, but still, why be stupid?

Though, if we *are* doing the template-repository dance, we'll have to
do that for a while, declare victory, then invoke the LTO front end,
and, finally, the actual linker, which will be a bit complicated.  It
might be that we should move the invocation of the real linker back into
gcc.c, so that collect2's job just becomes generating the right pile of
object files via template instantiation and static
constructor/destructor generation?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Planned LTO driver work

2006-11-10 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Mark Mitchell <[EMAIL PROTECTED]> writes:
> 
>> Though, if we *are* doing the template-repository dance, we'll have to
>> do that for a while, declare victory, then invoke the LTO front end,
>> and, finally, the actual linker, which will be a bit complicated.  It
>> might be that we should move the invocation of the real linker back into
>> gcc.c, so that collect2's job just becomes generating the right pile of
>> object files via template instantiation and static
>> constructor/destructor generation?
> 
> For most targets we don't need to invoke collect2 at all anyhow,
> unless the user is using -frepo.  It's somewhat wasteful that we
> always run it.
> 
> Moving the invocation of the linker into the gcc driver makes sense to
> me, especially if it we can skip invoking collect2 entirely.  Note
> that on some targets, ones which do not use GNU ld, collect2 does
> provide the feature of demangling the ld error output.  That facility
> would have to be moved into the gcc driver as well.

I agree that this sounds like the best long-term plan.  I'll try to work
out whether it's actually a short-term win for me to do anything to
collect2 at all; if not, then I'll just put stuff straight into the
driver, since that's what we really want anyhow.

Thanks for the feedback!

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: How to create both -option-name-* and -option-name=* options?

2006-11-10 Thread Mark Mitchell
Dave Korn wrote:

>   It may seem a bit radical, but is there any reason not to modify the
> option-parsing machinery so that either '-' or '=' are treated interchangeably
> for /all/ options with joined arguments?  That is, whichever is specified in
> the .opt file, the parser accepts either?  

I like that idea.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Canonical type nodes, or, comptypes considered harmful

2006-11-10 Thread Mark Mitchell
Ian Lance Taylor wrote:

> This assumes, of course, that we can build an equivalence set for
> types.  I think that we need to make that work in the middle-end, and
> force the front-ends to conform.  As someone else mentioned, there are
> horrific cases in C like a[] being compatible with both a[5] and a[10]
> but a[5] and a[10] not being compatible with each other, and similarly
> f() is compatible with f(int) and f(float) but the latter two are not
> compatible with each other. 

I don't think these cases are serious problems; they're compatible
types, not equivalent types.  You don't need to check compatibility as
often as equivalence.  Certainly, in the big C++ test cases, Mike is
right that templates are the killer, and they're you're generally
testing equivalence.

So, if you separate same_type_p from compatible_type_p, and make
same_type_p fast, then that's still a big win.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: How to create both -option-name-* and -option-name=* options?

2006-11-10 Thread Mark Mitchell
Dave Korn wrote:
> On 10 November 2006 20:06, Mark Mitchell wrote:
> 
>> Dave Korn wrote:
>>
>>>   It may seem a bit radical, but is there any reason not to modify the
>>> option-parsing machinery so that either '-' or '=' are treated
>>> interchangeably for /all/ options with joined arguments?  That is,
>>> whichever is specified in the .opt file, the parser accepts either?
>> I like that idea.
> 
> 
>   Would it be a suitable solution to just provide a specialised wrapper around
> the two strncmp invocations in find_opt? 

FWIW, that seems reasonable to me, but I've not looked hard at the code
to be sure that's technically 100% correct.  It certainly seems like the
right idea.
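
Something along these lines, presumably (a sketch of the wrapper idea, 
not the actual patch):

  /* Compare like strncmp, but treat '-' and '=' as interchangeable,
     so that -ftabstop=4 and -ftabstop-4 both match the .opt entry
     regardless of which separator it declares.  */
  static int
  opt_strncmp (const char *s1, const char *s2, size_t n)
  {
    size_t i;
    for (i = 0; i < n && s1[i] && s2[i]; i++)
      {
        char c1 = (s1[i] == '=' ? '-' : s1[i]);
        char c2 = (s2[i] == '=' ? '-' : s2[i]);
        if (c1 != c2)
          return c1 - c2;
      }
    return i == n ? 0 : s1[i] - s2[i];
  }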

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: C++: Implement code transformation in parser or tree

2006-11-10 Thread Mark Mitchell
Sohail Somani wrote:

> struct __some_random_name
> {
> void operator()(int & t){t++;}
> };
> 
> for_each(b,e,__some_random_name());
> 
> Would this require a new tree node like LAMBDA_FUNCTION or should the
> parser do the translation? In the latter case, no new nodes should be
> necessary (I think).

Do you need new class types, or just an anonymous FUNCTION_DECL?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:
> GCC 4.1.1 for PowerPC generates a 162K executable for a
> minimal program  "int main() { return 0; }".  GCC 3.4.1
> generated a 7.2K executable.  Mark Mitchell mentioned the
> same problem for ARM and proposed a patch to remove the
> reference to malloc in atexit
> (http://sourceware.org/ml/newlib/2006/msg00181.html).
> 
> There are references to malloc in eh_alloc.c and
> unwind-dw2-fde.c.  It looks like these are being
> included even when there are no exception handlers.
> 
> Any suggestions on how to eliminate the references
> to these routines?

These aren't full implementation sketches, but, yes, we can do better.
Here are some ideas:

1. For the DWARF unwind stuff, have the linker work out how much space
is required and pre-allocate it.  The space required is a statically
knowable property of the executable, modulo dynamic linking, and on the
cases where we care most (bare-metal) we don't have to worry about
dynamic linking.  (If you can afford a dynamic linker, you can probably
afford malloc, and it's in a shared library.)

2. For the EH stuff, the maximum size of an exception is also statically
knowable, again assuming no dynamic linking.  The maximum exception
nesting depth (i.e., the number of simultaneously active exceptions) is
not, though.  So, here, what I think you want is a small, statically
allocated stack, at least as big as the biggest exception, out of which
you allocate exception objects.  Happily, we already have this, in the
form of "emergency_buffer" -- although it uses a compile-time estimate
of the biggest object, rather than having the linker fill it in, as
would be ideal.  But, in the no-malloc case, just fall back to the
emergency mode.

You could also declare malloc "weak" in that file, and just not call it
if the value is zero.  That way, if malloc is around, you can use it --
but if it's not, you use the emergency buffer.  Put the emergency_buffer
in a separate file (rather than in eh_alloc.cc), so that users can
provide their own implementation to control the size, overriding the one
in the library.
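
As a concrete (if simplistic) sketch of the weak-malloc idea -- assuming
a GNU toolchain with weak symbols; the names eh_allocate,
emergency_allocate, and EMERGENCY_SIZE are illustrative, not the actual
libstdc++ code:

  #include <cstddef>

  /* Weak reference: if nothing else links in malloc, this is null.  */
  extern "C" void *malloc (std::size_t) __attribute__ ((weak));

  #define EMERGENCY_SIZE 4096   /* placeholder; ideally linker-computed */
  static char emergency_buffer[EMERGENCY_SIZE];
  static std::size_t emergency_used;

  /* Trivial bump allocator over the static pool.  */
  static void *
  emergency_allocate (std::size_t n)
  {
    if (emergency_used + n > EMERGENCY_SIZE)
      return 0;
    void *p = emergency_buffer + emergency_used;
    emergency_used += n;
    return p;
  }

  void *
  eh_allocate (std::size_t n)
  {
    if (malloc)                 /* only call malloc if it was linked in */
      if (void *p = malloc (n))
        return p;
    return emergency_allocate (n);
  }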

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: C++: Implement code transformation in parser or tree

2006-11-12 Thread Mark Mitchell
Sohail Somani wrote:
> On Fri, 2006-11-10 at 19:46 -0800, Andrew Pinski wrote:
>> On Fri, 2006-11-10 at 15:23 -0800, Sohail Somani wrote:
>>>> Do you need new class types, or just an anonymous FUNCTION_DECL?
>>> Hi Mark, thanks for your reply.
>>>
>>> In general it would be a new class. If the lambda function looks like:
>>>
>>> void myfunc()
>>> {
>>>
>>>   int a;
>>>
>>>   ...<>(int i1,int i2) extern (a) {a=i1+i2}...
>>>
>>> }
>>>
>>> That would be a new class with an int reference (initialized to a) and
>>> operator()(int,int).
>>>
>>> Does that clarify?
>> Can lambda functions like this escape myfunc?  If not then using the
>> nested function mechanism that is already in GCC seems like a good
>> thing.  In fact I think of lambda functions as nested functions.
> 
> Yes they can in fact. So the object can outlive the scope.

As I understand the lambda proposal, the lambda function may not refer
to things that have gone out of scope.  It can use *references* that
have gone out of scope, but only if the referent is still in scope.
Since the way that something like:

  int i;
  void f() {
    int &a = i;
    ...<>() { return a; } ...
  }

should be implemented is with a lambda-local copy of "a" (rather than a
pointer to "a"), this is OK.

So, I do think that nested functions would be a natural implementation
in GCC, since they already provide access to a containing function's
stack frame.  You could also use the anonymous-class approach that you
suggested, but, as the lambda proposal mentions, using a nested function
may result in better code.  I suspect that what you want is a class (to
meet the requirements on ret_type, etc.) whose operator() is marked as a
nested function for GCC, in the event -- and *only* in the event -- that the
lambda function refers to variables with non-static storage duration.

Also, it appears to me that there is something missing from N1958: there
is no discussion about what happens when you apply typeid to a lambda
function, or otherwise use it in a context that requires type_info.
(Can you throw it as an exception, for example?)  Can you capture its
type with typeof()?  Can you declare a function with a parameter of type
pointer-to-lambda-function?  Is this discussed, or am I missing something?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:

> Preallocating space is a good thing, particularly if the size
> can be computed at compile time.  It's a little bit more awkward
> if it has to be calculated at link time.

It's a bit awkward, but it's also one of the clever tricks ARM's
proprietary linker uses, and we should use it too!

> Generating __gxx_personality_v0 is suppressed with the -fno-exceptions
> flag, but it would seem better if this symbol were only generated
> when catch/throw was used.  This happens in cxx_init_decl_processing(),
> which is called before it's known whether or not EH is really needed.

I believe that you need the personality routine if you will be unwinding
through a function, which is why -fno-exceptions is the test.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:
> Mark Mitchell wrote:
>>> Generating __gxx_personality_v0 is suppressed with the -fno-exceptions
>>> flag, but it would seem better if this symbol were only generated
>>> when catch/throw was used.  This happens in cxx_init_decl_processing(),
>>> which is called before it's known whether or not EH is really needed.
>>
>> I believe that you need the personality routine if you will be unwinding
>> through a function, which is why -fno-exceptions is the test.
> 
> You mean unwinding stack frames to handle a thrown exception?
> 
> That's true, but shouldn't this only be included when
> exceptions are used?

No, it must be included if exceptions are enabled, and there are any
objects which might require cleanups, which, in most C++ programs, is
equivalent to there are any objects with a destructor.

> One of the C++ precepts is that there
> is no overhead for features which are not used.

That objective does not hold for space, especially in the presence of
exceptions.

> Why should the personality routine be included in all C++ programs?

Because all non-trivial, exceptions-enabled programs, may need to do
stack unwinding.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Michael Eager wrote:
> Mark Mitchell wrote:
>> Michael Eager wrote:
>>> Why should the personality routine be included in all C++ programs?
>>
>> Because all non-trivial, exceptions-enabled programs, may need to do
>> stack unwinding.
> 
> It would seem that the place to require the personality
> routine would be in the routine which causes the stack
> unwinding, not in every C++ object file, whether needed
> or not.

But, the way the ABI works requires a reference from each unit which may
cause unwinding.  Even if you lose the personality routine, you will
still have the exception tables, which themselves are a significant
cost.  If you don't want to pay for exceptions, you really have to
compile with -fno-exceptions.  In that case, certainly, we should be
able to avoid pulling in the personality routine.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Daniel Jacobowitz wrote:

> If you try what Michael's been saying, you'll notice that trivial
> C++ files get the personality routine reference even if they don't
> have anything with a destructor which would need cleaning up.  We ought
> to be able to emit (somewhat smaller) unwind information which doesn't
> reference the personality routine if it's going to have nothing to do,
> shouldn't we?

Certainly, there are at least some such cases.  I guess a function whose
only callees (if any) are no-throw functions, and which itself does not
use "throw", does not need frame information.

But, for something like:

  extern void f();
  void g() {
    f(); f();
  }

we do need unwind information, even though "g" has nothing to do with
exceptions.

However, I think you and Michael are right: we don't need to reference
the personality routine here.  Unless the entire program contains
nothing that needs cleaning up, we'll still need it in the final
executable, but omitting the reference would make our object files
smaller, and unwinding a little faster, since we don't call personality
routines that aren't there.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Reducing the size of C++ executables - eliminating malloc

2006-11-12 Thread Mark Mitchell
Daniel Jacobowitz wrote:
> On Sun, Nov 12, 2006 at 05:11:39PM -0800, Mark Mitchell wrote:
>> Daniel Jacobowitz wrote:
>>
>>> If you try what Michael's been saying, you'll notice that trivial
>>> C++ files get the personality routine reference even if they don't
>>> have anything with a destructor which would need cleaning up.  We ought
>>> to be able to emit (somewhat smaller) unwind information which doesn't
>>> reference the personality routine if it's going to have nothing to do,
>>> shouldn't we?
>> Certainly, there are at least some such cases.  I guess a function whose
>> only callees (if any) are no-throw functions, and which itself does not
>> use "throw", does not need frame information.
> 
> You've talked right past me, since I wasn't saying that...

Well, for something like:

  int g() throw();
  int f(int a) {
    return g() + a;
  }

I don't think you ever have to unwind through "f".  Exceptions are not
allowed to leave "g", and nothing in "f" can throw, so as far as the EH
systems is concerned, "f" doesn't even exist.  I think we could just
drop its frame info on the floor.  This might be a relatively
significant size improvement.

>> Unless the entire program doesn't
>> contain anything that needs cleaning up, we'll still need it in the
>> final executable,
> 
> Right.  So if you use local variables with destructors, even though you
> don't use exceptions, you'll get the personality routine.  The linker
> could straighten that out if we taught it to, though.

Correct.  It could notice that, globally, no throw-exception routines
(i.e., __cxa_throw, and equivalents for other languages) were included
and then discard the personality routine -- and, maybe, all of
.eh_frame.  You'd still have the cleanup code in function bodies,
though, so if you really want minimum size, you still have to compile
with -fno-exceptions.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1.2 Status Report (2006-11-12)

2006-11-12 Thread Mark Mitchell
I realize that it's been a long time since a GCC 4.1.x release.

I'd like to put together a GCC 4.1.2 release in the relatively near
future.  (Then, there may or may not be a GCC 4.1.3 release at the same
time as 4.2.0, depending on where it seems like we are at that point.)

Since, in theory, the only changes on the 4.1 release branch were to fix
regressions, GCC 4.1.2 should be ready for release today, under the
primary condition that it be no worse than 4.1.1.  But, I recognize that
while theory and practice are, in theory, the same, they are, in
practice, different.

I also see that there are some 30 P1s open against 4.1.2.  Many of these
also apply to 4.2.0, which means that fixing them helps both releases.
So, I'd appreciate people working to knock down those PRs, in particular.

I would also like to know which PRs are regressions from 4.1.0 or 4.1.1.
 Please update the list here:

  http://gcc.gnu.org/wiki/GCC_4.1.2_Status

as you encounter such PRs.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: odd severities in bugzilla

2006-11-19 Thread Mark Mitchell
[EMAIL PROTECTED] wrote:

> So, are we using "P1" instead to mark release-blocking bugs?  Should
> we fix the severities of existing bugs?

I am using priorities to indicate how important it is to fix a bug
before the next release.  This is consistent with the meanings of the
terms "priority" and "severity".  In particular, the "severity"
indicates how severe the problem is, if you are affected by the bug.
The "priority" indicates how important it is to fix it.  In various
commercial environments I've worked in, customers set "severity" (e.g.,
help, this bug is really getting in my way!) and developers/support set
"priority (this bug is affecting only one customer, so it's medium
priority).

So, that's a long answer, but basically: "yes, we're using P1 to mark
release-critical bugs".  Also, in the paradigm described above,
"blocker" should mean "blocks the user from making progress, there is no
workaround", not "blocks the release".  (In my experience, severities
are normally things like "mild", "critical", "emergency".)

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Some clarifications regarding GIMPLE and LTO

2006-11-26 Thread Mark Mitchell
[EMAIL PROTECTED] wrote:

> Does the LTO branch try to achieve that the complete information for a 
> "Program"
> can be sufficiently stored (in a file)? If this is already there, could anyone
> provide some pointers to the API?

Yes, on the LTO branch, we are working to store the entire translation
unit in a form that can then be read back into the compiler.  The global
symbol table is stored using DWARF3, so you can read it back with a
standard DWARF reader.  See lto/lto.c on the LTO branch for the code
that does this.

At present, there are a few things that are not yet written out to the
DWARF information (like GCC machine modes), but most things (types,
functions, variables) are present.
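
For illustration only, here is a minimal sketch of reading that
information back with a standard DWARF reader; elfutils libdw is an
assumption here, and this is not the LTO branch's own reader (see
lto/lto.c for that):

  #include <elfutils/libdw.h>
  #include <fcntl.h>
  #include <stdio.h>

  int
  main (int argc, char **argv)
  {
    int fd = open (argv[1], O_RDONLY);
    Dwarf *dbg = dwarf_begin (fd, DWARF_C_READ);
    Dwarf_Off off = 0, next;
    size_t cuhl;

    while (dwarf_nextcu (dbg, off, &next, &cuhl, NULL, NULL, NULL) == 0)
      {
        Dwarf_Die cu, die;
        /* The CU DIE begins immediately after the CU header.  */
        if (dwarf_offdie (dbg, off + cuhl, &cu) != NULL
            && dwarf_child (&cu, &die) == 0)
          do
            /* Top-level DIEs: the global types, functions, variables.  */
            printf ("tag %#x  name %s\n", dwarf_tag (&die),
                    dwarf_diename (&die) ? dwarf_diename (&die) : "(anon)");
          while (dwarf_siblingof (&die, &die) == 0);
        off = next;
      }

    dwarf_end (dbg);
    return 0;
  }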

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Canonical types (1/3)

2006-11-29 Thread Mark Mitchell
Doug Gregor wrote:
> This patch introduces canonical types into GCC, which allow us to
> compare two types very efficiently and results in an overall
> compile-time performance improvement. 

Thanks for working on this.  It's the sort of project I used to have
time to do. :-)

I will review these patches in the next couple of days.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [C/C++] same warning/error, different text

2006-12-03 Thread Mark Mitchell
Manuel López-Ibáñez wrote:
> The message for the following error:
> 
> enum e {  E3 = 1 / 0 };
> 
> is in C: error: enumerator value for 'E3' not integer constant
> and in C++: error: enumerator value for 'E3' is not an integer constant
> 
> Is there someone against fixing this? What would be the preferred message?

I slightly prefer the more-grammatical C++ version, but, if there's any
controversy at all, I'm perfectly happy with the C version too, and it's
certainly a good thing to use the same message in both languages.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Canonical types (1/3)

2006-12-04 Thread Mark Mitchell
Doug Gregor wrote:
> This patch introduces canonical types into GCC, which allow us to
> compare two types very efficiently and results in an overall
> compile-time performance improvement. I have been seeing 3-5%
> improvements in compile time on the G++ and libstdc++ test suites,
> 5-10% on template-heavy (but realistic) code in Boost, and up to 85%
> improvement for extremely template-heavy metaprogramming.

The new macros in tree.h (TYPE_CANONICAL and TYPE_STRUCTURAL_EQUALITY)
need documentation, at least in tree.h, and, ideally, in the ill-named
c-tree.texi as well.

I want to make sure I understand this idiom, in
build_pointer_type_for_mode, and elsewhere:

+  if (TYPE_CANONICAL (to_type) != to_type)
+    TYPE_CANONICAL (t) =
+      build_pointer_type_for_mode (TYPE_CANONICAL (to_type),
+                                   mode, can_alias_all);

If there was already a pointer type to the canonical type of to_type,
then the call build_pointer_type_for_mode will return it.  If there
wasn't, then we will build a new canonical type for that pointer type.
We can't use the pointer type we're building now (i.e., "T") as the
canonical pointer type because we have would have no way to find it in
future, when creating another pointer type for the canonical version of
to_type.

So, we are actually creating more type nodes in this case.  That seems
unfortunate, though I fully understand we're intentionally trading space
for speed just by adding the new type fields.  A more dramatic version
of your change would be to put the new pointer type on the
TYPE_POINTER_TO list for the canonical to_type, make it the canonical
pointer type, and then have the build_pointer_type_for_mode always go to
the canonical to_type to search TYPE_POINTER_TO, considering types to be
an exact match only if they had more fields in common (like, TYPE_NAME
and TYPE_CONTEXT, say).  Anyhow, your approach is fine, at least for now.

+  TYPE_STRUCTURAL_EQUALITY (t) = TYPE_STRUCTURAL_EQUALITY (to_type);

Does it ever make sense to have both TYPE_CANONICAL and
TYPE_STRUCTURAL_EQUALITY set?  If we have to do the structural equality
test, then it seems to me that the canonical type isn't useful, and we
might as well not construct it.

> +  type = build_variant_type_copy (orig_type);
>    TYPE_ALIGN (type) = boundary;
> +  TYPE_CANONICAL (type) = TYPE_CANONICAL (orig_type);

Eek.  So, despite having different alignments, we consider these types
"the same"?  If that's what we already do, then it's OK to preserve that
behavior, but it sure seems worrisome.

I'm going to review patch 2/3 here too, since I don't think we should
add the fields in patch 1 until we have something that can actually take
advantage of them; otherwise, we'd just be wasting (more) memory.

+  else if (strict == COMPARE_STRUCTURAL)
+return structural_comptypes (t1, t2, COMPARE_STRICT);

Why do we ever want the explicit COMPARE_STRUCTURAL?

+static hashval_t
+cplus_array_hash (const void* k)
+{
+  hashval_t hash;
+  tree t = (tree) k;
+
+  hash = (htab_hash_pointer (TREE_TYPE (t))
+          ^ htab_hash_pointer (TYPE_DOMAIN (t)));
+
+  return hash;
+}

Since this hash function is dependent on pointer values, we'll get
different results on different hosts.  I was worried that this would lead to
differences in generated debug information, perhaps due to different
TYPE_UIDs -- but it looks like there is only ever one matching entry in
the table, so we never have to worry about the compiler "randomly"
choosing between two equally good choices?

Have you tested with flag_check_canonical_types on, and verified that
you get no warnings?  (I agree that a --param for that would be better;
if a user ever has to turn this on, we broke the compiler.)

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Announce: MPFR 2.2.1 is released

2006-12-05 Thread Mark Mitchell
Richard Guenther wrote:

>> As far as I know both versions are released.  What I said was
>> "undistributed," by which I mean: the required version of MPFR is not
>> on my relatively up to date Fedora system.
> 
> It also missed the openSUSE 10.2 schedule (which has the old version
> with all patches).  So I don't like rejecting the old version at any point.

I think the issue of whether to reject the old version of the library is
at least partially orthogonal to the import issue.  Even if we import
the sources, we'll still want to support using an external MPFR, so that
people who do have it on their system can leverage that.  So, we still
have to decide whether to allow older versions.

On that point, I agree with previous posters who have suggested we
should be liberal; we can issue a warning saying that we recommend
2.2.1, but not require it.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Relocated compiler should not look in $prefix.

2006-12-12 Thread Mark Mitchell
Andrew Pinski wrote:
> On Fri, 2006-10-13 at 12:51 -0400, Carlos O'Donell wrote:
>> A relocated compiler should not look in $prefix.
>> Comments?
>> OK for Stage1?
> 
> I do have another issue with these set of patches which I did not notice
> until today.
> I can no longer do inside a just built GCC do:
> ./cc1 t.c
> or
> ./xgcc -B. t.c
> If I used the same prefix of an already installed GCC.
> This makes debugging driver issues hard without installing the driver again.

What are the contents of t.c?  What if you set GCC_EXEC_PREFIX?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Relocated compiler should not look in $prefix.

2006-12-12 Thread Mark Mitchell
Andrew Pinski wrote:
>> What are the contents of t.c?  What if you set GCC_EXEC_PREFIX?
> 
> t.c:
> 
> #include <stdio.h>
> int main(void)
> {
>   printf("Hello World\n");
>   return 0;
> }
> 
> --
> No I did not set GCC_EXEC_PREFIX as I did not know I have to set that now.
> Seems like a change like this should be mentioned on
> http://gcc.gnu.org/gcc-4.3/changes.html
> Because some people liked the old behavior when debugging the driver.

This is not a user-visible change; it does not affect installed compilers.
 It only affects GCC developers who are working with the uninstalled driver.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [PATCH] Relocated compiler should not look in $prefix.

2006-12-12 Thread Mark Mitchell
Andrew Pinski wrote:

> But other non user-visible changes are mentioned on changes.html already.
> Forward prop in 4.3.
> Incompatible changes to the build system in 4.2 which seems very related to 
> stuff like
> this.

If you want to make a patch, and Gerald approves it, it's fine by me.
But, fwprop is described as a new feature (faster compiler, better
code), and the build system affects people building the compiler.  The
change we're talking about seems to affect only people debugging the
compiler.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Paolo Bonzini appointed build system maintainer

2006-12-18 Thread Mark Mitchell
Paolo --

The GCC SC has appointed you as a co-maintainer of the build machinery.

Please add an appropriate MAINTAINERS entry.

Congratulations, and thank you for accepting this position!

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-01-01 Thread Mark Mitchell
Daniel Berlin wrote:

>> Richard Guenther added -fwrapv to the December 30 run of SPEC at
>> <http://www.suse.de/~gcctest/SPEC/CFP/sb-vangelis-head-64/recent.html>
>> and
>> <http://www.suse.de/~gcctest/SPEC/CINT/sb-vangelis-head-64/recent.html>.
>> Daniel Berlin and Geert Bosch disagreed about how to interpret
>> these results; see <http://gcc.gnu.org/ml/gcc/2007-01/msg00034.html>.

Thank you for pointing that out.  I apologize for having missed it
previously.

As others have noted, one disturbing aspect of that data is that it
shows that there is sometimes an inverse correlation between the base
and peak flags.  On the FP benchmarks, the results are mostly negative
for both base and peak (with 168.wupwise the notable exception); on the
integer benchmarks it's more mixed.  It would be nice to have data for
some other architectures: anyone have data for ARM/Itanium/MIPS/PowerPC?

So, my feeling is similar to what Daniel expresses below, and what I
think Ian has also said: let's disable the assumption about signed
overflow not wrapping for VRP, but leave it in place for loop analysis.

Especially given:

>> We don't have an exhaustive survey, but of the few samples I've
>> sent in most of code is in explicit overflow tests.  However, this
>> could be an artifact of the way I searched for wrapv-dependence
>> (basically, I grep for "overflow" in the source code).  The
>> remaining code depended on -INT_MIN evaluating to INT_MIN.  The
>> troublesome case that started this thread was an explicit overflow
>> test that also acted as a loop bound (which is partly what caused
>> the problem).

it sounds like that would eliminate most of the problem.  Certainly,
making -INT_MIN evaluate to INT_MIN, when expressed like that, is an
easy thing to do; that's just a guarantee about constant folding.
There's no reason for us not to document that signed arithmetic wraps
when folding constants, since we're going to fold the constant to
*something*, and we may as well pick that answer.
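
As a concrete sketch of the folding guarantee being discussed (the C
standard leaves this negation undefined; the point is that GCC can
simply document the two's-complement answer):

  #include <limits.h>

  int
  f (void)
  {
    /* The folder must produce *some* constant; under the guarantee
       discussed above it folds as two's-complement, i.e. to INT_MIN.  */
    return -INT_MIN;
  }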

I don't even necessarily think we need to change our user documentation.
 We can just choose to make the compiler not make this assumption for
VRP, and to implement folding as two's-complement arithmetic, and go on
with life.  In practice, we probably won't "miscompile" many
non-conforming programs, and we probably won't miss too many useful
optimization opportunities.

Perhaps Richard G. would be so kind as to turn this off in VRP, and
rerun SPEC with that change?

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-01-01 Thread Mark Mitchell
Richard Kenner wrote:

>> Where there are reasonable positions on both sides, nobody ever
>> accurately predicts what the majority of a hugely diverse population
>> of language users is going to want, and almost everyone believes
>> they are in that majority.
> 
> I agree.  That's why I support a middle-of-the-road position where we make
> very few "guarantees", but do the best we can anyway to avoid gratuitously
> (meaning without being sure we're gaining a lot of optimization) breaking
> legacy code.

Yes, I think that you, Danny, Ian, and I are all agreed on that point,
and, I think, that disabling the assumption about signed overflow not
occurring during VRP (perhaps leaving that available under control of a
command-line option, for those users who think it will help their code)
is the right thing to try.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: changing "configure" to default to "gcc -g -O2 -fwrapv ..."

2007-01-02 Thread Mark Mitchell
Richard Guenther wrote:

>> Perhaps Richard G. would be so kind as to turn this off in VRP, and
>> rerun SPEC with that change?
> 
> I can do this.

Thank you very much!

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Top level libgcc checked in

2007-01-03 Thread Mark Mitchell
Ben Elliston wrote:

> So I take it that at this stage we've not commenced the process of
> having libgcc's configury run autoconf tests on the target compiler?
> (Rather than having to hardwire most target details into the t-* files?)
> Any objections to starting down this path?

We should also be very careful not to introduce differences between
native and cross compilers.  So, we should have no run-time tests, no
tests that look at /proc, headers in /usr/include, etc.  I consider it
important that a Canadian-native compiler (i.e., one where $host =
$target, but $build != $host) and a native compiler (i.e., one where
$host = $build = $target) behave identically, given the same
configuration options.

If we decide to go with autoconf, and we are building a Canadian cross,
we should of course test the $build->$target compiler (which is the one
we'll be using to build the libraries), rather than the $host->$target
compiler (which may be the one in the tree).

Given the constraints, I'm not sure that autoconf is a huge win.  I'm
not violently opposed, but I'm not sure there are big benefits.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1.2 Status Report [2007-01-04]

2007-01-04 Thread Mark Mitchell
I've decided to focus next on GCC 4.1.2.  After GCC 4.1.2, I will focus
on GCC 4.2.0.  At this point, I expect GCC 4.3 to remain in Stage 1 for
some time, while we work on GCC 4.1.2 and GCC 4.2.0.  So, I've been
looking at the GCC 4.1.2 Bugzilla entries.

(I'm sure one of your New Year's resolutions was "I shall fix more
regressions in 2007."  If not, it's not too late!)

Bugzilla has entries for 156 P1-P3 regressions.  Of course, some of the
P3s will in fact end up being P4 or P5, so that is not an entirely
accurate count.  There are 18 P1 regressions.  However, I am only aware
of two regressions relative to 4.1.0 or 4.1.1, as recorded here:

http://gcc.gnu.org/wiki/GCC_4.1.2_Status#preview

If you know of more, please let me know, and please update the Wiki page.

I'm not going to let bugs which existed in 4.1.[01] block 4.1.2 -- but I
am going to take a hard line on P1 regressions relative to the previous
4.1.x releases, and I'm going to grumble a lot about P2s.

So, I think we're relatively close to being able to do a 4.1.2 release.
 Let's tentatively plan on a first release candidate on or about January
19th, with a final release approximately a week later.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


libgcc-math

2007-01-11 Thread Mark Mitchell
Richard --

The GCC SC has been discussing libgcc-math.  RMS says that he will need
to consider the issue, and that he has other things he wants to do
first.  So, I'm afraid that we can't give you a good timeline for a
resolution of the question, but I can say that some progress is being made.

FYI,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: bug management: WAITING bugs that have timed out

2007-01-23 Thread Mark Mitchell
Mike Stump wrote:
> On Jan 11, 2007, at 10:47 PM, Joe Buck wrote:
>> The description of WORKSFORME sounds closest: we don't know how to
>> reproduce the bug.  Should that be used?
> 
> No, not generally. 

Of the states we have, WORKSFORME seems best to me, and I agree with Joe
that there's benefit in getting these closed out.  On the other hand, if
someone wants to create an UNREPRODUCIBLE state (which is a "terminal"
state, like WONTFIX), that's OK with me too.  But, let's not dither too
much over what state to use.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Signed int overflow behaviour in the security context

2007-01-23 Thread Mark Mitchell
Ian Lance Taylor wrote:
> Andreas Bogk <[EMAIL PROTECTED]> writes:

> I think a better way to describe your argument is that the compiler
> can remove a redundant test which would otherwise be part of a defense
> in depth.  That is true.  The thing is, most people want the compiler
> to remove redundant comparisons; most people don't want their code to
> have defense in depth, they want it to have just one layer of defense,
> because that is what will run fastest.

Exactly.  I think that Ian's approach (giving us a warning to help track
down problems in real-world code, together with an option to disable the
optimizations) is correct.  Even if the LIA-1 behavior would make GCC
magically better as a compiler for applications that have
not-quite-right security checks, it wouldn't make it better as a
compiler for lots of other applications.

I would rather hope that secure applications would define a set of
library calls for some of these frequently-occurring checks (whether, in
GLIBC, or libiberty, or some new library) so that application
programmers can use them.
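
A minimal sketch of the sort of helper meant here -- the name
checked_add is hypothetical, not from any existing library:

  #include <limits.h>

  /* Return nonzero and store a + b in *result if the sum is
     representable; return zero if it would overflow.  The test itself
     performs no signed overflow.  */
  static int
  checked_add (int a, int b, int *result)
  {
    if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b))
      return 0;
    *result = a + b;
    return 1;
  }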

(I've also been known to claim that writing secure applications in C may
provide performance advantages, but makes the security part harder.  If
someone handed me a contract to write a secure application, with a
penalty clause for security bugs, I'd sure be looking for a language
that raised exceptions on overflow, bounds-checking failures, etc.)

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [RFC] Our release cycles are getting longer

2007-01-24 Thread Mark Mitchell
Diego Novillo wrote:

> So, I was doing some archeology on past releases and we seem to be
> getting into longer release cycles.  With 4.2 we have already crossed
> the 1 year barrier.

I think there are several factors here.

First, I haven't had as much time to put in as RM lately as in the
past, so I haven't been nagging people as much.  I also haven't had as
much time to put in as a developer.  For some previous releases, I was
the bug-fixer
of last resort, fixing many of the critical bugs -- or at least
proposing broken patches that goaded others into fixing things. :-)
Holidays are over, CodeSourcery's annual meeting is behind us, and I'm
almost caught up on the mailing lists.  So, I expect do more goading --
but probably not much more coding.

Second, I think there are conflicting desires.  In reading this thread,
some people want/suggest more frequent releases.  But, I've also had a
number of people tell me that the 4.2 release cycle was too quick in its
early stages, and that we didn't allow enough time to get features in --
even though doing so would likely have left us even more bugs to fix.
RMS has recently suggested that any wrong code bug (whether a regression
or not) that applies to relatively common code is a severe embarrassment
in a release.  Some people want to put more features onto release
branches, while others think we're too lax about changes.  If there's
one thing I've learned from years of being RM, it's that I can't please
everyone. :-) In any case, I've intentionally been letting 4.3 stage 1
drag out, because it looks like there's a lot of important functionality
coming in, and I didn't want to leave those bits stranded until 4.4.

Some folks have suggested that we ought to try to line up FSF releases
to help the Linux distributors.  Certainly, in practice, the
distributors are naturally most focused at the points that make sense in
their own release cycles.  However, I think it would be odd for the FSF
to try to specifically align with (say) Red Hat and Novell releases
(which may not themselves always be well-synchronized) at the possible
expense of (say) MontaVista and Wind River.  And, there are certainly a
large number of non-Linux users -- even on free operating systems.

In practice, I think that the creation of release branches has been
reasonably useful.  It may be true that some of the big server Linux
distributors aren't planning on picking up 4.2, but I know of other
major users who will be using it.  Even without much TLC, the current
4.2 release branch represents a reasonably stable point, with some key
improvements over 4.1 (e.g., OpenMP).  Similarly, without much TLC, the
current 4.1 branch is pretty solid, and substantially better than 4.1.1.
 So, the existence of the branch, and the regression-only discipline
thereon, has produced a useful point for consumers, even though there's
not yet a 4.1.2.

I don't think that some of the ideas (like saying that you have to fix N
bugs for every patch you contribute) are very practical.  What we're
seeing is telling us something about "the market" for GCC; there's more
pressure for features, optimization, and ports than bug fixes.  If there
were enough people unhappy about bugs, there would be more people
contributing bug fixes.

It may be that not too many people pick up 4.2.0.  But, if 4.3 isn't
looking very stable, there will be a point when people decide that 4.2.0
is looking very attractive.  The worst outcome of trying to do a 4.2.0
release is that we'll fix some things that are also bugs in 4.3; most
4.2 bugs are also in 4.3.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Signed int overflow behaviour in the security context

2007-01-25 Thread Mark Mitchell
Robert Dewar wrote:

>> So basically you're saying gcc developers should compensate for other
>> people's sloppy engineering?  ;-)
> 
> Yes I would say! where possible this seems an excellent goal.

I agree: when it's possible to support non-standard legacy semantics
that do not conflict with the standard, without substantial negative
impact, then that's a good thing to do.

In this specific case, we know there is a significant performance
impact, and we know that performance is very important to both the
existing and potential GCC user base, so I think that making the
compiler more aggressive at -O2 is sensible.

And, Ian is working on -fno-strict-overflow, so that users have a
choice, which is also very sensible.  Perhaps the distribution vendors
will then add value by selectively compiling packages that need it with
-fno-strict-overflow so that security-critical packages are that much
less likely to do bad things, while making the rest of the system go
faster by not using the option.

I think we've selected a very reasonable path here.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


LTO Status

2007-01-26 Thread Mark Mitchell
Several people have asked for an update on the status of the LTO
project, so Kenny and I have put together a summary of what we believe
the status and remaining issues to be.

The short story is that, unfortunately, we have not had as much time as
we would have liked to make progress on LTO.  Kenny has been working on
the dataflow project, and I have had a lot of other things on my plate
as well.  So -- as always! -- we would be delighted to have other people
helping out.  (One kind person asked me if contributing to LTO would
hurt CodeSourcery by potentially depriving us of contracts.  I doubt
that very much, and I certainly don't think that should stop anybody
from contributing to LTO!)

I still think that LTO is a very important project, and that the design
outline we have is sound.  I think that a relatively small amount of
work (measured in terms of person-months) is required to get us to being
able to handle most of C.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713
Introduction

  This document summarizes work remaining in the LTO front end to
  achieve the initial goal of correct operation on single-file C
  programs.

Changes to the DWARF Reader Required

  The known limitations of the DWARF reader are as follows:

  * Modify 'LTO_READ_TYPE' to byte-swap data, as required for cross
    compiling for targets with different endianness (see the sketch
    after this list).

  * The DWARF reader skips around in the DWARF tree to read types.
    It's possible that this will not work in situations with complex
    nesting, or that a fixup will be required later when the DIE is
    encountered again, during the normal walk.

  * Function-scope static variables are not handled.

  * Once more information about types is saved (see below), references
    to layout_type, etc., should be removed or modified, so that the
    saved data is not overwritten.

  * Unprototyped functions are not handled.
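
  A minimal sketch of the byte-swapping the first item calls for, on the
  assumption that host and target differ in endianness; the helper name
  is illustrative, not the actual LTO_READ_TYPE change:

    #include <stdint.h>

    /* Swap the bytes of a 32-bit value read from the object file.  */
    static uint32_t
    lto_swap32 (uint32_t v)
    {
      return ((v >> 24) & 0xff)
             | ((v >> 8) & 0xff00)
             | ((v << 8) & 0xff0000)
             | ((v << 24) & 0xff000000u);
    }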

DWARF Extensions Required
  
  The following sections summarize augmentations we must make to the
  DWARF generated by GCC.

  GNU Attributes

Semantic GNU attributes (e.g., dllimport) are not recorded in
DWARF.  Therefore, this information is lost.

  Type Information

At present, the LTO front end recomputes some type attributes (like
machine modes).  However, there is no guarantee that the type
attributes that are computed will match those in the original
program.  Because there is presently no method for encoding this
information in DWARF, we need to take advantage of DWARF's
extensibility features to add these representations.

The type attributes which require DWARF extensions are:

* Type alignment

* Machine mode

  Declaration Flags

There are some flags on 'FUNCTION_DECL' and 'VAR_DECL' nodes that
may need to be preserved.  Some may be implied by GNU attributes,
but others are not.  Here are the flags that should be preserved.

Functions and Variables: 

* 'DECL_SECTION_NAME'

* 'DECL_VISIBILITY'

* 'DECL_ONE_ONLY'

* 'DECL_COMDAT'

* 'DECL_WEAK'

* 'DECL_DLLIMPORT_P'

* 'DECL_ASSEMBLER_NAME'

Functions:

* 'DECL_UNINLINABLE'

* 'DECL_IS_MALLOC'

* 'DECL_IS_RETURNS_TWICE'

* 'DECL_IS_PURE'

* 'DECL_IS_NOVOPS'

* 'DECL_STATIC_CONSTRUCTOR'

* 'DECL_STATIC_DESTRUCTOR'

* 'DECL_NO_INSTRUMENT_FUNCTION_ENTRY_EXIT'

* 'DECL_NO_LIMIT_STACK'

* 'DECL_NO_STATIC_CHAIN'

* 'DECL_INLINE'

Variables:

* 'DECL_HARD_REGISTER'

* 'DECL_HAS_INIT_PRIORITY'

* 'DECL_INIT_PRIORITY'

* 'DECL_TLS_MODEL'

* 'DECL_THREAD_LOCAL_P'

* 'DECL_IN_TEXT_SECTION'

* 'DECL_COMMON'

Gimple Reader and Writer

  Current Status

All gimple forms except for those related to gomp are now handled.
It is believed that this code is mostly correct.

The lto reader and the writer logically work after the ipa_cp pass.
At this point, the program has been fully gimplified and is in fact
in "low gimple".  The reader is currently able to read in and
recreate gimple, and the control flow graph.  Much of the EH handling
code has been written but not tested.

The reader and writer can be compiled in a self checking mode so
that the writer writes a text log of what it is serializing into
the object file.  The lto reader uses the same logging library to
produce a log of what it is reading.  During reading, the process
aborts if the logs get out of sync.

The current state of the code is that much of the code to serialize
the cfun has not been written or tested.  Without this part of the
code, nothing can be executed downstream of the reader.

Re: G++ OpenMP implementation uses TREE_COMPLEXITY?!?!

2007-01-28 Thread Mark Mitchell
Steven Bosscher wrote:

> Can you explain what went through your mind when you picked the 
> tree_exp.complexity field for something implemented new...  :-(

Please don't take this tone.  I can't tell if you have your tongue
planted in your cheek, but if you do, it's not obvious.

It's entirely reasonable to look for a way to get rid of this use of
TREE_COMPLEXITY, but things like:

> You know (or so I assume) this was a very Very VERY BAD thing to do

are not helpful.  Of course, if RTH had thought it was a bad thing, he
wouldn't have done it.

Please just state the problem and ask for help (as you did) without
attacking anyone.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: remarks about g++ 4.3 and some comparison to msvc & icc on ia32

2007-01-28 Thread Mark Mitchell
tbp wrote:

> Secundo, while i very much appreciate the brand new string ops, it
> seems that on ia32 some array initialization cases where left out,
> hence i still see oodles of 'movl $0x0' when generating code for k8.
> Also those zeroings get coalesced at the top of functions on ia32, and
> i have a function where there's 3 pages of those right after prologue.
> See the attached 'grep 'movl   $0x0' dump.

It looks like Jan and Richard have answered some of your questions about
inlining (or are in the process of doing so), but I haven't seen a
response to this point.

Certainly, if we're generating zillions of zero-initializations to
contiguous memory, rather than using memset, or an inline loop, that
seems unfortunate.  Would you please file a bug report?

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: remarks about g++ 4.3 and some comparison to msvc & icc on ia32

2007-01-28 Thread Mark Mitchell
Jan Hubicka wrote:

> I thought the comment was more referring to the fact that we will happily
> generate
> movl $0x0,  place1
> movl $0x0,  place2
> ...
> movl $0x0,  placeMillion
> 
> rather than shorter
> xor %eax, %eax
> movl %eax, ...

Yes, that would be an improvement, but, as you say, at some point we
want to call memset.

> With the repeated mov issue unfortunately I don't know what would be the
> best place: we obviously don't want to constrain register allocation too
> much and after regalloc I guess only a machine dependent pass

I would hope that we could notice this much earlier than that.  Wouldn't
this be evident even at the tree level or at least after
stack-allocation in the RTL layer?  I wouldn't expect the zeroing to be
coming from machine-dependent code.

One possibility is that we're doing something dumb with arrays.  Another
possibility is that we're SRA-ing a lot of small structures, which add
up to a ton of stack space.

I realize that we need a full bug report to be sure, though.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [c++] switch ( enum ) vs. default statment.

2007-01-28 Thread Mark Mitchell
Paweł Sikora wrote:

> typedef enum { X, Y } E;
> int f( E e )
> {
>   switch ( e )
>   {
>     case X: return -1;
>     case Y: return +1;
>   }
> }
> 
> In this example g++ produces a warning:
> 
> e.cpp: In function ‘int f(E)’:
> e.cpp:9: warning: control reaches end of non-void function
> 
> Adding `default' statement to `switch' removes the warning but
> in C++ out-of-range values in enums are undefined.

Not quite.  They are unspecified; see below.

> I see no reason to handling any kind of UB ( especially this ).
> IMHO this warning is a bug in C++ frontend.

This is a tricky issue.  You are correct that the values of "E" are
exactly { 0, 1 } (or, equivalently, { X, Y }).  But, the underlying type
of the enumeration is at least "char".  And, the standard says that
assigning an integer value to enumeration type has unspecified behavior
if it outside the range of the enumeration.

So:

  E e;
  e = (E) 7;

has unspecified behavior, which is defined as:

"behavior,  for  a well-formed program construct and correct data, that
depends on the implementation.  The implementation is not required to
document which behavior occurs."

Because the behavior is unspecified, not undefined, the usual "this could
erase your disk" thinking does not apply.  Unspecified is meant to be
more like Ada's bounded errors (though not as closely specified), in
that something vaguely sensible is supposed to happen.

For GCC, what happens (though we need not document it) is that the value
is converted to the underlying type -- but not masked down to { 0, 1 },
because that masking would be costly.  So, "((int) e == 7)" may be true
after the assignment above.  (Perhaps, in some modes GCC may optimize
away the assignment because it will "know" that "e" cannot be 7, but it
does not do so at -O2.)

So, now, what should we do about the warning?  I think there are good
arguments in both directions.  On the one hand, portable programs cannot
assume that assigning out-of-range values to "e" does anything
predictable, so portable programs should never do that.  So, if you've
written the entire program and are sure that it's portable, you don't
want the warning.  On the other hand, if you are writing a portable
library designed to be used with other people's programs, you might
very well want the warning -- because you can't be sure that they're
not going to pass "7" in as the value of "e", and you may want to be
robust in the face of this *unspecified* behavior.

In practice, this warning from GCC is keyed off what it thinks the
effective range of "E" is.  If it starts assuming that "e" can only have
the values { 0, 1 }, it will also stop warning about the missing case.
It would be hard to stop emitting the warning without making that
assumption, and it may not be easy to make the assumption, but still
avoid the expensive masking operations.
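
To make that concrete, here is the example again with a default arm,
which silences the warning while defending against the unspecified
out-of-range case (one reasonable choice of fallback, not the only one):

  typedef enum { X, Y } E;
  int f (E e)
  {
    switch (e)
      {
      case X: return -1;
      case Y: return +1;
      default: return 0;   /* unreachable for in-range values */
      }
  }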

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1 Branch Frozen in Preparation for GCC 4.1.2 RC1

2007-01-28 Thread Mark Mitchell
I plan to create GCC 4.1.2 RC1 sometime this afternoon, US/Pacific time.

Therefore, please do not make any checkins to the 4.1 branch after 2PM
PST.  Once RC1 is uploaded, the branch will be open only for changes
which have my explicit approval, until the release.

Remember that the primary criterion for 4.1.2 is that it not contain
regressions from earlier 4.1.x releases.  As it is a mature codebase, we
do of course want to fix other critical problems, but we know it to be
useful to many users, so our first and foremost concern is that 4.1.1
users be able to upgrade easily to 4.1.2.

If you know of problems that you think should prevent a 4.1.2 release,
particularly critical regressions from earlier 4.1.x releases, please
make sure that there is a Bugzilla PR for the issue of concern to you,
and send an email to me with a pointer to the PR.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: remarks about g++ 4.3 and some comparison to msvc & icc on ia32

2007-01-28 Thread Mark Mitchell
tbp wrote:
> On 1/28/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:
>> Certainly, if we're generating zillions of zero-initializations to
>> contiguous memory, rather than using memset, or an inline loop, that
>> seems unfortunate.  Would you please file a bug report?
> Because it takes, or seems to, a large function with small structure
> sprinkled around to trigger the proper condition, I can't make a
> convincing reduced testcase; I guess that goes along with what Richard
> said.

It doesn't need to be a small testcase.  If you have a preprocessed
source file and a command-line, I'm sure one of the GCC developers would
be able to analyze the situation.  We're all good at isolating problems,
even starting with big complicated inputs.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: [c++] switch ( enum ) vs. default statment.

2007-01-29 Thread Mark Mitchell
Paweł Sikora wrote:

>> On the other hand, if you are writing a portable library designed
>> to be used with other people's programs, you might very well want
>> the warning -- because you can't be sure that they're not going to
>> pass "7" in as the value of "e", and you may want to be robust in
>> the face of this *unspecified* behavior.
> 
> sorry, i don't care about unspecified/undefined behavior triggered
> by users' glitches. it's not a problem of my library.

The point I was trying to make was that "unspecified" and "undefined"
are actually very different.  I wouldn't be too surprised if, in the
future, G++ defined the behavior of the "e = (E) 7" case as storing the
value in the underlying type.  Then, you might indeed rely on that.

Obviously, you're free to make your own decisions, but, personally, I
would certainly feel free to assume that no undefined behavior happened
in the application -- but I wouldn't assume that no unspecified behavior
occurred.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: G++ OpenMP implementation uses TREE_COMPLEXITY?!?!

2007-01-29 Thread Mark Mitchell
David Edelsohn wrote:

>   Have any of you considered that Steven was using hyperbole as a
> joke?  Are some people so overly-sensitized to Steven that they assume the
> worst and have a knee-jerk reaction criticizing him?

Yes, I did consider it; that's why I said:

> I can't tell if you have your tongue
> planted in your cheek, but if you do, it's not obvious.

Email is a tricky thing.  I've learned -- the hard way -- that it's best
to put a smiley on jokes, because otherwise people can't always tell
that they're jokes.

I don't think this was a knee-jerk reaction on my part.  I certainly
appreciate and respect Steven's contributions to GCC.  I read Steven's
post, the follow-ups, considered them for a while, read RTH's original
post, and then decided to post my message.

I certainly admit to a personal bugaboo about email tone on public
lists.  I think it's very important to err on the side of politeness.

>   The issue began as a light-hearted discussion on IRC.  Steven's
> tone came across as inappropriate in email without context.  However,
> Mark's reply defending RTH was not qualified with "probably", which was an
> unfortunate omission, IMHO.

I did not defend RTH, except insofar as to suggest that RTH didn't act
with ill will.  It's true that I can't be certain of that, but it seems
highly unlikely to me that any GCC contributor would intentionally check
in a patch that they knew was in conflict with a clear direction of GCC.
 My guess is that RTH forgot the patch used TREE_COMPLEXITY, forgot we
were removing TREE_COMPLEXITY, or something.

Even in my original posting, I wrote:

> It's entirely reasonable to look for a way to get rid of this use of
> TREE_COMPLEXITY

I refrained from specifically criticizing RTH's check-in, but I did not
in any way try to defend his use of TREE_COMPLEXITY.  I agree that using
TREE_COMPLEXITY for OpenMP is undesirable, and that we should eliminate
that use.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: G++ OpenMP implementation uses TREE_COMPLEXITY?!?!

2007-01-29 Thread Mark Mitchell
Steven Bosscher wrote:
> On 1/29/07, Mark Mitchell <[EMAIL PROTECTED]> wrote:
>> Email is a tricky thing.  I've learned -- the hard way -- that it's best
>> to put a smiley on jokes, because otherwise people can't always tell
>> that they're jokes.
> 
> I did use a smiley.
> 
> Maybe I should put the smiley smiling then, instead of a sad looking
> smiley.

To me, they do mean very different things.  The sad smiley didn't say
"joke"; it said "boo!".  I think that a happy smiley would help.

Like I say, I've had exactly the same problem with my own humor-impaired
recipients.  So, I think it's best just to live in constant fear that
people don't think things are funny. :-)

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1.2 RC1

2007-01-29 Thread Mark Mitchell
GCC 4.1.2 RC1 is now on ftp://gcc.gnu.org and its mirrors.  The
canonical location is:

  ftp://gcc.gnu.org/pub/gcc/prerelease-4.1.2-20070128

As with all prereleases, the issue of most concern to me is packaging.
Therefore, please test the actual pre-release tarballs, rather than
sources from SVN.  Beyond packaging problems, I'm most concerned about
regressions from previous 4.1.x releases, since the primary purpose of
4.1.2 is to provide an upgrade path from previous 4.1.x releases,
incorporating the bug fixes since 4.1.1.

If you do encounter problems, please file a Bugzilla PR, and add me to
the CC: list for the issue.  Please do not send me reports without first
filing a PR, as I am unable to keep track of all the issues if they are
not in the database.

We'll do either the final GCC 4.1.2 release (if all goes well), or RC2
(if it doesn't) in about a week.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1 Branch Frozen in Preparation for GCC 4.1.2 RC1

2007-01-30 Thread Mark Mitchell
Rask Ingemann Lambertsen wrote:
> On Sun, Jan 28, 2007 at 11:53:41AM -0800, Mark Mitchell wrote:
>> I plan to create GCC 4.1.2 RC1 sometime this afternoon, US/Pacific time.
>>
>> Therefore, please do not make any checkins to the 4.1 branch after 2PM
>> PST.  Once RC1 is uploaded, the branch will be open only for changes
>> which have my explicit approval, until the release.
>>
>> Remember that the primary criteria for 4.1.2 is that it not contain
>> regressions from earlier 4.1.x releases.
> 
>    PR target/30370 (powerpc-unknown-eabispe can't build libgcc2) is a
> regression from 4.1.1. A patch was posted earlier this month at
> <http://gcc.gnu.org/ml/gcc-patches/2007-01/msg00600.html>. I have
> regrettably forgotten to ping this patch (for which I think David's approval
> was only for the 4.2 branch). In any case, I don't have SVN write access and
> will need someone else to commit the patch.

Joseph, David, do you have any comments about this patch?  The same
clearly erroneous code (i.e., the incorrect use of "findstring") does
appear on the 4.1 branch.  Therefore, my inclination would be to apply
the patch.  However, I'm not able to effectively test the patch, so I
would appreciate your feedback.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 RC1

2007-01-30 Thread Mark Mitchell
Robert Schwebel wrote:

> What about PR28516, would it be acceptable for 4.1.2?

There are two issues:

(1) it's not marked as a 4.1 regression, let alone a regression from
4.1.x.  Did this test case work with older versions of GCC?

(2) Richard Earnshaw objected to applying the patch to 4.1 because it
requires a newer GAS.  Paul's counter that the newer GAS is only needed
if your compiler would otherwise crash seems persuasive to me, if true,
but I'd certainly want Richard to be comfortable with the change.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1 Branch Frozen in Preparation for GCC 4.1.2 RC1

2007-01-30 Thread Mark Mitchell
Joseph S. Myers wrote:
> On Tue, 30 Jan 2007, Mark Mitchell wrote:
> 
>>>    PR target/30370 (powerpc-unknown-eabispe can't build libgcc2) is a
>>> regression from 4.1.1. A patch was posted earlier this month at
>>> <http://gcc.gnu.org/ml/gcc-patches/2007-01/msg00600.html>. I have
>>> regrettably forgotten to ping this patch (for which I think David's approval
>>> was only for the 4.2 branch). In any case, I don't have SVN write access and
>>> will need someone else to commit the patch.
>> Joseph, David, do you have any comments about this patch?  The same
>> clearly erroneous code (i.e., the incorrect use of "findstring") does
>> appear on the 4.1 branch.  Therefore, my inclination would be to apply
>> the patch.  However, I'm not able to effectively test the patch, so I
>> would appreciate your feedback.
> 
> I think it should be applied to 4.1 and 4.2 branches.  (For trunk, the 
> issue is fixed properly, by toplevel libgcc removing the need for 
> findstring in conjunction with E500 long double making -mlong-double-128 a 
> supported option in this case so removing the underlying ICE.)

Thanks for the feedback.  David mentioned that he's traveling until
Friday, and I want to let him comment.  So, I've put the PR on the 4.1.2
Status Wiki page, and I will review it before the release.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 RC1

2007-01-30 Thread Mark Mitchell
Paul Brook wrote:
> On Wednesday 31 January 2007 01:26, Mark Mitchell wrote:
>> Robert Schwebel wrote:
>>> What about PR28516, would it be acceptable for 4.1.2?
>> There are two issues:
>>
>> (1) it's not marked as a 4.1 regression, let alone a regression from
>> 4.1.x.  Did this test case work with older versions of GCC?
> 
> 4.1 is the first FSF release to support arm-linux-gnueabi (the bug only
> affects eabi targets).  The bug is present in 4.0, but apparently no one
> other than glibc uses nested functions with -fexceptions.

OK; in that case, I don't think this qualifies as a regression from a
previous 4.1.x release.  But, I'd certainly consider it within the
discretion of the ARM maintainers to put this patch on the 4.1 branch
after 4.1.2 is out, as it's a simple ARM-only patch.  That assumes, of
course, that everyone agrees about the GAS issue.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: gcc-4.1.2 RC1 build problem

2007-02-02 Thread Mark Mitchell
Joe Buck wrote:
> On Fri, Feb 02, 2007 at 05:23:01PM +0100, Eric Botcazou wrote:
>>> dec-osf4.0f/4.1.2/install-tools/mkheaders.conf
>>> /bin/sh: : cannot execute
>>> /bin/sh: /]*/../,, -e ta: not found
>>> sed: Function s,[ cannot be parsed.
>> That should not happen on Solaris if you set CONFIG_SHELL=/bin/ksh as 
>> recommended in http://gcc.gnu.org/install/specific.html#x-x-solaris2
> 
> I recall that it is also needed on Alpha OSF, though it's been a couple
> of years since I had an Alpha to play with.

Thanks (to both Kate and Joe) for the information.  I'm not going to
consider this issue a showstopper, but if setting CONFIG_SHELL doesn't
fix it, I would consider it more serious.  Please let me know if that's
the case.
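
(For reference, the recommended setup amounts to roughly the
following, assuming ksh is installed as /bin/ksh on the host:

  % CONFIG_SHELL=/bin/ksh
  % export CONFIG_SHELL
  % $CONFIG_SHELL /path/to/gcc-4.1.2/configure [options]

so that configure itself, and the sub-shells it spawns, all run under
ksh rather than the vendor /bin/sh.)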

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1.2 Status Report

2007-02-04 Thread Mark Mitchell
[Danny, Richard G., please see below.]

Thanks to all who have helped tested GCC 4.1.2 RC1 over the last week.

I've reviewed the list traffic and Bugzilla.  Sadly, there are a fair
number of bugs.  Fortunately, most seem not to be new in 4.1.2, and
therefore I don't consider them showstoppers.

The following issues seem to be the regressions from 4.1.1:

  http://gcc.gnu.org/wiki/GCC_4.1.2_Status

PR 28743 is only an ICE-on-invalid, so I'm not terribly concerned.

Daniel, 30088 is another aliasing problem.  IIRC, you've in the past
said that these were (a) hard to fix, and (b) uncommon.  Is this the
same problem?  If so, do you still feel that (b) is true?  I'm
suspicious, and I am afraid that we need to look for a conservative hack.

Richard, 30370 has a patch, but David is concerned that we test it on
older GNU/Linux distributions, and suggested SLES9.  Would you be able
to test that?

Richard, 29487 is an issue raised on HP-UX 10.10, but I'm concerned that
it may reflect a bad decision about optimization of C++ functions that
don't throw exceptions.  Would you please comment?

I'm not sure yet as to whether we will do an RC2, or not; I will make
that decision after getting the answers to the issues above.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 Status Report

2007-02-05 Thread Mark Mitchell
Daniel Berlin wrote:

>> Daniel, 30088 is another aliasing problem.  IIRC, you've in the past
>> said that these were (a) hard to fix, and (b) uncommon.  Is this the
>> same problem?  If so, do you still feel that (b) is true?  I'm
>> suspicious, and I am afraid that we need to look for a conservative hack.
> 
> It's certainly true that people will discover more and more aliasing
> bugs the harder they work 4.1 :)

Do you think that PR 30088 is another instance of the same problem, and
that therefore turning off the pruning will fix it?

> There is always the possibility of turning off the pruning, which will
> drop our performance, but will hide most of the latent bugs we later
> fixed through rewrites well enough that they can't be triggered (the
> 4.1 optimizers aren't aggressive enough).

Is it convenient for you (or Richard?) to measure that on SPEC?
(Richard, thank you very much for stepping up to help with the various
issues that I've raised for 4.1.2!)  Or, have we already done so, and
I've just forgotten?  I'm very mindful of the importance of performance, but
if we think that these aliasing bugs are going to affect reasonably
large amounts of code (which I'm starting to think), then shipping the
compiler as is seems like a bad idea.

(Yes, there's a slippery slope argument whereby we turn off all
optimization, since all optimization passes may have bugs.  But, if I
understand correctly, the aliasing algorithm in 4.1 has relatively
fundamental problems, which is rather different.)

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 Status Report

2007-02-05 Thread Mark Mitchell
Richard Guenther wrote:

>> > It's certainly true that people will discover more and more aliasing
>> > bugs the harder they work 4.1 :)
>>
>> Do you think that PR 30088 is another instance of the same problem, and
>> that therefore turning off the pruning will fix it?
> 
> Disabling pruning will also increase memory usage and compile time.

You indicated earlier that you didn't think 30088 was a regression on
the branch.  That's an excellent point.  I had it on my list of
regressions from 4.1.[01], but perhaps I was misinformed when I put it
on the list.  Given that, I don't think we need to worry about it for
4.1.2; it's just one of several wrong-code regressions...

> I don't think we need to go this way - there is a workaround available
> (use -fno-strict-aliasing) and there are not enough problems to warrant
> this.

For the record, I don't think the workaround argument is as strong,
though.  When the user compiles a large application and it doesn't work,
there's no hint that -fno-strict-aliasing is the workaround.  It's not
like an ICE that makes you think "Hmm, maybe I should turn off that
pass, or compile this file with -O0".
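
For concreteness, the code at issue typically type-puns through
incompatible pointer casts; a minimal sketch of the pattern:

  /* Invalid under the C/C++ aliasing rules: the store through
     "int *" is assumed not to modify the "long", so with
     -fstrict-aliasing the load below may legitimately use the old
     value.  -fno-strict-aliasing tells GCC to assume that such
     punning can happen.  */
  long l = 1;
  int punned (void)
  {
    *(int *) &l = 2;
    return (int) l;
  }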

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: false 'noreturn' function does return warnings

2007-02-06 Thread Mark Mitchell
Zack Weinberg wrote:
> Back in 2000 I wrote a really simple patch that caused gcc to treat an
> ASM_OPERANDS that clobbered "pc" as a control flow barrier, exactly
> for this problem.
> 
> http://gcc.gnu.org/ml/gcc-patches/2000-01/msg00190.html
> 
> I still think it was a good idea, but at the time it was received
> unenthusiastically.  I especially think "just use __builtin_trap()"
> misses the mark - at least some variants of the Linux kernel's BUG()
> macro, for instance, want to stick annotations in the assembly stream,
> which you cannot do with __builtin_trap...

I agree.  Adding options to make __builtin_trap do different things
(like generate an abort signal in user applications and something else
in kernel mode) is pretty nasty.  Linux is a favorite kernel, but
there are lots of other kernels out there, and this could get complicated
in a hurry.

I think clobbering the PC is a reasonably good way to represent a
control-flow barrier at user level.
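
In source form, the idea is roughly this (hypothetical: the "pc"
clobber has this no-return meaning only under Zack's proposed patch,
and stock GCC will not accept it):

  /* Kernel-style BUG(): execute a trapping opcode, and tell the
     compiler via the proposed "pc" clobber that control does not
     flow past the asm.  "ud2" is the x86 flavor; other targets
     would use their own trapping instruction.  */
  #define BUG() \
    __asm__ __volatile__ ("ud2" : : : "pc", "memory")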

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


GCC 4.1.2 RC2

2007-02-09 Thread Mark Mitchell
GCC 4.1.2 RC2 is now available from:

ftp://gcc.gnu.org/pub/gcc/prerelease-4.1.2-20070208

and its mirrors.

The changes relative to RC1 are fixes for:

1. PR 29683: a wrong-code issue on Darwin
2. PR 30370: a build problem for certain PowerPC configurations
3. PR 29487: an issue reported as a build problem on HP-UX 10.10, and a
code-quality problem for C++ on all platforms

If you find problems in RC2, please file them in Bugzilla.  For any
issues which are regressions relative to 4.1.1 or 4.1.0, please alert me
by email, referencing the Bugzilla PR number.  Please do not send me
email before filing a PR in Bugzilla.

Based on the absence of issues reported for GCC 4.1.2 RC1, I expect GCC
4.1.2 to be identical to these sources, other than version numbers, and
so forth.  I intend to spin the final release early next week.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: US Daylight Savings Time Changes

2007-02-11 Thread Mark Mitchell
David Edelsohn wrote:
>>>>>> Tom Tromey writes:
> 
> Tom> David probably knows this, but for others, Jakub and Andrew put in a
> Tom> patch for this today.  I think it is only on trunk, not any other
> Tom> branches.
> 
>   Should this be included in GCC 4.1.2?

Unfortunately, I think it's too late for that.  Java is not a major
release priority, and at this point I'm not anticipating a 4.1.2 RC3.
However, I would suggest that we apply the patch to the 4.1 branch after
4.1.2 is released, assuming that the Java maintainers are comfortable
with that.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 RC2

2007-02-11 Thread Mark Mitchell
Kaveh R. GHAZI wrote:

[Java folks: see below for check-in window for daylight savings time
patches.]

> Test results for sparc/sparc64 on solaris2.10 are here:
> http://gcc.gnu.org/ml/gcc-testresults/2007-02/msg00422.html
> http://gcc.gnu.org/ml/gcc-testresults/2007-02/msg00423.html

Thanks!

In general, I think it's too late to fix anything but the most truly
egregious problems at this point.  There have been lots of opportunities
to report problems in 4.1.2; at this point, I want to get it out the
door, and start pushing on 4.2.0.

> 1.  g++.dg/debug/debug9.C fails as described in PR 30649.  I believe this
> is simply a mistaken testcase checkin.  If confirmed by someone, no big
> deal; I can remove it.

I think there's no question this is a bogus checkin.  I've removed the
testcase.

> 2.  g++.dg/tree-ssa/nothrow-1.C fails with -fpic/-fPIC.

This appears to be another manifestation of the problem from PR 29487
(now fixed).  Here, the compiler is again making an unreasonably
conservative assumption that will substantially penalize C++ shared
libraries: namely, that all global functions in shared libraries may be
replaced at link time, and that callers must therefore assume that they
may throw exceptions.

You are correct that this stems from Richard's patch, though that patch
made sense on its own: he used the same rules for exceptions that were
otherwise used for const/pure functions.

I think the key flaw here is that we are using binds_local_p in two
different ways.  One way is to tell us what kind of references we can
emit; for a locally bound entity we may/must use certain relocations,
etc., that we cannot with a global entity.  However, I think it's
unreasonably pessimistic to say that just because the user might use the
linker to replace a function in some way, we can't reason from the
function's behavior.  If the user doesn't state that intent explicitly
(by declaring the function weak), I think we should be free to optimize
based on the body of the function.

I think that this suggests that even the definition of
DECL_REPLACEABLE_P that I checked in is too conservative.  Perhaps the
definition should simply be "DECL_WEAK (decl) && !DECL_COMDAT (decl)";
in other words, only explicitly weak functions are replaceable from an
optimization point of view.  It was weak functions that were the
motivation for the original change.
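
In macro form, that suggestion amounts to something like (a sketch;
the actual definition would sit with the other DECL_* accessors):

  /* Nonzero if DECL may be replaced at link time, for optimization
     purposes: only explicitly weak, non-COMDAT entities.  */
  #define DECL_REPLACEABLE_P(DECL) \
    (DECL_WEAK (DECL) && !DECL_COMDAT (DECL))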

I would certainly appreciate comments from others about this issue.

I do think this is an important issue; it would be bad for Mozilla, KDE,
etc., to suffer a significant optimization issue in going from 4.1.1 to
4.1.2.  I was pretty determined to get 4.1.2 out the door, and I really
don't want to have any functional changes between the last RC and the
actual release.  So, I feel that I have no choice but to do a 4.1.2 RC3
with a more conservative version of DECL_REPLACEABLE_P.

Therefore, if the Java folks have daylight savings time patches that
they would like to check in, please do so before Monday evening,
California time.  If these work out, we'll leave them in for 4.1.2;
otherwise, we'll back them out.  We will not do an RC4 simply to correct
problems in these patches: the choices will be only to keep the patches
checked in or to take them out entirely.

> 4.  gcc.dg/tree-ssa/20030714-1.c fails with -fpic/-fPIC and this one
> appears to have regressed since the case is from 2003. 

This looks to be at worst a minor code quality issue.

> 6.  22_locale/num_put/put/wchar_t/14220.cc fails with sparc64 -fpic/-fPIC.

This is unfortunate, but I don't see any evidence of a major blocking
issue there.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 RC2

2007-02-11 Thread Mark Mitchell
Eric Botcazou wrote:
>> Therefore, if the Java folks have daylight savings time patches that
>> they would like to check in, please do so before Monday evening,
>> California time.  If these work out, we'll leave them in for 4.1.2;
>> otherwise, we'll back them out.  We will not do an RC4 simply to correct
>> problems in these patches: the choices will be only to keep the patches
>> checked in or to take them out entirely.
> 
> What about the patch for PR other/27843 then?

It's OK with me to check that in, assuming it's been approved for
mainline and 4.2.  However, the same rules apply: I'll back it out,
rather than try to fix it, for a final release.

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 RC2

2007-02-12 Thread Mark Mitchell
Ian, Richard, Diego --

I've explicitly forwarded this to you, as folks who have done work on
middle-end optimization and have seen lots of real-world code.

(That's not to say that I'm not looking for comments from anyone and
everyone -- but I'm specifically trying to get at least some feedback,
so I'm picking on some likely candidates. :-))

Do you have any thoughts on the comments quoted below?  Does it seem
overly aggressive to you to assume "f" cannot throw in "g", given:

  void f() {}
  void g() { f(); }

where this code is in a shared library?  Of course, with DWARF2 EH, the
cost is mostly in terms of code size; whereas with SJLJ EH it's also
execution time.  Is it silly for me to be worrying about this
pessimization in 4.1.2?
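
For concreteness, "replaced at link time" here means ordinary ELF
symbol interposition; a rough sketch (file and program names are made
up) of how "g" changes behavior without the library ever being
recompiled:

  % cat interpose.cc
  void f () { throw 42; }    // interposes on the library's f
  % g++ -shared -fPIC -o libinterpose.so interpose.cc
  % LD_PRELOAD=./libinterpose.so ./app   # g's call to f now throws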

Thanks,

>> 2.  g++.dg/tree-ssa/nothrow-1.C fails with -fpic/-fPIC.
> 
> This appears to be another manifestation of the problem from PR 29487
> (now fixed).  Here, the compiler is again making an unreasonably
> conservative assumption that will substantially penalize C++ shared
> libraries: namely, that all global functions in shared libraries may be
> replaced at link time, and that callers must therefore assume that they
> may throw exceptions.
> 
> You are correct that this stems from Richard's patch, though that patch
> made sense on its own: he used the same rules for exceptions that were
> otherwise used for const/pure functions.
> 
> I think the key flaw here is that we are using binds_local_p in two
> different ways.  One way is to tell us what kind of references we can
> emit; for a locally bound entity we may/must use certain relocations,
> etc., that we cannot with a global entity.  However, I think it's
> unreasonably pessimistic to say that just because the user might use the
> linker to replace a function in some way, we can't reason from the
> function's behavior.  If the user doesn't state that intent explicitly
> (by declaring the function weak), I think we should be free to optimize
> based on the body of the function.
> 
> I think that this suggests that even the definition of
> DECL_REPLACEABLE_P that I checked in is too conservative.  Perhaps the
>> definition should simply be "DECL_WEAK (decl) && !DECL_COMDAT (decl)";
> in other words, only explicitly weak functions are replaceable from an
> optimization point of view.  It was weak functions that were the
> motivation for the original change.
> 
> I would certainly appreciate comments from others about this issue.
> 
> I do think this is an important issue; it would be bad for Mozilla, KDE,
> etc., to suffer a significant optimization issue in going from 4.1.1 to
> 4.1.2.  I was pretty determined to get 4.1.2 out the door, and I really
> don't want to have any functional changes between the last RC and the
> actual release.  So, I feel that I have no choice but to do a 4.1.2 RC3
> with a more conservative version of DECL_REPLACEABLE_P.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: GCC 4.1.2 RC2

2007-02-12 Thread Mark Mitchell
Richard Henderson wrote:
> On Mon, Feb 12, 2007 at 10:06:11AM -0800, Mark Mitchell wrote:
>> Does it seem overly aggressive to you to assume "f" cannot throw
>> in "g", given:
>>
>>   void f() {}
>>   void g() { f(); }
>>
>> where this code is in a shared library?
> 
> Yes.
> 
> If F is part of the exported (and overridable) interface of
> the shared library, one should be able to replace it with
> any valid function.

I understand that people do this for C code (e.g., replace memcpy, or
other bits of the C library).  Those routines aren't generally
manipulating global data; they're often stand-alone APIs that just
happened to be grouped together into a single library.

But, aren't big C++ shared libraries rather different?  Does KDE
actually use throw() everywhere, or visibility attributes?  But,
presumably, most people don't replace the implementation of
ScrollBar::ScrollUp or whatever.  I'd be happy to know I'm wrong here,
but I'd be surprised if there aren't large amounts of legacy code that
would be hurt by this change.  Do you disagree?
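
Concretely, the explicit annotations I have in mind are standard GCC
fare; a sketch with made-up declarations:

  // Exported, but declared never to throw:
  void ScrollUp () throw ();
  // Not part of the library's binary interface at all:
  __attribute__ ((visibility ("hidden"))) void helper ();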

Thanks,

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


Re: Some thoughts and quetsions about the data flow infrastracture

2007-02-12 Thread Mark Mitchell
Vladimir Makarov wrote:
>  On Sunday I happened to chat about the df infrastructure on
> IRC.  I've got some thoughts which I'd like to share.
> 
>  I have liked the df infrastructure code from day one for its clarity.
> Unfortunately, users don't see it and probably don't care about it.

You're right that users don't care what the guts of the compiler look
like: they care only about the speed of the compiler, the speed of the
generated code, the correctness of the generated code, overall quality,
etc.

However, my understanding (as someone who's not an expert on the DF code
base) is that, as you say, the new stuff is much tidier.  I understood
the objective to be not so much that DF itself would directly improve
the generated code, but rather that it would provide much-needed
infrastructure that would allow future improvements.  This is a lot like
TREE-SSA which, by itself, wasn't so much about optimization as it was
about providing a platform for optimization.

Obviously, making sure that the DF code isn't actively hurting us when
it goes into the compiler is part of the agreed-upon criteria.  But, I
don't think it needs to markedly help at this point, as long as people
are comfortable with the interfaces and functionality it provides.

>  So could somebody from the steering committee be kind enough to
> answer me:
> 
>    Is it (prohibiting the use of other dataflow schemes) a steering
>    committee decision?

No, the Steering Committee has not considered this issue.  As you
suggest, I don't think it's something the SC would want to consider;
it's a technical issue.  I'd certainly think that it would be good for
all passes to use the DF infrastructure.

However, if there were really some special case where we could do
markedly better without it, and no feasible way to give the special-case
information to the core DF code, then I'm sure everyone would agree that
it made sense to use something different.  But, that would be only in
extraordinary situations, rather than having lots of reinvention of the
same infrastructure.

-- 
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713


  1   2   3   4   5   6   7   8   9   10   >