Ian Lance Taylor wrote:
Kenneth Zadeck <[EMAIL PROTECTED]> writes:

I think that one thing that the gcc community should understand is
that to a great extent WHOPR is a Google thing.  All of the documents
are drafted by Google people, in meetings that are open only to
Google people, and it is only after these documents have been drafted
that the people outside of Google who are working on LTO, like Honza
and myself, see them and get to comment.  The gcc community never
sees the constraints, deadlines, needs, or benchmarks that are
motivating the decisions made in the WHOPR documents.

Every new gcc development starts that way.  Somebody has to put
together the initial proposal.  How many people were invited to work
on the initial LTO proposal before it was sent out?  Did anybody
outside of Red Hat see the tree-ssa proposal before it was sent out?

The WHOPR document has been out there for some time, and it was sent
out before any implementation work started.  There is no Google cabal
pushing it.  There is no secret information behind it, no constraints
or deadlines or benchmarks.  We did have the advantage of talking to
Google employees about their experience with LTO-style work done at
Intel and HP and Transmeta.  Some of the people we talked to have no
plans or interest in working on gcc, and it would not be fair to rope
them into the conversation further.  Google's needs are clear: we have
large programs.

Let's deal with these issues on the technical merits, not on
organizational issues.  If Google were dumping code on gcc, you would
have a legitimate complaint.  Here Google is proposing plans before
any work is started.  You seem to be complaining that the community
should have seen the plans at an earlier stage.  That makes no sense.
They are still just plans, they were based on all of two days of
meetings and discussions, and they are still completely open to
discussion and change.


Ian, I am not dumping on Google.  But there is a particular
perspective that you have, driven by your legitimate need to handle
very large applications, which may not be shared by the rest of the
gcc community.  I was really only pointing that out.  In particular,
a lot of the decisions being made in WHOPR to support very large
applications come at the expense of compiling modest and even large
applications.  I do not necessarily disagree with these decisions,
but I think it is very important to get that out in front of everyone
and let the community come to an informed consensus.
Honza and I are planning, and are implementing, a system where most,
if not all, of the IPA passes will be able to work in an environment
where the entire call graph and all of the decls and types are
available, i.e. only the function bodies are missing.  In this
environment, we plan to do all of the interprocedural analysis and
generate work orders that will be applied to each function.
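
Roughly, the shape we have in mind is something like the sketch
below.  All of the names (ipa_summary, work_order, and the decision
rules) are invented for illustration; this is not GCC's actual cgraph
API, just the pattern of an analysis that sees decls and call edges
but no bodies, and records decisions to replay later:

/* Hypothetical sketch of the "work order" model described above.
   The whole-program analysis sees each function's decl and its
   call-graph edges, but no bodies; it records a per-function
   directive that a later pass applies when the body is read in.  */

#include <stdio.h>

#define NUM_FUNCS 4

/* Per-function summary: what the analysis can see without a body.  */
struct ipa_summary
{
  const char *name;     /* Decl name, always available.  */
  int num_callers;      /* Incoming edges in the call graph.  */
  int address_taken;    /* Known from the decls alone.  */
};

/* A "work order": a decision made during whole-program analysis and
   replayed on each function at compile time.  */
struct work_order
{
  const char *func;
  enum { WO_NONE, WO_CLONE, WO_LOCALIZE } action;
};

/* Whole-program phase: operates on summaries only, no bodies.  */
static void
analyze (const struct ipa_summary *s, int n, struct work_order *wo)
{
  int i;
  for (i = 0; i < n; i++)
    {
      wo[i].func = s[i].name;
      if (!s[i].address_taken && s[i].num_callers == 1)
        wo[i].action = WO_LOCALIZE;   /* Single caller, not escaped.  */
      else if (s[i].num_callers > 2)
        wo[i].action = WO_CLONE;      /* Many callers: specialize.  */
      else
        wo[i].action = WO_NONE;
    }
}

int
main (void)
{
  struct ipa_summary funcs[NUM_FUNCS] = {
    { "main", 0, 0 }, { "parse", 1, 0 },
    { "emit", 3, 0 }, { "helper", 2, 1 },
  };
  struct work_order orders[NUM_FUNCS];
  int i;

  analyze (funcs, NUM_FUNCS, orders);

  /* Later, per-function compilation replays each order as the
     corresponding body is streamed back in.  */
  for (i = 0; i < NUM_FUNCS; i++)
    printf ("%s: action %d\n", orders[i].func, orders[i].action);
  return 0;
}

The point of the split is that the analysis phase never needs the
bodies in memory at once; only the per-function application phase
does, one body at a time.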

I don't see that as being opposed to the WHOPR ideas.  It's not like
WHOPR will prohibit that approach.  It's a limiting case.


In particular, as consumer
machines get larger memories and more processors, the assumption that
we cannot see all of the function bodies gets more questionable,
especially for the modest sized apps that are the staple of the gcc
community.

I question that assumption, and I especially question any assumption
that gcc should only work for modest sized apps.

Ian, there are tradeoffs here.  My point is that there are a lot of
things that can be done with modest sized apps or libraries that
cannot be done on Google-sized applications.  Remember that the
majority of the world outside of Google neither has Google-sized
applications nor could compile them if it did.

While I agree that some form of LTO needs to support monster apps,
that should not inhibit us from supporting transformations or models
of compilation that are only practical with 100k-line programs.

Ian
