Steven Bosscher wrote:

On 2/13/07, Vladimir Makarov <[EMAIL PROTECTED]> wrote:

  Wow, I got so many emails. I'll try to answer them in one email.
Let us look at major RTL optimizations: combiner, scheduler, RA.


...PRE, CPROP, SEE, RTL loop optimizers, if-conversion, ...  It is easy
to make your arguments look valid if you take it as a proposition that
only register allocation and scheduling ought to be done on RTL.

The reality is that GIMPLE is too high level (by design) to catch many
useful transformations performed on RTL. Think of CSE of lowered
addresses, expanded builtins, code sequences generated for bitfield
operations and expensive instructions (e.g. mul, div).  So we are
going to have more RTL optimizers than just regalloc and sched.
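For instance, after lowering, two field accesses may both compute the same address arithmetic, a redundancy that simply does not exist at the GIMPLE level. A minimal local value-numbering sketch over an invented three-address IR (all names and tuple shapes here are made up for illustration, and it assumes no intervening stores; this is not GCC code) shows the kind of redundancy involved:

```python
# Local CSE via value numbering over a made-up lowered IR.  Each
# instruction is (dest, op, src1, src2).  After lowering, both field
# accesses below compute "base + 8" -- a redundancy that exists only
# in the lowered form, not at the GIMPLE level.
def local_cse(insns):
    value_of = {}   # (op, val1, val2) -> register holding that value
    number = {}     # register -> its value key
    out = []
    for dest, op, a, b in insns:
        key = (op, number.get(a, a), number.get(b, b))
        if key in value_of:
            # redundant: reuse the earlier result instead
            out.append((dest, "copy", value_of[key], None))
        else:
            value_of[key] = dest
            out.append((dest, op, a, b))
        number[dest] = key
    return out

lowered = [
    ("t1", "add", "base", "8"),   # address of a field at offset 8
    ("t2", "load", "t1", None),
    ("t3", "add", "base", "8"),   # the same address, recomputed
    ("t4", "load", "t3", None),
]
for insn in local_cse(lowered):
    print(insn)
```

On this sequence the recomputed address and the second load both become copies of the earlier results.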

Many RTL optimizations still matter very much (disable some of them
and test SPEC again, if you're unconvinced).  Having a uniform
dataflow framework for those optimizations is IMHO a good thing.

Steven, I agree with you. I am not against a df infrastructure. A well-defined and efficient one is always a good thing. As I wrote, I even use the proposed DF in my RA project.

I am just trying to make the point that the proposed df infrastructure is not ready and might create serious problems for this release and for future development, because it is slow. Danny says that the beauty of the infrastructure is that it can be improved in one place. I partially agree with this. I am only afraid that a solution for a faster infrastructure (e.g. another, slimmer data representation) might change the interface considerably. I am not sure that I can convince anyone of this. But I am more worried about the 4.3 release, and I really believe that inclusion of the dataflow infrastructure should be the first step of stage 1, to give people more time to solve at least some of the problems.

In saying this I may have hurt the feelings of people who put a lot of effort into the infrastructure, like Danny, Ken, and Seongbae, and I am sorry for that.



Do we need a global analysis for building def-use use-def chains?  We
don't need it for the combiner (only bb scope is needed).


It seems to me that this limitation is only there because when combine
was written, the idea of "global dataflow information" was in the
"future work" section for most practical compilers.  So, perhaps
combine, as it is now, does not need DU/UD chains. But maybe we can
improve passes like this if we re-implement them in, or migrate them
to, a better dataflow framework.
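For what it's worth, building def-use chains within a single basic block needs only one forward scan and no global analysis at all; a toy sketch (invented IR, not GCC's actual representation) makes that concrete:

```python
# Def-use chains within one basic block: a single forward walk is
# enough, no global dataflow needed.  Each insn is (index, defs, uses),
# where defs and uses are sets of register names (toy IR, not GCC's).
def bb_def_use(insns):
    last_def = {}   # reg -> index of the insn that last defined it
    du = {}         # (def_insn_index, reg) -> indices of its uses
    for i, defs, uses in insns:
        for r in sorted(uses):
            if r in last_def:                 # reaching def is local
                du.setdefault((last_def[r], r), []).append(i)
        for r in defs:
            last_def[r] = i                   # kill the previous def
    return du

# a = ...; b = a; a = a + b; use(a)
bb = [
    (0, {"a"}, set()),
    (1, {"b"}, {"a"}),
    (2, {"a"}, {"a", "b"}),
    (3, set(), {"a"}),
]
print(bb_def_use(bb))
```

The global problem is harder only because a use at the top of a block may be reached by defs in many predecessors; inside one block the last def always wins.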

The combiner is an older approach to code selection. It was designed by the same authors (Fraser and Proebsting) before they designed BURG. I remember even intermediate approaches in which a minimal cover of the tree by subtrees representing the machine insns was attempted with context-free grammar parsers. Modern code selection like BURG (a dynamic-programming approach which tries to find a real *minimal cost* cover) works on a tree of IR insns in one BB (or, in a more complex case, on a DAG; there a non-optimal solution is used). I even have my own tool for this, NONA (http://cocom.sf.net). Although it might be good research to make it work on insns from different BBs.

The problem is that to use the modern approach you need another description of the insns (with a one pattern - one machine insn relation) in a tree representation, with a cost given for each tree. And it is a huge amount of work to rewrite the current machine descriptions even just for this.
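The dynamic-programming cover described above can be sketched quite compactly; the rule set, instruction names, and costs below are all invented for illustration (a real BURG or NONA description is far richer, and real matchers precompute states rather than search at each node):

```python
# BURG-style instruction selection sketch: label each node of an
# expression tree, bottom-up, with the minimal-cost rule covering it.
# Rules, insn names, and costs are invented for illustration.

class Node:
    def __init__(self, op, *kids):
        self.op, self.kids = op, kids

# (operator, child shapes) -> (insn name, cost); a None shape means
# "any subtree, computed separately", a named shape is absorbed into
# the pattern (e.g. an immediate operand or a fused address mode).
RULES = {
    ("const", ()):              ("li",    1),
    ("reg",   ()):              ("reg",   0),
    ("add",   (None, None)):    ("add",   1),
    ("add",   (None, "const")): ("addi",  1),
    ("load",  (None,)):         ("ld",    2),
    ("load",  ("add",)):        ("ld.ix", 2),  # fused reg+reg address
}

def label(node):
    """Return (min_cost, insn sequence) covering `node`."""
    best = None
    for (op, shapes), (name, cost) in RULES.items():
        if op != node.op or len(shapes) != len(node.kids):
            continue
        total, insns, ok = cost, [name], True
        for shape, kid in zip(shapes, node.kids):
            if shape is None:            # child computed on its own
                c, seq = label(kid)
                total, insns = total + c, seq + insns
            elif shape == kid.op:        # child absorbed by pattern;
                for g in kid.kids:       # its kids still get computed
                    c, seq = label(g)
                    total, insns = total + c, seq + insns
            else:
                ok = False
                break
        if ok and (best is None or total < best[0]):
            best = (total, insns)
    return best

# Cover *(r1 + r2): the indexed load absorbs the add.
tree = Node("load", Node("add", Node("reg"), Node("reg")))
print(label(tree))
```

With these toy costs, covering `*(r1 + r2)` picks the fused indexed load over an explicit add followed by a plain load, which is exactly the minimal-cost cover the dynamic programming is after.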

