Wow, I got so many emails. I'll try to answer them all in one email so as not to repeat myself.
Mark Mitchell wrote:
I was not trying to suggest that DF is necessarily as sweeping a change as TREE-SSA. Certainly, it's not a complete change to the representation.
It is not as sweeping a change as Tree-SSA. Tree-SSA was designed and sharpened for global optimizations. Ken knows that well, because he is one of the inventors of SSA. Let us look at the major RTL optimizations: the combiner, the scheduler, the RA. Do we need a global analysis for building def-use and use-def chains? We don't need it for the combiner (basic-block scope is enough), we don't need it for the scheduler (a DAG region is enough), and we don't need it at all for reload. So building global def-use and use-def chains is definitely overkill, which will make the compiler slower. A lot of work was done in the scheduler to keep dependency handling manageable, because scheduling is a quadratic algorithm; addressing that just means moving all this code into the infrastructure.

Some algorithms would benefit from a global analysis of def-use and use-def chains (like gcse after reload, or the webizer, which is switched off by default), but that can be done without the fat structures of the dataflow infrastructure. We need more accurate life analysis (liveness combined with availability). That could be fixed in the current life analysis. They mention other inaccuracies, whose importance I doubt; but those could be fixed there too. I agree the life analysis is a mess, but you can rewrite it without introducing fat structures to compute the relations.

So I think they did not investigate the drawbacks of DF with its fat structures, which IMHO are the reason for the slowness. They just took Mike Hayes's existing code without investigating alternative, slimmer representations. And a possible alternative slimmer representation might change the interface as well. So what remains is a duplicated, maybe faster, representation of insn operands, whose creation time and memory cost are not justified. That is not how Tree-SSA was designed and developed over five years.

Once again, I am not against a new DF infrastructure. I am against rushing it into the mainline. I understand that there are a lot of things still to be investigated. They are promising blue sky, but I don't see it now. Maybe I am blind.
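To make the terms concrete, here is a minimal sketch of the classic backward liveness equations the life analysis computes: live_in[b] = use[b] | (live_out[b] - def[b]), with live_out[b] the union of live_in over b's successors, iterated to a fixed point. This is illustrative Python, not GCC's RTL code; the tiny CFG and register names are made up for the example.

```python
def compute_liveness(blocks, succ, use, defs):
    """blocks: ordered block ids; succ: id -> successor ids;
    use/defs: id -> set of regs used-before-def / defined in the block."""
    live_in = {b: set() for b in blocks}
    live_out = {b: set() for b in blocks}
    changed = True
    while changed:
        changed = False
        # Backward problem: visiting blocks in reverse order converges faster.
        for b in reversed(blocks):
            out = set().union(*(live_in[s] for s in succ[b])) if succ[b] else set()
            new_in = use[b] | (out - defs[b])
            if new_in != live_in[b] or out != live_out[b]:
                live_in[b], live_out[b] = new_in, out
                changed = True
    return live_in, live_out

# Tiny straight-line CFG: B0 -> B1 -> B2.
# B0 defines r0; B1 uses r0 and defines r1; B2 uses r1.
blocks = [0, 1, 2]
succ = {0: [1], 1: [2], 2: []}
use = {0: set(), 1: {"r0"}, 2: {"r1"}}
defs = {0: {"r0"}, 1: {"r1"}, 2: set()}
live_in, live_out = compute_liveness(blocks, succ, use, defs)
print(live_out[0])  # {'r0'}: r0 is live across the B0->B1 edge
```

Note that nothing here requires global def-use chains: the fixed point is reached with per-block bitsets alone, which is the sense in which a more accurate life analysis could stay slim.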
I understand that we are sometimes under pressure from our employers. I feel it too. But maybe one year is not enough for such work.

Ian Taylor wrote:
I don't really grasp where you are headed with this criticism. Criticism can be a good thing. But I think it would help me, at least, if you were more clear about what you see as the alternative.
I mentioned alternatives:
o investigating what analysis is really necessary for the major RTL passes
o fixing the life analysis code
o rewriting the life analysis code
o investigating a slimmer representation (maybe attaching it to the RTL reg/subreg, although that is complicated by pseudo-register sharing before reload)
o trying to rewrite e.g. gcse after reload and seeing what really happens

Now let me say where this is headed. Many users skipped the 4.0 release. 4.1 was a good release. IMHO 4.2 is probably another candidate for skipping. I don't want to make 4.3 one more such candidate because of the new DF infrastructure. I think producing two releases that should be avoided is a luxury we cannot afford. Changing the DF without providing the old path means that some ports will probably be broken. Including an infrastructure which just makes the compiler 5% slower is not reasonable either. I think such a change should be included in the mainline only right after the transition from stage 3 to stage 1. People would then have more time to fix the broken ports, and maybe to write something which shows the potential of the new infrastructure.

I think it is too late to include the new DF in the mainline, and I think achieving the merge criteria will take even more time. I ask the steering committee to reconsider its decision to include the DF infrastructure in 4.3. It should be done as the first step of stage 1 for the 4.4 release, of course only if they achieve the merge criteria.