On 2/21/07, Benjamin Kosnik <[EMAIL PROTECTED]> wrote:
> > 4.0 branched with critical flaws that were not noticed until 4.2.0, which is why we end up with the missed-optimization regression in the first place.
> >
> > So the question is: do we want to correct the regressions or not? Because right now we sound like we don't. Which regression is more important, wrong code or missed optimization? I still say wrong code. I know other people will disagree with me, but they can disagree with me all they want; they will never win. This is the same thing with rejecting legal code and compile-time regressions (I remember a patch Apple complained about because it caused a compile-time regression but it fixed a reject-legal bug, and even recently an ICE with -g and a compile-time regression, though it does not affect C++).
>
> Sure. This is one argument: we want correctness. Yipeee!!! There's no real arguing this point. It looks like 4.2.x and 4.1.x, as they stand now, can be made to be about the same when it comes to both correctness and (perhaps) codegen.
>
> However, you keep dropping the compile-time regression issue: is this on purpose? It's more than just missed optimizations here. If 4.2.0 is 10% slower than 4.1 and 4.3, then what? (In fact, I'm seeing 4.2.x as 22% slower for C++ vs. mainline when running the libstdc++ testsuite. Fixing this without merging the Diego/Andrew mem-ssa patches seems impossible. Vlad has also posted compile-time results that are quite poor.)
>
> GCC release procedures have historically been toothless when it comes to compile-time issues, which rank below even SPEC scores. However, this is one of the things that GCC users evaluate in production settings, and the current evaluation of 4.2.x in this area is going to be pretty poor. (IMHO unusably poor.)

It's not only compile time; unfortunately, it's also memory usage.

> I think that another issue, which Vlad and Paolo picked up and I now echo, is setting a good break point for major new infrastructure. Historically, GCC development has prioritized time-based schedules (which then slip) instead of feature-based schedules (which might actually motivate the principals to do the heavy lifting required to get to release shape: see the C++ ABI in 3.0.x-3.2.x and 3.4.x, Fortran and SSA in 4.0, or, say, Java in 4.1.x). It seems to me that df (and LTO?) could be this for what-might-be 4.3.0. It's worth considering this, and experimenting with "release theory and practice."
>
> As Kaveh said, all compilers ship with a mix of bugs and features. There are some compelling new features in 4.2.0, and some issues. After evaluating all of the above, I think re-branching may be the best bet for an overall higher-quality 4.2.x release series than 4.1.x.

I believe re-branching will not make 4.2.x better; it will only use up scarce resources. We have already piled up so much new stuff for 4.3 that it will be hard to stabilize. So either we go with 4.2.0 as it is now, with all its regressions but possibly critically better correctness, or we go back to the incorrectness of 4.1.x and fix some of the regressions. Re-branching is the worst thing we can do, I believe.

> Obviously, others disagree.

Indeed ;)

Richard.