4.0 branched with critical flaws that were not noticed until 4.2.0, which
is why we ended up with the missed-optimization regression in the first place.

So the question is: do we want to correct the regressions or not?  Because
right now we sound like we don't.  Which regression is more important,
wrong code or a missed optimization?  I still say wrong code.  I know other
people will disagree with me, but guess what, they can disagree with me all
they want; they will never win.  This is the same thing with rejecting
legal code and compile-time regressions (I remember a patch Apple complained
about because it caused a compile-time regression but it fixed a rejects-legal
bug, and even recently there was one with an ICE with -g and a compile-time
regression, though it does not affect C++).

Sure. This is one argument: we want correctness. Yipeee!!! There's no real arguing this point. It looks like 4.2.x and 4.1.x, as they stand now, can be made to be about the same when it comes to both correctness and (perhaps) codegen.

However, you keep dropping the compile-time regression issue: is this on purpose? It's more than just missed optimizations here. If 4.2.0 is 10% slower than 4.1 and 4.3, then what? (In fact, I'm seeing 4.2.x as 22% slower for C++ vs. mainline when running the libstdc++ testsuite. Fixing this without merging the Diego/Andrew mem-ssa patches seems impossible. Vlad has also posted compile-time results that are quite poor.) GCC release procedures have historically been toothless when it comes to compile-time issues, which rank below even SPEC scores. Yet this is one of the things that GCC users evaluate in production settings, and the current evaluation of 4.2.x in this area is going to be pretty poor. (IMHO unusably poor.)

I think another issue, which Vlad and Paolo pick up and I now echo, is setting a good break point for major new infrastructure. Historically, GCC development has prioritized time-based schedules (which then slip) over feature-based schedules (which might actually motivate the principals to do the heavy lifting required to get to release shape: see the C++ ABI in 3.0.x-3.2.x and 3.4.x, Fortran and SSA in 4.0, or Java in 4.1.x). It seems to me that df (and LTO?) could be this for what-might-be 4.3.0. It's worth considering this, and experimenting with "release theory and practice."

As Kaveh said, all compilers ship with a mix of bugs and features. There are some compelling new features in 4.2.0, and some issues. After evaluating all the above, I think re-branching may be the best bet for an overall higher-quality 4.2.x release series than 4.1.x.

Obviously, others disagree.

I still say 4.2.0 is better at a lot of things, and the aliasing regression
does not hurt that much.  I wonder what the result on PowerPC is between the
two revisions that cause the regression on x86.  If it is better there, then
I say this is really an RA issue, and I say call it a day and just release
4.2.0 as is.

We really need a new RA if x86 is getting worse while the rest of the targets
are getting better.

Don't hold your breath. I think Mark asked for input on the near-term possible.

-benjamin
