I've spent some time today looking at GCC 4.2. I've heard various comments about whether it's worth doing a 4.2 release at all. For example:
[Option 1] Instead of 4.2, we should backport some functionality from 4.2 to the 4.1 branch, and call that 4.2.

[Option 2] Instead of 4.2, we should skip 4.2, stabilize 4.3, and call that 4.2.

[Option 3] Like (2), but create (the new) 4.2 branch before merging the dataflow code.

One of the key points behind these suggestions is that Red Hat and Novell plan to skip to 4.3 for their next releases, so we'll have a hard time getting volunteers for stabilization of 4.2.0. Another comment is that the aliasing fixes on the 4.2 branch mean that 4.2's performance on SPEC is likely to be inferior to that of the 4.1 releases. (I'd like to see a 4.1.2 vs. 4.2.0 comparison for SPEC so that we can evaluate that more accurately.)

I've had a look at the state of 4.2.0, from Bugzilla, and observed the following things:

1. There are 133 P3 and higher PRs, of which about 25 are P1s.

2. Virtually all of the PRs are also in at least one of 4.1 or 4.3 -- and most are in both.

A consequence of (2) is that -- from a correctness point of view -- there isn't all that much to prevent us from releasing 4.2.0 forthwith. By hypothesis, 4.1 is satisfactory (it is shipping with major GNU/Linux distributions and is widely used throughout the GCC community), so problems that existed in 4.1 must be survivable. I count 7 P1s that are new in 4.2, and of those, all but two are also in 4.3. So, fixing the 4.2 P1s now just means less work on 4.3 in the future.

GCC 4.2.0 also has some good new features that are not part of FSF 4.1. OpenMP is the most obvious of these (a small example appears at the end of this message), but there is also support for new CPUs and, as always, many bug fixes.

Considering the options above:

* I think [Option 3] is unfair to Kenny, Seongbae, and others who have worked on the dataflow code. The SC set criteria for that merge and a timeline for doing it, and I believe that the dataflow code has met, or has nearly met, those criteria. We should not force the dataflow folks to maintain that code on a branch any longer.

* I think [Option 1] is not terribly productive. I'm not aware of anything in 4.2 that's bad, per se, with the possible exception of the performance regression from the aliasing changes. And we can undo those by reverting Danny's patch. So, to a first approximation, we can have the performance of 4.1 with the bugs of 4.1. If those bugs trigger more often in 4.2, then we can change things so they don't.

* I think [Option 2] lengthens the time between releases (which several people have recently told me is too long, although other people have in the past told me it was too short...), but doesn't save much effort. The minimal way of getting to 4.2.0 is to fix the P1s common to 4.2 and 4.3, make a decision about the aliasing safety patches, and declare victory.

Also, I know of several operating system distributors who plan to ship GCC 4.2.0. Although they do not directly contribute to GCC in the same way that Red Hat and Novell do, they still provide support for GCC. I think it would set a bad precedent to pull the plug on the 4.2.0 release after having created the branch, as it's reasonable for our entire user base to rely on the branch as a commitment to produce a release.

So, my feeling is that the best course of action is to set a relatively low threshold for GCC 4.2.0 and target 4.2.0 RC1 soon: say, March 10th. Then, we'll have a 4.2.0 release by (worst case, and allowing for lameness on my part) March 31.

Feedback and alternative suggestions are welcome, of course.
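As a small illustration of what the OpenMP support enables (a minimal sketch of my own, not code from the branch; compile with "gcc -fopenmp"):

    #include <stdio.h>

    int main (void)
    {
      int i, a[100];

      /* With GCC 4.2's new -fopenmp flag, the iterations of this
         loop are distributed across threads; without the flag, the
         pragma is ignored and the loop runs serially.  */
    #pragma omp parallel for
      for (i = 0; i < 100; i++)
        a[i] = 2 * i;

      printf ("a[99] = %d\n", a[99]);
      return 0;
    }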
Thanks,

--
Mark Mitchell
CodeSourcery
[EMAIL PROTECTED]
(650) 331-3385 x713