On Thu, May 26, 2016 at 3:30 AM, Chet Ramey <chet.ra...@case.edu> wrote:
On 5/25/16 12:00 AM, konsolebox wrote:
Bash seems to have gone through a lot of changes/bugs lately
so it would be nice ...[to be]... sure it's stable for that major version.
There will be at least one, possibly two, more release
candidates before final release.
----
Does the bash project use (or has it considered using)
a testing-based release cycle? It's not used much these days,
when ship dates are often given priority over quality (usually driven
by marketing-like factors). Basically, the decision of when to ship
looks at the number of new bugs found per time period (with
important ones being blockers to release).
Say one uses a time period of 2 weeks. During development one
counts the number of bugs reported in each 2-week period. Say
5 bugs/week are reported during the first several weeks. The
actual number is likely proportional to the language and size of
the project. The number might go up or down in any given period,
but what matters is the **trend**. At some point it drops to,
say, 1-2 bugs found per period and stays down. (One might use a
week as the period, but if a week spans a holiday, the results
can be artificially reduced.)
If it looks like the overall trend has dropped to 1-2 bugs/week,
and it stays there (with no new features being added), then one
uses the number of new bugs found per time period as an important
metric in deciding whether the code is becoming stable enough to
issue release candidates and an eventual release.
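The criterion described above can be sketched in a few lines of Python. The report dates, the 2-bug threshold, and the "two consecutive quiet periods" rule are all hypothetical choices made up for illustration, not anything the bash project actually uses:

```python
from datetime import date, timedelta

# Hypothetical bug-report dates collected during one release cycle.
reports = [date(2016, 3, 1) + timedelta(days=d)
           for d in (0, 2, 3, 5, 9, 11, 16, 20, 33, 47, 61)]

def bugs_per_period(reports, start, period_days=14):
    """Count bug reports falling in consecutive fixed-length periods."""
    counts = []
    end = max(reports)
    while start <= end:
        nxt = start + timedelta(days=period_days)
        counts.append(sum(start <= r < nxt for r in reports))
        start = nxt
    return counts

def ready_for_rc(counts, threshold=2, sustained_periods=2):
    """Ready for a release candidate if the last N periods all stayed
    at or below the threshold (i.e. the trend has dropped and held)."""
    tail = counts[-sustained_periods:]
    return len(tail) == sustained_periods and all(c <= threshold for c in tail)

counts = bugs_per_period(reports, date(2016, 3, 1))
print(counts)                 # e.g. [6, 2, 1, 1, 1] for the data above
print(ready_for_rc(counts))   # True: the trend dropped and stayed down
```

Graphing `counts` over time (the graphing suggested below) is then just a matter of plotting one point per period and eyeballing the downward trend.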
It's not the only metric one would use, but at least it provides
"semi-objective" input about the code's readiness to be released.
It's usually most useful when the data can be graphed, so overall
trends are easier to see.
Curious how many projects use such metrics in deciding to
release...
-l