Hi everyone,

As September approaches, I’ve been thinking about how we might share notes on 
testing and build health.

Some types of information I have in mind:

– Known issues with builds (e.g., master / nightlies) that impact the ability 
to test the project. These could include known-flaky tests, bugs impacting test 
automation, or specific features that aren’t functioning as designed. Some 
contributors may prefer to waive specific types of testing to prevent their 
automation from failing (a sketch of one way to do this follows this list); 
others may prefer to continue more exhaustive testing of builds known not to 
be impacted by an issue.
– Tracking high-priority bugs that are known blockers, either for test 
automation or suitability of a release for dev / QA environments.
– Pre-release checklists and outstanding to-do items that need to be completed 
prior to cutting a build, and that aren’t necessarily best represented by a 
JIRA ticket.
– Performance test status – the characteristics of tests performed, hardware 
configurations, and improvements / regressions identified over time.
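On the flaky-test point above, here’s a minimal sketch of one way a contributor 
might waive a known-flaky JUnit test behind an opt-in system property. This 
isn’t existing Cassandra code – the class, test, and property names are all 
hypothetical:

    import org.junit.Assume;
    import org.junit.Test;

    // Hypothetical test class; "tests.skip.flaky" is an assumed property
    // name, not an existing project convention.
    public class CompactionStressTest
    {
        @Test
        public void testConcurrentCompaction()
        {
            // With -Dtests.skip.flaky=true, Assume reports this test as
            // skipped rather than failed, so automation stays green while
            // the underlying issue is tracked separately.
            Assume.assumeFalse("Known flaky; waived pending a fix",
                               Boolean.getBoolean("tests.skip.flaky"));

            // ... actual test body would go here ...
        }
    }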

Some of this information may change often as development and testing progress – 
as often as daily in some cases.

Here are a couple of examples of similar work in other projects:

–––––
– Hadoop’s “Release Status” docs in Confluence: 
https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8.0+Release

The Hadoop project uses these status docs to track release blockers, to-do 
items that need to be tracked ahead of release, and notes on scope as defined 
by the community.

– Hadoop’s “Road Map” docs in Confluence:
https://cwiki.apache.org/confluence/display/HADOOP/Roadmap

Similarly, the Hadoop project uses docs like this to track planned features for 
each release, along with feature freeze, code freeze, and planned release 
dates. These are useful for organizing development and signposting what’s 
planned to the user community.

– “Are We Fast Yet”: A project from Mozilla tracking JS engine performance
https://arewefastyet.com

Perf testing of nightlies across several platforms (and comparisons with other 
JavaScript engines), useful for identifying regressions quickly and tracking 
progress toward perf goals (a toy sketch of this kind of check follows below).
–––––
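And as a concrete illustration of the AWFY-style check above, here’s a toy 
sketch of how a perf-status page could flag a nightly whose throughput drops 
below a baseline. This isn’t from any existing tool – the threshold and 
numbers are illustrative assumptions:

    public final class PerfRegressionCheck
    {
        // Flag a run when throughput falls more than 5% below baseline;
        // the threshold is an assumption, not a project policy.
        private static final double REGRESSION_THRESHOLD = 0.05;

        public static boolean isRegression(double baselineOpsPerSec,
                                           double currentOpsPerSec)
        {
            return currentOpsPerSec < baselineOpsPerSec * (1.0 - REGRESSION_THRESHOLD);
        }

        public static void main(String[] args)
        {
            double baseline = 52_000.0; // e.g. trailing average of recent nightlies
            double current = 47_500.0;  // tonight’s run on the same hardware profile
            System.out.println("regression detected: " + isRegression(baseline, current));
        }
    }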

I’m curious to hear the dev community’s thoughts on how best to organize 
information like this. My thinking is that by having a space to share this, the 
community can be better informed about each other’s work toward testing, build 
health, and active projects.

The current Cassandra wiki (https://wiki.apache.org/cassandra) doesn’t appear 
too active and carries a warning. What do others think about filing an INFRA 
ticket to request a Confluence space at https://cwiki.apache.org for this type 
of information?

I’d be happy to help maintain information tracking build health, remaining 
to-dos, known test automation blockers, flaky tests, etc. as well.

Cheers,

– Scott
