Hi all,

June 2015 retrospective doc: <https://docs.google.com/document/d/1HOW8tSPkizm2uUidwKxF1n7Lg2hmE6H2Goob1fOI9Xw/edit#>
It's time for the June retrospective. It looks like there were three releases in June (2.0.16, 2.1.6, and 2.1.7), with 90 issues listed across them. Hopefully everything is annotated correctly.

I want to take another shot at further removing myself as a bottleneck in the retrospective process. The goal at the end of each retrospective is to have transformed every issue we had into the solution (or next action) for that issue. I think that boils down to:

- Performance regression: add it to the performance harness doc <https://docs.google.com/document/d/1TMdJ7-y-hKQwhPRFYL0VXf0R53MsF4QmhZmwbT8wpE0/edit>
- Correctness regression: file a JIRA or add it to the Cassandra validation harness doc <https://docs.google.com/document/d/1kccPqxEAoYQpT0gXnp20MYQUDmjOrakAeQhf6vkqjGo/edit#heading=h.zd5nw0kl2ypi>
- Note in the retrospective what went wrong and what needed to be done differently at implementation/code review time

For example, CASSANDRA-9592 <https://issues.apache.org/jira/browse/CASSANDRA-9592> shows a scenario that could easily be detected by the performance harness or the C* validation harness; one or the other should be made to produce the scenario where this can happen and to detect that it does happen. So you would pick one of the two docs and put it in there. If there is a dtest that could also test for this, you would create a JIRA instead and link it to CASSANDRA-9012 <https://issues.apache.org/jira/browse/CASSANDRA-9012>.

By linking these issues to their solutions we are also making it easier to see how much not having a given piece of test functionality is costing, which will increase bang for the buck post-3.0 when we go to decide what to work on.

Thanks,
Ariel