Re: [DISCUSS] Revising our release criteria, commit guidelines, and the role of circleci vs. ASF CI
> 250 iterations isn't enough; I use 500 as a low water mark.

I agree that 500 iterations would be a reasonable minimum. We have seen flaky unit tests requiring far more iterations, but that's not very common. We could use 500 iterations as the default and, at our discretion, use a higher limit for tests that are quick and might be prone to concurrency issues. I can change the defaults in the CircleCI config file if we agree on a new limit; the current default of 100 iterations is quite arbitrary.

The test multiplexer allows running either individual test methods or entire classes. It is quite common to see test methods that pass individually but fail when run together with the other tests in the same class. Because of this, I think we should always run entire classes when repeating new or modified tests. The only exception would be Python dtests, which are usually more resource-intensive and less prone to that type of issue.

> For CI on a patch, run the pre-commit suite and also run multiplexer with
> 250 runs on new, changed, or related tests to ensure not flaky

The multiplexer only allows running a single test class per push. This is fine for fixing existing flakies (its original purpose) and for most minor changes, but it can be quite inconvenient for large patches that add or modify many tests. For example, the patch for CEP-19 directly modifies 31 test classes, which means 31 CircleCI config pushes. That number can be somewhat reduced with wildcards on the class names, but the process is still quite inconvenient. I expect other large patches will hit the same problem. I plan to modify the multiplexer to allow specifying a list of classes per test target, so we don't have to needlessly suffer through this.

On Mon, 26 Sept 2022 at 22:44, Brandon Williams wrote:
> On Mon, Sep 26, 2022 at 1:31 PM Josh McKenzie wrote:
> > > 250 iterations isn't enough; I use 500 as a low water mark.
> >
> > Say more here. I originally had it at 500 but neither Mick nor I knew
> > why and figured we could suss this out on this thread.
>
> I've seen flakies that passed with less later exhibit at that point.
>
> > > This is also assuming that circle and ASF CI run the same tests, which
> > > is not entirely true.
> >
> > +1: we need to fix this. My intuition is the path to getting circle-ci
> > in parity on coverage is a shorter path than getting ASF CI to 3 green
> > runs for GA. That consistent w/ your perception as well or do you disagree?
>
> I agree that bringing parity to the coverage will be the shorter path.
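For reference, a repeated run today means regenerating the config with the repeat variables set and pushing the result, roughly like this (option and variable names follow the in-tree .circleci/readme.md of this era from memory, and the class name is illustrative; check your branch before relying on the exact spelling):

    # Multiplex a single test class 500 times on the unit test target,
    # then push the regenerated config to trigger the CircleCI run.
    .circleci/generate.sh \
      -e REPEATED_UTEST_TARGET=testsome \
      -e REPEATED_UTEST_CLASS=org.apache.cassandra.cql3.ViewTest \
      -e REPEATED_UTEST_COUNT=500
    git commit -am "multiplex ViewTest" && git push <your-fork> <your-branch>

One push per class is exactly the limitation described above: 31 touched classes means repeating this cycle 31 times.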
[VOTE] Release Apache Cassandra 4.1-beta1
Proposing the test build of Cassandra 4.1-beta1 for release.

sha1: 5d9d93ea08d9c76402aa1d14bad54bf9ec875686
Git: https://gitbox.apache.org/repos/asf?p=cassandra.git;a=shortlog;h=refs/tags/4.1-beta1-tentative
Maven Artifacts: https://repository.apache.org/content/repositories/orgapachecassandra-1276/org/apache/cassandra/cassandra-all/4.1-beta1/

The Source and Build Artifacts, and the Debian and RPM packages and repositories, are available here: https://dist.apache.org/repos/dist/dev/cassandra/4.1-beta1/

The vote will be open for 72 hours (longer if needed). Everyone who has tested the build is invited to vote. Votes by PMC members are considered binding. A vote passes if there are at least three binding +1s and no -1s.

[1]: CHANGES.txt: https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=CHANGES.txt;hb=refs/tags/4.1-beta1-tentative
[2]: NEWS.txt: https://gitbox.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=NEWS.txt;hb=refs/tags/4.1-beta1-tentative
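For anyone testing the build before voting, a minimal verification sketch (the source-artifact file name is assumed from the usual naming convention; some Apache .sha512 files carry a bare hash without a file name column, in which case compare the digests by eye rather than with -c):

    # Import the project's release keys, then check signature and digest.
    curl -s https://downloads.apache.org/cassandra/KEYS | gpg --import
    wget https://dist.apache.org/repos/dist/dev/cassandra/4.1-beta1/apache-cassandra-4.1-beta1-src.tar.gz{,.asc,.sha512}
    gpg --verify apache-cassandra-4.1-beta1-src.tar.gz.asc apache-cassandra-4.1-beta1-src.tar.gz
    sha512sum -c apache-cassandra-4.1-beta1-src.tar.gz.sha512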
Re: [DISCUSS] Revising our release criteria, commit guidelines, and the role of circleci vs. ASF CI
> I have plans on modifying the multiplexer to allow specifying a list of
> classes per test target, so we don't have to needlessly suffer with this

This sounds integral to us multiplexing tests on large diffs, whether we go with circle for releases or not, and would be a great addition!

On Tue, Sep 27, 2022, at 6:19 AM, Andrés de la Peña wrote:
> I agree that 500 iterations would be a reasonable minimum. [...]
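For concreteness, the proposed list-of-classes interface might look something like the following. This is purely hypothetical: a REPEATED_UTEST_CLASSES variable does not exist today, and the single-class REPEATED_UTEST_CLASS is the current behavior.

    # Hypothetical: one config push that multiplexes several classes at once.
    .circleci/generate.sh \
      -e REPEATED_UTEST_TARGET=testsome \
      -e REPEATED_UTEST_CLASSES=org.apache.cassandra.cql3.ViewTest,org.apache.cassandra.db.ReadCommandTest \
      -e REPEATED_UTEST_COUNT=500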
Re: [DISCUSS] Revising our release criteria, commit guidelines, and the role of circleci vs. ASF CI
> I have plans on modifying the multiplexer to allow specifying a list of
> classes per test target, so we don't have to needlessly suffer with this

That would be great; I was thinking of that the other day too. With that said, I'll be happy to support you in that effort too :-)

On Tue, 27 Sep 2022 at 11:18, Josh McKenzie wrote:
> This sounds integral to us multiplexing tests on large diffs whether we
> go with circle for releases or not and would be a great addition! [...]
Re: [VOTE] Release Apache Cassandra 4.1-beta1
We have a couple of new test failures that came up as regressions recently:

https://issues.apache.org/jira/browse/CASSANDRA-17928
https://issues.apache.org/jira/browse/CASSANDRA-17927

Along with that, I'm not sure we're in a good spot from a "no flaky test" perspective per butler (https://butler.cassandra.apache.org/#/ci/upstream/compare/Cassandra-4.1/cassandra-4.1) and our release lifecycle criteria (https://cwiki.apache.org/confluence/display/CASSANDRA/Release+Lifecycle):

"No flaky tests - All tests (Unit Tests and DTests) should pass consistently. A failing test, upon analyzing the root cause of failure, may be “ignored in exceptional cases”, if appropriate, for the release, after discussion in the dev mailing list."

Those three caveats aside, we have some threads in flight right now about whether we use circle or ASF CI to determine when to release, and if the above two tests are fixed prior to GA I'm fine with it. I just wanted to call out the current disconnects between the state of CI and the branch and our agreed-upon release process.

On Tue, Sep 27, 2022, at 9:13 AM, Mick Semb Wever wrote:
> Proposing the test build of Cassandra 4.1-beta1 for release. [...]
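A quick way to gauge whether one of these failures is flaky rather than a hard regression is to loop the suspect class locally; a rough sketch, assuming the in-tree testsome ant target, which takes a fully qualified class via -Dtest.name (the class name here is illustrative):

    # Repeat a single test class up to 500 times, stopping at the first failure.
    for i in $(seq 1 500); do
      ant testsome -Dtest.name=org.apache.cassandra.db.ReadCommandTest \
        || { echo "failed on run $i"; break; }
    done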
Who will be at ApacheCON next week? (please let me know off-list)
Could you please reply to me directly (off-list) if you will be in New Orleans next week?

While we already have a list of the speakers, we would love to know in advance who else in our dev community will be attending. (If you're reading this, you're in our dev community.)

regards,
Mick
Re: [DISCUSS] Revising our release criteria, commit guidelines, and the role of circleci vs. ASF CI
So:

1. 500 iterations on the multiplexer
2. Augment generate.sh to allow providing multiple class names and generating a single config that'll multiplex all the tests provided
3. Test parity / pre-release config added on CircleCI (see https://issues.apache.org/jira/browse/CASSANDRA-17930), specifically dtest-large, dtest-offheap, test-large-novnode (see the sketch after this message)

If we get the above three, are we at a place where we're good to consider vetting releases on CircleCI for beta / rc / GA?

On Tue, Sep 27, 2022, at 11:28 AM, Ekaterina Dimitrova wrote:
> That would be great; I was thinking of that the other day too. With that
> said, I'll be happy to support you in that effort too :-) [...]
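Point 3 might end up looking something like the following. Everything here is hypothetical: how CASSANDRA-17930 wires in the missing suites is still open, and the PRE_RELEASE_SUITES-style switch is illustrative, not an existing interface.

    # Hypothetical: regenerate a pre-release config that also includes the
    # suites currently missing from circle relative to ASF CI.
    .circleci/generate.sh \
      -e PRE_RELEASE_SUITES=dtest-large,dtest-offheap,test-large-novnode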
Re: [DISCUSS] Revising our release criteria, commit guidelines, and the role of circleci vs. ASF CI
I would add an option for generate.sh to detect all changed *Test.java files; that would be handy, IMO. (Something along the lines of the sketch below.)

On 28/9/22 4:29, Josh McKenzie wrote:
> So:
> 1. 500 iterations on multiplexer
> 2. Augmenting generate.sh to allow providing multiple class names and
> generating a single config that'll multiplex all the tests provided
> 3. Test parity / pre-release config added on circleci (see
> https://issues.apache.org/jira/browse/CASSANDRA-17930), specifically
> dtest-large, dtest-offheap, test-large-novnode
>
> If we get the above 3, are we at a place where we're good to consider
> vetting releases on circleci for beta / rc / ga? [...]
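Something along these lines would do it; a rough sketch, assuming the list-of-classes support discussed above (the hypothetical REPEATED_UTEST_CLASSES variable) and simplifying the file-path-to-class-name conversion:

    # Hypothetical: multiplex every test class touched relative to trunk.
    CLASSES=$(git diff --name-only trunk... -- test/ \
      | grep 'Test\.java$' \
      | sed -e 's|^test/[^/]*/||' -e 's|/|.|g' -e 's|\.java$||' \
      | paste -sd, -)
    .circleci/generate.sh \
      -e REPEATED_UTEST_TARGET=testsome \
      -e REPEATED_UTEST_CLASSES="$CLASSES" \
      -e REPEATED_UTEST_COUNT=500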