Re: Fwd: [DISCUSS] Formalizing requirements for pre-commit patches on new CI
> A strong +1 to getting to a single CI system. CircleCI definitely has some
> niceties and I understand why it's currently used, but right now we get 2
> CI systems for twice the price. +1 on the proposed subsets.

That's not entirely true: it provides value in "double accounting", and that has caught a number of serious bugs over time. We need to keep this in mind as we go back to one CI foundation, so we can't let bugs slip through because of blind spots.

That said, the way CircleCI was implemented, not re-using any of the existing scripts and even using entirely different ant and JVM invocations, hurts me to this day. Ultimately I have no objection to having multiple CI systems in use, so long as they share a common foundation. What's now found under .build/ is intended to be that foundation.

I encourage everyone to start running tests locally, e.g. `.build/run-test.sh test my_test` (or with docker: `.build/docker/run-test.sh test my_test 11`). This approach does not build for you, so turn-around is fast, and it does the test environment setup for the different test types. I'm working next on switching ci-cassandra.a.o over to use these scripts in trunk (CASSANDRA-18665).
Re: Fwd: [DISCUSS] Formalizing requirements for pre-commit patches on new CI
On Wed, 12 Jul 2023 at 15:05, Jacek Lewandowski wrote:

> I believe some tools can determine which tests make sense to multiplex,
> given that some exact lines of code were changed, using code coverage
> analysis. After the initial run, we should have data from the coverage
> analysis, which would tell us which test classes are tainted - that is,
> they cover the modified code fragments.
>
> Using a similar approach, we could detect the coverage differences when
> running, say, with/without compression, and discover the tests which cover
> those parts of the code.
>
> That way, we can be smart and save time by precisely pointing to what it
> makes sense to test more accurately.

This. We already have the beginnings of this in CircleCI for deciding what to repeat-run. If we have a baseline pre-commit pipeline that's quite minimal (the default JDK and config only), then such a pre-detection script can determine what additional test matrix, as well as which repeated tests, are required. It can also be used to determine which test suites don't need to be run at all.

Adding a results-validation script, we could re-use this to determine whether the aggregated JUnit XML results file from a pre-commit CI run contained all the expected test runs for the patch in question.

Doing this, going from a script generating "these are the tests you need to run for this patch" to "run that list as one pipeline" using the in-tree scripts, becomes very easy.

Not related, but in general I keep going back to this for inspiration: https://yetus.apache.org/documentation/in-progress/precommit/
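The tainted-test detection described above can be sketched roughly as follows. This is a hypothetical illustration, not any actual in-tree script: the data shapes (a per-test coverage map of test class to covered source files, and a list of files changed by the patch) and all names are assumptions for the example.

```python
# Hypothetical sketch of coverage-based test selection: given per-test
# coverage data (test class -> set of source files it covers) and the
# files a patch changes, return the "tainted" test classes, i.e. those
# whose coverage touches any changed file. All names/paths are illustrative.

def tainted_tests(coverage, changed_files):
    """Return the sorted test classes covering at least one changed file."""
    changed = set(changed_files)
    return sorted(
        test for test, covered in coverage.items()
        if covered & changed  # non-empty intersection => test is tainted
    )

# Example coverage map (in practice this would come from a coverage
# analysis run, e.g. aggregated per-test-class report data).
coverage = {
    "CompactionTest":  {"db/compaction/CompactionManager.java"},
    "CompressionTest": {"io/compress/LZ4Compressor.java",
                        "io/compress/CompressionParams.java"},
    "GossipTest":      {"gms/Gossiper.java"},
}

print(tainted_tests(coverage, ["io/compress/LZ4Compressor.java"]))
# → ['CompressionTest']
```

The same intersection logic could drive both sides mentioned above: generating the "tests to run for this patch" list before the pipeline starts, and afterwards validating that the aggregated results contain a run for each test in that list.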
Apache Cassandra User Survey
It’s been a long time since I’ve asked the community for feedback in a poll or otherwise. A lot is changing in the data world, and we have an exciting Cassandra release coming up with v5! I would like to ask for five or ten minutes of your time to answer some questions about how you use Cassandra and how we are doing as a community.

Only two questions are required; the rest are all optional, so answer whatever you can. It’s all helpful information.

https://forms.gle/KVNd7UmUfcBuoNvF7

The survey will run until July 29, 2023. Once completed, the results will be anonymized and posted on http://cassandra.apache.org

Help spread the word by posting this invitation on social media, in Slack channels, or by emailing colleagues. The bigger the N, the better the survey! Here’s a sample to get you started:

I recently took the Apache Cassandra® 2023 survey, and I think you should too! By sharing your answers, you can help shape the future of the Cassandra project and contribute to the community. Your opinion matters! https://forms.gle/KVNd7UmUfcBuoNvF7

Patrick