> choose a consistent, representative subset of stable tests that we feel
> gives us a reasonable level of confidence in return for a reasonable
> amount of runtime
>
> ...
> Currently a dtest is being run in j8 w/wo vnodes, j8/j11 w/wo vnodes, and
> j11 w/wo vnodes. That is 6 times total. I wonder about that ROI.
Perhaps pre-commit checks should mostly cover the typical configuration
of Cassandra rather than some subset of possible combinations. As was
said somewhere above - test with the default number of vnodes, test with
the default compression settings, and test with the default heap/off-heap
buffer settings.
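[Editor's note: to make that "defaults at pre-commit" idea concrete, here is a
minimal Python/pytest-style sketch. The marker name and the commands are
assumptions for illustration, not anything the project defines today.]

    # Minimal sketch (hypothetical marker name and commands).
    # Tests exercising non-default configuration get a marker, and the
    # pre-commit run deselects them; post-commit runs everything.
    import pytest

    @pytest.mark.nondefault_config
    def test_write_path_256_vnodes():
        ...  # custom vnode count: post-commit only

    def test_write_path_defaults():
        ...  # default vnodes, compression, heap/off-heap buffers

    # pre-commit:  pytest -m "not nondefault_config"
    # post-commit: pytest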
Currently a dtest is being run in j8 w/wo vnodes, j8/j11 w/wo vnodes,
and j11 w/wo vnodes. That is 6 times total. I wonder about that ROI.
On dtest cluster reuse: yes, I stopped that because at the time we had
lots of CI changes, an upcoming release, and other priorities. But when
the CI starts flexing…
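[Editor's note: the arithmetic behind that ROI concern is easy to sketch in
Python. The reduction shown (dropping the no-vnodes runs at pre-commit) is an
assumed example, not a proposal made in the thread.]

    # Sketch of the 6-way dtest matrix described above.
    from itertools import product

    jdks = ["j8", "j8/j11", "j11"]      # build / upgrade paths
    vnodes = ["vnodes", "no-vnodes"]    # with / without vnodes

    full_matrix = list(product(jdks, vnodes))
    assert len(full_matrix) == 6        # every dtest runs 6 times

    # Hypothetical pre-commit reduction: keep only the vnodes runs.
    pre_commit = [(j, v) for (j, v) in full_matrix if v == "vnodes"]
    assert len(pre_commit) == 3         # full matrix stays post-commit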
Ultimately I think we have to invest in two directions: first, choose a
consistent, representative subset of stable tests that we feel gives us a
reasonable level of confidence in return for a reasonable amount of
runtime. Second, we need to invest in figuring out why certain tests fail.
I strongly…
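[Editor's note: one plausible way to derive such a representative subset is to
rank tests by historical pass rate and keep only those above a stability
threshold. A rough sketch; the data shape and the 0.99 cutoff are assumptions.]

    # history: test id -> pass/fail outcomes from recent CI runs (assumed shape).
    def stable_subset(history, threshold=0.99):
        subset = []
        for test, results in history.items():
            if sum(results) / len(results) >= threshold:
                subset.append(test)
        return sorted(subset)

    history = {
        "test_repair": [True] * 100,                  # stable: kept
        "test_bootstrap": [True] * 97 + [False] * 3,  # flaky: excluded
    }
    print(stable_subset(history))                     # ['test_repair']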
> Instead of running all the tests through available CI agents every time we
> can have presets of tests:
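[Editor's note: named presets could be as simple as a mapping from trigger to
test suites; the preset and suite names below are illustrative only.]

    # Hypothetical presets: which suites run for which trigger.
    PRESETS = {
        "pre-commit":  ["unit", "jvm-dtest"],
        "post-commit": ["unit", "jvm-dtest", "dtest"],
        "pre-release": ["unit", "jvm-dtest", "dtest", "upgrade"],
    }

    def suites_for(trigger):
        # Fall back to the cheapest preset for unknown triggers.
        return PRESETS.get(trigger, PRESETS["pre-commit"])

    print(suites_for("post-commit"))   # ['unit', 'jvm-dtest', 'dtest']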
Back when I joined the project in 2014, unit tests took ~5 minutes to run
on a local machine. We had the pre-commit vs. post-commit distinction as
well, but also had flakes in the pr…
For me, the biggest benefit of keeping the build scripts and CI
configurations in the same project as well is that these files are
versioned in the same way as the main sources. This ensures that we can
build past releases without any annoying errors in the scripts, so I
would say that th…
> Not everyone will have access to such resources; if all you have is one
> such pod you'll be waiting a long time (in theory one month, and you
> actually need a few bigger pods for some of the more extensive tests,
> e.g. large upgrade tests)…
One thing worth calling out: I believe we have…
>
> - There are hw constraints; is there any approximation of how long it
> will take to run all tests? Or is there a stated goal that we will
> strive to reach as a project?
>
> Have to defer to Mick on this; I don't think the changes outlined here
> will materially change the runtime on our current hardware.
All great questions I don't have answers to, Ekaterina. :) Thoughts though:
> - Currently we run at most two parallel CI runs in Jenkins-dev; I guess
> you will try to improve that limitation?
If we get to using cloud-based resources w/a budget for CI, instead of our
donated hardware, we could then…
Thank you, Josh and Mick
Immediate questions on my mind:
- Currently we run at most two parallel CI runs in Jenkins-dev; I guess
you will try to improve that limitation?
- There are hw constraints; is there any approximation of how long it
will take to run all tests? Or is there a stated goal that we will
strive to reach as a project?
Thanks Josh, this looks great! I think the constraints you've outlined are
reasonable for an initial attempt. We can always evolve if we run into
issues.
Cheers,
Derek
On Fri, Jun 30, 2023 at 11:19 AM Josh McKenzie wrote:
> Context: we're looking to get away from having split CircleCI and ASF…