> Perhaps it is also a good opportunity to distinguish subsets of tests which make sense to run with a configuration matrix.
Agree. I think we should define a “standard/golden” configuration for each branch and minimally require precommit tests for that configuration. Assignees and reviewers can determine whether additional test variants are required based on the patch scope. Nightly and prerelease tests can be run to catch any issues outside the standard configuration, based on the supported configuration matrix.

On Wed, 14 Feb 2024 at 15:32 Jacek Lewandowski <lewandowski.ja...@gmail.com> wrote:

> On Wed, 14 Feb 2024 at 17:30 Josh McKenzie <jmcken...@apache.org> wrote:
>
>> When we have failing tests people do not spend the time to figure out if
>> their logic caused a regression and merge, making things more unstable… so
>> when we merge failing tests that leads to people merging even more failing
>> tests...
>>
>> What's the counter position to this Jacek / Berenguer?
>
> For how long are we going to deceive ourselves? Are we shipping those
> features or not? Perhaps it is also a good opportunity to distinguish
> subsets of tests which make sense to run with a configuration matrix.
>
> If we don't add those tests to the pre-commit pipeline, "people do not
> spend the time to figure out if their logic caused a regression and merge,
> making things more unstable…"
>
> I think it is much more valuable to test those various configurations
> rather than test against j11 and j17 separately. I can see very little
> value in doing that.
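To make the distinction concrete, here is a rough sketch of how a per-branch "golden" configuration and the wider supported matrix could be separated by pipeline stage. This is only an illustration in Python with entirely hypothetical branch names, JDK versions, and option names; it is not the project's actual CI definition.

# Illustrative sketch only -- hypothetical branches, versions, and options,
# not Cassandra's real CI configuration.
# Idea: each branch has one "golden" configuration required pre-commit,
# while nightly/prerelease runs cover the full supported matrix.

GOLDEN = {
    "branch-a": {"jdk": 11, "vnodes": True, "compression": True},
    "trunk":    {"jdk": 17, "vnodes": True, "compression": True},
}

FULL_MATRIX = {
    "branch-a": [
        {"jdk": 8,  "vnodes": False, "compression": False},
        {"jdk": 11, "vnodes": True,  "compression": True},
    ],
    "trunk": [
        {"jdk": 11, "vnodes": False, "compression": True},
        {"jdk": 17, "vnodes": True,  "compression": True},
        {"jdk": 17, "vnodes": True,  "compression": False},
    ],
}

def configurations_for(branch, stage):
    """Return the test configurations to run for a branch at a given stage."""
    if stage == "precommit":
        # Minimal requirement: only the branch's golden configuration.
        # Assignees/reviewers may add variants based on patch scope.
        return [GOLDEN[branch]]
    # Nightly / prerelease: the whole supported matrix for the branch.
    return FULL_MATRIX[branch]

The point of the sketch is only that pre-commit cost stays constant per branch, while coverage of the remaining matrix is deferred to scheduled runs.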