Here are a few things that are true for me or I believe are true in general:

   - Our test suite is more flaky than we'd like it to be
   - I don't believe that adding more unit tests that follow existing
   patterns buys us much. I'd rather see something similar to what some
   folks are doing with Membership right now, where we isolate the code and
   test it more systematically
   - We have other testing gaps: We have benchmarks 👏🎉, but we are still
   lacking coverage in that area; our community is still lacking HA tests. I'd
   rather fill those gaps than bring back old DUnit tests that were chosen
   somewhat at random.
   - I'd rather be deliberate about what tests we introduce than wholesale
   bring back a set of tests, since any of these re-introduced tests has the
   potential to be flaky. Let's make sure our tests carry their weight.
   - If we delete these tests, we can always go back to a SHA from today
   and bring them back at a later date
   - These tests have been ignored for a very long time; we've shipped
   without them, and nobody has missed them enough to bring them back.

Given all the above, my vote is for less noise in our code, which means
deleting all ignored tests. If we want to keep them, I'd love to hear a
plan of action on how we bring them back. Having a bunch of dead code helps
nobody.

On Tue, Dec 31, 2019 at 1:50 PM Mark Hanson <mhan...@pivotal.io> wrote:

> Hi All,
>
> As part of what I am doing to fix flaky tests, I periodically come across
> tests that are @Ignore’d. I am curious what we would like to do with them
> generally speaking. We could fix them, which would seem obvious, but we are
> struggling to fix flaky tests as it is.  We could delete them, but those
> tests were written for a reason (I hope).  Or we could leave them, but that
> pollutes searches, etc., since inactive code still requires at least some upkeep.
>
> I don’t have an easy answer. Some have suggested deleting them. I tend to
> lean that direction, but I thought I would consult the community for a
> broader perspective.
>
> Thanks,
> Mark
