@Uri

May I ask, what's the idea behind running only a random subset of tests? Wouldn't a Monte Carlo approach be highly unreliable, e.g. lure people into thinking everything is OK when in reality the random test selection simply did not hit the affected code paths? For tests it's all about reliability, isn't it? And 200 out of 6k tests sounds like a recipe for runs that come back green only because they missed the regression, especially if your test base is skewed towards features not affected by the current changes.
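To put a rough number on that worry (the "10 affected tests" figure below is just an assumption for illustration):

```python
from math import comb

# Back-of-the-envelope: suppose only 10 of the 6000 tests actually
# exercise the changed code path, and we pick 200 tests uniformly
# at random. Probability that NONE of those 10 tests is selected:
p_miss = comb(6000 - 10, 200) / comb(6000, 200)
print(f"{p_miss:.0%}")  # roughly 71% of runs would be green anyway
```

So under those assumptions, a clean run tells you very little.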

I think this could still work with better reliability / changed-code coverage if the abstraction is a bit more elaborate, e.g.:

- introduce grouping flags on tests, at module, class or even single-method scope
- on a test run, declare which flags should be tested (runs all tests with the given flags)
- alternatively, use flags reflecting your code changes plus your test counter on top, but now it selects from the flagged tests with a higher probability of running the affected tests
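A minimal sketch of that last variant (all names, the flag sets and the boost factor are made up for illustration; this is not an existing Django mechanism):

```python
import random

def select_tests(tagged_tests, affected_flags, k, boost=10.0):
    """Pick k tests at random, but make tests tagged with an
    affected flag `boost` times more likely to be chosen.

    tagged_tests: dict mapping test name -> set of flags
    affected_flags: set of flags matching the current code changes
    """
    def weight(flags):
        return boost if flags & affected_flags else 1.0

    # Efraimidis-Spirakis weighted sampling without replacement:
    # each test gets key u ** (1 / w); the k largest keys win.
    keyed = sorted(
        tagged_tests,
        key=lambda name: random.random() ** (1.0 / weight(tagged_tests[name])),
        reverse=True,
    )
    return keyed[:k]
```

With a large boost this degenerates into "run the flagged tests first, pad with random ones", while boost=1.0 gives plain uniform sampling, so the knob interpolates between the two behaviours.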

Ah well, just some quick thoughts on that...

Cheers,
Jörg

--
You received this message because you are subscribed to the Google Groups "Django 
developers  (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-developers+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-developers/0138232b-4b8f-4a7a-99b0-67c0702f9ce8%40netzkolchose.de.