New feature request - Run only a random subset of tests.

2024-02-12 Thread אורי
Django developers,

I created https://code.djangoproject.com/ticket/35183 and was asked
to start a discussion on the DevelopersMailingList. I'm proposing a new
feature: run only a random subset of tests.

tests - add a new argument, "--test-only" (int, >= 0); if run with this
argument, run only this number of tests.
Sometimes there are thousands of tests, for example 6,000 tests, and I want
to run only a random subset of them, for example 200 tests. It should be
possible to do this with Django.
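
For illustration, a rough sketch of how this could work as a custom test
runner (the runner name and the exact flag handling here are hypothetical,
and details such as --parallel are ignored):

```
import random
import unittest

from django.test.runner import DiscoverRunner


def iter_tests(suite):
    # Flatten nested TestSuites into individual test cases.
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            yield from iter_tests(test)
        else:
            yield test


class RandomSubsetRunner(DiscoverRunner):
    @classmethod
    def add_arguments(cls, parser):
        super().add_arguments(parser)
        parser.add_argument(
            "--test-only",
            type=int,
            default=None,
            help="Run only this many randomly chosen tests (0 only prints the count).",
        )

    def __init__(self, *args, test_only=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.test_only = test_only

    def build_suite(self, *args, **kwargs):
        suite = super().build_suite(*args, **kwargs)
        if self.test_only is None:
            return suite
        tests = list(iter_tests(suite))
        print(f"{len(tests)} tests discovered.")
        return unittest.TestSuite(random.sample(tests, min(self.test_only, len(tests))))
```

Something like `./manage.py test --testrunner=myproject.runner.RandomSubsetRunner
--test-only 200` would then run 200 random tests (module path hypothetical).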

More details can be found on https://code.djangoproject.com/ticket/35183

Another feature request I filed is
https://code.djangoproject.com/ticket/35184 - tests: use wildcards in
labels. More details can be found at that link, but that one may be more
complicated to implement.

Thanks,
Uri Rodberg, Speedy Net.
www.speedy.net
אורי
u...@speedy.net



Re: New feature request - Run only a random subset of tests.

2024-02-12 Thread Jörg Breitbart

@Uri

May I ask, what's the idea behind running only a random subset of tests?
Wouldn't a Monte Carlo approach be highly unreliable, e.g. lure people into
thinking everything is OK when in reality the random test selection did
not catch the affected code paths? For tests it's all about reliability,
isn't it? And 200 out of 6k tests sounds like often running into
misleadingly green results, especially if your test base is skewed
towards features not affected by the current changes.


I think this could still work with better reliability / changed-code
coverage if the abstraction is a bit more elaborate, e.g. (a sketch
follows the list):
- introduce grouping flags on tests, at module, class or even single-method
scope
- on a test run, declare which flags should be tested (runs all tests with
the given flags)
- alternatively, use appropriate flags reflecting your code changes, plus
your test counter on top, but now selecting from the flagged tests, with a
higher probability of running the affected tests
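
For what it's worth, Django's built-in test tagging already covers the
grouping-flag part; only the probability-weighted selection on top would
need a custom runner. A minimal example (the test names are made up):

```
from django.test import TestCase, tag


# @tag works at class or method scope; `manage.py test --tag payments`
# and `--exclude-tag slow` then select tests by flag.
@tag("payments", "slow")
class CheckoutTests(TestCase):
    def test_charge(self):
        ...

    @tag("i18n")
    def test_localized_receipt(self):
        ...
```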


Ah well, just some quick thoughts on that...

Cheers,
Jörg



Re: New feature request - Run only a random subset of tests.

2024-02-12 Thread Jörg Breitbart

Adding to my last comment:

If you are looking for more tailored unit testing with low test 
pressure and still high reliability, maybe coverage.py can give you 
enough code insight to build a tailored test index db and only run the 
tests affected by the current code changes. I am not aware of test 
frameworks doing that currently, but it should give you high confidence 
in the test results without running them all over and over.
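
A very rough sketch of that idea, assuming one prior full run recorded
with `dynamic_context = test_function` in .coveragerc (the file path in
the example is made up):

```
from coverage import CoverageData


def tests_touching(changed_files, data_file=".coverage"):
    """Return the names of test contexts that executed any changed file."""
    data = CoverageData(basename=data_file)
    data.read()
    affected = set()
    for measured in data.measured_files():
        if any(measured.endswith(changed) for changed in changed_files):
            # contexts_by_lineno() maps each line to the test contexts
            # that executed it during the recorded run.
            for contexts in data.contexts_by_lineno(measured).values():
                affected.update(ctx for ctx in contexts if ctx)
    return affected


print(tests_touching(["speedy/core/accounts/views.py"]))  # hypothetical path
```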




Re: New feature request - Run only a random subset of tests.

2024-02-12 Thread אורי
Hi Jörg,

All our tests run anyway on GitHub Actions. The idea is to run a
subset of tests locally to catch 90% of the problems before I commit and
wait 40 minutes for all the tests to run. It works most of the time. Of
course the whole suite should be run before deploying to production, but
running a subset of tests improves productivity in locating errors without
having to wait for the full test suite to run.

(By the way, running all our tests takes 90 minutes, but we skip many tests
and run them at random anyway - we have 11 languages, and we always test 3
specific languages plus another language selected at random. This is how we
reduce the time from 90 minutes to 40 minutes. And if we make changes
related to languages, we can wait 90 minutes and run all the tests.)
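
(A sketch of that kind of selection - the language codes and the fixed set
here are made up:)

```
import random

ALWAYS_TESTED = ["en", "he", "fr"]  # hypothetical fixed set
OTHER_LANGUAGES = ["de", "es", "pt", "it", "nl", "sv", "ko", "fi"]  # made up

languages_to_test = ALWAYS_TESTED + [random.choice(OTHER_LANGUAGES)]
```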

I can also run specific tests if I work on a specific module. For example,
if I work on a specific view, I can run only the tests for that view. But
again, of course we run all the tests before we deploy to production.

Thanks,
Uri.
אורי
u...@speedy.net




Re: New feature request - Run only a random subset of tests.

2024-02-12 Thread אורי
Hi,

Also, sometimes I just want to see how many tests are in a specific module,
without running them. So I can just run
`./tests_manage_all_sites_with_all_warnings.sh test speedy.net --test-only
0 --test-all-languages`, which gives me the number of tests in speedy.net,
or in any module I need. There is no way to count them from the code alone,
because many of them run more than once, so the only way to know how many
tests a specific module has is to have the test runner build them.
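
(For comparison, the *discovered* count can be read without running
anything, though it won't match the executed count when tests repeat per
language - the module path below is hypothetical:)

```
import unittest

suite = unittest.TestLoader().discover("speedy/net")  # hypothetical path
print(suite.countTestCases())  # discovered tests, not executed runs
```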

Thanks,
Uri.


אורי
u...@speedy.net




Re: New feature request - Run only a random subset of tests.

2024-02-12 Thread 'Adam Johnson' via Django developers (Contributions to Django itself)
I’d be against this. I think this approach would be counterproductive in most 
cases due to the high probability of a false positive. Including it as a core 
feature is not necessary when it can be added through a third-party package.



Re: New feature request - Run only a random subset of tests.

2024-02-12 Thread Jörg Breitbart
I also think your requirements are too specific to be added to 
Django. You are probably better served by creating your own test-picking 
abstraction for this, e.g. by writing custom test suite aggregates or 
using unittest.TestLoader.discover and drilling into the tests of 
interest with your own logic (assuming you stick to unittest for testing).
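
E.g. a bare-bones picking layer over plain unittest, assuming the tests
are discoverable from the project root (the filter here is made up):

```
import unittest


def pick(suite, wanted):
    # Flatten nested TestSuites and keep the tests the caller cares about.
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            yield from pick(test, wanted)
        elif wanted(test):
            yield test


loader = unittest.TestLoader()
suite = unittest.TestSuite(
    pick(loader.discover("."), lambda t: "views" in t.id())  # made-up filter
)
unittest.TextTestRunner().run(suite)
```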




Re: Testing Unmanaged Models - Using the SchemaEditor to create db tables

2024-02-12 Thread Emmanuel Katchy
Hi Adam,

Thanks for your response!

I understand your point about unmanaged models being a niche use case in 
Django. I've decided to proceed with creating a package and see how it goes.

The new enterContext() and other methods in unittest seem interesting. I'll 
definitely be using them more from now on.
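
(A minimal sketch of those Python 3.11+ context methods, with a throwaway
context manager just for illustration:)

```
import os
import tempfile
import unittest


class ExampleTests(unittest.TestCase):
    def setUp(self):
        # Python 3.11+: enterContext() enters the context manager and
        # registers its exit as a cleanup - no with/addCleanup boilerplate.
        self.tmpdir = self.enterContext(tempfile.TemporaryDirectory())

    def test_tmpdir_exists(self):
        self.assertTrue(os.path.isdir(self.tmpdir))
```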

Best,
Emmanuel

On Friday, February 9, 2024 at 11:23:36 PM UTC+1 Adam Johnson wrote:

> Hi Emmanuel
>
> Most activity from this mailing list has moved to the Django Internals 
> category on the forum: https://forum.djangoproject.com/c/internals/5. 
> Better to post there in future, or you could even duplicate this post.
>
> I think your approach is worth sharing in a blog post, or even a package, 
> rather than adding to Django itself. The code is useful but may be too 
> specific for the framework.
>
> Unmanaged models aren’t particularly popular. When they are used, it can 
> be for many reasons. As a result, projects may create the tables in various 
> ways during tests, such as loading an existing database dump or calling an 
> external tool. So using Django’s migrations to create them (through 
> managed=True or SchemaEditor) is just one option among many.
>
> By the way, you may be able to simplify your implementation with the new 
> context methods in unittest from Python 3.11: 
> https://adamj.eu/tech/2022/11/14/unittest-context-methods-python-3-11-backports/
>
> Thank you for sharing, and welcome to the Django community!
>
> On Sun, Jan 28, 2024, at 11:00 PM, Emmanuel Katchy wrote:
>
> Hi everyone!
>
> I'd like to get your thoughts on something.
>
> Unmanaged models mean that Django no longer handles creating and managing 
> the schema at the database level (hence the name).
> When running tests, this means these tables aren't created, and we can't 
> run queries against that model. The general solution I found is to 
> monkey-patch the TestSuiteRunner to temporarily treat models as managed.
>
> Doing a bit of research, however, I came up with a solution using 
> SchemaEditor to create the model tables directly, viz:
>
> ```
> """
> A cleaner approach to temporarily creating unmanaged model db tables for
> tests.
> """
>
> import functools
> from unittest import TestCase
>
> from django.db import connections
> from django.db.models.base import ModelBase
>
>
> class create_unmanaged_model_tables:
>     """
>     Create db tables for unmanaged models for tests.
>     Adapted from: https://stackoverflow.com/a/49800437
>
>     Examples:
>         with create_unmanaged_model_tables([UnmanagedModel]):
>             ...
>
>         @create_unmanaged_model_tables([UnmanagedModel, FooModel])
>         def test_generate_data():
>             ...
>
>         @create_unmanaged_model_tables([UnmanagedModel, FooModel])
>         class MyTestCase(TestCase):
>             ...
>     """
>
>     def __init__(self, unmanaged_models: list[ModelBase], db_alias: str = "default"):
>         """
>         :param str db_alias: Name of the database to connect to, defaults
>             to "default".
>         """
>         self.unmanaged_models = unmanaged_models
>         self.db_alias = db_alias
>         self.connection = connections[db_alias]
>
>     def __call__(self, obj):
>         # Classes get setUpClass/tearDownClass wrapping; plain callables
>         # get a decorator that creates the tables around each call.
>         if isinstance(obj, type) and issubclass(obj, TestCase):
>             return self.decorate_class(obj)
>         return self.decorate_callable(obj)
>
>     def __enter__(self):
>         self.start()
>
>     def __exit__(self, exc_type, exc_value, traceback):
>         self.stop()
>
>     def start(self):
>         # Create the tables directly via the SchemaEditor, then verify
>         # that they actually exist in the test database.
>         with self.connection.schema_editor() as schema_editor:
>             for model in self.unmanaged_models:
>                 schema_editor.create_model(model)
>
>                 if (
>                     model._meta.db_table
>                     not in self.connection.introspection.table_names()
>                 ):
>                     raise ValueError(
>                         "Table `{table_name}` is missing in test database.".format(
>                             table_name=model._meta.db_table
>                         )
>                     )
>
>     def stop(self):
>         with self.connection.schema_editor() as schema_editor:
>             for model in self.unmanaged_models:
>                 schema_editor.delete_model(model)
>
>     def copy(self):
>         return self.__class__(
>             unmanaged_models=self.unmanaged_models, db_alias=self.db_alias
>         )
>
>     def decorate_class(self, klass):
>         # Wrap setUpClass and tearDownClass so the tables exist for the
>         # whole test case.
>         orig_setUpClass = klass.setUpClass
>         orig_tearDownClass = klass.tearDownClass
>
>         @classmethod
>         def setUpClass(cls):
>             self.start()
>             if orig_setUpClass is not None:
>                 orig_setUpClass()
>
>         @classmethod
>         def tearDownClass(cls):
>             if orig_tearDownClass is not None:
>                 orig_tearDownClass()
>             self.stop()
>
>         klass.setUpClass = setUpClass
>         klass.tearDownClass = tearDownClass
>         return klass
>
>     def decorate_callable(self, func):
>         # Reconstructed: the digest truncates the original post partway
>         # through tearDownClass; from there on this follows the standard
>         # decorator pattern implied by __call__ above.
>         @functools.wraps(func)
>         def wrapper(*args, **kwargs):
>             with self.copy():
>                 return func(*args, **kwargs)
>
>         return wrapper
> ```