[TIP] Test isolation // Detection of offending test

Benji York benji at benjiyork.com
Wed Dec 5 12:55:15 PST 2012


On Wed, Dec 5, 2012 at 4:50 PM, Andres Riancho <andres.riancho at gmail.com> wrote:
> Lists,
>
>     I've got a project with 500+ tests, and during the last month or
> so I started to notice that some of my tests run perfectly if I run
> them directly (nosetests --config=nose.cfg
> core/data/url/tests/test_xurllib.py) but fail when I run them together
> with all the other tests (nosetests --config=nose.cfg core/).
[snip]
>     I suspect that this is a common issue when testing, how do you
> guys solve this?

The way I have handled it is to use a test runner that can randomize
test order, and to run with that option enabled under a continuous
integration system like Buildbot or Jenkins.  It is especially nice if
the test runner tells you the seed it used for the random number
generator, so you can replicate the failing test order yourself.

>     for test in all_tests:
>         result = run_nosetests(test, test_that_fails)
>         if not result:
>             print test, 'breaks', test_that_fails
>
>     Is that a good idea?

This is certainly a reasonable way to find the current bad actors, but
you will need something to keep the pressure on so new ones aren't
created.
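One refinement on that loop: if running the whole suite before
test_that_fails reproduces the failure, you can binary-search the earlier
tests instead of trying them one at a time, which takes O(log n) suite
runs rather than n.  A sketch, assuming there is exactly one polluting
test and that you can supply a `fails_with(subset)` callable (hypothetical
name) that runs the given subset before the target and reports whether
the target then fails:

```python
def find_culprit(earlier_tests, fails_with):
    """Binary-search for the one earlier test that breaks the target.

    fails_with(subset) -> True if running `subset` before the target
    test makes the target fail.  Assumes exactly one culprit, and that
    fails_with(earlier_tests) is True to begin with.
    """
    lo, hi = 0, len(earlier_tests)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails_with(earlier_tests[lo:mid]):
            # Culprit is somewhere in the first half.
            hi = mid
        else:
            # First half is clean; culprit is in the second half.
            lo = mid
    return earlier_tests[lo]
```

In practice `fails_with` would shell out to nosetests with the subset
plus the failing test; the simulation below just checks the search logic.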
-- 
Benji York



More information about the testing-in-python mailing list