[TIP] Randomizing test order with nose
dgou at mac.com
Thu Apr 23 06:03:29 PDT 2009
On or about 2009 Apr 23, at 8:49 AM, Michael Foord indited:
> This means that the exact sequence of tests run on any one machine
> depend on how many computers are running the tests and timing etc.
Yes, that is a good thing, making sure that test success is not
accidentally dependent on a specific sequence of prior test runs. We
have a farm of buildbot slaves, and they run the full suite of tests
on different devices (which means some tests run only when the right
device is present) with a shuffled test ordering.
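A shuffled-but-reproducible ordering is easy to get by seeding the shuffle and logging the seed; this is a minimal sketch (the function name and shape are mine, not the actual buildbot setup described above):

```python
import random


def shuffled_suite(test_names, seed):
    """Return test names in a shuffled but reproducible order.

    Each slave logs its seed; rerunning with the same seed
    recreates the exact ordering for that run.
    """
    rng = random.Random(seed)  # private PRNG; doesn't disturb the global random state
    names = list(test_names)
    rng.shuffle(names)
    return names
```

Any two runs with the same seed produce the same order, while different seeds exercise different inter-test sequences.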
> Anyway, it shows an alternative approach; rather than making the
> test order inherently repeatable you could record the order that
> tests are actually run in and if you *need* to repeat them have a
> runner capable of doing this. Perhaps not ideal if it is a common
> need, but combined with our reporting tools it works well for us.
Our version of unittest logs each test loaded (and even whether it was
loaded from .py or .pyc) and the exact sequence the tests were run
in. We started with that kind of "whole regression suite
reproducibility". Once we crossed the threshold of several hundred
tests, including stress tests that can take multiple hours, if not
days, to run, we needed a way to pluck out one failing test from that
regression suite and run it in isolation; that can't be done if the
only way to reproduce the PRNG state for a given test is to re-run the
entire regression suite, in the same order, up to that test.
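One way to get that isolation is to derive each test's PRNG seed from the run seed plus the test's own name, so the seed no longer depends on which tests ran before it. This is a hypothetical sketch of the idea, not the runner described above:

```python
import hashlib
import random


def seed_for_test(run_seed, test_id):
    """Derive a per-test seed from the run seed and the test's name.

    Because the seed depends only on (run_seed, test_id), a single
    failing test can be re-run in isolation with the same PRNG state,
    without replaying the entire suite in order up to that test.
    """
    digest = hashlib.sha256(f"{run_seed}:{test_id}".encode()).hexdigest()
    return int(digest, 16) % (2**32)


def rng_for_test(run_seed, test_id):
    """A fresh, reproducible PRNG for one test."""
    return random.Random(seed_for_test(run_seed, test_id))
```

The trade-off is that tests no longer share one PRNG stream, which is exactly what makes each test reproducible on its own.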
More information about the testing-in-python mailing list