[TIP] unittest q: keeping track of the *right* number of tests

Michael Foord fuzzyman at voidspace.org.uk
Sun Mar 22 10:43:31 PDT 2009

C. Titus Brown wrote:
> Hi all,
> we're running into an interesting problem over on the pygr project: the
> number of tests to run varies according to the installation, and we're
> having trouble tracking the expected number of tests.
> Briefly, we include or skip entire test suites depending on whether or
> not MySQL, sqlite, and/or BLAST (a sequence comparison package) are
> installed.  How should we track how many tests *should* actually be run?
> This is a more general problem, incidentally; I've noticed that in
> projects with thousands of tests, a few can slip through the cracks and
> become "not executed" without anyone noticing, for a variety of reasons
> (inappropriate skipping, poor if statement usage, function renaming,
> etc.)

Yeah - we've had this problem: a bunch of tests not getting executed. :-)

I don't think we've actually found a solution, however.
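To make the failure mode concrete: the usual pattern is to skip a test (or a
whole TestCase) when a dependency can't be imported. A minimal sketch, assuming
the skip decorators from unittest2 / Python 2.7's unittest - the module name
MySQLdb and the helper has_module are just illustrative, not pygr's actual code:

```python
import importlib.util
import unittest

def has_module(name):
    # True if the named top-level module can be imported on this machine.
    return importlib.util.find_spec(name) is not None

class MySQLTests(unittest.TestCase):
    # With skipUnless the runner at least *reports* the test as skipped.
    # The dangerous variant is a bare "if not has_module(...): return"
    # inside the test body, which passes silently and drops the test
    # from the count without a trace.
    @unittest.skipUnless(has_module("MySQLdb"), "MySQLdb not installed")
    def test_store_roundtrip(self):
        self.assertTrue(True)  # placeholder body
```

The skip message shows up in the runner's output, but nothing checks that the
skipped tests are the ones you *expected* to skip - which is the gap Titus is
describing.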


> The two ideas I had are --
>   - keep track of aggregate number of tests per test file expected under
>     the various conditions, e.g.
>         test_db_stuff: 20 tests if no MySQL, 50 if MySQL
>   - keep track of individual tests by name, and tag them with "not run
>     if MySQL is not installed", and then check to make sure all of the
>     expected ones are run.
> Both entail an increase in record-keeping which is annoying but
> inevitable, I guess...
> Thoughts?
> cheers,
> --titus
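The second idea - tagging tests by name with the dependency they need, then
diffing against what actually ran - can be sketched in a few lines. This is a
minimal illustration; the EXPECTED registry and the test names in it are
invented for the example, not taken from pygr:

```python
# Registry of expected tests, each tagged with the dependency it
# needs (None means the test should run on every installation).
EXPECTED = {
    "test_core_parse": None,
    "test_mysql_store": "MySQL",
    "test_sqlite_store": "sqlite",
    "test_blast_align": "BLAST",
}

def expected_tests(installed):
    # The set of test names that should run, given the set of
    # dependencies installed on this machine.
    return set(name for name, need in EXPECTED.items()
               if need is None or need in installed)

def missing_tests(installed, ran):
    # Expected tests that never executed: the ones that "slipped
    # through the cracks" for this installation.
    return expected_tests(installed) - set(ran)
```

A run could then end by asserting `not missing_tests(installed, ran)`, turning
a silently-skipped test into a hard failure. The record-keeping cost is the
EXPECTED table itself, which is Titus's point about bookkeeping being annoying
but inevitable.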
