[TIP] unittest q: keeping track of the *right* number of tests

C. Titus Brown ctb at msu.edu
Sun Mar 22 07:15:08 PDT 2009


Hi all,

we're running into an interesting problem over on the pygr project: the
number of tests to run varies according to the installation, and we're
having trouble tracking the expected number of tests.

Briefly, we include or skip entire test suites depending on whether or
not MySQL, sqlite, and/or BLAST (a sequence comparison package) are
installed.  How should we track how many tests *should* actually be run?
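To make the setup concrete, the kind of conditional inclusion I mean looks roughly like this (a minimal sketch, not our actual code; the helper and test-module names are hypothetical):

```python
import unittest

def optional_module_available(name):
    """Return True if the named optional dependency imports cleanly."""
    try:
        __import__(name)
        return True
    except ImportError:
        return False

def build_suite():
    """Assemble the top-level suite, leaving out suites whose backing
    packages (MySQLdb, sqlite3, BLAST wrappers, ...) aren't installed."""
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    if optional_module_available('sqlite3'):
        # hypothetical module of sqlite-backed tests:
        # suite.addTests(loader.loadTestsFromName('test_sqlite_stuff'))
        pass
    return suite
```

The trouble is that nothing in this scheme ever notices when a suite silently drops out.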

(This is a more general problem, incidentally: I've noticed that in
projects with thousands of tests, a few can slip through the cracks and
become "not executed" without anyone noticing, for a variety of reasons
-- inappropriate skipping, poor if-statement usage, function renaming,
etc.)

The two ideas I had are --

  - keep track of aggregate number of tests per test file expected under
    the various conditions, e.g.

        test_db_stuff: 20 tests if no MySQL, 50 if MySQL

  - keep track of individual tests by name, and tag them with "not run
    if MySQL is not installed", and then check to make sure all of the
    expected ones are run.
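The first idea could be as simple as a lookup table checked against the
TestResult's testsRun attribute at the end of a run. A sketch, with
illustrative file names and counts only:

```python
# Expected test counts per test file, keyed by which optional
# dependencies are present (frozenset of feature names).
EXPECTED_COUNTS = {
    'test_db_stuff': {frozenset(): 20, frozenset(['mysql']): 50},
}

def expected_total(installed):
    """Total number of tests that should run given the installed
    optional features, using the most specific matching condition."""
    installed = frozenset(installed)
    total = 0
    for counts in EXPECTED_COUNTS.values():
        best = max((cond for cond in counts if cond <= installed),
                   key=len)
        total += counts[best]
    return total
```

At the end of a run you'd compare expected_total(...) against
result.testsRun and complain loudly on any mismatch.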

Both entail an increase in record-keeping, which is annoying but
inevitable, I guess...
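The second idea, sketched the same way: tag each test name with the
condition under which it should run, then diff the expected set against
the names that actually executed (all names here are hypothetical):

```python
# Condition under which each test is expected to run; None = always.
TEST_CONDITIONS = {
    'test_parse': None,
    'test_mysql_query': 'mysql',   # "not run if MySQL is not installed"
}

def expected_tests(installed):
    """Names of tests that should run given the installed features."""
    return set(name for name, cond in TEST_CONDITIONS.items()
               if cond is None or cond in installed)

def missing_tests(ran, installed):
    """Expected tests that did not actually execute."""
    return expected_tests(installed) - set(ran)
```

A non-empty missing_tests() result at the end of the run is exactly the
"slipped through the cracks" case.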

Thoughts?

cheers,
--titus
-- 
C. Titus Brown, ctb at msu.edu



More information about the testing-in-python mailing list