[TIP] unittest q: keeping track of the *right* number of tests

C. Titus Brown ctb at msu.edu
Mon Mar 23 07:38:26 PDT 2009

On Mon, Mar 23, 2009 at 09:26:49AM -0400, laidler at stsci.edu wrote:
-> >On Sun, Mar 22, 2009 at 4:15 PM, C. Titus Brown <ctb at msu.edu> wrote:
-> >> Hi all,
-> >>
-> >> we're running into an interesting problem over on the pygr project: the
-> >> number of tests to run varies according to the installation, and we're
-> >> having trouble tracking the expected number of tests.
-> >>
-> >> Briefly, we include or skip entire test suites depending on whether or
-> >> not MySQL, sqlite, and/or BLAST (a sequence comparison package) are
-> >> installed.  How should we track how many tests *should* actually be run?
-> >>
-> >> This is a more general problem, incidentally; I've noticed that in
-> >> projects with thousands of tests, a few can slip through the cracks and
-> >> become "not executed" without anyone noticing, for a variety of reasons
-> >> (inappropriate skipping, poor if statement usage, function renaming,
-> >> etc.)
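
[The kind of conditional skipping described above can be sketched with
unittest's skip decorators (available in the stdlib since Python 2.7;
before that, via the unittest2 backport). The test class and its contents
here are hypothetical, purely for illustration:]

```python
import unittest

# Probe for the optional dependency, as described in the quoted message.
try:
    import MySQLdb  # optional MySQL bindings
    HAVE_MYSQL = True
except ImportError:
    HAVE_MYSQL = False

# When MySQLdb is absent the whole class is reported as skipped rather
# than silently vanishing from the run -- which is the failure mode
# being discussed.
@unittest.skipUnless(HAVE_MYSQL, "MySQLdb not installed")
class MySQLBackendTests(unittest.TestCase):
    def test_connect(self):
        self.assertTrue(HAVE_MYSQL)
```

[Skipped tests still show up in the run count and the result summary, so
at least one class of "not executed without anyone noticing" goes away.]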
-> The solution we have adopted is to build this into a separate
-> test report system, which remembers the names of all the tests
-> that have previously been run in a given environment and 
-> reports any missing tests.
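
[A minimal sketch of the idea Vicki describes: persist the set of test
names seen in a given environment and flag any that disappear on a later
run. The manifest file name and helper function are invented for
illustration, not from any actual report system:]

```python
import json
import os

def check_for_missing_tests(current_names, manifest_path="test_manifest.json"):
    """Return the set of test names that ran previously but are absent now."""
    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            previous = set(json.load(f))
    else:
        previous = set()
    missing = previous - set(current_names)
    # Record the union so that newly added tests are tracked next time.
    with open(manifest_path, "w") as f:
        json.dump(sorted(previous | set(current_names)), f)
    return missing
```

[On the first run the manifest is seeded and nothing is reported; if a
later run collects fewer tests, the dropped names come back as the
missing set.]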

Hi Vicki,

This is the slightly more intensive version of what I was thinking
about.  Do you have any code or (perhaps even more importantly ;) tips &
tricks to share with us?

C. Titus Brown, ctb at msu.edu

More information about the testing-in-python mailing list