[TIP] unittest q: keeping track of the *right* number of tests
jnoller at gmail.com
Mon Mar 23 07:46:53 PDT 2009
On Mon, Mar 23, 2009 at 10:38 AM, C. Titus Brown <ctb at msu.edu> wrote:
> On Mon, Mar 23, 2009 at 09:26:49AM -0400, laidler at stsci.edu wrote:
> -> >On Sun, Mar 22, 2009 at 4:15 PM, C. Titus Brown <ctb at msu.edu> wrote:
> -> >> Hi all,
> -> >>
> -> >> we're running into an interesting problem over on the pygr project: the
> -> >> number of tests to run varies according to the installation, and we're
> -> >> having trouble tracking the expected number of tests.
> -> >>
> -> >> Briefly, we include or skip entire test suites depending on whether or
> -> >> not MySQL, sqlite, and/or BLAST (a sequence comparison package) are
> -> >> installed. How should we track how many tests *should* actually be run?
> -> >>
> -> >> (This is a more general problem, incidentally; I've noticed that in
> -> >> projects with thousands of tests, a few can slip through the cracks and
> -> >> become "not executed" without anyone noticing, for a variety of reasons
> -> >> (inappropriate skipping, poor if statement usage, function renaming,
> -> >> etc.))
> -> The solution we have adopted is to build this into a separate
> -> test report system, which remembers the names of all the tests
> -> that have previously been run in a given environment and
> -> reports any missing tests.
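[A minimal sketch of the kind of report step Vicki describes, assuming the previously seen test names are persisted to a JSON file; the file name and helper names here are hypothetical, not from her actual system:]

```python
import json
import os

SEEN_FILE = "known_tests.json"  # hypothetical location for the persisted names


def load_known(path=SEEN_FILE):
    """Return the set of test names seen in previous runs in this environment."""
    if os.path.exists(path):
        with open(path) as f:
            return set(json.load(f))
    return set()


def report_missing(ran, path=SEEN_FILE):
    """Compare this run's test names against all previously seen names.

    Returns the names that ran before but not this time, and persists
    the union so newly added tests are remembered for future runs.
    """
    known = load_known(path)
    missing = known - set(ran)
    with open(path, "w") as f:
        json.dump(sorted(known | set(ran)), f)
    return missing
```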
> Hi Vicki,
> this is the slightly more intensive version of what I was thinking
> about. Do you have any code or (perhaps even more importantly ;) tips &
> tricks to share with us?
> C. Titus Brown, ctb at msu.edu
FWIW, my approach is similar to Vicki's - build a web app which tracks
"all known" test cases (uploaded via a nose plugin), and then the
results from each run are fed into the application as a "run set" - if
the set of all known/registered tests != the run set, you can drill
into what differed and why.
For example, we make heavy use of nose attributes - anything which has
a requirement, say a MySQL install, gets tagged with @requires_mysql,
and a SkipTest may get raised, which is then fed into the result set
for the run.
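[The post doesn't show the actual decorator, but one plausible shape for a @requires_mysql tag - an attribute for a nose-style attribute plugin to select on, plus a SkipTest when the dependency is absent - might look like this; MYSQL_AVAILABLE is a stand-in for a real probe:]

```python
import functools
import unittest

MYSQL_AVAILABLE = False  # stand-in for a real check, e.g. trying to connect


def requires_mysql(func):
    """Tag a test as needing MySQL and skip it when MySQL is absent.

    The attribute lets an attribute-based plugin register/select the
    test; the raised SkipTest shows up in the run's result set.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not MYSQL_AVAILABLE:
            raise unittest.SkipTest("MySQL is not installed")
        return func(*args, **kwargs)
    wrapper.requires_mysql = True  # nose attrib-plugin style tag
    return wrapper


@requires_mysql
def test_mysql_backend():
    pass  # the real test body would talk to MySQL
```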
This approach also allows you to detect dead/removed tests, as it's
essentially a diff against the full battery (or a customized subset).
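[That diff is just set arithmetic on the registered names versus one run's names; a toy version with made-up test names:]

```python
# Names the plugin registered as the full battery (made-up examples).
registered = {"test_core", "test_mysql", "test_sqlite", "test_blast"}

# Names that actually reported a result (pass, fail, or skip) this run.
ran = {"test_core", "test_sqlite", "test_renamed_blast"}

# Registered but never executed: slipped through the cracks, or skipped.
not_executed = registered - ran

# Executed but no longer registered: dead/renamed on the tracking side.
unregistered = ran - registered
```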
More information about the testing-in-python mailing list