[TIP] unittest q: keeping track of the *right* number of tests

holger krekel holger at merlinux.eu
Mon Mar 23 01:49:00 PDT 2009

Hi Titus, all,

On Sun, Mar 22, 2009 at 07:15 -0700, C. Titus Brown wrote:
> we're running into an interesting problem over on the pygr project: the
> number of tests to run varies according to the installation, and we're
> having trouble tracking the expected number of tests.
> Briefly, we include or skip entire test suites depending on whether or
> not MySQL, sqlite, and/or BLAST (a sequence comparison package) are
> installed.  How should we track how many tests *should* actually be run?
> (This is a more general problem, incidentally; I've noticed that in
> projects with thousands of tests, a few can slip through the cracks and
> become "not executed" without anyone noticing, for a variety of reasons
> (inappropriate skipping, poor if statement usage, function renaming,
> etc.)

Yes, I see this as a general issue as well.  In my experience,
unittest or doctest skips are performed for roughly these
classes of reasons:

* missing modules/packages, or wrong versions of them
* wrong system platform, wrong Python interpreter
* test or code is partially incomplete or broken

However, these skips are usually expressed in slightly
different ways, so test tools have no easy way of
distinguishing, tracking or summarizing them nicely.
One major issue is therefore to settle on one standard
way to spell each of these skips.
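For illustration, one standard spelling for the "missing dependency"
case might look like this sketch.  The names (MissingDependency,
require_module) are made up here, not an existing API; the point is
just that a uniform marker exception would let any test tool catch,
count and summarize such skips in one place:

```python
import sys

class MissingDependency(Exception):
    """Uniform marker exception a test tool could catch and summarize."""
    def __init__(self, name):
        Exception.__init__(self, "missing dependency: %s" % name)
        self.name = name

def require_module(name):
    """Import and return a module, or raise the uniform skip marker."""
    try:
        __import__(name)
    except ImportError:
        raise MissingDependency(name)
    return sys.modules[name]
```

A pygr test module could then say `mysql = require_module("MySQLdb")`
at the top, and every tool would see the same, machine-readable reason.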

Another issue with skips (at least with py.test) is that
they often arise during test *collection*, i.e. the
actual tests are never collected or counted.  People
put skips in setup_module, setup_class or even
at module-global level.  It is then hard to
know how many tests are hidden behind the skip.

I think that test tools should go out of their way
to do run-to-run sanity-checking, i.e. report loudly if
things change between test runs in suspicious ways,
where the definition of "suspicious" is customizable, of course.
This mostly requires a way to store the results of test runs
(including exact skip reasons) and to run "meta/sanity" checks
and reporting on them.

Happy to discuss common approaches at our py.test / nose meetup. 


More information about the testing-in-python mailing list