[TIP] unittest q: keeping track of the *right* number of tests

Jesse Noller jnoller at gmail.com
Sun Mar 22 08:57:27 PDT 2009

I'd use a mix of both - a rollup of the expected count, plus a
drill-down based on the tags. This is one of the problems I wanted to
solve with my testbutler project (auditing, counting, tagging, and
rational views); alas, I haven't had time to work on it :\
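A rough sketch of the tag-and-audit idea, using the skip support that later landed in stdlib unittest (Python 2.7; unittest2 at the time of this thread). All the names here (HAVE_MYSQL, EXPECTED, tagged, RecordingResult) are hypothetical illustrations, not anything from pygr or testbutler:

```python
import unittest

HAVE_MYSQL = False  # hypothetical flag; really you'd set it by trying "import MySQLdb"

# Registry: test method name -> should it run under this installation?
EXPECTED = {}

def tagged(should_run, reason=""):
    """Record a test in the expected-run registry; skip it if its
    requirement isn't met on this installation."""
    def decorator(func):
        EXPECTED[func.__name__] = should_run
        if should_run:
            return func
        return unittest.skip(reason)(func)
    return decorator

class TestDBStuff(unittest.TestCase):
    @tagged(True)
    def test_always(self):
        self.assertEqual(1 + 1, 2)

    @tagged(HAVE_MYSQL, "MySQL not installed")
    def test_mysql_query(self):
        self.assertTrue(HAVE_MYSQL)

class RecordingResult(unittest.TestResult):
    """Note every test the runner starts, so we can audit afterwards."""
    def __init__(self):
        unittest.TestResult.__init__(self)
        self.started = set()

    def startTest(self, test):
        unittest.TestResult.startTest(self, test)
        self.started.add(test._testMethodName)

suite = unittest.TestLoader().loadTestsFromTestCase(TestDBStuff)
result = RecordingResult()
suite.run(result)

# startTest fires even for skipped tests, so subtract the skips.
skipped = set(t._testMethodName for t, reason in result.skipped)
actually_ran = result.started - skipped

expected_to_run = set(n for n, should in EXPECTED.items() if should)
missing = expected_to_run - actually_ran  # tests that silently fell through
```

The audit step is what catches the "slipped through the cracks" case Titus describes below: any test tagged as runnable on this installation that never started shows up in `missing`, regardless of why it was dropped.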

On Sun, Mar 22, 2009 at 10:15 AM, C. Titus Brown <ctb at msu.edu> wrote:
> Hi all,
> we're running into an interesting problem over on the pygr project: the
> number of tests to run varies according to the installation, and we're
> having trouble tracking the expected number of tests.
> Briefly, we include or skip entire test suites depending on whether or
> not MySQL, sqlite, and/or BLAST (a sequence comparison package) are
> installed.  How should we track how many tests *should* actually be run?
> (This is a more general problem, incidentally; I've noticed that in
> projects with thousands of tests, a few can slip through the cracks and
> become "not executed" without anyone noticing, for a variety of reasons:
> inappropriate skipping, poor if-statement usage, function renaming,
> etc.)
> The two ideas I had are --
>  - keep track of aggregate number of tests per test file expected under
>    the various conditions, e.g.
>        test_db_stuff: 20 tests if no MySQL, 50 if MySQL
>  - keep track of individual tests by name, and tag them with "not run
>    if MySQL is not installed", and then check to make sure all of the
>    expected ones are run.
> Both entail an increase in record-keeping which is annoying but
> inevitable, I guess...
> Thoughts?
> cheers,
> --titus
> --
> C. Titus Brown, ctb at msu.edu
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
