[TIP] unittest q: keeping track of the *right* number of tests

Gabor Szabo szabgab at gmail.com
Mon Mar 23 03:25:50 PDT 2009


On Sun, Mar 22, 2009 at 4:15 PM, C. Titus Brown <ctb at msu.edu> wrote:
> Hi all,
>
> we're running into an interesting problem over on the pygr project: the
> number of tests to run varies according to the installation, and we're
> having trouble tracking the expected number of tests.
>
> Briefly, we include or skip entire test suites depending on whether or
> not MySQL, sqlite, and/or BLAST (a sequence comparison package) are
> installed.  How should we track how many tests *should* actually be run?
>
> (This is a more general problem, incidentally; I've noticed that in
> projects with thousands of tests, a few can slip through the cracks and
> become "not executed" without anyone noticing, for a variety of reasons
> (inappropriate skipping, poor if statement usage, function renaming,
> etc.))
>
> The two ideas I had are --
>
>  - keep track of aggregate number of tests per test file expected under
>    the various conditions, e.g.
>
>        test_db_stuff: 20 tests if no MySQL, 50 if MySQL
>
>  - keep track of individual tests by name, and tag them with "not run
>    if MySQL is not installed", and then check to make sure all of the
>    expected ones are run.
>
> Both entail an increase in record-keeping which is annoying but
> inevitable, I guess...
>
> Thoughts?

If I understood correctly, the problem is that you have no way to know
whether the correct number of tests was executed.

As I am trying to learn how this is done in various languages and then
share it with others, let me point you at how it is done with TAP,
the Test Anything Protocol.

The idea there is that you have two processes: one generates the raw report
for the individual unit tests or assertions, printing "ok" or "not ok"
plus a comment for each. The other process - the harness - gathers
this information and prints a summary at the end.
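
The producer side can be as simple as a script printing one line per
assertion. A minimal sketch in Python - the check names and the
run_checks function are made up for illustration, not part of any
particular library:

    # a minimal sketch of a TAP producer using nothing but print();
    # each assertion becomes one "ok" or "not ok" line
    def run_checks():
        checks = [("connect to the database", True),
                  ("run a simple query", False)]
        for number, (name, passed) in enumerate(checks, 1):
            status = "ok" if passed else "not ok"
            print("%s %d - %s" % (status, number, name))

    run_checks()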

In order to ensure that the correct number of tests was run, the test author has to
declare the "plan": the number of tests (assertions) she is going to run, which
is displayed as 1..N. The harness can then compare the actual count against this plan.
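
A complete TAP stream for, say, four assertions might look like this
(the test names are invented for the sake of the example):

    1..4
    ok 1 - connect to sqlite
    ok 2 - simple select
    not ok 3 - blast alignment found
    ok 4 - teardown

The harness reads the 1..4 plan, counts the ok/not ok lines that
actually arrived, and can report a missing or extra test as well as
any failure.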

When skipping a bunch of tests (assertions), the skip command has
to say how many units it is going to skip - after all, the code cannot know
this on its own - and then communicate it to the harness.
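
In the raw stream this usually shows up as one "ok ... # SKIP reason" line
per skipped assertion, so the harness still receives as many lines as the
plan promised. A sketch, continuing the print-based producer above; the
numbers and the reason string are made up:

    # emit one line per skipped assertion so the count still matches
    # the declared plan; the SKIP directive tells the harness these
    # are not real passes
    def skip(first, count, reason):
        for number in range(first, first + count):
            print("ok %d # SKIP %s" % (number, reason))

    skip(21, 30, "MySQL is not installed")   # assertions 21..50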

You are right that it is a pain to maintain these numbers, but AFAIK there is no
way around it without giving up this control.
There is some work underway to allow incremental "planning", which
will ease that pain. It will also help in providing more granular control
over the actual number of assertions.

TAP http://www.testanything.org/

TAP is language-independent, and there is work underway to turn TAP
into an IETF standard.


regards
   Gabor
   http://szabgab.com/blog.html

   Test Automation Tips
   http://szabgab.com/test_automation_tips.html


