[TIP] Testing a daemon with coverage (nose)

Ben Finney ben+python at benfinney.id.au
Thu May 7 20:23:59 PDT 2009


Mark Waite <markwaite at yahoo.com> writes:

> Ben Finney <ben+python at benfinney.id.au> writes:
> 
> > That implies that all the unit tests need to run when I make that
> > change, not just the one I'm focussing on. The point of an automated
> > test suite is that I don't need to remember which of them need to
> > run.
> 
> In that context, I don't see the distinction between unit tests and
> any other form of tests.  I want to know the state of all tests on
> every change.

Ideally, yes, of course; but as pointed out, many types of test
exercise resources that are expensive in CPU time, so the entire suite
isn't suitable to run when you need an answer within seconds.

> Some tests are long enough that I am not willing to wait for those
> tests before I move to my next task. That is what I want from
> continuous integration (run tests while I move forward).

Then you've just drawn exactly the distinction between unit tests and
non-unit tests that I was making.

> Can you clarify how you've gained value from the distinction between
> test types in this context?  I'd like to understand how your concepts
> might apply in my environment.

A unit test <URL:http://en.wikipedia.org/wiki/Unit_testing>, by design,
tests a very clear assertion about a very small unit of code, in
isolation.

The isolation requires that any complex run-time external resources be
substituted with inexpensive test doubles
<URL:http://www.martinfowler.com/bliki/TestDouble.html>. A test double
allows the code unit to run, while the test knows that any perturbation
comes either from the isolated test environment for that test case or
from the code unit under test.
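
As a concrete illustration (the `fetch_status` and `http_get` names
are hypothetical, and the example uses the standard library's
`unittest.mock` module), a stub double can stand in for an expensive
network call:

    import unittest
    from unittest import mock

    def fetch_status(http_get):
        """ Return True if the service reports 'ok'.

            Hypothetical code unit under test. The `http_get`
            callable is this unit's only external dependency; it is
            passed in so a test can substitute a double for it.
            """
        response = http_get("http://example.org/status")
        return response.strip() == b"ok"

    class fetch_status_TestCase(unittest.TestCase):
        """ Test cases for `fetch_status`. """

        def test_returns_true_for_ok_response(self):
            """ Should return True when the service reports 'ok'. """
            # The stub replaces the real network call, so any
            # perturbation comes from this test case alone.
            stub_http_get = mock.Mock(return_value=b"ok\n")
            self.assertTrue(fetch_status(stub_http_get))

        def test_queries_the_status_url(self):
            """ Should request the expected status URL. """
            stub_http_get = mock.Mock(return_value=b"ok\n")
            fetch_status(stub_http_get)
            stub_http_get.assert_called_with(
                "http://example.org/status")

    if __name__ == "__main__":
        unittest.main()

Because no network or file system is touched, a suite of such cases
runs in milliseconds.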

This isolation, motivated by the requirement that a unit test exercise
a tiny unit of code free of external factors, also means that each unit
test case should be very fast to run. Thus, the entire unit test suite,
even with many thousands of test cases, will typically run in seconds.

This is a stark difference in both purpose and degree from other kinds
of test: most non-unit tests (acceptance, integration, performance,
etc.) will need to exercise the full application stack, which
necessarily brings in costly set-up and execution times. Non-unit test
suites, simply by the nature of what they're designed for, tend to have
a much greater range in how long they will take to run.

Keeping a separate suite of only unit tests, therefore, is a very useful
way of getting at least one dimension of testing — the code unit
behaviour — with as short a feedback loop as possible. I keep my unit
test suite running automatically every time a file is changed within my
on-disk working tree, and check its output whenever I think I've made an
interesting change. Feedback is thus extremely tight, with no real
overhead since no costly external resources are exercised.
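
Such a watcher can be a simple polling loop. Here is a minimal sketch
(an illustration of the idea, not the actual tooling described above;
it assumes the unit tests live under a `tests/unit` directory):

    import os
    import subprocess
    import sys
    import time

    def tree_mtimes(top):
        """ Return a mapping {file path: mtime} under `top`. """
        mtimes = {}
        for (dirpath, dirnames, filenames) in os.walk(top):
            for filename in filenames:
                path = os.path.join(dirpath, filename)
                try:
                    mtimes[path] = os.path.getmtime(path)
                except OSError:
                    pass  # File vanished between listing and stat.
        return mtimes

    def watch_and_test(top="."):
        """ Re-run the unit test suite whenever a file changes. """
        seen = tree_mtimes(top)
        while True:
            time.sleep(1)
            current = tree_mtimes(top)
            if current != seen:
                seen = current
                # Assumes the unit tests live in `tests/unit`.
                subprocess.call([
                    sys.executable, "-m", "unittest",
                    "discover", "tests/unit"])

    if __name__ == "__main__":
        watch_and_test()

A dedicated file-notification tool avoids the polling, but the effect
is the same: the fast suite reports within seconds of each save.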

The build procedure, of course, should exercise the unit test suite
*and* all other test suites.
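
One way to arrange that (the `tests/unit` and `tests/integration`
directory names are an assumption, not a prescription) is to keep each
kind of test in its own directory, so the build can discover and run
every suite while the edit-time watcher runs only the fast one:

    import subprocess
    import sys

    def run_suite(start_dir):
        """ Run the suite discovered under `start_dir`;
            return its exit status.
            """
        return subprocess.call([
            sys.executable, "-m", "unittest", "discover", start_dir])

    if __name__ == "__main__":
        # The build exercises everything; a failure in any one
        # suite fails the build. Assumed layout: tests/unit/ and
        # tests/integration/.
        status = max(
            run_suite(start_dir)
            for start_dir in ["tests/unit", "tests/integration"])
        sys.exit(status)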

-- 
 \         “A man must consider what a rich realm he abdicates when he |
  `\                       becomes a conformist.” —Ralph Waldo Emerson |
_o__)                                                                  |
Ben Finney



