[TIP] Testing a daemon with coverage (nose)

Marius Gedminas marius at gedmin.as
Fri May 8 04:20:19 PDT 2009


On Fri, May 08, 2009 at 01:23:59PM +1000, Ben Finney wrote:
> > Can you clarify how you've gained value from the distinction between
> > test types in this context?  I'd like to understand how your concepts
> > might apply in my environment.
> 
> A unit test <URL:http://en.wikipedia.org/wiki/Unit_testing>, by design,
> tests a very clear assertion about a very small unit of code, in
> isolation.
> 
> The isolation requires that any complex run-time external resources are
> instead substituted with inexpensive test doubles
> <URL:http://www.martinfowler.com/bliki/TestDouble.html>, that allow the
> code unit to run but the test to know that any perturbation comes either
> from the isolated test environment for that test case, or from the code
> unit under test.
> 
> This isolation, motivated by the requirement to have a unit test be able
> to exercise a tiny unit of code without external factors, also means
> that each unit test case should be very fast to run. Thus, the entire
> unit test suite, even with many thousands of test cases, will typically
> run in seconds.

I've got a 135,000-line application (not counting the 3rd-party
libraries we rely on) with 2753 unit tests.  The tests use stubs where
convenient and don't rely on external resources.  The unit test suite
takes eight and a half minutes to run from a cold disk cache on a
1.8 GHz Core 2 Duo, not counting module imports (an additional 40
seconds).  When everything is in the disk cache the import time drops
to 6 seconds, but the test run still takes over 8 minutes.

Now perhaps these are not proper unit tests (we don't use mocks; when
the code under test depends on other code that doesn't require overly
elaborate setup, we tend to let it use the real components instead of
stubbing them), but that's far longer than I can accept in my
edit-save-test cycle.

Thus, filtering by means other than test kind is essential for me.
zope.testing lets me filter by any combination of these:

  - source file location (saves disk churn on recursive test discovery)
  - regexp match against Python package/module name
  - regexp match against test name (class/method or doctest file)
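
To show what the third kind of filter does under the hood, here is a
rough sketch in plain unittest terms.  The test classes and the
filter_suite helper are illustrative inventions, not zope.testing's
actual implementation:

```python
import re
import unittest


class WidgetTests(unittest.TestCase):      # illustrative test classes
    def test_render(self):
        self.assertTrue(True)

    def test_save(self):
        self.assertTrue(True)


class DatabaseTests(unittest.TestCase):
    def test_connect(self):
        self.assertTrue(True)


def filter_suite(suite, pattern):
    """Keep only tests whose id matches the regexp (like a -t filter)."""
    regex = re.compile(pattern)
    matched = unittest.TestSuite()
    for test in suite:
        if isinstance(test, unittest.TestSuite):
            matched.addTests(filter_suite(test, pattern))
        elif regex.search(test.id()):
            matched.addTest(test)
    return matched


loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(WidgetTests)
suite.addTests(loader.loadTestsFromTestCase(DatabaseTests))

filtered = filter_suite(suite, r'Widget')
print(filtered.countTestCases())  # -> 2: only the WidgetTests remain
```

Discovery-level filters (the first two bullets) save even more time,
since they avoid importing the unmatched modules in the first place.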

When I'm debugging a test failure, I filter out everything except that
single test to get instant feedback.  When I'm done, I run the tests
for the whole package before I check in.  I let buildbot run the full
test suite to catch regressions.

> This is a stark difference in both purpose and degree from other kinds
> of test: most non-unit tests (acceptance, integration, performance,
> etc.) will need to exercise the full application stack, which
> necessarily brings in costly set-up and execution times. Non-unit test
> suites, simply by the nature of what they're designed for, tend to have
> a much greater range in how long they will take to run.

That is true; our functional test suite takes over 20 minutes (for 272
tests).

Marius Gedminas
-- 
A: No.
Q: Should I include quotations after my reply?


More information about the testing-in-python mailing list