[TIP] unittest outcomes (pass/fail/xfail/error), humans and automation

Robert Collins robertc at robertcollins.net
Mon Dec 14 00:41:20 PST 2009


So I've been thinking about outcomes. As a user of unittest you get a
fixed, somewhat arbitrary amount of resolution: in Python 2.6 we offer
'success', 'fail', and 'error'. In Python 2.7 we'll add 'skip',
'expected failure', and its converse, 'unexpected success'.
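
For concreteness, the 2.7 additions are spelled with the new decorators,
something like:

    import unittest

    class OutcomeDemo(unittest.TestCase):

        @unittest.skip("demonstrating skip")
        def test_skip(self):
            self.fail("never runs")    # reported via TestResult.addSkip

        @unittest.expectedFailure
        def test_expected_failure(self):
            self.assertEqual(1, 2)     # reported via addExpectedFailure

        @unittest.expectedFailure
        def test_unexpected_success(self):
            pass                       # passes, so reported via addUnexpectedSuccess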

But users may want more detail than we offer: bzr wants to know about
tests that are missing a dependency (e.g. a win32-specific test on
unix) separately from tests that were skipped because the test itself
decided not to run, and separately again from cases where a test simply
wasn't relevant but we had constructed the test object anyway. (We
report missing dependencies by dependency, total up all the skips, and
don't report irrelevant cases at all.)

As authors of unittest we can ask users to squash their needs into a
fixed set of outcomes, and then get pressured to expand that set, or we
can try to find a way to make it extensible.

I think making it extensible is better, because if it is deliberately
extensible then the system can expand to meet the needs of our users,
rather than them having to tunnel through our system and do things like
parse exceptions in a reporter object in order to figure out that it
was, in fact, an XYZ outcome. (Yes, people do this!).
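
To be concrete about the kind of hack I mean - 'MissingFeature' here is
a made-up exception name, but the shape will be familiar:

    import unittest

    class SniffingResult(unittest.TestResult):
        """A reporter that recovers an outcome unittest doesn't model."""

        def __init__(self):
            unittest.TestResult.__init__(self)
            self.missing_features = []

        def addError(self, test, err):
            exc_type, exc_value, tb = err
            # Sniff the exception to guess at the 'real' outcome.
            if exc_type.__name__ == 'MissingFeature':
                self.missing_features.append((test, str(exc_value)))
            else:
                unittest.TestResult.addError(self, test, err)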

So far, I've been extending things by having the TestCase try for a new
method on TestResult (e.g. addSkip) and, if it's not present, map the
outcome to a 'similar' one (e.g. addSkip -> addSuccess).
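
In code, that fallback amounts to something like this (a sketch of the
pattern rather than the exact testtools implementation):

    def report_skip(result, test, reason):
        # Use the richer API if the result object has grown it...
        add_skip = getattr(result, 'addSkip', None)
        if add_skip is not None:
            add_skip(test, reason)
        else:
            # ...otherwise degrade the outcome to the nearest 'similar' one.
            result.addSuccess(test)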

This isn't desirable, because it depends on all the components of a
system being upgraded to support a new outcome before it is broadly
available.

I have a few things *I* want from any extensible set of outcomes, and
I'm hoping to gather more from this list (beyond obvious things like
beautiful code :)).

I want to:
 - be able to automate with them: if I add an outcome 'MissingFeature',
I want to be able to add workflow and reports for it in my CI system or
test repository.
 - have them tunnel safely through intermediate result objects and
things like subunit without needing the intermediaries to have a
semantic understanding of the new outcome (sketched below).
 - be able to control whether such an outcome should be considered
ordinary or exceptional: should it cause a test suite to error, or even
to stop early?
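
Purely to make the tunnelling idea concrete - this is one possible
shape, not a design I'm committing to - an extensible outcome could be
a name plus an opaque details mapping, so an intermediary can relay it
without understanding it:

    import unittest

    class ForwardingResult(unittest.TestResult):
        """An intermediary that relays outcomes it has no knowledge of."""

        def __init__(self, target):
            unittest.TestResult.__init__(self)
            self._target = target

        def addOutcome(self, test, outcome, details=None):
            # 'outcome' is just a name like 'missing-feature'; 'details'
            # is an opaque mapping.  Neither needs to be understood here -
            # they are simply forwarded to the real reporter.
            self._target.addOutcome(test, outcome, details)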

That's about it, really.

Anyone else have needs or constraints they'd like to chip in? I will be
experimenting with this in testtools and, as with the addDetails API,
will be proffering the results of those experiments for the standard
library's unittest package once the API is working and in use in a few
projects.

(And you're all more than welcome to experiment with me :). The
addDetails API is present in testtools 0.9.0, and 0.9.2 is about to be
released - it's Python 3 compatible too, thanks to Benjamin Peterson,
for more enjoyment all around!)
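
If you want to play with the details API in the meantime, the shape is
roughly this (check the testtools docs for the exact signatures):

    from testtools import TestCase
    from testtools.content import Content
    from testtools.content_type import ContentType

    class DetailsDemo(TestCase):

        def test_example(self):
            # Attach a named, MIME-typed chunk of content to the test;
            # result objects that understand details receive it alongside
            # the outcome, and ones that don't simply ignore it.
            self.addDetail('log', Content(ContentType('text', 'plain'),
                                          lambda: [b'something worth reporting']))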

-Rob