[TIP] unittest outcomes (pass/fail/xfail/error), humans and automation

holger krekel holger at merlinux.eu
Mon Dec 14 07:46:26 PST 2009


Hi Rob,

On Mon, Dec 14, 2009 at 19:41 +1100, Robert Collins wrote:
> So I've been thinking about outcomes. As a user of unittest there is an
> arbitrary amount of resolution: in Python 2.6 we offer 'success, fail,
> error'. In Python 2.7 we'll offer skip, expected fail, and its converse,
> unexpected success.

This also reflects py.test's view of test outcomes - btw, I am generally happy
with the overall convergence on outcomes, except for "unconditional skips".  IMO

* 'skip' is an outcome for dependency/platform/resource mismatches.
  Given the right combination of resources, a skipped test will pass.
  IOW a skip is **necessarily** based on an environmental condition.

* 'xfail' is an outcome for implementation problems; it can, but does not
  need to, be tied to a condition ("fails due to refactoring XYZ").

In my experience, "unconditional skips" blur the line between using
skip and using xfail.
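
Roughly, in Python 2.7 unittest terms (just a sketch; the platform
check here stands in for whatever environmental condition applies):

    import sys
    import unittest

    class OutcomeExamples(unittest.TestCase):

        # skip: tied to an environmental condition; on a platform that
        # provides the resource, the test simply passes.
        @unittest.skipUnless(sys.platform.startswith("linux"),
                             "requires Linux")
        def test_linux_only_behaviour(self):
            self.assertTrue(True)

        # xfail: a known implementation problem; no environment makes
        # this pass until the code is fixed.
        @unittest.expectedFailure
        def test_known_bug(self):
            self.assertEqual(1 + 1, 3)

    if __name__ == "__main__":
        unittest.main()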

> [...]
> I have a few things *I* want from any extensible set of outcomes, and
> I'm hoping to gather more from this list (beyond obvious things like
> beautiful code :)).
> 
> I want to:
>  - be able to automate with them: if I add an outcome 'MissingFeature',
> I want to be able to add workflow and reports on that in my CI system or
> test repository.

Agreed.  Some kind of single-word string that can be one of
a known set of values (e.g. "pass", "fail", "error", "xfail", "skip", "xpass")
or a user-defined one?
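
Something along these lines, purely as a sketch (KNOWN_OUTCOMES and
is_known_outcome are made-up names here, not from any existing tool):

    # Hypothetical: outcomes as single-word string tokens.
    KNOWN_OUTCOMES = frozenset(
        ["pass", "fail", "error", "xfail", "skip", "xpass"])

    def is_known_outcome(outcome):
        """True for the agreed core set; anything else is treated as a
        user-defined outcome that tools simply pass through."""
        return outcome in KNOWN_OUTCOMES

    # A CI system could then build workflow around a user-defined
    # outcome such as "missingfeature" without the core knowing it.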

>  - I want them to tunnel safely through intermediate result objects and
> things like subunit without needing the intermediaries to have a
> semantic understanding of the new outcome.

Yep.  Of course, some outcomes can only be constructed through interaction
with details of the overall testing process (xfail is one example).
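
For the tunnelling itself I imagine something like a forwarding result
that treats unknown outcomes as opaque strings (only a sketch;
addOutcome is a made-up method, not an existing testtools or subunit
API):

    class ForwardingResult(object):
        """Hypothetical intermediary that passes outcomes through
        verbatim, with no semantic understanding of them."""

        def __init__(self, target):
            self.target = target

        def addOutcome(self, test, outcome, details=None):
            # No interpretation here: the outcome string and its
            # details tunnel straight through to the real result.
            self.target.addOutcome(test, outcome, details)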

>  - I want to be able to control whether this outcome should be
> considered ordinary or exceptional: should it cause a test suite to
> error, or even to stop early?

That's a matter of test running to me; it has almost nothing to do with
representing test outcomes.
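
I.e. the runner decides what is exceptional, not the outcome; roughly
(hypothetical names again):

    # Runner-side policy, kept separate from the outcome representation:
    FAILS_RUN = set(["fail", "error", "xpass"])
    STOPS_RUN_EARLY = set(["error"])

    def run_policy(outcome):
        """Return (counts_as_failure, stop_early) for an outcome string."""
        return (outcome in FAILS_RUN, outcome in STOPS_RUN_EARLY)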

> That's about it really. 
> 
> Anyone else have needs or constraints they'd like to chip in? I will be
> experimenting with this in testtools, and like the addDetails API will
> be proffering the results from those experiments for the standard
> library's unittest package, once the API is working and in use in a few
> projects.

Do you have an interest in trying to standardize some bits across tools
and approaches?  I'd see the goal as having tools that process results
somewhat uniformly.

cheers,
holger
 
> (And you're all more than welcome to experiment with me :)). The
> addDetails API is present in testtools 0.9.0, and 0.9.2 is about to be
> released - it's Python 3 compatible too thanks to Benjamin Peterson, for
> more enjoyment all around!)

> -Rob




