[TIP] Implementation details of x-outcomes WAS: unittest outcomes (pass/fail/xfail/error), humans and automation

Robert Collins robertc at robertcollins.net
Mon Dec 14 14:05:43 PST 2009


On Mon, 2009-12-14 at 09:55 -0500, Olemis Lang wrote:
> /me changing subject to focus on requirements in the original thread ;o)
> 
> On Mon, Dec 14, 2009 at 3:41 AM, Robert Collins
> <robertc at robertcollins.net> wrote:
> >
> 
> Here we go again. Cool!
> 
> As I mentioned before, there are three main outcomes out there:
> 
>   - pass : the SUT satisfies the test condition (according to the
>     judgments of the author of the test ;o)
>   - failure : the SUT does not satisfy the test condition
>   - error : the test code contains bugs
> 
> Does everybody agree?

No :). POSIX specifies many more outcomes; users want more outcomes, and
in fact the difference between failure and error is meaningless for many
people, while being important for others.
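
For concreteness, here's roughly how stock unittest draws the traditional
pass/failure/error distinction today: it's purely by exception type, which is
exactly the distinction that some people find meaningless. A minimal,
illustrative example:

    import unittest

    class Examples(unittest.TestCase):
        def test_pass(self):
            self.assertEqual(1 + 1, 2)       # pass: the assertion holds

        def test_failure(self):
            self.assertEqual(1 + 1, 3)       # failure: AssertionError is raised

        def test_error(self):
            raise ValueError("bug in the test code")  # error: any other exception

    if __name__ == "__main__":
        unittest.main()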

> The goal then, IMO, should be to refine those outcomes (shouldn't it?)
> 
> If so:
>   - pass could be represented by warnings, and there is a whole std
>     hierarchy for that.
>   - failure could be represented by exceptions, and there is a whole
>     std hierarchy for that.
>   - something similar for errors (AFAICR there's an exception type for that).
> 
> so my question is:
>
> Q:
>   - Why not leave the overall API almost intact and just add a
>     new (exception | warning) hierarchy for testing purposes, with
>     further details included inside the exception objects?

I prefer not to discuss possible implementations till I've got a decent
sense of the goals and constraints :)
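
That said, just to keep the idea concrete: as I read it, the suggestion is
something like the sketch below. The names (TestOutcome, MissingFeature) are
made up for illustration, and stock unittest would still report such an
exception as a plain error unless the runner were taught about the hierarchy.

    import sys
    import unittest

    # Hypothetical exception hierarchy; purely illustrative.
    class TestOutcome(Exception):
        """Base class for outcome-signalling exceptions; extra detail
        travels on the exception instance."""

    class MissingFeature(TestOutcome):
        """The system under test lacks a feature this test needs."""

    class Example(unittest.TestCase):
        def test_needs_new_api(self):
            if sys.version_info < (2, 7):
                raise MissingFeature("needs an API that only exists in 2.7")
            self.assertTrue(True)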

> > I want to:
> >  - be able to automate with them: if I add an outcome 'MissingFeature',
> > I want to be able to add workflow and reports on that in my CI system or
> > test repository.
> >
> 
> While being language agnostic? Is that still a requirement?

I don't think language agnostic can really apply while we're talking
about the Python unittest API. I will want to figure out how to tunnel
whatever design we come up with through subunit, in both directions -
e.g. how to represent a custom outcome in JUnit in Python - but that's my
problem ;).

> >  - I want to be able to control whether this outcome should be
> > considered ordinary or exceptional: should it cause a test suite to
> > error, or even to stop early?
> >
> 
> This interpretation of (pass | fail) may be very controversial, and
> we will probably end up with a parallel hierarchy similar to ImportError
> and ImportWarning.

I don't think I'm really interpreting here at all - if you look at all
the CI tools around, they have varying degrees of sophistication, but
what they all seem to agree on is that a test run either passes, or does
not pass. Being able to inform an extensible system as to whether a new
outcome stops the test run being a pass is pretty important on that
basis: if you cannot inform the system, then new outcomes are either
always going to make the run 'fail', or they are never going to make the
run 'fail' - and it's pretty easy to guarantee someone will be
unhappy ;). As an experiment, imagine that Python 2.7's skip, expected
failure and unexpected success were written as extensions, not as a patch
to the core. We should be able to do that with whatever system we come
up with.
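
To make that experiment concrete, a rough sketch (addMissingFeature and the
policy flag are made up here; this is an illustration, not a proposal): the
extension supplies both the new outcome and the policy for whether that
outcome blocks a 'pass'.

    import unittest

    class OutcomePolicyResult(unittest.TestResult):
        """Sketch: records a hypothetical 'missing feature' outcome and lets
        the extension decide whether that outcome stops the run passing."""

        def __init__(self, missing_feature_blocks_pass=False):
            unittest.TestResult.__init__(self)
            self.missing_features = []
            self.missing_feature_blocks_pass = missing_feature_blocks_pass

        def addMissingFeature(self, test, reason):
            # A custom outcome method supplied by the extension, not the core.
            self.missing_features.append((test, reason))

        def wasSuccessful(self):
            if self.missing_feature_blocks_pass and self.missing_features:
                return False
            return unittest.TestResult.wasSuccessful(self)

The point is only that the extension, not the core, gets to say whether its
outcome turns a green run red.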

-Rob