[TIP] Implementation details of x-outcomes WAS: unittest outcomes (pass/fail/xfail/error), humans and automation

Olemis Lang olemis at gmail.com
Mon Dec 14 06:55:37 PST 2009

/me changing subject to focus on requirements in the original thread ;o)

On Mon, Dec 14, 2009 at 3:41 AM, Robert Collins
<robertc at robertcollins.net> wrote:

Here we go again. Cool !

As I mentioned before, there are three main outcomes out there:

  - pass: the SUT satisfies the test condition (according to the
    judgment of the author of the test ;o)
  - failure: the SUT does not satisfy the test condition
  - error: the test code itself contains bugs

Does everybody agree?

The goal then, IMO, should be to refine those outcomes (shouldn't it?)

If so:
  - pass could be represented by warnings, and there is a whole
    std hierarchy for that.
  - failure could be represented by exceptions, and there is a whole
    std hierarchy for that.
  - something similar for errors (AFAICR there is an exception type for that).

So my question is:

  - Why not leave the overall API almost intact and just add a
    new (exception | warning) hierarchy for testing purposes, with
    further details included inside the exception objects?
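
For instance, a refined failure could be a plain exception subclass that carries its extra details as attributes; this is only a sketch, and the class names here are hypothetical, not an existing API:

```python
class TestFailure(AssertionError):
    """Root of a hypothetical refined-failure hierarchy."""
    def __init__(self, message, **details):
        super().__init__(message)
        self.details = details   # structured data for reporters / CI tools

class MissingFeature(TestFailure):
    """The SUT lacks a feature the test depends on."""

try:
    raise MissingFeature("no unicode paths", feature="unicode-paths")
except TestFailure as exc:
    info = exc.details           # a runner can report this without new APIs
```

Because MissingFeature is still an AssertionError, an unmodified runner keeps counting it as an ordinary failure; only tools that know the refinement look at `.details`.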

> I want to:
>  - be able to automate with them: if I add an outcome 'MissingFeature',
> I want to be able to add workflow and reports on that in my CI system or
> test repository.

While remaining language-agnostic? Is that still a requirement?

>  - I want to be able to control whether this outcome should be
> considered ordinary or exceptional: should it cause a test suite to
> error, or even to stop early?

This interpretation of (pass | fail) may be very controversial, and
we would probably end up with a parallel hierarchy similar to
ImportError and ImportWarning.
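
As a sketch of how such a parallel hierarchy could answer the ordinary-vs-exceptional question above (all names hypothetical): pass-side refinements subclass Warning, fail-side ones subclass Exception, and the runner's policy reduces to an isinstance check:

```python
import warnings

class MissingFeatureWarning(Warning):
    """Pass-side refinement: test passed, but a feature was absent."""

class MissingFeatureError(Exception):
    """Fail-side refinement: the absent feature makes the test fail."""

def handle(outcome, fatal=()):
    """Classify a raised outcome under a configurable policy."""
    if isinstance(outcome, fatal):
        raise outcome                # exceptional: abort the suite early
    if isinstance(outcome, Warning):
        warnings.warn(outcome)       # ordinary pass-side note
        return "pass"
    return "fail"                    # ordinary failure
```

Whether 'MissingFeature' stops the suite is then configuration (the `fatal` tuple), not something baked into the outcome type itself.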




Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:
Automated init.  -

More information about the testing-in-python mailing list