[TIP] Implementation details of x-outcomes WAS: unittest outcomes (pass/fail/xfail/error), humans and automation

Olemis Lang olemis at gmail.com
Tue Dec 15 06:33:45 PST 2009

On Tue, Dec 15, 2009 at 9:12 AM, Olemis Lang <olemis at gmail.com> wrote:
> On Mon, Dec 14, 2009 at 5:05 PM, Robert Collins
> <robertc at robertcollins.net> wrote:
>> On Mon, 2009-12-14 at 09:55 -0500, Olemis Lang wrote:
>>> On Mon, Dec 14, 2009 at 3:41 AM, Robert Collins
>>> <robertc at robertcollins.net> wrote:
>>> > I want to:
>>> >  - be able to automate with them: if I add an outcome 'MissingFeature',
>>> > I want to be able to add workflow and reports on that in my CI system or
>>> > test repository.
>>> >  - I want to be able to control whether this outcome should be
>>> > considered ordinary or exceptional: should it cause a test suite to
>>> > error, or even to stop early?
>>> >
>>> This interpretation of (pass | fail) may be very controversial, and
>>> we'll probably end up with a parallel hierarchy similar to ImportError
>>> and ImportWarning
>> I don't think I'm really interpreting here at all -
> I was not talking about you. I was talking about the testing framework
> and the test author.

For example, when you mention MissingFeature, what is it? A failure or
a warning? IMO you cannot determine which, because its meaning is
tied to a (fact | concept | thing) that's beyond testing and
depends on the SUT and some particular semantics. For instance you
might say: the feature is missing, and therefore I cannot test
anything else, because everything depends on that feature and it was
supposed to be there (IOW it's a failure). OTOH you can say: well, the
feature is missing, so let's use a mock or replacement, or let's just
skip the test, or let's consider that everything is ok if the feature
is not there, provided that the SUT can recover from that situation, or
... (IOW it's a warning and the test should pass)

There you have the interpretation I was talking about. In that case
you'll probably end up defining both `MissingFeatureError` and
`MissingFeatureWarning` in order to resolve the ambiguity and let
the test framework determine whether the test case is ok or not.
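Such a parallel hierarchy might look like the sketch below. The
`classify_outcome` function is purely illustrative, a stand-in for
whatever hook the framework would provide, not an actual unittest API:

```python
import warnings

class MissingFeatureError(Exception):
    """Raised when an absent feature must make the test fail."""

class MissingFeatureWarning(Warning):
    """Emitted when an absent feature is acceptable; the test still passes."""

def classify_outcome(test_callable):
    """Hypothetical framework hook: map the two MissingFeature signals
    onto outcomes ('fail', 'pass-with-warning', 'pass')."""
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        try:
            test_callable()
        except MissingFeatureError:
            return "fail"
    if any(issubclass(w.category, MissingFeatureWarning) for w in caught):
        return "pass-with-warning"
    return "pass"

# Usage: the same missing feature, two different semantics.
def strict_test():
    raise MissingFeatureError("feature 'foo' was supposed to be there")

def lenient_test():
    warnings.warn("feature 'foo' absent, using a mock",
                  MissingFeatureWarning)

print(classify_outcome(strict_test))   # fail
print(classify_outcome(lenient_test))  # pass-with-warning
```

The Error/Warning split mirrors ImportError/ImportWarning: the class
chosen by the test author, not the framework, decides whether the
outcome is exceptional.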



Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:
olemis changed the public information on the FLiOOPS project
More information about the testing-in-python mailing list