[TIP] unittest outcomes (pass/fail/xfail/error), humans and automation

Olemis Lang olemis at gmail.com
Mon Dec 14 07:59:12 PST 2009


On Mon, Dec 14, 2009 at 10:46 AM, holger krekel <holger at merlinux.eu> wrote:
>
[...]
>
>>  - I want to be able to control whether this outcome should be
>> considered ordinary or exceptional: should it cause a test suite to
>> error, or even to stop early?
>
> That's a matter of test-running to me; it has almost nothing to do
> with representing test outcomes.
>

IMO it's the responsibility of the test case (or suite) to determine
whether something failed or not. The runner's job should be just to
orchestrate the whole process and be a little bit proactive. Not sure
about having the runner decide to stop early, BTW.
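
To make that split concrete, here is a minimal sketch using the
stdlib's unittest (the RecordingResult name is mine, not an existing
API): the result object only records outcomes, while stopping early
stays a run-control decision expressed through
TestResult.stop()/shouldStop, which TestSuite.run() already honors
between tests.

import unittest

class RecordingResult(unittest.TestResult):
    """Records outcomes; the tests themselves decide pass/fail."""

    def __init__(self):
        unittest.TestResult.__init__(self)
        self.outcomes = []

    def addSuccess(self, test):
        unittest.TestResult.addSuccess(self, test)
        self.outcomes.append((test, 'pass'))

    def addFailure(self, test, err):
        unittest.TestResult.addFailure(self, test, err)
        self.outcomes.append((test, 'fail'))
        # A driver that wants fail-fast behaviour calls self.stop()
        # here; that only sets self.shouldStop, and the suite/runner
        # checks it between tests.

    def addError(self, test, err):
        unittest.TestResult.addError(self, test, err)
        self.outcomes.append((test, 'error'))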

>>
>> Anyone else have needs or constraints they'd like to chip in? I will be
>> experimenting with this in testtools and, as with the addDetails API,
>> will be proffering the results of those experiments for the standard
>> library's unittest package, once the API is working and in use in a few
>> projects.
>
> Do you have interest in trying to standardize some bits across tools
> and approaches?  I'd see the goal in having tools that process results
> somewhat uniformly.
>

+1
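
FWIW, a hedged sketch of what a common denominator might look like
(every name here is hypothetical, not a proposal for a concrete API):
if tools agreed on a tiny neutral outcome record, reporters and other
consumers could process results uniformly regardless of which runner
produced them.

class Outcome(object):
    PASS, FAIL, XFAIL, ERROR = 'pass', 'fail', 'xfail', 'error'

    def __init__(self, test_id, status, details=None):
        self.test_id = test_id        # e.g. dotted name of the test
        self.status = status          # one of the constants above
        self.details = details or {}  # tracebacks, captured logs, ...

def summarize(outcomes):
    # A reporter only needs to understand Outcome, not the runner.
    counts = {}
    for o in outcomes:
        counts[o.status] = counts.get(o.status, 0) + 1
    return counts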

-- 
Regards,

Olemis.

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/



