[TIP] unitteset outcomes (pass/fail/xfail/error), humans and automation

Michael Foord fuzzyman at voidspace.org.uk
Sat Dec 26 06:21:10 PST 2009


On 24/12/2009 19:55, Robert Collins wrote:
> On Wed, 2009-12-16 at 15:15 +0000, Michael Foord wrote:
>    
>> On 15/12/2009 22:47, Robert Collins wrote:
>>      
>>> On Tue, 2009-12-15 at 16:24 -0500, Olemis Lang wrote:
>>>
>>>        
>>>> But anyway , it is often possible to wrap the execution of those TFs
>>>> using customized instances of TestCase thereby recording the results
>>>> provided by those frameworks using TestResult&  Co.
>>>>
>>>> PS: Does anybody knows about an exception to that «rule» ?
>>>>
>>>>          
>>> Well yes, at the moment you cannot preserve the full resolution because
>>> the unittest module isn't very extensible.
>>>
>>> See for instance what nosetests's ErrorClassPlugin does / recommends:
>>> monkey patching the result object :- something totally incompatible if
>>> the result object is e.g. a gui result, or a remoting result.
>>>
>>>        
>> I'm definitely in favour of making unittest more modular and
>> extensible, but am worried about creating a situation where there are
>> multiple incompatible extensions. It seems like this could be a
>> problem with the proposed outcomes.
>>      
> Well, I haven't proposed an implementation yet ;). Consider though
> that /right now/ we have multiple incompatible extensions: nose monkey
> patches result objects, testtools permits registration, bzr until
> recently reimplements TestCase.run (but now registers via testtools
> API).
>
>    
>> I'm interested in this discussion but have never had the need for
>> custom outcomes, so don't feel I can add very much here.
>>      
> I think a very easy way to think about the problem is to imagine that
> skip&  expected fail were not in the standard library, and you wanted to
> add them *as an extension*.
>    

Heh, fair. If you can suggest an API perhaps we can work on an 
implementation... Skipping and expected fail can then be reimplemented 
as extensions.

With outcomes would we have the concept of "pass" and "fail" as 
categories of outcomes (affecting the way that they are reported)?

All the best,

Michael


> -Rob
>    


-- 
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog





More information about the testing-in-python mailing list