[TIP] unittest2 plugins

Michael Foord fuzzyman at voidspace.org.uk
Mon Jul 26 08:00:32 PDT 2010


On 26/07/2010 14:58, jason pellerin wrote:
> On Sat, Jul 24, 2010 at 5:39 AM, Michael Foord
> <fuzzyman at voidspace.org.uk>  wrote:
>    
>> On 24/07/2010 09:02, meme dough wrote:
>>      
>>> Hi,
>>>
>>> I see the email about unittest2 plugins.  I had a play with plugin
>>> mechanism.
>>>
>>>
>>>        
>> This is *very* much an experimental work in progress. When it is a bit more
>> complete I will send an email to python-dev and this list describing the
>> mechanism and solicit feedback on it.
>>
>> For those who just want to play now it is in the "plugins" branch of the
>> mercurial repo at:
>>
>>     http://hg.python.org/unittest2
>>      
> This is quite encouraging! I had a look at the branch this weekend,
> and I can see pretty clearly how to implement most of nose's builtin
> plugins. The major exceptions are the multiprocessing plugin, which
> replaces the test runner, and nose's support for custom exception
> handling (like skiptest, but generalized -- in nose, skips are handled
> by a plugin that registers itself as the processor for SkipTest
> exceptions). Do you have any thoughts about how to support those kinds
> of plugins?
>
>    

I'll shortly be checking in a minor change that allows a plugin to 
completely take over the test run - specifically motivated by supporting 
multiprocessing test runs.
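
Very roughly, I imagine something like the sketch below: a plugin handles 
an event at the start of the run, claims it, and then executes the suite 
however it likes (a multiprocessing plugin would farm the tests out to 
worker processes). The hook name and the handled / test / result 
attributes here are purely illustrative - nothing about this is settled 
yet, since the change isn't checked in:

    from unittest2.events import hooks

    def take_over_run(event):
        # Illustrative names only: claim the run so the default
        # machinery steps aside, then execute the collected suite
        # ourselves. A real multiprocessing plugin would distribute
        # event.test across worker processes instead of running it
        # in-process like this.
        event.handled = True
        event.test.run(event.result)

    hooks.startTestRun += take_over_run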

As for handling custom outcomes the way SkipTest is handled... Currently 
the onTestFail event is not raised for skips, expected failures, or 
unexpected successes.

A possibility would be to raise onTestFail even for these outcomes, but 
with a flag on the event making it clear that it is a skip. That way 
plugins (like the debugger plugin) that are only interested in genuine 
errors / failures have an easy way to tell. The event could also have a 
"suppress" flag that plugins can set to say "I'm handling this failure, 
ignore it". This would be a step towards making the result objects less 
relevant.
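
To make that concrete, a plugin claiming a custom outcome might look 
something like the sketch below. The skipped and suppress attributes are 
just the proposal above, and the exact event attributes (exc_info etc.) 
and registration style may well change:

    from unittest2.events import hooks

    class TodoTest(Exception):
        """Hypothetical custom outcome, along the lines of SkipTest."""

    def handle_todo(event):
        if event.skipped:
            # Plugins only interested in genuine errors / failures
            # (like the debugger plugin) just return here.
            return
        if event.exc_info and issubclass(event.exc_info[0], TodoTest):
            # "I'm handling this failure, ignore it"
            event.suppress = True
            print('TODO in %s' % event.test)

    hooks.onTestFail += handle_todo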

This would mean that a test which failed but had the failure suppressed 
would get reported as a success (and the testStop event would be fired 
with a success result). What should be done in the testStop event for 
tests like this?

Michael Foord


> I'm looking forward to working with this, it looks very flexible and clean.
>
> JP
>    


-- 
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog
