[TIP] suitability of unittest.py for large-scale tests

Michael Foord fuzzyman at voidspace.org.uk
Tue Apr 7 08:51:22 PDT 2009


Marius Gedminas wrote:
> On Tue, Apr 07, 2009 at 09:27:59AM -0400, Douglas Philips wrote:
>   
>> On or about 2009 Apr 7, at 8:40 AM, Laura Creighton indited:
>>     
>>> At any rate, somebody was saying that they wanted the behaviour that
>>> when a test failed, everything came to a screeching halt, because
>>> otherwise it is too easy to lose the failing test in the
>>> voluminous output.  I understand this.  But sometimes what you want
>>> to do is to change something, and then divide up all the failing
>>> tests among the team.
>>>       
>> It might have been my message, but since you didn't find it, I'm not  
>> sure if my reply will be on target or not.
>>
>> What I want is for everything to come to a screeching halt if there is  
>> an error -loading- a test module. i.e. before any tests are even run.
>>     
>
> I'm with Laura on this (see my other email for the rationale), and don't
> want that to happen.
>
>   

I think it is a requirement - and it is currently what happens with 
unittest. If an exception occurs while loading a test module, the test 
run fails before it starts.

If the opposite were true then your test run wouldn't mean what you 
think it means - and one error saying "oops half your tests didn't even 
run" would be hard to miss.

If you want a different failure mode (e.g. hardware not being available) 
then you need to build that into your test system yourself. There are 
plenty of ways other than having test modules fail at import time: mark 
the tests as skipped, raise an exception in setUp, or have load_tests 
return None or an alternate test suite, for example.
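
Something along these lines, for example (just a rough sketch: 
hardware_available() is a stand-in for whatever check your own system 
needs, and it assumes the skip support and load_tests protocol from 
unittest2 / the new unittest):

import unittest

def hardware_available():
    # Stand-in for whatever probe your test system actually needs.
    return False

class DeviceTests(unittest.TestCase):
    def setUp(self):
        # Skipping (or raising) here affects only these tests, rather
        # than the whole module blowing up at import time.
        if not hardware_available():
            self.skipTest("device not attached")

    def test_something(self):
        self.assertTrue(True)

def load_tests(loader, standard_tests, pattern):
    # The load_tests protocol lets the module decide what it contributes
    # to the run: an empty suite when the hardware is missing, the
    # normal tests otherwise.
    if not hardware_available():
        return unittest.TestSuite()
    return standard_tests

if __name__ == '__main__':
    unittest.main()

The point being that the decision about missing hardware is made per 
test (or per module) when the tests are loaded or set up, rather than by 
letting an import blow up half way through collecting the test run.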

What Laura was saying seemed to be slightly different - she was arguing 
against the test run ending on test failure; no argument there from me.

Michael

>> My experience has been that if the regression runner simply reports  
>> that a module failed to load(import), that is what is too easy to get  
>> lost.
>>     
>
> Now rewrite this to read "My experience has been that if the regression
> runner simply reports that one of the tests failed, that is what is too
> easy to get lost".
>
> The problem is with the method of reporting, not with the lack of an
> early abort.
>
>   
>> But you reminded me, our custom regression runner has a "halt on first  
>> failure" option.
>>     
>
> This is a common and useful option.
>
>   
>> We added that option so that someone working on a new  
>> device doesn't have to wait for the entire regression to finish
>>     
>
> It is sometimes more useful not to stop on first failure, but to produce
> sufficient error diagnostics so that you can start investigating that
> first failure while the rest of the tests run in the background.
>
> zope.testing does this, and I couldn't live without such early
> reporting.
>
> There's nothing more frustrating than seeing a capital F in the middle
> of a sea of dots, and knowing you'll have to wait 15 minutes before you
> can start investigating it.
>
>   
>> (we  
>> have an open TODO for graceful shutdown, which for us is tricky since  
>> the device shouldn't be left in certain states for very long). For my  
>> test team's use, we always want to run all the tests to get an  
>> overview of how functional the device(s) are.
>>     
>
> Regards,
> Marius Gedminas
>   


-- 
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog




