[TIP] suitability of unittest.py for large-scale tests

Laura Creighton lac at openend.se
Tue Apr 7 08:11:46 PDT 2009


In a message of Tue, 07 Apr 2009 09:27:59 EDT, Douglas Philips writes:
>On or about 2009 Apr 7, at 8:40 AM, Laura Creighton indited:
>> At any rate, somebody was saying that they wanted the behaviour that
>> when a test failed, everything came to a screeching halt, because
>> otherwise it is too easy to lose the failing test in the
>> voluminous output.  I understand this.  But sometimes what you want
>> to do is to change something, and then divide up all the failing
>> tests among the team.
>
>
>Laura,
>
>It might have been my message, but since you didn't find it, I'm not  
>sure if my reply will be on target or not.
>
>What I want is for everything to come to a screeching halt if there is  
>an error -loading- a test module, i.e., before any tests are even run.
>
>My experience has been that if the regression runner simply reports  
>that a module failed to load (import), that report is what is too  
>easy to lose.

Aha.  That's the problem with having ideas when you are on the tram:
they may not match the notes that sparked them.  Which is why I looked
for the message, but goodness, haven't we been doing a lot of talking
over the last three days. :)  Sorry that I misunderstood.
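
For what it's worth, here is a minimal sketch of that fail-fast
loading: import every test module up front, and abort before a single
test runs if any import fails.  (TEST_MODULES and the module names
here are hypothetical; substitute whatever your runner discovers.)

    import importlib
    import sys
    import unittest

    # Hypothetical module list; substitute your runner's discovery.
    TEST_MODULES = ["test_alpha", "test_beta", "test_gamma"]

    def import_or_die(names):
        # Import every test module before any test runs, so a broken
        # module halts the whole run instead of becoming one easily
        # overlooked line in a long report.
        for name in names:
            try:
                importlib.import_module(name)
            except Exception:
                sys.stderr.write("FATAL: cannot import %s\n" % name)
                raise  # re-raise: nothing runs until this is fixed

    if __name__ == "__main__":
        import_or_die(TEST_MODULES)
        suite = unittest.defaultTestLoader.loadTestsFromNames(TEST_MODULES)
        unittest.TextTestRunner(verbosity=2).run(suite)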

>But you reminded me, our custom regression runner has a "halt on first  
>failure" option. We added that option so that someone working on a new  
>device doesn't have to wait for the entire regression to finish (we  
>have an open TODO for graceful shutdown, which for us is tricky since  
>the device shouldn't be left in certain states for very long). For my  
>test team's use, we always want to run all the tests to get an  
>overview of how functional the device(s) are.
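
Later versions of unittest grew exactly this behaviour as the
failfast flag.  A minimal sketch of such an option (the
run_regression wrapper and the "tests" directory are hypothetical),
with a spot marked where a graceful device shutdown could go:

    import unittest

    def run_regression(halt_on_first_failure=False):
        # Hypothetical layout: tests discovered under a "tests" directory.
        suite = unittest.defaultTestLoader.discover("tests")
        # failfast tells the runner to stop after the first failing or
        # erroring test; with it off, the whole regression runs so you
        # get an overview of how functional the device is.
        runner = unittest.TextTestRunner(verbosity=2,
                                         failfast=halt_on_first_failure)
        result = runner.run(suite)
        # The runner has returned by this point, however the run ended,
        # so this is a natural place to put the device back into a
        # safe state (the "graceful shutdown" part).
        return result

    if __name__ == "__main__":
        run_regression(halt_on_first_failure=True)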

It's been very interesting finding out what is important in environments
that are very different from one's own.  Thank you.
>
>		--Doug

Laura



