[TIP] suitability of unittest.py for large-scale tests

Doug Philips dgou at mac.com
Tue Apr 7 11:47:22 PDT 2009


On or about Tuesday, April 07, 2009, at 11:20AM, Marius Gedminas indited:
>On Tue, Apr 07, 2009 at 09:27:59AM -0400, Douglas Philips wrote:
>> What I want is for everything to come to a screeching halt if there is  
>> an error -loading- a test module. i.e. before any tests are even run.
>
>I'm with Laura on this (see my other email for the rationale), and don't
>want that to happen.

We consider an unloadable module to be a fundamental error that calls into question the very environment in which the tests are running, and hence the veracity of any results we might get. We don't want to start chasing down the rabbit hole of trying to reflectively "understand" what kind of failure it was and then try to take remedial action (such as running other tests). YMMV, and seemingly does. This is a pretty clear layer of separation which should be able to support policy (continue on import errors, abort on import errors, etc.) pretty generically...
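To make that concrete, a policy layer along these lines could sit in front of the stock unittest loader. This is a rough sketch, not our actual harness; the module names and the abort_on_import_error flag are just for illustration:

    import sys
    import unittest

    def build_suite(module_names, abort_on_import_error=True):
        """Import every test module up front, before any test runs.

        The policy lives here: either abort the whole run on the first
        import failure, or skip the broken module and keep going.
        """
        suite = unittest.TestSuite()
        loader = unittest.defaultTestLoader
        for name in module_names:
            try:
                __import__(name)
                module = sys.modules[name]
            except Exception as exc:
                if abort_on_import_error:
                    # Fundamental failure: refuse to run any tests at all.
                    sys.stderr.write("FATAL: cannot import %s: %s\n" % (name, exc))
                    sys.exit(2)
                sys.stderr.write("WARNING: skipping %s: %s\n" % (name, exc))
                continue
            suite.addTests(loader.loadTestsFromModule(module))
        return suite

    if __name__ == "__main__":
        # Hypothetical module names; a real harness would discover these.
        runner = unittest.TextTestRunner(verbosity=1)
        runner.run(build_suite(["test_smoke", "test_devices"]))

The point is that the import loop runs to completion (or not) before a single test executes, so "abort" vs. "continue" is a one-flag decision rather than something inferred from partial results.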



>> My experience has been that if the regression runner simply reports  
>> that a module failed to load(import), that is what is too easy to get  
>> lost.
>
>Now rewrite this to read "My experience has been that if the regression
>runner simply reports that one of the tests failed, that is what is too
>easy to get lost".

Doesn't compute. A failed module load has unknown consequences: tests may merely be missing, or something more severe may be wrong; you just don't know in the general case. See above.


>> We added that option so that someone working on a new  
>> device doesn't have to wait for the entire regression to finish
>
>It is sometimes more useful not to stop on first failure, but to produce
>sufficient error diagnostics so that you can start investigating that
>first failure while the rest of the tests run in the background.

Yes, that could be interesting. Our simplistic regression logs are formatted for easy line-at-a-time parsing, so dumping diagnostics in the middle of a run would confuse our reporting tools. For other reasons we need to separate regression info from reporting, and that would help in this kind of situation as well.
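To sketch what I mean by line-at-a-time friendly logs (this is illustrative, not our actual tooling; the PASS/FAIL/ERROR line format is made up), a result class could emit exactly one parseable line per test and leave the tracebacks for a separate report:

    import sys
    import unittest

    class LineResult(unittest.TestResult):
        """One machine-parseable line per test; tracebacks stay out of the main log."""

        def __init__(self, stream=sys.stdout):
            unittest.TestResult.__init__(self)
            self.stream = stream

        def addSuccess(self, test):
            unittest.TestResult.addSuccess(self, test)
            self.stream.write("PASS %s\n" % test.id())

        def addFailure(self, test, err):
            unittest.TestResult.addFailure(self, test, err)
            self.stream.write("FAIL %s\n" % test.id())

        def addError(self, test, err):
            unittest.TestResult.addError(self, test, err)
            self.stream.write("ERROR %s\n" % test.id())

    # Usage: suite.run(LineResult()); afterwards result.failures and
    # result.errors hold the full tracebacks for a separate detailed report.

That way the main log stays trivially parseable while the diagnostics you'd want to start investigating mid-run live somewhere the reporting tools never read.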

-Doug



