[TIP] Test discovery for unittest
Michael Foord
fuzzyman at voidspace.org.uk
Mon Apr 6 19:57:59 PDT 2009
Douglas Philips wrote:
> On or about 2009 Apr 6, at 6:26 PM, John Arbash Meinel indited:
>
>> Auto-discovery will have similar problems. If you fail to import,
>> do you raise an error, or do you just skip that file? (Skipping
>> things silently can easily lead to tests that don't run when you
>> expected them to.)
>>
>
>
> I want the regression run to stop if there are any errors.
> I don't mind if the loader continues past the first one so that it
> can report as many import errors in one run as possible, but I
> absolutely want that to abort the regression run. We have a custom
> loader that used to be forgiving, but the fact is that once you have
> more than a hundred tests, you don't see the warning amid all the
> other output. Printing out a summary of the load errors and aborting
> the regression run will get everyone's attention.
>
I agree.
Michael
> One could argue that missing tests due to import errors are just
> missing tests and as such should be caught by the post-regression
> downstream results processors, but I would much prefer to have the
> option to abort running any tests (for various reasons, including
> being able to detect test code problems as early as possible).
>
> -Doug
>
>
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
>
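For illustration, here is a minimal sketch of the kind of loader Doug
describes: it continues past the first import failure so it can collect
them all, then prints a summary and aborts rather than silently
skipping modules. The class and method names are hypothetical, not part
of unittest's actual API.

```python
import importlib
import traceback
import unittest


class StrictLoader(unittest.TestLoader):
    """Hypothetical loader: collect every import error during loading,
    then abort the run with a summary instead of skipping silently."""

    def load_module_tests(self, module_names):
        suite = unittest.TestSuite()
        errors = []
        for name in module_names:
            try:
                module = importlib.import_module(name)
            except Exception:
                # Keep going so one run reports all import errors.
                errors.append((name, traceback.format_exc()))
            else:
                suite.addTests(self.loadTestsFromModule(module))
        if errors:
            # Summarise every failure, then abort before any test runs.
            summary = "\n".join(
                "%s:\n%s" % (name, tb) for name, tb in errors
            )
            raise ImportError(
                "Could not import test modules:\n" + summary
            )
        return suite
```

An alternative (the forgiving behaviour Doug argues against) would be
to wrap each failure as a synthetic failing test so the run continues;
the point of this sketch is that aborting with a summary is harder to
overlook.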
--
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog