[TIP] Test discovery for unittest

Marius Gedminas marius at gedmin.as
Tue Apr 7 06:58:43 PDT 2009


On Tue, Apr 07, 2009 at 03:57:59AM +0100, Michael Foord wrote:
> Douglas Philips wrote:
> > On or about 2009 Apr 6, at 6:26 PM, John Arbash Meinel indited:
> >   
> >> Auto-discovery will have similar problems. If you fail to import,
> >> do you raise an error, or do you just skip that file? (Skipping
> >> things silently can easily lead to tests that don't run when you
> >> expected them to.)
> >
> > I want the regression run to stop if there are any errors.

> I agree.

I disagree.  An import failure should not be silently ignored (or
merely reported as a warning); it should be reported with the same
seriousness as a test failure, but it should not preclude other tests
from being run.

My rationale: Independent failures should be reported independently.  It
would be nice if everybody stopped working on everything else when your
continuous integration framework is showing red, but, in my experience,
this never happens.  If people look at it, they say "no, this failure
can't possibly be my fault, I'll wait for the person who broke it to fix
it"---and then happily continue committing their own code.  If that
unrelated failure hides new regressions, those cannot be fixed
independently (and won't even be noticed until somebody fixes the first
import failure).  It's better to notice regressions as soon as they
happen, without having to wait for fixes to unrelated regressions.
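
Concretely, the loader can wrap the import error in a synthetic test
case that fails, so the error shows up in the results like any other
failure.  A sketch (import_error_test is my own name, not anything
that exists in unittest):

    import traceback
    import unittest

    def import_error_test(module_name):
        # Call this from inside the loader's "except" block, so that
        # format_exc() still sees the failed import's traceback.
        tb = traceback.format_exc()

        class ImportFailure(unittest.TestCase):
            def runTest(self):
                self.fail("could not import %s:\n%s" % (module_name, tb))

        return ImportFailure()

    # In the loader, roughly:
    #     try:
    #         module = __import__(name)
    #     except ImportError:
    #         suite.addTest(import_error_test(name))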

> > I don't mind if the loader continues past the first one so that it can
> > report as many import errors in one run as possible, but I absolutely
> > want that to abort the regression run.

Perhaps we are in violent agreement here?

I understand the word "abort" to imply "*not* continue past the error".
What I want is for any import errors to mark the regression run as
unsuccessful.
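
With import errors recorded as failing test cases, the usual
exit-status check already does the right thing.  A sketch (build_suite
stands in for whatever loader/discovery mechanism you use):

    import sys
    import unittest

    suite = build_suite()  # hypothetical: discovery with import
                           # errors turned into failing test cases
    result = unittest.TextTestRunner().run(suite)
    # wasSuccessful() is False if there were any failures or errors,
    # so an import failure marks the whole run as failed without
    # having prevented the other tests from running.
    sys.exit(not result.wasSuccessful())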

> > We have a custom loader that
> > used to be forgiving, but the fact is once you have more than a
> > hundred tests, you don't see the warning for all the other output.

Absolutely.

> > Printing out a summary of the load errors and aborting the regression
> > run will get everyone's attention.

If you replace "aborting the regression run" with "marking the regression
run as a failure", then I agree absolutely.

> > One could argue that missing tests due to import errors are just
> > missing tests

And one would be mistaken.

If you want to skip some tests (because of missing dependencies or
whatnot), do that explicitly, not by letting the import error propagate.
I believe unittest recently gained support for skipping tests.
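
If I remember the new API correctly, explicit skipping looks something
like this (PIL is just an example of an optional dependency):

    import unittest

    try:
        import PIL
        have_pil = True
    except ImportError:
        have_pil = False

    class ThumbnailTests(unittest.TestCase):

        @unittest.skipUnless(have_pil, "PIL is not installed")
        def test_thumbnail(self):
            pass  # the real test body would use PIL here

The skip is then recorded and reported, instead of the whole module
silently vanishing from the run.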

> > and as such should be caught by the post-regression downstream
> > results processors, but I would much prefer to have the option
> > to abort running any tests (for various reasons including being
> > able to detect test code problems as early as possible).

Marius Gedminas
-- 
EMACS is a good OS.  The only thing it lacks is a decent text-editor.