dgou at mac.com
Fri Apr 24 12:47:49 PDT 2009
On or about 2009 Apr 24, at 3:29 PM, Doug Hellmann indited:
> The SkipTest exception is an interesting solution. It seems like it
> would work well for tests where fixtures aren't expensive, but I've
> found that explicitly tagging tests to be run or not works better
> for me. The tag can be applied conditionally via a decorator when
> the test is imported, so you still get the dynamic behavior based on
> available resources.
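In code, the import-time tagging Doug describes might look something like the sketch below. It is only an illustration, not his actual decorator: the probe `database_available` and the decorator `skip_unless` are hypothetical names, and it uses the SkipTest exception that later landed in the standard unittest (Python 2.7/3.x). The key point is that the condition is evaluated once, when the module is imported, so the tag is static for the whole run.

```python
import functools
import unittest

def database_available():
    """Hypothetical resource probe; evaluated once, at import time."""
    return False  # pretend the resource is missing for this sketch

def skip_unless(condition, reason):
    """Decorator applied at import time: `condition` is already decided
    when the test class is defined, so the tag is static for the run."""
    def decorator(test_method):
        @functools.wraps(test_method)
        def wrapper(self, *args, **kwargs):
            if not condition:
                raise unittest.SkipTest(reason)
            return test_method(self, *args, **kwargs)
        return wrapper
    return decorator

class DatabaseTests(unittest.TestCase):
    @skip_unless(database_available(), "database not reachable")
    def test_query(self):
        self.assertTrue(True)  # would exercise the database here
```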
Maybe this doesn't apply to most unit testing environments, but in ours we
don't usually know at module/class load (interpretation) time whether a
particular test method is going to be skipped. Sometimes we know once
everything is loaded and the device under test has been interrogated, but in
many cases the test method itself has to do very specific, detailed
interrogation (of both the device and the configuration environment) and
then decide, at the time it runs, whether it should skip. I can see how
load-time skip-ability would be useful, though we haven't needed it. (We
treat failure to load a test module as a failure of the code review
process :) ).
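To make the run-time flavor concrete, here is a minimal sketch under assumed names (`FakeDevice`, its `supports` method, and the feature strings are all invented for illustration). Each test method interrogates the device itself and only then decides whether it applies, again expressed with the later standard-library SkipTest:

```python
import unittest

class FakeDevice:
    """Hypothetical stand-in for the device under test."""
    def supports(self, feature):
        return feature == "ethernet"

class DeviceTests(unittest.TestCase):
    def setUp(self):
        self.device = FakeDevice()

    def test_ethernet_link(self):
        # Run-time interrogation: only now, with the device in hand,
        # can the method decide whether it applies.
        if not self.device.supports("ethernet"):
            raise unittest.SkipTest("no ethernet port on this unit")
        self.assertTrue(self.device.supports("ethernet"))

    def test_wifi_roaming(self):
        if not self.device.supports("wifi"):
            raise unittest.SkipTest("no wifi radio on this unit")
        self.fail("would exercise wifi roaming here")
```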
We added a few new exception classes: one for skipping, one for "not yet
implemented" (so we can track any work in progress that accidentally
escapes, though we don't run into it much), one for "indeterminate
results", etc.
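An exception family along those lines might look like this. The class and category names here are illustrative guesses, not the ones from the code being described:

```python
class TestSkipped(Exception):
    """The test interrogated its environment and decided not to run."""

class NotYetImplemented(Exception):
    """Placeholder body, so unfinished work that escapes is tracked."""

class IndeterminateResult(Exception):
    """The test ran, but the outcome cannot be called a pass or a fail."""

def classify(exc):
    """Map an exception instance to a result-category label."""
    for exc_type, label in ((TestSkipped, "skipped"),
                            (NotYetImplemented, "todo"),
                            (IndeterminateResult, "indeterminate")):
        if isinstance(exc, exc_type):
            return label
    return "error"
```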
Now that we've been talking about this, our real problem was that there
wasn't an easy way to plug in new exceptions, new exception handlers, and
new result-classification categories... But all that said, when we took the
standard unittest module and tweaked it (and after we threw away all the
speculative features we didn't actually need), the changes were very minor.
We made a lot more changes just refactoring the basic unittest without
changing the features/API...
More information about the testing-in-python mailing list