[TIP] stop suite after failure

Kumar McMillan kumar.mcmillan at gmail.com
Wed Jan 2 18:28:13 PST 2008

On Dec 29, 2007 4:45 PM, loigu <loigu at volny.cz> wrote:
> Hi,
> Does anybody know if it is possible to stop context suite execution
> after a test failure without preventing other suites from running?
> What I'm trying to do:
> I have a test class containing some tests (trivial, but with
> non-trivial time consumption).
> An ordering can be defined on them (if test N fails, all tests > N
> fail too).
> So in this case it would be nice to stop after a failure and not run
> the other tests from the same class (we know that they will fail),
> but continue executing the other test suites.
> Is there some easy or preferred way to do this?
> Or is it nonsense to do such a thing?

One way you could do this is to use nose
(http://somethingaboutorange.com/mrl/projects/nose/).  You would
assign attributes to your tests to define the level and create a
custom plugin to skip tests when the threshold has been reached.

The nose documentation has more info on writing plugins.

You'd assign attributes like so:

def test_beans():
    assert 1 == 2
test_beans.threshold = 5

def test_rice():
    # relates somehow to beans
    assert 1 == 3
test_rice.threshold = 6

Then a plugin would look something like this:

from nose.plugins import Plugin

class FailureThreshold(Plugin):
    name = "failure-threshold"

    def configure(self, options, conf):
        Plugin.configure(self, options, conf)
        if not self.enabled:
            return
        # nothing has failed yet, so no threshold is in effect
        self.failure_threshold = float('inf')

    def addError(self, test, err):
        level = getattr(test, 'threshold', None)
        if level is not None and level < self.failure_threshold:
            # honor the lowest failing level
            self.failure_threshold = level
    addFailure = addError

    def startTest(self, test):
        level = getattr(test, 'threshold', None)
        if level is not None and level >= self.failure_threshold:
            return True  # prevents the test from running, *I think*
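The threshold bookkeeping can be sanity-checked in isolation, without nose, using a minimal stand-in (`ThresholdTracker` is a made-up name for this sketch; it mirrors the plugin's addFailure/startTest logic):

```python
class ThresholdTracker:
    """Stand-in for the plugin's bookkeeping, independent of nose."""

    def __init__(self):
        # nothing has failed yet, so every test is allowed to run
        self.failure_threshold = float('inf')

    def record_failure(self, level):
        # honor the lowest failing level seen so far
        if level is not None and level < self.failure_threshold:
            self.failure_threshold = level

    def should_skip(self, level):
        # tests at or above an already-failed level are skipped
        return level is not None and level >= self.failure_threshold

tracker = ThresholdTracker()
assert not tracker.should_skip(6)  # nothing has failed yet
tracker.record_failure(5)          # test_beans (level 5) fails
assert tracker.should_skip(6)      # test_rice (level 6) would be skipped
assert not tracker.should_skip(4)  # lower levels still run
```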

The plugin sketch is untested, of course.  However, if you are creating
tests that depend on one another, I would suggest trying to decouple
that dependency.  You can probably do so by inheriting common
setUp/tearDown methods.  Otherwise, if you are just making a logical
guess about what will start failing next, your approach sounds like it
could save some CPU time.  You may also find that manually declaring
"levels" becomes fragile and error-prone, so if you can find a way to
introspect the level instead, that would probably be better.
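For the decoupling route, here is a rough sketch using plain unittest (BeanPot and the class names are invented for illustration): each test rebuilds its fixture in a common inherited setUp instead of relying on an earlier test having run.

```python
import unittest

class BeanPot:
    """Invented fixture object standing in for the expensive shared state."""
    def __init__(self):
        self.beans = 2

class PotFixture(unittest.TestCase):
    # common setUp inherited by every test class that needs the pot
    def setUp(self):
        self.pot = BeanPot()

class TestBeans(PotFixture):
    def test_count(self):
        self.assertEqual(self.pot.beans, 2)

class TestRice(PotFixture):
    def test_add_rice(self):
        # starts from a fresh pot, not from TestBeans's leftovers
        self.pot.beans += 1
        self.assertEqual(self.pot.beans, 3)
```

Because every test gets a fresh BeanPot, the ordering dependency (and the need for levels) disappears.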

> Thanks,
> Jiri Zouhar
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
