[TIP] stop suite after failure

Kumar McMillan kumar.mcmillan at gmail.com
Fri Jan 4 11:46:39 PST 2008


On Jan 4, 2008 5:07 AM, loki <loigu at centrum.cz> wrote:
> Kumar McMillan wrote:
> > On Jan 3, 2008 5:54 AM, loigu <loigu at volny.cz> wrote:
> >
> >> Thanks for your response,
> >>
> >> The approach with startTest doesn't work -- returning True just prevents
> >> other plugins from seeing the test start; it doesn't prevent the test
> >> itself from running.
> >>
> >
> > Hmm, I think there must be a way to have a plugin stop the test.  Did
> > you try the want* methods?  Like:
> >
> > class FailureThreshold(Plugin):
> >     # ...as defined above
> >     def _wantTestObject(self, test):
> >         level = getattr(test, 'threshold', None)
> >         if level is not None and level >= self.failure_threshold:
> >             return False
> >     wantClass = _wantTestObject
> >     wantFunction = _wantTestObject
> >     wantMethod = _wantTestObject
> >
>
> Yes, you can do this, but only BEFORE any test runs.

ack, that's true.  That won't work then.  Hmm, I still think this is a
valid use case and should be supported by a plugin.  Can you submit an
issue for this?  http://code.google.com/p/python-nose/issues/list

Or you could start a new thread about this on the nose list, which would
probably be a good idea too.  I'm thinking of a new plugin hook, something
like wantToExecuteTest(self, test)
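
A rough sketch of how a plugin might use such a hook (hypothetical, of
course -- this hook doesn't exist in nose yet, and the name and signature
are just what I'm imagining):

from nose.plugins import Plugin

class FailureThreshold(Plugin):
    name = "failure-threshold"
    # ...configure / addError / addFailure as in the earlier sketch,
    # so self.failure_threshold holds the lowest failing level...

    def wantToExecuteTest(self, test):
        # hypothetical hook: nose would call this right before running a
        # collected test; returning False would skip execution entirely
        level = getattr(test, 'threshold', None)
        if level is not None and level >= self.failure_threshold:
            return False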

> So if you want to run a test depending on other tests' results, the want_*
> methods force you to run the tests one after another manually.
>
> I know that it's rather rare to want this feature in unit testing (unit
> tests should be small and fast - so why bother), but when a small unit test
> runs on top of a distributed filesystem... :/
>
>
> > ...if that doesn't work I'd suggest posting on the nose list and after
> > that submitting a feature request ;)  I think a plugin should be able
> > to stop a test based on a test's attributes.
> >
> > Kumar
> >
> >
> >> Currently I use this approach:
> >> I subclassed nose.suite.ContextSuite and replaced its run method
> >> with something like:
> >>
> >> ....
> >> ....
> >>                 # main loop
> >>                 try:
> >>                     for test in self._tests:
> >>                         if result.shouldStop:
> >>                             log.debug("stopping")
> >>                             break
> >>                         # each nose.case.Test will create its own result
> >>                         # proxy, so the cases need the original result,
> >>                         # to avoid proxy chains
> >>                         test(orig)
> >>                         # here is the only difference against the original function
> >>                         if getattr(test, 'stopContext', None):
> >>                             break
> >>                 finally:
> >>                     self.has_run = True
> >> ....
> >> ....
> >>
> >> Then in the plugin I'm generating the context suite with the tests ordered,
> >> and in handleFailure and handleError I'm setting the stopContext attribute.
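> >>
> >> A minimal sketch of that plugin side (the class name here is only
> >> illustrative; the real pieces are the handleFailure/handleError hooks
> >> and the stopContext attribute checked by the run loop above):
> >>
> >> from nose.plugins import Plugin
> >>
> >> class StopContextOnFailure(Plugin):
> >>     name = "stop-context-on-failure"
> >>
> >>     def handleFailure(self, test, err):
> >>         # mark the wrapped test so the customized ContextSuite.run
> >>         # loop breaks out of this context after the current test
> >>         test.stopContext = True
> >>     handleError = handleFailure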
> >>
> >>
> >> This approach works, but it's not as maintainable as I would like.
> >>
> >> All the best,
> >>
> >> Jiri Zouhar
> >>
> >>
> >>
> >> Kumar McMillan wrote:
> >>
> >>> On Dec 29, 2007 4:45 PM, loigu <loigu at volny.cz> wrote:
> >>>
> >>>
> >>>> Hi,
> >>>>
> >>>> Does anybody know if it is possible to stop context suite execution
> >>>> after a test failure without preventing other suites from running?
> >>>>
> >>>> What I'm trying to do:
> >>>>
> >>>> I have a test class containing some tests (trivial, but with non-trivial
> >>>> time consumption).
> >>>> An ordering can be defined on them (if test N fails, all tests > N
> >>>> fail too).
> >>>> So in this case it would be nice to stop after a failure and not run the
> >>>> other tests from the same class (we know that they will fail),
> >>>> but continue executing the other test suites.
> >>>>
> >>>> Is there some easy or preferred way to do this?
> >>>> Or is it nonsense to do such a thing?
> >>>>
> >>>>
> >>> One way you could do this is to use nose
> >>> (http://somethingaboutorange.com/mrl/projects/nose/).  You would
> >>> assign attributes to your tests to define the level and create a
> >>> custom plugin to skip tests when the threshold has been reached.
> >>>
> >>> Here is some more info on writing nose plugins:
> >>> http://somethingaboutorange.com/mrl/projects/nose/doc/writing_plugins.html
> >>> http://somethingaboutorange.com/mrl/projects/nose/doc/plugin_interface.html
> >>>
> >>> You'd assign attributes like so:
> >>>
> >>> def test_beans():
> >>>     assert 1==2
> >>> test_beans.threshold = 5
> >>>
> >>> def test_rice():
> >>>     # relates somehow to beans
> >>>     assert 1==3
> >>> test_rice.threshold = 6
> >>>
> >>> Then a plugin would look something like this:
> >>>
> >>> from nose.plugins import Plugin
> >>> import sys
> >>>
> >>> class FailureThreshold(Plugin):
> >>>     name = "failure-threshold"
> >>>     def configure(self, options, conf):
> >>>         Plugin.configure(self, options, conf)
> >>>         if not self.enabled:
> >>>             return
> >>>         # start above any real level so nothing is skipped until
> >>>         # something actually fails
> >>>         self.failure_threshold = sys.maxint
> >>>
> >>>     def addError(self, test, err):
> >>>         level = getattr(test, 'threshold', None)
> >>>         if level is not None and level < self.failure_threshold:
> >>>             # honor the lowest failing level
> >>>             self.failure_threshold = level
> >>>     addFailure = addError
> >>>
> >>>     def startTest(self, test):
> >>>         level = getattr(test, 'threshold', None)
> >>>         if level is not None and level >= self.failure_threshold:
> >>>             return True # prevents the test from running, *I think*
> >>>
> >>>
> >>>
> >>> ...untested, of course.  However, if you are creating tests that
> >>> depend on another test in some way, I would suggest trying to decouple
> >>> that dependency.  You can probably do so by inheriting common
> >>> setUp/tearDown methods.  Otherwise, if you are just trying to make a
> >>> logical guess as to what might start failing next, then your approach
> >>> sounds like it might save some CPU time.  You may also find that
> >>> manually declaring "levels" becomes fragile and error-prone, so if
> >>> you can find a way to introspect the level instead, that would
> >>> probably be better.
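> >>>
> >>> For example, a minimal sketch of what I mean by sharing setUp (the class
> >>> and attribute names are made up):
> >>>
> >>> import unittest
> >>>
> >>> class BeanFixture(unittest.TestCase):
> >>>     def setUp(self):
> >>>         # each test builds the state it needs here, rather than
> >>>         # relying on another test having run (and passed) first
> >>>         self.beans = 2
> >>>
> >>> class TestBeans(BeanFixture):
> >>>     def test_beans(self):
> >>>         assert self.beans == 2
> >>>
> >>> class TestRice(BeanFixture):
> >>>     def test_rice(self):
> >>>         # still relates to beans, but no longer depends on test_beans
> >>>         assert self.beans + 1 == 3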
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>> Thanks,
> >>>>
> >>>> Jiri Zouhar


