[TIP] Handling exceptions while cleaning up in SuiteFixture Setup

Olemis Lang olemis at gmail.com
Mon Aug 31 13:51:22 PDT 2009

On Mon, Aug 31, 2009 at 3:02 PM, Robert
Collins <robertc at robertcollins.net> wrote:
> The general question here is 'how many outcomes is a test permitted to
> have'.
> We don't define it well enough to write correct TestResult objects from
> the docstrings.

Perhaps I'm missing something here: there is a single TestResult
object containing several lists of entries, one for the multiple
failures and one for the errors (a list for successful test cases is
missing). In the case of dutest, it writes an entry into the
TestResult for each interactive example that gets executed. I mean, in
this case

def here():
    """
    >>> a = 1
    >>> a += 1
    >>> a
    2
    """
there are 3 test cases and a specialized suite wrapping them all.

Not sure if this is what you were talking about ...

> For now, you could:
>  - addError
> or
>  - stopTest(test)
>  - startTest(StubTestRepresentingSuite)
>  - addError(STRS, error)
>  - stopTest(STRS)
> The latter would be better I think, because it visibly shows 'suite' vs
> 'last test in the suite'.

Ok ... let me think a little bit about this ;o)
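To make sure I follow, here is a minimal sketch of the second sequence you
suggest (the name StubTestRepresentingSuite comes from your message;
report_suite_error is a hypothetical helper I made up):

```python
import sys
import unittest

class StubTestRepresentingSuite(unittest.TestCase):
    """Placeholder test whose only job is to carry a suite-level error."""
    def runTest(self):
        pass

def report_suite_error(result, exc_info):
    # stopTest(test) for the last real test has already happened; now
    # report the suite-wide failure against a stub standing for the suite.
    stub = StubTestRepresentingSuite()
    result.startTest(stub)
    result.addError(stub, exc_info)
    result.stopTest(stub)

result = unittest.TestResult()
try:
    raise RuntimeError("suite tearDown blew up")
except RuntimeError:
    report_suite_error(result, sys.exc_info())

print(len(result.errors))   # 1
```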

> [Not to mention that Suite wide setup and teardown are a bad idea for
> performance and isolation of tests, but that's a whole different troll]

Well, I have a few concrete examples where this is used to reduce load
and improve performance:

  - My first concrete example (perhaps I'll talk about them in more
    detail later ;o): I'm writing tests for Trac plugins and
    I don't want to recreate and shut down the whole Trac environment
    after running «a few» test cases. I want to create it once before
    everything happens and shut it down once after everything's been
    done. The main goal of the implementation I mentioned before (i.e.
    AFAICR `trac.test.TestSetup`) is to include the env in a fixture
    and share it among all test cases *THAT NEED IT* (i.e. those
    implementing the `setFixture` method). This is mostly an instance
    of the `Prebuilt Fixture` pattern. The implementation is fine and
    very unittest-like, but since I'm very doctest-minded, my solution
    is much simpler and (possibly) faster to write: use doctest's
    extraglobs so that all test cases (i.e. interactive examples)
    share the fixture via the global namespace :P

  - My second concrete example: introduce a fresh dummy request
    object immediately before running all the doctests written inside
    the same docstring. This way, at the beginning of the test you'll
    have a magically set up and completely clean request object to
    use in the tests ;o). In cases like this, I need the following:

suite setUp()   ---> set up request objects and further objs here
  example setUp()   ---> empty
  run example 1
  example tearDown()   ---> empty
  example setUp()   ---> empty
  run example 2
  example tearDown()   ---> empty
  ...
  example setUp()   ---> empty
  run example n
  example tearDown()   ---> empty
suite tearDown()   ---> and here is where I want to know what to do on
failure ;o)

  - Thirdly: the example mentioned in the book (AFAICR), where database
    login could be performed once, rather than before and after running
    each test case.
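The extraglobs trick from the first example above can be sketched like
this (the throwaway module and the `env` value are made up for
illustration; the examples never create `env` themselves, it arrives via
extraglobs):

```python
import doctest
import types
import unittest

# A throwaway module whose doctest uses a name, `env`, that the
# interactive examples themselves never define.
mod = types.ModuleType("fake_trac_doctests")
mod.__doc__ = """
>>> env["name"]
'Trac environment'
"""

# Build the shared fixture once, before any example runs, and inject it
# into the globals that every interactive example will see.
env = {"name": "Trac environment"}
suite = doctest.DocTestSuite(mod, extraglobs={"env": env})

result = unittest.TestResult()
suite.run(result)
print(result.testsRun, len(result.failures))   # 1 0
```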

The book also recommends other tactical decisions to avoid using that
pattern to solve this kind of problem ;o)  Beyond that, everything
depends on how the setup is handled, and on what's really needed.

Once I publish everything I'll let you know about the whole thing ...
I'd always like to know what you think about it ... and perhaps some
of you will like it too ;o)

PS: In fact there are many Trac devs who don't write tests because of
the complexity of the current unittest-based style.




Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

More information about the testing-in-python mailing list