[TIP] Test reporting in unittest (and plugins)

Jonathan Lange jml at mumak.net
Tue Aug 3 05:25:55 PDT 2010

On Tue, Aug 3, 2010 at 1:10 PM, Michael Foord <michael at voidspace.org.uk> wrote:
> On 03/08/2010 12:59, Jonathan Lange wrote:
>> AIUI, the difference between what we've developed for our own,
>> genuine, real-world, honest-to-goodness, proven-in-the-wild needs and
>> what's suggested in your post are:
> Note that I have some specific usecases for porting genuine, real-world,
> honest-to-goodness, proven-in-the-wild nose plugins. The more usecases the
> better though. :-)

Indeed! I really appreciate your efforts to incorporate & standardize
existing practice.

>>   * testtools stores this stuff as a dict of short name ->
>> mime-encoded stuff. A strength of this is that it's relatively easy to
>> serialize unexpectedly complex data (which leads to easier
>> parallelism). A weakness is that it's a bit heavyweight.
> Right - parallelism is an important story that I will probably be working on
> next.

It's really easy. Load a list of tests, split them into N sublists,
run each sublist separately, producing a stream-based serialization of
your test results, then have something that gathers those streams and
displays them in a human-readable format.

subunit works well as a serialization format for this, and Trial,
zope.testrunner and nose all have support for it.

>>  * testtools doesn't use this mechanism to support custom outcomes
> What is the mechanism (and use cases) for custom outcomes?

To be honest, I don't need custom outcomes, so I don't know the use
cases. But here's what you could do.

There's a public attribute called exception_handlers. It's a list of
(ExceptionClass, handler) tuples, where 'handler' is called with the
test case, the test result and the error.

If you want custom outcomes in your test, then you would insert such a
tuple at the start of your list. Presumably your handler would call
some custom method on your result object.
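To make the dispatch concrete, here is a stdlib-only sketch of the
(ExceptionClass, handler) pattern described above; the function names
are illustrative, not testtools' actual internals:

```python
class UnexpectedSuccess(Exception):
    """Hypothetical exception signalling a custom outcome."""


def run_with_handlers(test_body, case, result, exception_handlers):
    """Call test_body(); if it raises, dispatch to the first matching
    (ExceptionClass, handler) tuple, mirroring the mechanism above."""
    try:
        test_body()
    except Exception as error:
        for exc_class, handler in exception_handlers:
            if isinstance(error, exc_class):
                handler(case, result, error)
                return
        raise


def handle_unexpected_success(case, result, error):
    # The handler presumably calls a custom method on the result
    # object, doing nothing if the result doesn't provide one.
    method = getattr(result, 'addUnexpectedSuccess', None)
    if method is not None:
        method(case)
```

Inserting (UnexpectedSuccess, handle_unexpected_success) at the start
of the list ensures it takes precedence over more general handlers.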

>>  * testtools.addDetail is fully backwards compatible with older
>> TestResult objects – they just don't display the new data
> This actually sounds *fairly* different from what I have in mind, although
> my system *would* support arbitrary metadata, so you could add mimetyped
> data if you want (mimetyped - really! :-).

Can you provide such data only in the case of test failure?
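To sketch what I mean (a hypothetical helper, not an existing API):
details could be registered as lazy factories and only materialized
when the test actually fails, so passing tests pay nothing:

```python
def collect_details(detail_factories, failed):
    """Hypothetical helper: each factory is a zero-argument callable
    producing a (mime_type, bytes) pair; expensive details are only
    built when the test actually failed."""
    if not failed:
        return {}
    return {name: factory() for name, factory in detail_factories.items()}
```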

> I'll sketch something up and come back. Thanks *very* much for replying to
> this. This information will make it a lot easier to understand the testtools
> code.

Happy to help.


More information about the testing-in-python mailing list