[TIP] Test reporting in unittest (and plugins)

Jonathan Lange jml at mumak.net
Tue Aug 3 04:59:54 PDT 2010


On Tue, Aug 3, 2010 at 12:23 AM, Michael Foord <michael at voidspace.org.uk> wrote:
> On 03/08/2010 00:14, Robert Collins wrote:
>>
>> Michael,
>>     I'd *really* love it if you grabbed testtools, read up on the
>> TestCase.addDetail API and looked at the modified contract for
>> addSuccess/addError/addFailure/addSkip etc., which is backwards
>> compatible and provides a rich medium for additional data - and we're
>> having *great* success with things building on this foundation to
>> include extra data automatically.
>>
>> As for test naming - I use TestCase.id() *everywhere* in the UI now;
>> the description interface is terrible as a machine-manageable
>> interface, and most of the automated workflows being built on test
>> frameworks want machine processing of data - see for instance
>> http://pypi.python.org/pypi/testrepository or
>> http://pypi.python.org/pypi/junitxml.
>>
...
>
> I can *look* at the projects you describe, and *hopefully* the documentation
> will tell me what I need to know. I won't have time to go spelunking too
> deeply into the source code to work out what they are doing though. :-(
> Hopefully that won't be necessary.
>

Well, of course you can also just steal the code and trust us that it
works, which would actually save time for everyone :P

Anyway, I can try to give some examples of how we use
TestCase.addDetail. I may be wrong on the details.

In bzrlib, when tests fail, we add the relevant section of the
.bzr.log file as a text/plain attachment. bzr has a custom reporter
that calls getDetails() on tests to render the log appropriately. The
same mechanism is used for benchmark data and for dumping subprocess
stderr.
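
The shape of it is roughly this - a minimal sketch rather than
bzrlib's actual code (read_log() and the class names are made up, but
addDetail, getDetails and text_content are the real testtools APIs):

    import unittest

    from testtools import TestCase
    from testtools.content import text_content

    class LogUsingTest(TestCase):

        def setUp(self):
            super(LogUsingTest, self).setUp()
            # Attach the relevant slice of the log as a text/plain
            # detail. read_log() stands in for however you get at
            # .bzr.log.
            self.addDetail('bzr-log', text_content(self.read_log()))

        def read_log(self):
            return 'log text for this test'  # placeholder

        def test_example(self):
            self.assertTrue(True)

    class LogShowingResult(unittest.TextTestResult):
        """Renders any attached details when a test fails."""

        def addFailure(self, test, err):
            super(LogShowingResult, self).addFailure(test, err)
            # Plain unittest tests have no getDetails; fall back to {}.
            for name, content in getattr(test, 'getDetails', dict)().items():
                self.stream.writeln('--- %s ---' % name)
                self.stream.writeln(''.join(content.iter_text()))

Run it with unittest.TextTestRunner(resultclass=LogShowingResult) and
a failing test gets its attached log printed as soon as the failure
is recorded; a plain runner just ignores the extra data.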

In Launchpad, we use addDetail to store OOPS reports for tests that
generate them. OOPS reports occur when there's an unhandled error in
the webapp or our backend processes, and contain query logs and all
sorts of yummy debugging stuff.
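
In code that's roughly the following - gather/attach names and the
way the report is captured are hypothetical, but Content and
ContentType are the real testtools classes:

    from testtools import TestCase
    from testtools.content import Content
    from testtools.content_type import ContentType

    class OopsReportingTest(TestCase):

        def attach_oops(self, oops_id, oops_bytes):
            # Content takes a MIME type plus a callable returning an
            # iterable of bytestrings, so a big report is only read
            # if a reporter actually asks for it.
            self.addDetail(
                oops_id,
                Content(ContentType('text', 'plain'),
                        lambda: [oops_bytes]))

        def test_webapp_thing(self):
            # Hypothetical: attach the OOPS (query logs and so on)
            # generated while talking to the app under test.
            self.attach_oops('oops-1', b'OOPS report body')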

AIUI, the differences between what we've developed for our own,
genuine, real-world, honest-to-goodness, proven-in-the-wild needs and
what's suggested in your post are:

 * In testtools, the TestCase adds information as it sees fit using
TestCase.addDetail. The TestResult gets the information whenever it
sees fit using TestCase.getDetails.

 * testtools stores this information as a dict of short name ->
MIME-typed content. The strength of this is that it's relatively easy
to serialize unexpectedly complex data (which leads to easier
parallelism); the weakness is that it's a bit heavyweight.

 * testtools doesn't use this mechanism to support custom outcomes.

 * TestCase.addDetail is fully backwards compatible with older
TestResult objects - they just don't display the new data (see the
sketch after this list).
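
To make the dict and backwards-compatibility points concrete, here's
a sketch (assuming I'm remembering the fallback behaviour correctly):
the same test runs under a stock unittest.TestResult, which never
sees the details, while a result whose outcome methods accept a
details argument gets the whole dict:

    import unittest

    from testtools import TestCase
    from testtools.content import text_content

    class DetailedTest(TestCase):

        def test_it(self):
            self.addDetail('note', text_content('extra context'))

    class DetailAwareResult(unittest.TestResult):

        # Because this addSuccess accepts a details argument,
        # testtools hands over the dict of short name -> MIME-typed
        # content along with the outcome.
        def addSuccess(self, test, details=None):
            unittest.TestResult.addSuccess(self, test)
            for name, content in (details or {}).items():
                print('%s -> %s' % (name, ''.join(content.iter_text())))

    DetailedTest('test_it').run(unittest.TestResult())  # works, details ignored
    DetailedTest('test_it').run(DetailAwareResult())    # note -> extra context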

Hope this helps,
jml


