[TIP] unittest outcomes (pass/fail/xfail/error), humans and automation

holger krekel holger at merlinux.eu
Tue Dec 15 01:21:09 PST 2009


On Tue, Dec 15, 2009 at 08:46 +1100, Robert Collins wrote:
> On Mon, 2009-12-14 at 16:46 +0100, holger krekel wrote:
> > 
> > Do you have interest in trying to standardize some bits across tools
> > and approaches?  I'd see the goal in having tools that process results
> > somewhat uniformly. 
> 
> I have a huge interest in this: currently everyone that extends unittest
> creates a little walled garden, because of limitations in unittest's
> design. So I'm working at the root cause trying to make it possible for
> multiple extensions to really play nice together.

After some years of heavy work on pluginization, I see it this way:
allowing multiple extensions to hook into various aspects of the testing
process, and offering spellings for configuring plugins, is one part of
the work.  Handling the inherent 1:N communication (N being the number of
plugins participating in any given interaction) is another, along with
ordering, dependencies and so on.
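
To make the 1:N point concrete, here is a minimal, tool-agnostic sketch of
such hook dispatch.  The names (HookCaller, register, call, report_outcome)
are made up for illustration and do not correspond to any particular
framework's API:

    class HookCaller:
        """Calls one named hook on every registered plugin (1:N)."""
        def __init__(self, hookname):
            self.hookname = hookname
            self.plugins = []

        def register(self, plugin):
            # ordering/dependency handling would go here in a real system
            self.plugins.append(plugin)

        def call(self, **kwargs):
            # collect one result per plugin that implements the hook
            results = []
            for plugin in self.plugins:
                impl = getattr(plugin, self.hookname, None)
                if impl is not None:
                    results.append(impl(**kwargs))
            return results

    # usage: two plugins both react to a test outcome
    class StatsPlugin:
        def report_outcome(self, outcome):
            print("stats saw:", outcome)

    class FilterPlugin:
        def report_outcome(self, outcome):
            print("filter saw:", outcome)

    hook = HookCaller("report_outcome")
    hook.register(StatsPlugin())
    hook.register(FilterPlugin())
    hook.call(outcome="passed")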

> As a for-instance, I've got reporters in subunit such as the stats
> reporter, and filtering reporter, which are almost certainly useful
> outside it, but:
>  - there isn't currently a way to (easily) say 'hey, use this reporter'
> from the unittest command line. (There is for trial, IFF it's a trial
> plugin. There is for nose. There is for py.test.)
>  - even if there was, there isn't a standard way for users that need
> more resolution to add it without having to patch a lot of the system.
>
> In my view it's these sorts of issues that have driven 'test.py' (the
> zope runner), trial, nose and even bzrlib's selftest to be separate tools
> rather than plugins to a core framework. I hope that by coming up with
> ways to interoperate that are really flexible, we can slowly converge
> again.

Hmm, designing API interactions for combining plugins for timing out tests,
tracing/coverage, distributed testing, CI integration, randomizing tests,
enhanced PDB ... just to name a few, is something I find somewhat tricky.
So I'd rather think about standardizing the output produced by the various
testing tools, to ease post-processing/reporting and to establish a common
understanding.
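
As an illustration of the kind of standardized output I mean (purely
hypothetical, not a proposed format), each tool could emit one
machine-readable record per test, for example:

    import json, sys

    def report(test_id, outcome, duration, longrepr=None):
        # one JSON line per test: easy to post-process regardless of which
        # tool produced it; field names here are invented for illustration
        record = {
            "id": test_id,          # e.g. "test_mod.py::TestX::test_y"
            "outcome": outcome,     # "pass"/"fail"/"xfail"/"error"/"skip"
            "duration": duration,   # seconds
            "longrepr": longrepr,   # failure/error text, if any
        }
        sys.stdout.write(json.dumps(record) + "\n")

    # usage
    report("test_example.py::test_ok", "pass", 0.003)
    report("test_example.py::test_broken", "fail", 0.010,
           "AssertionError: ...")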

cheers,
holger
