[TIP] unittest subtests

holger krekel holger at merlinux.eu
Sun Jan 20 03:27:04 PST 2013


On Sun, Jan 20, 2013 at 02:03 -0800, Chris Jerdonek wrote:
> On Sun, Jan 20, 2013 at 1:28 AM, holger krekel <holger at merlinux.eu> wrote:
> > On Sat, Jan 19, 2013 at 16:07 -0800, Chris Jerdonek wrote:
> >> Just thought I would let this list know about an issue/patch to add a
> >> sort of parametrized test functionality to unittest called subtests:
> >>
> >> http://bugs.python.org/issue16997
> >>
> >> It would be good to know how the proposed implementation plays with
> >> the various test frameworks that are out there.
> >
> > Thanks, I am following that subtest idea.  As far as pytest is
> > concerned, we have some related thoughts but - as in the above discussion -
> > it's not clear how it should interact with reporting, xfail test states,
> > let alone test addressability, plugin hooks, distributed testing and
> > fixture setup/teardown.  I guess that pytest can support whatever the
> > above hack results in -- will probably just add to the existing compat
> > code for running nose, unittest and trial tests.
> >
> > However, I am not sure how the issue16997 example maps to the in-development
> > unittest plugin branches.  Maybe Michael or Jason can shed some light.
> >
> > Lastly, you can map the above example from the link easily to today's pytest:
> >
> >     @pytest.mark.parametrize(("i", "j"), [(i, j)
> >                              for i in range(2, 5) for j in range(0, 3)])
> >     def test_b(i, j):
> >         assert i % 3 != j
> >
> > which runs the test 9 times with the different "i, j" combinations
> > passed in.  As all of these 9 tests are "normal", there is no special
> > code or consideration needed for the above interaction issues.
> 
> Thanks for your thoughts, Holger.  It seems like another difference
> with the subtest proposal is that setUp() is run just once before the
> collection of subtests, as opposed to before each one as in the above.
> I believe that's one reason the subtest proposal is characterized as
> "light."  Is something like that possible in pytest?

You could probably use the still existing "yield" syntax.  However, with
pytest fixtures setup/teardown is less of a problem because you can list
the fixtures you need as input arguments.  If you don't list them, or if
they have a higher-than-function scope, the parametrized tests will run
quite lightly as well (i.e. they do not require re-setup of things).
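
For instance, a minimal sketch assuming a hypothetical expensive "db"
resource (the fixture name and its contents are invented for
illustration):

    import pytest

    @pytest.fixture(scope="module")
    def db():
        # set up once for the whole module, not once per test invocation
        return {"connected": True}

    @pytest.mark.parametrize("i", range(3))
    def test_uses_db(db, i):
        # three parametrized runs share the single "db" instance
        assert db["connected"]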

> Another difference is permitting multiple groups of subtests per
> TestCase.

Would like to see some more real-world use cases.  Maybe it's mostly
about seeing some "." reporting dots for sub-parts of a test?
Also, you can group tests into a class, as sketched below.
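
For example, a toy sketch of "multiple groups" as independently
parametrized methods on one class (the names and asserts are made up):

    import pytest

    class TestParts:
        # each method forms its own group with its own reporting dots
        @pytest.mark.parametrize("i", range(2, 5))
        def test_group_one(self, i):
            assert i >= 2

        @pytest.mark.parametrize("j", range(0, 3))
        def test_group_two(self, j):
            assert j < 3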

> Another thing I've been wondering is the extent to which test
> frameworks rely on the 2-tuples in TestResult.errors, failures, etc.
> containing runnable TestCase instances:
> 
> http://docs.python.org/dev/library/unittest.html#unittest.TestResult
> 
> (In the current subtest proposal, the objects stored in
> TestCase.errors, etc. are _SubTest objects and neither runnable nor
> addressable.)  For example, are there any frameworks/plug-ins, etc.
> that "re-run" the TestCase objects in TestResult.errors or save them
> for later re-running?  Also, how do they deal with individual TestCase
> objects possibly having more than one entry in TestResult.failures,
> say (e.g. because of a failure in both setUp() and tearDown() or
> something similar)?

pytest doesn't need to care.  It implements the various unittest
result.addSuccess/addError/... methods to produce pytest reporting
objects, and all plugins work from those, including re-running tests,
junitxml, etc.
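
For reference, the hooks in question look roughly like this; a toy
recording result for illustration, not pytest's actual implementation:

    import unittest

    class RecordingResult(unittest.TestResult):
        # turn each outcome callback into a plain record that
        # downstream tooling (reporting, re-running) could consume
        def __init__(self):
            super().__init__()
            self.records = []

        def addSuccess(self, test):
            super().addSuccess(test)
            self.records.append(("passed", test.id()))

        def addFailure(self, test, err):
            super().addFailure(test, err)
            self.records.append(("failed", test.id()))

        def addError(self, test, err):
            super().addError(test, err)
            self.records.append(("error", test.id()))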

best,
holger


