[TIP] unittest subtests

Chris Jerdonek chris.jerdonek at gmail.com
Sun Jan 20 02:03:02 PST 2013


On Sun, Jan 20, 2013 at 1:28 AM, holger krekel <holger at merlinux.eu> wrote:
> Hi Chris,
>
> On Sat, Jan 19, 2013 at 16:07 -0800, Chris Jerdonek wrote:
>> Just thought I would let this list know about an issue/patch to add a
>> sort of parametrized test functionality to unittest called subtests:
>>
>> http://bugs.python.org/issue16997
>>
>> It would be good to know how the proposed implementation plays with
>> the various test frameworks that are out there.
>
> Thanks, I am following that subtest idea.  As far as pytest is
> concerned, we have some related thoughts, but (as in the above
> discussion) it's not clear how subtests should interact with reporting
> and xfail test states, let alone test addressability, plugin hooks,
> distributed testing, and fixture setup/teardown.  I guess that pytest
> can support whatever the above hack results in -- we will probably just
> add to the existing compat code for running nose, unittest, and trial
> tests.
>
> However, I am not sure how the issue16997 example maps to the
> in-development unittest plugin branches.  Maybe Michael or Jason can
> shed some light.
>
> Lastly, you can map the above example from the link easily to today's pytest:
>
>     import pytest
>
>     @pytest.mark.parametrize(("i", "j"), [(i, j)
>                              for i in range(2, 5) for j in range(0, 3)])
>     def test_b(i, j):
>         assert i % 3 != j
>
> which runs the test 9 times with the different "i, j" combinations
> passed in.  As all of these 9 tests are "normal" tests, there is no
> special code or consideration needed for the interaction issues above.
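
(As a quick sanity check of that mapping: "pytest -v" should collect
nine separate test items from the snippet above, with IDs along the
lines of

    test_b[2-0]
    test_b[2-1]
    ...
    test_b[4-2]

though the exact ID format is a pytest detail and can vary by version.)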

Thanks for your thoughts, Holger.  It seems like another difference
from the subtest proposal is that setUp() runs just once before the
whole collection of subtests, as opposed to before each parametrized
case as in the example above.  I believe that's one reason the subtest
proposal is characterized as "light."  Is something like that possible
in pytest?  Another difference is that the proposal permits multiple
groups of subtests per TestCase.
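
For concreteness, a test using the proposed API would look roughly like
this (a sketch based on the issue16997 patch; the exact names and
semantics could still change before anything lands):

    import unittest

    class SubtestExample(unittest.TestCase):

        def setUp(self):
            # Runs once per test method -- i.e. once for all nine
            # combinations below, not once per combination.
            self.data = list(range(10))

        def test_b(self):
            for i in range(2, 5):
                for j in range(0, 3):
                    # Each failing combination is reported on its own,
                    # and the loop keeps going after a failure.
                    with self.subTest(i=i, j=j):
                        self.assertNotEqual(i % 3, j)

So the nine combinations share whatever state setUp() creates, which is
part of what makes them cheaper than nine independent test methods.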

Another thing I've been wondering is the extent to which test
frameworks rely on the 2-tuples in TestResult.errors, failures, etc.
containing runnable TestCase instances:

http://docs.python.org/dev/library/unittest.html#unittest.TestResult

(In the current subtest proposal, the objects stored in
TestResult.errors, etc. are _SubTest objects, which are neither
runnable nor addressable.)  For example, are there any frameworks or
plug-ins that re-run the TestCase objects in TestResult.errors, or
save them for later re-running?  Also, how do they deal with an
individual TestCase object having more than one entry in
TestResult.failures (e.g. because of a failure in both setUp() and
tearDown(), or something similar)?
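
To make the kind of re-running I mean concrete, a hypothetical tool
might do something like the following (rerun_failures is a made-up
name, not the API of any framework I know of):

    import unittest

    def rerun_failures(result):
        # TestResult.failures and TestResult.errors hold
        # (test, traceback_string) 2-tuples; this relies on each
        # `test` being a runnable TestCase instance.
        failed = [test for test, _tb in result.failures + result.errors]
        retry_result = unittest.TestResult()
        unittest.TestSuite(failed).run(retry_result)
        return retry_result

Under the current subtest proposal, code like this would break, since
the first element of each tuple would be a _SubTest object rather than
a runnable test.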

--Chris
