[TIP] py.test doesn't appear to consider testscenarios tests to be distinct tests

holger krekel holger at merlinux.eu
Sun Feb 10 14:42:29 PST 2013


Hi Julian,

On Sun, Feb 10, 2013 at 16:42 -0500, Julian Berman wrote:
> Hi. I'd post this as a bug on one of the two relevant projects but I
> haven't spent any time investigating and Robert nudged me to just post it
> here :).
> 
> Consider the following test module:
> 
> ### test.py
> from unittest import TestCase
> from testscenarios import WithScenarios
> 
> 
> class TestTest(WithScenarios, TestCase):
>     scenarios = [("one", {"foo" : 13}), ("two", {"foo" : 12})]
> 
>     def test_stuff(self):
>         if self.foo == 12:
>             self.fail()
> 
> 
> Let's see what happens with each test runner :)
> 
> (relevant versions are 2.3.4 for pytest, 1.2.1 for nose, 12.3.0 for
> Twisted, 2.7.3 for unittest)
> 
> ⊙  py.test test.py
> 
> ============================= test session starts ==============================
> platform darwin -- Python 2.7.3 -- pytest-2.3.4
> collected 1 items
> 
> test.py F
> 
> =================================== FAILURES ===================================
> _____________________________ TestTest.test_stuff ______________________________
> 
> self = <test.TestTest testMethod=test_stuff>
> 
>     def test_stuff(self):
>         if self.foo == 12:
> >           self.fail()
> E           AssertionError: None
> 
> test.py:10: AssertionError
> =========================== 1 failed in 0.28 seconds ===========================
> 
> 
> py.test runs each scenario apparently, but considers them all to be one
> test so failures are essentially impossible to debug. --collectonly shows
> that py.test finds only "1 test".

pytest finds and collects the one test_stuff method and dispatches to
self.run() which (through WithScenarios.run) performs the parametrized
run.  Therefore all scenarios are run if there is no failure.

At pytest's collection phase (which ends before .run() is ever invoked),
testscenarios' parametrization is not known and probably cannot easily be
discovered, because the code is quite intertwined: testscenarios generates
the new tests at run()-time by cloning the test case with modified test ids,
and then calls TestCase.run with the dynamically generated test cases.
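
To make that concrete, here is a rough sketch of the mechanism (illustrative
only, not the actual testscenarios source):

### sketch.py -- simplified stand-in for testscenarios' WithScenarios
import unittest

class WithScenariosSketch(unittest.TestCase):
    scenarios = []  # subclasses set [(name, {attr: value}), ...]

    def run(self, result=None):
        if not self.scenarios:
            return super(WithScenariosSketch, self).run(result)
        for name, attrs in self.scenarios:
            # clone this test, set the scenario attributes on the clone,
            # and run the clone through the plain TestCase.run
            clone = self.__class__(self._testMethodName)
            for key, value in attrs.items():
                setattr(clone, key, value)
            # the real library also rewrites the clone's id() to include
            # the scenario name, e.g. "test_stuff(one)"
            super(WithScenariosSketch, clone).run(result)

All of this happens inside run(), which is exactly the phase pytest's
collector never looks into.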

However, I guess you are aware of the testscenarios/pytest porting snippet:

http://pytest.org/latest/example/parametrize.html#a-quick-port-of-testscenarios

It lets you use pytest with scenario definitions.  The difference is that it
passes the values in as arguments rather than setting instance attributes.
If you use this approach, it is also fully compatible with all other pytest
options, parametrization methods and scopes (see that page for more
examples), and of course with the fixture dependency-injection mechanism
(http://pytest.org/latest/fixture.html).
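
Roughly, adapted to your test.py above, that approach looks like the
following (a sketch along the lines of the documented example -- see the
linked page for the canonical version):

### conftest.py
def pytest_generate_tests(metafunc):
    # only parametrize classes that actually define scenarios
    if getattr(metafunc.cls, "scenarios", None):
        idlist = []
        argvalues = []
        for name, attrs in metafunc.cls.scenarios:
            idlist.append(name)
            items = sorted(attrs.items())
            argnames = [key for key, _ in items]
            argvalues.append([value for _, value in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

### test_scenarios.py -- a plain class, not a unittest.TestCase,
### because the scenario values now arrive as arguments
class TestTest(object):
    scenarios = [("one", {"foo": 13}), ("two", {"foo": 12})]

    def test_stuff(self, foo):
        assert foo != 12

With that, "py.test --collectonly" should show two separate items,
"test_stuff[one]" and "test_stuff[two]", and a failure gets reported against
the scenario it belongs to.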

HTH,
holger



> trial does the right thing (yay :):
> ⊙  trial test
> 
> test
>   TestTest
>     test_stuff(one) ...                                               [OK]
>     test_stuff(two) ...                                             [FAIL]
> 
> ===============================================================================
> [FAIL]
> Traceback (most recent call last):
>   File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 327, in run
>     testMethod()
>   File "/Users/Julian/Desktop/test.py", line 10, in test_stuff
>     self.fail()
>   File "/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 408, in fail
>     raise self.failureException(msg)
> exceptions.AssertionError: None
> 
> test.TestTest.test_stuff(two)
> -------------------------------------------------------------------------------
> Ran 2 tests in 0.064s
> 
> FAILED (failures=1, successes=1)
> 
> 
> and nose blows up (though I'm told this isn't really a surprise due to how
> differently nose runs tests :):
> ⊙  nosetests
> 
> E
> ======================================================================
> ERROR: test_stuff (test.TestTest)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/site-packages/nose/case.py", line 133, in run
>     self.runTest(result)
>   File "/usr/local/lib/python2.7/site-packages/nose/case.py", line 151, in runTest
>     test(result)
> AssertionError: ResultProxy for Test(<test.TestTest testMethod=test_stuff>)
> (4374260944) was called with test <test.TestTest testMethod=test_stuff>
> (4374299152)
> 
> ----------------------------------------------------------------------
> Ran 0 tests in 0.064s
> 
> FAILED (errors=1)
> 
> The unittest module also works, though apparently there's an outstanding
> bug preventing it from working ideally:
> ⊙  python -m unittest -v test
> 
> test_stuff (test.TestTest) ... ok
> test_stuff (test.TestTest) ... FAIL
> 
> ======================================================================
> FAIL: test_stuff (test.TestTest)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "test.py", line 10, in test_stuff
>     self.fail()
> AssertionError: None
> 
> ----------------------------------------------------------------------
> Ran 2 tests in 0.001s
> 
> FAILED (failures=1)
> 
> 
> Anyways, the surveying is idle fun, but do you (Holger :) or Robert happen
> to have any idea what's going on with pytest here?
> 
> Cheers,
> Julian
