[TIP] why do you use py.test?

Herman Sheremetyev herman at swebpage.com
Tue Feb 22 07:41:56 PST 2011


On Tue, Feb 22, 2011 at 8:19 PM, meme dough <memedough at gmail.com> wrote:
> Hi,
>
> I think all the test runners are good, though I would like to point
> out that the speed comparison here isn't right.

How so? You seem to confirm it with your numbers...

> There is only a small sub-second difference due to the one-off
> start-up time.  Whilst humans do have time perception in the
> sub-second range, this is still so small that it won't prevent or
> slow down TDD.  I do TDD with pytest all the time, and I use nosier
> (disclaimer: I wrote it) to automatically run all tests whenever a
> file is changed.

As I said, it's not that py.test won't work for TDD for everyone --
it doesn't work for TDD *for me*. The lag in running the test suite
with py.test makes it impractical to run the tests on every change,
by which I mean every line changed in the file. If you're going to
your terminal or hitting the "build" button and running the tests
manually, then the lag time is probably acceptable. But if they're
running automatically every time you update your file, and giving you
the red/green bar, then having even such a small number of tests take
half a second is way too laggy to be practical. FWIW I think nose is
also a bit slow, though much more tolerable, but I typically use
unittest as it gives the best response times.
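
To make that workflow concrete, here's a rough sketch of the kind of
watch-and-rerun loop I mean. It's a toy polling watcher, not how
nosier actually works, and RUNNER is whatever command you'd normally
use to run your suite:

import os
import subprocess
import time

WATCH_DIR = "."
RUNNER = ["nosetests"]  # or ["py.test"], or however you run your suite

def snapshot(root):
    # Map every .py file under root to its last-modified time.
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                mtimes[path] = os.stat(path).st_mtime
    return mtimes

def watch():
    seen = snapshot(WATCH_DIR)
    while True:                          # Ctrl-C to stop
        time.sleep(0.2)                  # poll a few times per second
        current = snapshot(WATCH_DIR)
        if current != seen:              # something was saved
            seen = current
            subprocess.call(RUNNER)      # instant red/green feedback

if __name__ == "__main__":
    watch()

With something like this running in a spare terminal, every save
triggers the full suite, which is why a few hundred milliseconds of
runner startup is immediately noticeable.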

> Running 100 very fast tests all like:
>
> def test_1():
>    assert 1 == 1
>
> pytest:
> platform linux2 -- Python 2.6.5 -- pytest-1.3.4
> test path 1: tests/test_foo.py
> tests/test_foo.py
> ....................................................................................................
> real    0m0.481s
> user    0m0.430s
> sys     0m0.040s
>
>
> nose:
> ....................................................................................................
> ----------------------------------------------------------------------
> Ran 100 tests in 0.009s
> OK
> real    0m0.151s
> user    0m0.110s
> sys     0m0.040s
>
>
> Running 100 slow tests all like:
>
> import time
>
> def test_1():
>    time.sleep(1)
>
>
> pytest:
> platform linux2 -- Python 2.6.5 -- pytest-1.3.4
> test path 1: tests/test_foo.py
> tests/test_foo.py
> ....................................................................................................
> real    1m40.596s
> user    0m0.460s
> sys     0m0.060s
>
>
> nose:
> ....................................................................................................
> ----------------------------------------------------------------------
> Ran 100 tests in 100.103s
> OK
> real    1m40.248s
> user    0m0.130s
> sys     0m0.030s
>
>
> So in both cases (fast tests and slow tests) the difference is in the
> 0.3-0.4 sec range.

Which is pretty huge for the scenario I'm concerned with. And this
number only increases with the number of tests -- 100 is not a lot,
and the ones you're benchmarking are about as fast as 100 tests can
possibly be, since they don't actually do anything. In my case the
real execution times are more like this (the number of tests differs
slightly due to runner-specific tests):

unittest with python2.6
.............................................................................................
----------------------------------------------------------------------
Ran 93 tests in 0.048s

OK

real    0m0.223s
user    0m0.114s
sys     0m0.055s

nosetests with python2.6
......................................................................................................
----------------------------------------------------------------------
Ran 102 tests in 0.036s

OK

real    0m0.395s
user    0m0.243s
sys     0m0.110s

py.test with python2.6
======================================================== test session starts =========================================================
platform darwin -- Python 2.6.6 -- pytest-2.0.0
collected 94 items

tests/flexmock_pytest_test.py
..............................................................................................

===================================================== 94 passed in 0.52 seconds ======================================================

real    0m0.874s
user    0m0.529s
sys     0m0.166s


py.test takes nearly a second here, and that's just for 94 tests. If
there were 200, that time would go up by another 0.5 seconds or so,
and I'd expect to be able to handle 1000 such tests easily without
feeling the lag -- I'd estimate unittest would run them in about 0.5
seconds, nose in 0.7 or 0.8, and py.test in over 5 seconds.
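
Those estimates come from a simple linear model -- a fixed startup
cost plus a constant per-test overhead -- where the per-test figures
are rough guesses from the timings above, not measurements:

def projected(observed_total, observed_n, per_test, target_n):
    # Assume wall time = fixed startup + n * per-test cost; recover the
    # fixed cost from one observed run, then extrapolate.
    fixed = observed_total - observed_n * per_test
    return fixed + target_n * per_test

# (observed seconds, observed test count, assumed per-test cost)
print(projected(0.223, 93, 0.0003, 1000))   # unittest: ~0.50s
print(projected(0.395, 102, 0.0004, 1000))  # nose:     ~0.75s
print(projected(0.874, 94, 0.005, 1000))    # py.test:  ~5.4s

The py.test per-test guess of 5ms is just the "another 0.5 seconds per
extra 100 tests" figure from above; the point is that the startup gap
stays constant while the per-test gap compounds as the suite grows.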

None of this is to say that py.test isn't good for larger tests or
running tests in parallel -- it sounds like it's great for those use
cases. But for running large numbers of small tests without noticeable
lag, nose seems like a better choice, and unittest the best choice.
Right tool for the job and all that ;)

Cheers,

-Herman


