[TIP] why do you use py.test?

holger krekel holger at merlinux.eu
Tue Feb 15 02:12:58 PST 2011


On Tue, Feb 15, 2011 at 10:12 +0900, Herman Sheremetyev wrote:
> I've seen a lot of messages on this list of people mentioning they use
> py.test, so I'm curious what advantages it offers. I gave it a try and
> immediately noticed it was literally 10x slower than nose, taking 0.60
> seconds to run the same set of tests it takes nose 0.062. As a result
> I pretty much wrote it off since test speed is pretty important to me,
> and all the red in my terminal seemed a bit unnecessary. But as I see
> people continuously mention it I wonder if I missed something about
> py.test's functionality that could be really useful and offset the
> speed problems.

First, a note on speed: py.test has higher overhead for executing a single
test than nose or unittest.  It spends more time in plugin hook invocations
and could be optimized so that overhead is less noticeable.  But this hasn't
been a priority because no one has yet told me it's of practical concern to them.
Is it in your case?

> So, what are your reasons for using py.test (or some other test runner) ? :)

I like many aspects of how it behaves and reports, probably
because I wrote much of it :)  I'd also be curious to hear why
people prefer nose or other test runners, btw :)

> -Herman
> 
> P.S. I know people tend to get sensitive about their tools, so I don't
> want to start a flame war, just genuinely curious why people are using
> this particular one..

The truth is that nose, py.test and unittest2 have somewhat converged these
days.  In 2004 py.test was the only tool offering many now-common features.
Jason Pellerin liked py.test's ideas but wanted to implement them differently,
and so started nose.  He introduced a plugin system, which py.test only grew later.
Now we have Michael Foord trying to overtake it all and bringing some concepts to
the standard library, thereby improving the experience of the many people just
using the "standard unittest" module.  So we have all learned from each other,
although code sharing remains low.

Regarding my own use, I don't have much to add to the other posts.  Maybe just
that dependency injection (funcargs) helped me a lot to write elegant
functional tests and later refactor the code-under-test without changing
the test code.  I just went through this experience during my cleanup effort for
py.test-2.0, which hardly required changes to its ca. 500 functional tests,
although some 30% of lines were removed from the code base and import locations
were reshuffled.

Also, I sometimes have an incremental test pattern like this:

    def test_multi_step_something(...):
        code1
        assertion / test
        code2 (requires code1 to have run)
        assertion / test
        code3 (requires code2 to have run)
        assertion / test
        assertion-or-code-dependent-on-param?!

At the very end I want to execute different assertions depending on a
parameter that flowed into code1.  With funcargs, a solution is:

    @pytest.mark.multiarg(mode=[1,2,3])
    def test_multi_step_something(mode):
        ...

and the test function will be run with three different values
for "mode", so I can branch conditionally in the test.
You can do something similar with subclassing or Robert's
test scenario extensions for unittest, but I prefer the above
because it nicely lumps everything together and works anywhere.

cheers,
holger



More information about the testing-in-python mailing list