[TIP] unittest & TDD

m h sesquile at gmail.com
Mon Mar 12 09:39:18 PDT 2007

On 3/12/07, Kumar McMillan <kumar.mcmillan at gmail.com> wrote:
> On 3/11/07, Collin Winter <collinw at gmail.com> wrote:
> > * supporting refcount checking around each test.
> > * marking certain tests as TODO.
> > * skipping certain tests based on boolean criteria.
> > * formatting test results a la the current unittest.
> > * writing test results to an XML file.
> > * emitting TAP (Perl's Test Anything Protocol) for a test run.
> > * making sure you ran the expected number of tests.
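
A few items from the list above (skipping on a boolean criterion, marking a test as TODO, checking counts) can be illustrated with decorators that later landed in the standard unittest module (Python 2.7+); in 2007 a new framework would have had to supply these itself. This is only an illustrative sketch, not the proposed framework's API:

```python
import unittest

class Examples(unittest.TestCase):
    @unittest.skipIf(True, "skipped based on a boolean criterion")
    def test_skipped(self):
        self.fail("never runs")

    @unittest.expectedFailure  # roughly a "TODO" marker
    def test_todo(self):
        self.assertEqual(1, 2)  # known-broken; reported as an expected failure

# Run the suite and inspect the counts the list above talks about.
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(Examples).run(result)
print(result.testsRun, len(result.skipped), len(result.expectedFailures))  # → 2 1 1
```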
> I'm just going to sneak this in (since you asked for feedback)...
> I am strongly opposed to expecting a number of tests to run.  I think
> it's a wart in the perl community and requiring this feature means
> you've written a bad test (granted, this is also due to perl sucking
> in how it captures errors).

We keep track of expected run/pass/fail/skip counts.  It is quite
useful for tracking metrics over time.
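
As a sketch of the kind of check being described, here is one way to record run/fail/error/skip counts from a unittest TestResult and compare the number of tests actually run against an expected total (a TAP-style "plan"). The EXPECTED constant and the metrics dict are hypothetical names, not part of any framework:

```python
import unittest

class Sample(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

    def test_also_ok(self):
        self.assertEqual(2 + 2, 4)

EXPECTED = 2  # the "plan": how many tests we expect to run

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(Sample).run(result)

# Collect the counts worth tracking over time.
metrics = {
    "run": result.testsRun,
    "failures": len(result.failures),
    "errors": len(result.errors),
    "skipped": len(result.skipped),
}
# The contested feature: fail loudly if the run count drifts.
assert metrics["run"] == EXPECTED, \
    "ran %d tests, expected %d" % (metrics["run"], EXPECTED)
```

A runner that dropped tests silently (say, an import error swallowing a module) would trip the final assertion, which is the argument for the feature despite the maintenance cost Kumar points out.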

My rants about testing in python:

I want the out-of-the-box experience to be nice.  I write a lot of
utilities that are just single-file, ~1000-line scripts.  I don't want
to force the user to use easy_install.  I usually have a 1500-line
test file accompanying each one.  I understand the anti-Java rants
against unittest (though I think they are a little silly).

I'm an ardent don't-reinvent-the-wheel person, so the new/improved
unittest module rubs me the wrong way on that count.  But I do see a
huge need for something useful in the standard library, so I have
mixed feelings.  Still, it seems weird to expect that the new
framework will be standardized and feature-rich when nose/py.test,
which have been around for a while now, won't be able to make it in.

Perhaps there needs to be some sort of WSGI-like interface for
testing?  Then tools could be mixed and matched.  (I don't know how
this would work out or what it would even be, but perhaps we can
brainstorm on a common testing interface....)
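
To make the brainstorm concrete, one could imagine a contract where a "runner" is any callable yielding (test_name, outcome) pairs, and middleware wraps a runner the way WSGI middleware wraps an app. Everything here is hypothetical — nothing like this is standardized anywhere:

```python
def simple_runner(tests):
    # A "runner": any callable taking (name, func) pairs and
    # yielding (name, outcome) events.
    for name, func in tests:
        try:
            func()
            yield name, "pass"
        except AssertionError:
            yield name, "fail"

def counting_middleware(runner):
    # Wraps any conforming runner, like WSGI middleware wraps an app,
    # adding outcome-counting without the runner's cooperation.
    def wrapped(tests):
        counts = {"pass": 0, "fail": 0}
        for name, outcome in runner(tests):
            counts[outcome] += 1
            yield name, outcome
        print("totals:", counts)
    return wrapped

def bad():
    assert False

tests = [("test_good", lambda: None), ("test_bad", bad)]
results = list(counting_middleware(simple_runner)(tests))
# results == [("test_good", "pass"), ("test_bad", "fail")]
```

The point of the analogy is that nose, py.test, and a stdlib runner could all emit the same event stream, and reporters (XML, TAP, refcount checks) would then compose freely.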


More information about the testing-in-python mailing list