[TIP] Dingus screencast and source code

Gary Bernhardt gary.bernhardt at gmail.com
Thu Apr 9 16:50:57 PDT 2009

On Thu, Apr 2, 2009 at 2:52 AM, Marius Gedminas <marius at gedmin.as> wrote:
> On Wed, Apr 01, 2009 at 10:31:13PM -0400, Gary Bernhardt wrote:
>> On Wed, Apr 1, 2009 at 12:57 PM, Marius Gedminas <marius at gedmin.as> wrote:
>> > What do you use to integrate nose with vim?
>> There are a few pieces:
>>   * The nose_machineout plugin makes nose output errors in a way
>> that's roughly compatible with make's output.
>>   * Vim's :make command knows how to parse make output. This is how it
>> jumps me to the correct line.
> How well does this work with medium and large test suites?  The reason I
> don't use :make myself is that it blocks the whole vim process while the
> test suite runs, and if that's not instant, it quickly becomes
> irritating.
> The nice thing about your Dingus screencast was that the tests gave the
> result instantly -- but then when you have 5 unit tests with trivial
setup, that's often the case.  How well does this scale in practice, to
> test suites with thousands of tests?
> At some point in a large project just importing all the code and test
> modules tends to take a couple of seconds.  Do you limit the tests you
> run at any particular time while developing to a single test module?
> (Okay, looking at your .vimrc, I see that you do.)

(Better late than never... ;)

The project that Dingus grew out of had about 500 unit tests, most of
which were at high levels of isolation. The tests themselves took
about 1 second to run as reported by nose, but the imports took
another 2.5 seconds. The tests never even used those imported
libraries, though, because they were being dingused away for the tests.
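
For context, the isolation style described above looks roughly like
this with the stdlib's unittest.mock (standing in for Dingus here;
ReportService and its collaborator are made-up names):

```python
import unittest
from unittest import mock


class ReportService:
    """Hypothetical code under test; `database` is a collaborator we fake."""
    def __init__(self, database):
        self.database = database

    def summary(self):
        return len(self.database.fetch_rows())


class ReportServiceTest(unittest.TestCase):
    def test_summary_counts_rows(self):
        # The real database module is never imported, let alone touched.
        fake_db = mock.Mock()
        fake_db.fetch_rows.return_value = ["a", "b"]
        service = ReportService(fake_db)
        self.assertEqual(service.summary(), 2)
        fake_db.fetch_rows.assert_called_once_with()
```

Because the collaborator is injected and faked, the test exercises only
ReportService's own logic, which is what keeps runs around a second.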

Those slow imports were my main motivation for running only part of
the suite. By only running one test file, I hit only a fraction of
those imports, bringing my test runs down to 1-1.5 seconds on average.
That was still very annoying, but I dealt with it. :)
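
To see where that time goes, you can time imports individually (a
rough sketch; on modern Pythons, `python -X importtime` gives a proper
breakdown):

```python
import importlib
import sys
import time


def import_cost(name):
    """Rough wall-clock cost of importing one module, in seconds."""
    sys.modules.pop(name, None)       # force a fresh top-level import
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start


# A stdlib module stands in here for a slow third-party dependency.
cost = import_cost("json")
```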

At one point, I tried to write an import hook to fake out every import
in the entire system except nose and dingus. I didn't need the actual
imported libraries anyway; these were unit tests, after all. ;) That
experiment ended frustratingly when I realized that I was subclassing
some classes from external libraries. Faking the imports broke those
subclasses.

> The other interesting question is a consequence of using mocking
> heavily.  When you do that, you really need integration tests (or
> mistakes about API will get replicated in your code and in your
> (passing) tests, but the code won't work in real life).  How do you
> write and organize those?

There were 44 integration tests and 88 functional/acceptance tests. My
workflow changed over time (this was a three year project in which I
went from hack-and-slash to fully-isolated TDD). At the end, I would
write a failing acceptance test (using a very high-level interface for
interacting with the app's GUI). I'd watch it go red, then I'd start
writing unit tests to push it toward green. Once it was green, I
wasn't allowed to write any more code without first writing another
red acceptance test.
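
That outer loop can be sketched with a toy app (all names are
hypothetical; the real acceptance tests drove the GUI through a much
higher-level driver):

```python
import unittest


class TodoApp:
    """Minimal stand-in for the application under test."""
    def __init__(self):
        self.items = []

    def add(self, text):
        self.items.append(text)


class AppDriver:
    """Very high-level interface to the app, as a GUI driver would be."""
    def __init__(self):
        self.app = TodoApp()

    def type_item(self, text):
        self.app.add(text)

    def visible_items(self):
        return list(self.app.items)


class AcceptanceTest(unittest.TestCase):
    # Written first and watched go red before any production code exists.
    def test_added_item_is_shown(self):
        driver = AppDriver()
        driver.type_item("buy milk")
        self.assertEqual(driver.visible_items(), ["buy milk"])


class TodoAppUnitTest(unittest.TestCase):
    # Unit tests then push the acceptance test toward green.
    def test_add_stores_item(self):
        app = TodoApp()
        app.add("x")
        self.assertEqual(app.items, ["x"])
```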

My "integration tests" were mostly testing integration with external
systems, usually file systems. Few of them tested integration between
components; that happened as a side effect of the functional tests. I
suppose it's a bit strange to call them "integration tests" then. So
it goes. :)
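
A minimal sketch of that kind of file-system integration test, with a
hypothetical save_note function and a real temporary directory:

```python
import os
import tempfile
import unittest


def save_note(directory, name, text):
    """Hypothetical production code whose real file-system behavior we check."""
    path = os.path.join(directory, name)
    with open(path, "w") as f:
        f.write(text)
    return path


class FileSystemIntegrationTest(unittest.TestCase):
    # Unlike the dingused unit tests, this touches the real file system.
    def test_round_trip(self):
        with tempfile.TemporaryDirectory() as d:
            path = save_note(d, "note.txt", "hello")
            with open(path) as f:
                self.assertEqual(f.read(), "hello")
```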


More information about the testing-in-python mailing list