[TIP] Nosejobs!

Michael Foord fuzzyman at voidspace.org.uk
Sat Apr 11 07:58:56 PDT 2009


Jesse Noller wrote:
> On Sat, Apr 11, 2009 at 9:55 AM, Michael Foord
> <fuzzyman at voidspace.org.uk> wrote:
>   
>> Just for the record, here is a rough outline of how we do distributed
>> testing over a network at Resolver Systems. It is a 'home-grown' system and
>> so is fairly specific to our needs, but it works *very* well.
>>
>> The master machine does a binary build, sets up a new test run in the database
>> (there can be multiple runs simultaneously), then pushes the binary build with
>> the test framework and the build guid to all the machines being used in the
>> build (machines are specified by name in command-line arguments or in a config
>> file that starts the build). The master introspects the build run and pushes
>> a list of *all* tests by class name into the database.
>>     
>
> This is a point I was thinking about last night when I should have
> been sleeping. I like the "magic discovery" of tests on the client à la
> nose, but historically, I too have had the master index the tests and
> push the exact series of tests to run to the client. In your case,
> it's a pull from the client (which is nice) - in mine, we created a
> test manifest file and pushed it to the client so we knew exactly what
> tests were being run.
>
>   

The advantage of client pull is that if a slave machine dies, at most 
five test classes from that build fail to run. It also automatically 
balances the tests between machines, without us having to worry about 
whether one particular set of tests will take much longer than another.
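
Roughly, the slave side looks like the sketch below. This is only an 
illustration of the pull scheme, not our actual code: sqlite3 stands in 
for the real database, and the table and column names are invented. Each 
slave claims a batch of up to five unclaimed test classes, runs them with 
the standard unittest loader, and writes the results back, so a dead 
slave costs the build at most one batch.

    import sqlite3
    import unittest

    BATCH_SIZE = 5  # a dead slave loses at most one batch of this size

    # Invented schema: in reality the master creates and fills whatever
    # table the build system uses when it pushes the test list.
    SCHEMA = """CREATE TABLE IF NOT EXISTS test_classes (
        id INTEGER PRIMARY KEY,
        build_guid TEXT,
        class_name TEXT,
        claimed_by TEXT,
        passed INTEGER,
        failures INTEGER,
        errors INTEGER
    )"""

    def claim_batch(conn, build_guid, slave_name):
        # Claim up to BATCH_SIZE unclaimed test classes for this slave.
        # A real networked database would need proper row locking; this
        # single-connection sqlite version is only a sketch.
        with conn:
            rows = conn.execute(
                "SELECT id, class_name FROM test_classes "
                "WHERE build_guid = ? AND claimed_by IS NULL LIMIT ?",
                (build_guid, BATCH_SIZE)).fetchall()
            conn.executemany(
                "UPDATE test_classes SET claimed_by = ? WHERE id = ?",
                [(slave_name, row_id) for row_id, _ in rows])
        return [class_name for _, class_name in rows]

    def run_test_class(dotted_name):
        # Load and run one test class by its dotted name.
        suite = unittest.defaultTestLoader.loadTestsFromName(dotted_name)
        result = unittest.TextTestRunner(verbosity=0).run(suite)
        return result.wasSuccessful(), len(result.failures), len(result.errors)

    def slave_loop(conn, build_guid, slave_name):
        # Keep pulling batches until nothing is left to claim for this build.
        while True:
            batch = claim_batch(conn, build_guid, slave_name)
            if not batch:
                break
            for dotted_name in batch:
                ok, failures, errors = run_test_class(dotted_name)
                with conn:
                    conn.execute(
                        "UPDATE test_classes "
                        "SET passed = ?, failures = ?, errors = ? "
                        "WHERE build_guid = ? AND class_name = ?",
                        (int(ok), failures, errors, build_guid, dotted_name))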

>> A web application allows us to view each build - the number of tests left,
>> and the number of passes, errors and failures. For errors and failures,
>> tracebacks can be viewed whilst the test run is still in progress. Builds
>> with errors / failures are marked in red. Completed test runs with all
>> passes are marked in green.
>>     
>
> Yup, visualization of where you are is critical. How deep are the unit
> test suites you're pushing into the db for each run - meaning, how
> many individual test cases are in each file?
>   

How many test methods in a test file? Typically ten to twenty, I guess, 
but there is huge variation between the maximum and the minimum. Our 
hierarchy is fairly deep, but not obscenely so. The deepest is something 
like:

    
Library.Engine.Dependency.UnitTests.DependencyAnalyser.DependencyTest.test_method
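
For what it's worth, enumerating test classes as dotted names like that 
needs nothing beyond the standard library. The sketch below is only an 
illustration of the idea (the package layout, the "Library" top-level 
name and the store_in_database helper are invented), not the 
introspection code we actually use:

    import importlib
    import inspect
    import pkgutil
    import unittest

    def iter_test_class_names(package_name):
        # Walk every module under the package and yield
        # 'package.module.TestClass' for each TestCase subclass
        # defined in that module (skipping classes it merely imports).
        package = importlib.import_module(package_name)
        prefix = package_name + "."
        for _, module_name, _ in pkgutil.walk_packages(package.__path__, prefix):
            module = importlib.import_module(module_name)
            for name, obj in inspect.getmembers(module, inspect.isclass):
                if issubclass(obj, unittest.TestCase) and obj.__module__ == module_name:
                    yield "%s.%s" % (module_name, name)

    # The master might then do something like:
    #   for class_name in iter_test_class_names("Library"):
    #       store_in_database(build_guid, class_name)  # hypothetical helper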


>   
>> A completed run emails the developers the results.
>>
>>     
>
> I used jabber for this at one point - when you get a big enough lab,
> with enough continuous runs and client results, it can get annoying :)
>   

A full run of tests on several machines still takes a couple of hours - 
we have a lot of functional tests that take a *loong* time.

Michael

-- 
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog




