[TIP] Nosejobs!

Michael Foord fuzzyman at voidspace.org.uk
Sat Apr 11 06:55:22 PDT 2009


Just for the record, here is a rough outline of how we do distributed 
testing over a network at Resolver Systems. It is a 'home-grown' system 
and so is fairly specific to our needs, but it works *very* well.

The master machine does a binary build, sets up a new test run in the 
database (there can be multiple runs simultaneously), then pushes the 
binary build, along with the test framework and the build guid, to all 
the machines taking part in the run (machines are specified by name in 
command-line arguments or in a config file that starts the build). The 
master also introspects the build and pushes a list of *all* tests, by 
class name, into the database.
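
For concreteness, here's roughly what that last step looks like, with 
an invented sqlite schema standing in for our real database (table and 
column names are made up for illustration - this isn't our actual code):

    import sqlite3
    import unittest

    def iter_tests(suite):
        # Flatten a nested unittest suite into individual test cases.
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                for test in iter_tests(item):
                    yield test
            else:
                yield item

    def push_test_classes(db_path, build_guid, start_dir):
        # Record the fully qualified name of every test class in the
        # build against the build guid, ready for slaves to claim.
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS pending_tests "
                     "(build_guid TEXT, test_class TEXT, claimed_by TEXT)")
        suite = unittest.defaultTestLoader.discover(start_dir)
        seen = set()
        for test in iter_tests(suite):
            name = "%s.%s" % (type(test).__module__, type(test).__name__)
            if name not in seen:
                seen.add(name)
                conn.execute("INSERT INTO pending_tests VALUES (?, ?, NULL)",
                             (build_guid, name))
        conn.commit()
        conn.close()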

When the zip file arrives on a slave machine a daemon unzips it and 
deletes the original zip file. Each slave then pulls the next five test 
classes out of the database and runs them in a subprocess. Each test 
method pushes its result (pass or failure, time taken, the machine it 
was run on, the build guid, and the traceback on failure) to the 
database. If the subprocess fails to report anything within a preset 
time (currently 45 minutes, I think) the daemon kills the test process 
and reports the failure to the database. Performance tests typically 
run each test five times and push the times taken to a separate table 
so that we can monitor the performance of our application separately.
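
Something like the following gives the flavour of the slave loop, 
against the same made-up schema - the real daemon is rather more 
involved, the claiming here isn't race-safe, and subprocess.run needs 
a modern Python:

    import sqlite3
    import subprocess
    import sys

    BATCH_SIZE = 5
    TIMEOUT = 45 * 60  # seconds

    def claim_batch(conn, build_guid, machine_name):
        # Claim the next few unclaimed test classes for this build.
        rows = conn.execute(
            "SELECT rowid, test_class FROM pending_tests "
            "WHERE build_guid = ? AND claimed_by IS NULL LIMIT ?",
            (build_guid, BATCH_SIZE)).fetchall()
        for rowid, _ in rows:
            conn.execute("UPDATE pending_tests SET claimed_by = ? "
                         "WHERE rowid = ?", (machine_name, rowid))
        conn.commit()
        return [test_class for _, test_class in rows]

    def run_batch(test_classes, build_guid, machine_name):
        # Run the claimed classes in a child process; if it hangs,
        # kill it and record the failure against the build.
        cmd = [sys.executable, "-m", "unittest"] + test_classes
        try:
            subprocess.run(cmd, timeout=TIMEOUT)
        except subprocess.TimeoutExpired:
            report_failure(build_guid, machine_name, test_classes,
                           "no result after %d seconds - killed" % TIMEOUT)

    def report_failure(build_guid, machine_name, test_classes, message):
        # Stand-in: in the real system each test method writes its own
        # row (result, time, machine, build guid, traceback) as it runs.
        print(build_guid, machine_name, test_classes, message)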

A web application lets us view each build: the number of tests left, 
and the numbers of passes, errors and failures. Tracebacks for errors 
and failures can be viewed whilst the test run is still in progress. 
Builds with errors or failures are marked in red; completed test runs 
with all passes are marked in green.
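
Roughly the kind of query that could sit behind such a page (the 
'results' table here is invented, like the rest of the schema - not 
what the web app actually runs):

    import sqlite3

    def build_summary(db_path, build_guid):
        # Counts shown on the status page for one build.
        conn = sqlite3.connect(db_path)
        remaining = conn.execute(
            "SELECT COUNT(*) FROM pending_tests "
            "WHERE build_guid = ? AND claimed_by IS NULL",
            (build_guid,)).fetchone()[0]
        counts = dict(conn.execute(
            "SELECT outcome, COUNT(*) FROM results "
            "WHERE build_guid = ? GROUP BY outcome",
            (build_guid,)).fetchall())
        conn.close()
        return {"remaining": remaining,
                "passes": counts.get("pass", 0),
                "failures": counts.get("failure", 0),
                "errors": counts.get("error", 0)}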

A completed run emails the developers the results.
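
Something as simple as this would do for the notification (addresses 
and mail host are placeholders, not what we actually use):

    import smtplib
    from email.mime.text import MIMEText

    def mail_results(summary_text, build_guid):
        msg = MIMEText(summary_text)
        msg["Subject"] = "Test run %s finished" % build_guid
        msg["From"] = "buildmaster@example.com"
        msg["To"] = "dev-team@example.com"
        server = smtplib.SMTP("mail.example.com")
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())
        server.quit()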

The page for each build allows us to pull machines out whilst the tests 
are running. If a machine is stopped it stops pulling tests from the 
database (but runs the ones it already has to completion).
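
The slave-side check could look something like this, with an invented 
'machines' table that the web page writes a 'stopped' flag into - the 
slave finishes the batch it already has but claims no more:

    import sqlite3

    def machine_is_stopped(conn, build_guid, machine_name):
        # True once the build page has flagged this machine as stopped.
        row = conn.execute(
            "SELECT stopped FROM machines "
            "WHERE build_guid = ? AND name = ?",
            (build_guid, machine_name)).fetchone()
        return bool(row and row[0])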

We have a build farm (about six machines currently) typically running 
two continuous integration loops: one for SVN head and one for the 
branch of the last release. These run the tests continuously - not just 
when new checkins are made.

This works very well for us, although we are continually tweaking the 
system. It is all built on unittest.

Michael


Jesse Noller wrote:
> On Thu, Apr 9, 2009 at 3:32 PM, holger krekel <holger at merlinux.eu> wrote:
>   
>> Hi Jesse, Doug, all,
>>
>> On Thu, Apr 09, 2009 at 14:54 -0400, Jesse Noller wrote:
>>     
>>>> On Thu, Apr 9, 2009 at 2:27 PM, Doug Philips <dgou at mac.com> wrote:
>>>>         
>>> [...]
>>>       
>>>>> Given the cross-pollination between nose and py.test I would be surprised if this ability weren't already on the radar of the nose developers. Ideally it could be abstracted out for other testing frameworks to use too.
>>>>>           
>>> [...]
>>>       
>>>> And if py.execnet/py.test/any other open source work can save me time:
>>>> I plan on absconding with it/using it. I'm not a fan of wheel
>>>> recreation. Yes, I have looked at alternatives - my goal is to make
>>>> this as automatic as possible, support multiple languages for tests,
>>>> and *not* require that a test be written *for* the framework. A test
>>>> should be as simple as a shell script, or as complex as you want, it
>>>> should not have to know about the framework running it.
>>>>
>>>> -jesse
>>>>         
>> Just to let you know: I share most of this world view and principles.
>> I have an additional focus on zero-installation techniques,
>> i.e. having code interact in a network such that I do not
>> need to manually maintain server/client code.
>>
>>     
>
> I'm not that hung up about a simple server that delegates to
> nose/another entity on each client. Any client should be as simple as
> possible (with as few dependencies as possible).
>
> I have the additional problem of pushing product installs to the
> client (not just Python modules - I mean full-blown installers).
> Which means that while I *might* be able to use SSH, I might need
> something more for the server on the client to handle.
>
>   
>> Related to another post of yours: It's true that
>> day long-running tests would be problematic for the current
>> py.test/execnet code.  More async communication is needed in
>> this case.  Maybe by reporting to an http-service where
>> correlation is done by some unique test run id.  One can
>> then do reporting and programmatically access the results.
>>     
>
> Yup. That's the idea behind nosejobs: pass a UUID back to the caller;
> the caller can circle back or be notified when the result is ready. I
> don't want to have to keep a constant connection to the test clients,
> or the build slaves. They should be able to go off and do their thing
> without maintaining one long connection.
>
> -jesse
>


-- 
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog




