jnoller at gmail.com
Sat Apr 11 07:25:10 PDT 2009
On Sat, Apr 11, 2009 at 9:55 AM, Michael Foord
<fuzzyman at voidspace.org.uk> wrote:
> Just for the record, here is a rough outline of how we do distributed
> testing over a network at Resolver Systems. It is a 'home-grown' system and
> so is fairly specific to our needs, but it works *very* well.
> Master machine does a binary build, sets up a new test run in the database
> (there can be multiple simultaneously), then pushes the binary build with
> test framework and the build guid to all the machines being used in the
> build (machines are specified by name in command line arguments or a config
> file that starts the build). The master introspects the build run and pushes
> a list of *all* tests by class name into the database.
This is a point I was thinking about last night when I should have
been sleeping. I like the "magic discovery" of tests on the client a
la nose, but historically I too have had the master index the tests
and push the exact series of tests to the client. In your case it's a
pull from the client (which is nice); in mine, we created a test
manifest file and pushed it to the client, so we knew exactly what
tests were being run.
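Something like this gives the flavor of the manifest step (the names
are made up, and TestLoader.discover() is the later unittest2 / 2.7
API, so treat it as a sketch rather than what either of us actually
ran):

import unittest

def iter_tests(suite):
    # Flatten nested TestSuites down to individual TestCases.
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test in iter_tests(item):
                yield test
        else:
            yield item

def build_manifest(start_dir, manifest_path):
    # Collect every test id under start_dir, one per line, e.g.
    # package.module.TestClass.test_method
    suite = unittest.TestLoader().discover(start_dir)
    with open(manifest_path, 'w') as f:
        for test in iter_tests(suite):
            f.write(test.id() + '\n')

build_manifest('tests', 'manifest.txt')

The client then just works through the file line by line, so the
master always knows the exact set of tests a run was supposed to
cover.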
> A web application allows us to view each build - number of tests left,
> number of passes, errors and failures. For errors and failures tracebacks
> can be viewed whilst the test run is still in progress. Builds with errors /
> failures are marked in red. Completed test runs with all passes are marked
> in green.
Yup, visualization of where you are is critical. How deep are the
unit test suites you're pushing into the db for each run - meaning,
how many individual test cases are in each file?
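For what it's worth, unittest will answer that kind of question
directly - countTestCases() walks a suite for you. A tiny sketch
(the module name is hypothetical):

import unittest

loader = unittest.TestLoader()
# Per-module totals: how many individual tests live in one file.
suite = loader.loadTestsFromName('tests.test_spreadsheet')
print('%d test cases' % suite.countTestCases())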
> A completed run emails the developers the results.
I used Jabber for this at one point - when you get a big enough lab,
with enough continuous runs and client results, it can get annoying :)
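One obvious way to cut the noise is to only notify on breakage, so
green runs stay quiet. A minimal sketch (addresses and host are
placeholders):

import smtplib
from email.mime.text import MIMEText

def mail_results(summary, failures):
    # Only mail when something broke, to keep the volume sane.
    if not failures:
        return
    msg = MIMEText(summary)
    msg['Subject'] = 'Test run: %d failure(s)' % len(failures)
    msg['From'] = 'buildmaster@example.com'
    msg['To'] = 'dev-team@example.com'
    server = smtplib.SMTP('mail.example.com')
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()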
> We have a build farm (about six machines currently) typically running two
> continuous integration loops - SVN head and the branch for the last release.
> These run tests continuously - not just when new checkins are made.
Yeah, the test clients should always be, well, testing - the big
problem I ended up running into was that we would literally get
thousands of results a day. It was nuts.
> This works very well for us, although we are continually tweaking the
> system. It is all built on unittest.
Not terribly far off from what I'd like to do initially, although I
want to avoid requiring that all test cases be written in Python and
in classes. Personally, I like class-based organization, but some
people seem to get hives when you mention it.
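For the class-averse, unittest can already wrap bare functions -
FunctionTestCase makes a plain function run and report like any other
TestCase. A minimal sketch:

import unittest

def test_addition():
    assert 1 + 1 == 2

# No class anywhere; the function is adapted into a TestCase.
suite = unittest.TestSuite([unittest.FunctionTestCase(test_addition)])
unittest.TextTestRunner().run(suite)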