[TIP] Result protocol (was: Nosejobs!)

Michael Foord fuzzyman at voidspace.org.uk
Sat Apr 11 07:55:17 PDT 2009

Jesse Noller wrote:
> On Sat, Apr 11, 2009 at 12:39 AM, Douglas Philips <dgou at mac.com> wrote:
>> Personally my interest is in something much simpler. It isn't the test
>> that I want to stream back; it is the test-running infrastructure that
>> will keep track of what is going on and report back on things. But
>> even that can get wedged/confused (esp. if it is a simple thing
>> without much smarts), so I don't want the wire-level protocol to be
>> any more fragile than it has to be. And when I say streaming, I don't
>> necessarily mean keeping a connection open; it could be done RESTfully,
>> for that matter.
> I thought I'd pull out this discussion around a "results protocol"
> into its own thread since it's interesting in and of itself. I'm just
> throwing stuff against the wall here.
> There's plenty of discussion to be had on this subject - let me see if
> I can outline some of the discussion points:
> 1> There is the "object" a given *unit test* framework (subunit, nose,
> py.test, unittest, etc) tracks as a result. These are execution
> frameworks, and so the result object they track is an artifact of
> writing tests into the framework.

How does this relate to the protocol? Do you mean that the test 
framework used should be an element in the protocol? Probably not 
important for the protocol itself.

> 2> There is the generic concept of a "test result" - in my mind, this
> is the simple "PASS/FAIL/ERROR" concept of a test which is completed.

Agreed - there should be a single result element with values PASS / FAIL 
/ ERROR.

> 3> Then there is the concept of a "result object" which can be passed
> over the network to "something" such as a results collector. There has
> been discussion around using something like a flat-text file
> pushed/pulled using key:value pairs, JSON/YAML objects, or something
> else.

This is the protocol itself. If we are to hammer out a protocol, we 
should avoid talking about objects: that conflates the protocol with the 
system used to process it.

For a test protocol representing results of test runs I would want the 
following fields:

Build guid
Machine identifier
Test identifier: in the form "package.module.Class.test_method" (but a 
unique string anyway)
Time of test start
Time taken for test (useful for identifying slow-running tests, 
slowdowns, or anomalies)

Anything else? What about collecting standard out even if a test passes? 
Coverage information?
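As a sketch (not a proposal), the fields above could travel as a flat JSON 
object. Every field name and value here is illustrative only:

```python
import json
import uuid
from datetime import datetime, timezone

# A sketch of one result record using the fields listed above.
# All field names and example values are assumptions, not a fixed spec.
record = {
    "build_guid": str(uuid.uuid4()),                # build guid
    "machine": "buildslave-linux-01",               # machine identifier (hypothetical)
    "test_id": "package.module.Class.test_method",  # unique test identifier
    "start_time": datetime.now(timezone.utc).isoformat(),  # time of test start
    "duration": 0.042,                              # time taken, in seconds
    "result": "PASS",                               # PASS / FAIL / ERROR
    "stdout": "",                                   # captured standard out, if collected
}

wire_payload = json.dumps(record)   # what actually crosses the wire
decoded = json.loads(wire_payload)  # round-trips losslessly
```

The same keys could just as easily be flat key:value lines or YAML; the 
point is only that the record is self-describing and trivial to parse.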

We sometimes have to kill wedged test processes and need to push an 
error result back. This can be hard to associate with an individual 
test, in which case we leave the test identifier blank.

Extra information (charts?) can be generated from this data. If there is 
a need to store additional information associated with an individual 
test then an additional 'information' field could be used to provide it.
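To illustrate the killed-process case and the optional 'information' field 
together (again with made-up field names), such a record might look like:

```python
import json

# Sketch: an ERROR record pushed back after killing a wedged test process.
# The test identifier is left blank because the hung test cannot always be
# identified; 'information' carries the extra detail. Field names here are
# assumptions, not a spec.
error_record = {
    "build_guid": "build-20090411-01",  # hypothetical build guid
    "machine": "buildslave-linux-01",   # hypothetical machine identifier
    "test_id": "",                      # blank: could not associate a test
    "result": "ERROR",
    "information": "test process killed after exceeding timeout",
}

payload = json.dumps(error_record)
```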



More information about the testing-in-python mailing list