[TIP] Result protocol (was: Nosejobs!)

Jesse Noller jnoller at gmail.com
Sat Apr 11 07:10:33 PDT 2009


On Sat, Apr 11, 2009 at 12:39 AM, Douglas Philips <dgou at mac.com> wrote:

> Personally, my interest is in something much simpler. It isn't the
> test that I want to stream back, it is the test running
> infrastructure that will keep track of what is going on and report
> back on things. But even that can get wedged/confused (esp. if it is
> a simple thing without much smarts), so I don't want the wire-level
> protocol to be any more fragile than it has to be. Also, when I say
> streaming, I don't necessarily mean keeping a connection open; it
> could be done RESTfully, for that matter.

I thought I'd pull out this discussion around a "results protocol"
into its own thread since it's interesting in and of itself. I'm just
throwing stuff against the wall here.

There's plenty of discussion to be had on this subject - let me see if
I can outline the main points:

1> There is the "object" a given *unit test* framework (subunit, nose,
py.test, unittest, etc.) tracks as a result. These are execution
frameworks, and so the result object they track is an artifact of
writing tests into the framework.

2> There is the generic concept of a "test result" - in my mind, this
is the simple PASS/FAIL/ERROR status of a completed test.

3> Then there is the concept of a "result object" which can be passed
over the network to "something" such as a results collector. There has
been discussion around using something like a flat text file of
key:value pairs pushed/pulled, JSON/YAML objects, or something else.
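
To make point 3 concrete, here's a rough Python sketch of what one of
these result objects might look like once serialized - the field names
are purely illustrative, not a proposed standard:

    import json

    # Illustrative only: a minimal "result object" that could be shipped
    # to a collector. It carries the point 2 outcome plus pointers to the
    # heavier artifacts of the run.
    result = {
        'run_id': 'run-1234',
        'test_id': 'package.module.TestClass.test_method',
        'outcome': 'FAIL',      # PASS / FAIL / ERROR
        'duration_secs': 0.42,
        'artifacts': ['http://artifacts.example.com/run-1234/stdout.txt.gz'],
    }
    print json.dumps(result, indent=2)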

Personally, I feel that points 1 and 2 largely depend on the exact
executor, and can easily be parsed/translated into point 3 objects.

It's also easy enough to make something like a nose plugin which takes
a given result object and outputs all or part of a point 3 object,
which can then be collected into a final "report" on a given test run
or group of tests and sent to the collector.
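
As a rough illustration of that, a nose plugin along these lines might
look something like the sketch below (the collector URL and the report
fields are made up, and I'm hand-waving the error details):

    import json
    import urllib2

    from nose.plugins import Plugin

    class ResultReporter(Plugin):
        """Collect a PASS/FAIL/ERROR record per test and POST a JSON
        report to a (hypothetical) collector when the run finishes."""
        name = 'result-reporter'

        def configure(self, options, conf):
            Plugin.configure(self, options, conf)
            self.results = []

        def addSuccess(self, test):
            self.results.append({'test': str(test), 'outcome': 'PASS'})

        def addFailure(self, test, err):
            self.results.append({'test': str(test), 'outcome': 'FAIL'})

        def addError(self, test, err):
            self.results.append({'test': str(test), 'outcome': 'ERROR'})

        def finalize(self, result):
            report = json.dumps({'results': self.results})
            # POST the finished report to an imaginary collector endpoint.
            urllib2.urlopen('http://collector.example.com/runs', report)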

Obviously, I'm also arguing that there is more to the *artifacts* of a
test than just a binary pass/fail.

Previously, I had a system in which a test would run and the executor
would build a report of the tests run and their results - these
results would be concise (pass, fail, etc.) and be stored in a
database. Each result would also carry a pointer to the additional
artifacts of the run.

The additional artifacts would be compressed and stored in a central
location - they would be marked with the test run ID, and could
contain log files, charts, etc. These artifacts would be deleted over
time depending on the needs of the company; the results in the
database, however, would never be deleted automatically.
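
To sketch what that looked like (the table and column names here are
just illustrative, not the actual schema), the concise record plus the
artifact pointer was essentially:

    import sqlite3

    # Concise results live in the database indefinitely; the artifact_uri
    # points at the bulky, compressed stuff (logs, charts) that can be
    # expired later without touching the result itself.
    conn = sqlite3.connect('results.db')
    conn.execute("""CREATE TABLE IF NOT EXISTS results (
                        run_id TEXT,
                        test_id TEXT,
                        outcome TEXT,        -- PASS / FAIL / ERROR
                        artifact_uri TEXT)""")
    conn.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                 ('run-1234', 'package.module.test_foo', 'PASS',
                  'http://artifacts.example.com/run-1234.tar.gz'))
    conn.commit()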

So, for the objects in point 3, I'm a fan of something human readable
(in case you want to read them off the filesystem rather than keep a
database) and something which is cross-language. The pointers to the
artifacts were all URIs into a separate data store.

In my personal experience, flat text key-value pair style files didn't
work out long term for this: they couldn't express enough information
for my needs, and I eventually ended up writing special parsers for
them in multiple languages. This is why I'm a fan of something in the
vein of XML/JSON/YAML.
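
That was the practical difference for me: a flat key:value file can't
naturally hold a list of artifact URIs or any nesting, so every
consumer grows its own ad-hoc parser, whereas a JSON result (like the
hypothetical one above) reads back with a stock library in just about
any language:

    import json

    # Read a result file back with a stock parser - no hand-rolled
    # format, and the nested artifact list survives intact.
    # ('result-1234.json' is just a made-up filename.)
    result = json.load(open('result-1234.json'))
    print result['outcome']
    for uri in result['artifacts']:
        print uri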

I'd be interested in other thoughts on this.

jesse


