[TIP] Result protocol (was: Nosejobs!)

Jesse Noller jnoller at gmail.com
Sat Apr 11 19:25:36 PDT 2009


On Sat, Apr 11, 2009 at 6:08 PM, Robert Collins
<robertc at robertcollins.net> wrote:
> I'm sorry to rabbit on about subunit but this conversation really does
> seem to be focusing on reinventing the wheel.

Now that I'm done cooking for tomorrow and got to spend time with
subunit, I figured I'd take a stab at replying. Mind you, no - I don't
want to reinvent the wheel, I just want to make one that fits the car
I drive.

> A couple of points I think are important:
>  * subunit is *not* a unit test framework. It is a 'Result protocol'.
>    It is what you are designing. I assume you haven't looked at it at
>    all or you wouldn't have classified it as a test framework. It may
>    not be the best, or other folk may build it differently - thats
>    fine, at a certain point this is bikeshedding. Anyhow, its
>    multilanguage (C/C++/python/shell at this time of writing), simple
>    (human readable).
>    https://launchpad.net/subunit
>

So, I somehow missed the right code, or I was not grokking what I was
looking at. I'm familiar with TAP and others of its ilk, but thanks
to getting the right URL, I did spend some time with it now :)

Yes, subunit is essentially something which parses specialized
output/streams to provide a unified facade of test results. It uses a
custom, simple syntax so that test results can be extracted and
organized in a simple way.

Sound about right? :)

> There are lots of tests standards around; xfail, unexpected pass and
> skip are needed. (xfail is a test that isn't expected to work yet,
> unexpected pass is when that test actually works, and skip is for tests
> you do not want to run based on some condition). Some of these can be
> done in the reporting framework, but if you do that it needs more
> metadata and becomes more tied to a situation preventing aggregation and
> sharing.

I was pondering this while looking at your status codes as well. I
think that the initial volley I sent out can be slightly condensed,
but also added to. Perhaps:

PASS | FAIL | ERROR | SKIPPED | NOTSUPPORTED | TIMEOUT

TIMEOUT is an implicit KILLED, but I dropped KILLED as it could be
filed under ERROR or TIMEOUT.

The executor may be able to set some of these (for example, SKIPPED on
tests which have a WIN32 attr)
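That executor-side decision could be sketched roughly like so. The
status names are the ones proposed above; the helper function and the
attribute-set argument are hypothetical, with WIN32 as the example attr
from this thread:

```python
import sys

# The condensed status set proposed above.
STATUSES = ("PASS", "FAIL", "ERROR", "SKIPPED", "NOTSUPPORTED", "TIMEOUT")

def executor_status(test_attrs):
    """Return a status the executor can assign without running the test,
    or None if the test should actually be executed.

    (Hypothetical helper for illustration only.)
    """
    # e.g. skip tests tagged WIN32 when not running on Windows
    if "WIN32" in test_attrs and not sys.platform.startswith("win"):
        return "SKIPPED"
    return None
```

On a non-Windows box, `executor_status({"WIN32"})` comes back SKIPPED
before the test is ever run, while an empty attr set returns None and
the test proceeds as normal.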

> We don't have such an object today; pandokia's wire protocol, subunit,
> and (I imagine) py.test with its parallelising reporting will all be
> creating it in some manner.

Yup, this is partially what I've been trying to personally (and here) hash out.

> To me it is the other way around: its easy to write a wire protocol to
> match [most] object protocols, but its very hard to write one when the
> object protocol doesn't support what you want to accomplish.

I can't disagree here, which is why I am proposing using YAML (let's
ignore JSON for the moment) and/or XML. They're easily extensible, do
not require custom parsers, and can convey a nearly unlimited amount
of information. Additionally, they go on the wire pretty nicely.
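As a rough sketch of what such a structured record might look like,
here is the quoted subunit example below recast as a flat document. The
field names are invented for illustration, and the stdlib json module
stands in for a YAML emitter (any JSON document is also valid YAML
1.2, so no custom parser is needed on either end):

```python
import json

# Hypothetical record layout; field names are illustrative only.
record = {
    "test": "Library.Engine.DependencyTest.test_method",
    "result": "FAIL",
    "start": "2009-04-12 12:23:00",
    "duration_seconds": 2220,  # the 37-minute run from the example
    "traceback": "AssertionError:\n  1 != 0 in demo.py",
    "tags": ["build-2009-04-12-12:23:00", "machine-test-alpha1"],
}

wire = json.dumps(record)          # goes on the wire as-is
assert json.loads(wire) == record  # round-trips with a stock parser
```

The round-trip through a stock parser is the point: the collector never
has to do bracket-matching or line-by-line defensive parsing of its own.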

>
> Now Michael says:
>
>> For a test protocol representing results of test runs I would want
>> the following fields:
>>
>> Build guid
>> Machine identifier
>> Test identifier: in the form "package.module.Class.test_method" (but
>> a unique string anyway)
>> Time of test start
>> Time taken for test (useful for identifying slow running tests, slow
>> downs or anomalies)
>> Result: PASS / FAIL / ERROR
>> Traceback
>
> subunit does all this today; it is limited by the TestCase->TestResult
> protocol :(.
>
> Here is an example, Build guid and machine identifier I would just tag,
> the time: instruction acks as a clocking signal which allows duration to
> be inferred. (this is hand typed, so the traceback is..odd :). It shows
> a 37 minute duration for this test to execute.
>
> tag: build-2009-04-12-12:23:00, machine-test-alpha1
> test: Library.Engine...DependencyTest.test_method
> time: 2009-04-12 12:23:00
> time: 2009-04-12 13:00:00
> fail: Library.Engine...DependencyTest.test_method [
>  AssertionError:
>    1 != 0 in demo.py
> ]
>

Ok, so here's my first issue: a custom syntax. I don't think we need
or want one. True, you can add tags with subunit, but I don't know
that, in the case Michael and I are talking about, we want to be
limited by the constraints of the TestCase->TestResult protocol.

I'm also worried about rogue stdout - what about malformed output,
random [ ] characters, and so on. I know you can code defensively
around this, and any parser runs this risk - but again, it's another
reason I wanted to avoid yet another syntax/parser.
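To illustrate the difference: a standard parser fails loudly on a
truncated or garbage record instead of needing ad-hoc defensive code.
A small sketch, again using the stdlib json module as a stand-in for a
YAML/XML loader, with made-up record contents:

```python
import json

good = '{"test": "demo.test_x", "result": "PASS"}'
bad = '{"test": "demo.test_x", "result": '  # stream cut off mid-record

print(json.loads(good)["result"])  # PASS

try:
    json.loads(bad)
except ValueError as exc:
    # Malformed input is rejected outright rather than half-parsed,
    # so stray brackets or rogue stdout can't silently corrupt results.
    print("rejected:", exc)
```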

I also want to convey more information, in a more structured format,
from the executor to the collector. What subunit is doing is useful -
and in fact not too far from what you'd need to put together to deal
with tests in multiple, or "simpler", languages (such as bash).

Also, why did you put all the code in __init__.py? That makes baby sad :(

jesse



More information about the testing-in-python mailing list