[TIP] Result protocol

Michael Foord fuzzyman at voidspace.org.uk
Sat Apr 11 11:35:10 PDT 2009


Jesse Noller wrote:
> [snip...]
>>> 2> There is the generic concept of a "test result" - in my mind, this
>>> is the simple "PASS/FAIL/ERROR" concept of a test which is completed.
>>>
>>>       
>> Agreed - there should be a single result element with values PASS / FAIL /
>> ERROR.
>>     
>
> Building on this, and Scott's point -
>
> test_cases:
>     test:
>         id: STR
>         result: STR: PASS | FAIL | ERROR | SKIP | UNKNOWN | KILLED (or TIMEOUT)
>         result_id: STR: 0 | 1 | 255 | ... others
>
> Ideally, the ID is a simple int, a la Unix error codes. In the past,
> I've used something like ERROR: 300, SKIP: 301, or others. I try not to
> use ones low enough to collide with real exit codes, as it could be
> useful to actually pass back the exit code of the test command itself
> - on the fence about that though.
>
>   

Having result *and* result_id seems like a recipe for confusion (what 
happens if they contradict each other?). Keeping the result as a single 
string also keeps it human-readable.
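
A single validated string gives you the same safety without the 
redundancy. A sketch (the constant and helper names are mine, not part 
of any proposal):

    # Sketch only: a single human-readable result field with a fixed
    # vocabulary. Validating on creation gives the safety a numeric
    # id was meant to add.
    VALID_RESULTS = frozenset(
        ["PASS", "FAIL", "ERROR", "SKIP", "UNKNOWN", "KILLED"])

    def make_result(value):
        if value not in VALID_RESULTS:
            raise ValueError("unknown result: %r" % (value,))
        return value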

> [snip...]
>> For a test protocol representing results of test runs I would want the
>> following fields:
>>
>> Build guid
>> Machine identifier
>> Test identifier: in the form "package.module.Class.test_method" (but a
>> unique string anyway)
>> Time of test start
>> Time taken for test (useful for identifying slow-running tests,
>> slowdowns or anomalies)
>> Result: PASS / FAIL / ERROR
>> Traceback
>>
>>     
>
> I think the traceback is an artifact of the type of test; in the case
> of non-Python tests, you might not have one. You would, however,
> (hopefully) have stderr. So -1 on traceback and +1 on error_out or
> something akin to that.
>
>   

But with most testing tools the traceback *is* the diagnostic 
information. It isn't *obvious* where it should go in the schema you 
have below.

> build_id: STR
> machine_id: STR (uuid?)
> test_cases:
>     test:
>         id: STR
>         result: STR
>         result_id: STR
>         start: FLOAT (time.time())
>         stop: FLOAT (time.time())
>         total_time: FLOAT (seconds)
>
>   

OK
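
One small point in its favour: total_time can be derived from start and 
stop rather than reported separately, so the timing fields can never 
disagree. A sketch (timed and test_callable are made-up names):

    import time
    import uuid

    machine_id = str(uuid.uuid4())  # or any stable per-machine value

    def timed(test_id, test_callable):
        # Sketch only: total_time is derived from start/stop, so the
        # three timing fields always agree with each other.
        start = time.time()
        test_callable()
        stop = time.time()
        return {"id": test_id, "start": start, "stop": stop,
                "total_time": stop - start}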

>> Anything else? What about collecting standard out even if a test passes?
>> Coverage information?
>>     
>
> Hmm, stdout and stderr are a PITA - I dealt with a test in the past
> that created log files several MB in size; jamming that into this
> would suck. How about something like "last N lines" for both? Or we
> make it optional, a la something like:
>
> build_id: STR
> machine_id: STR (uuid?)
> test_cases:
>     test:
>         id: STR
>         result: STR
>         result_id: STR
>         start: FLOAT (time.time())
>         stop: FLOAT (time.time())
>         total_time: FLOAT (seconds)
>         additional:
>             coverage_info: Big str
>             stdout: Big Str
>             stderr: Big str
>
>   
OK - although I would still rather see one obvious place for the 
traceback (whatever it may be called) to carry the failure information.
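
To make that concrete, here is roughly how a Python runner would fill 
such a field (a sketch only - run_one and test_callable are made-up 
names, and calling the field "traceback" is just my preference):

    import traceback

    def run_one(test_id, test_callable):
        # The traceback travels as a plain string, so a non-Python
        # tool could put its stderr in the same slot.
        record = {"id": test_id, "result": "PASS", "traceback": ""}
        try:
            test_callable()
        except AssertionError:
            record["result"] = "FAIL"
            record["traceback"] = traceback.format_exc()
        except Exception:
            record["result"] = "ERROR"
            record["traceback"] = traceback.format_exc()
        return record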

The rest looks good to me.
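
By the way, the "last N lines" idea is cheap to do. A sketch 
(last_lines and captured_stdout are made-up names, and the cap of 100 
is arbitrary):

    from collections import deque

    def last_lines(text, n=100):
        # Keep only the tail of captured output so a multi-megabyte
        # log can't bloat the record; the tail is usually what
        # matters for diagnosis.
        return "\n".join(deque(text.splitlines(), maxlen=n))

    # e.g. additional["stdout"] = last_lines(captured_stdout)

Something like that would slot straight into the optional "additional" 
section.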

Michael



-- 
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog




