[TIP] Result protocol / pass-fail-error
jnoller at gmail.com
Mon Apr 13 13:11:10 PDT 2009
On Mon, Apr 13, 2009 at 3:42 PM, Mark Sienkiewicz <sienkiew at stsci.edu> wrote:
> Jesse Noller wrote:
>> For *my* case, I need to be able to communicate what a result "means"
>> to the system above the consumer of the result file and take action
>> (not just report). I see no reason that consumers could not define
>> custom codes.
> So you have a second level of processing that understands the significance
> of certain statuses. That's an interesting scenario. I gather that your
> proposed list is effectively the list of meaningful status values in your
> system.
>> See my followup, I outlined something like this:
>> result: STR: PASS | FAIL | ERROR | SKIP | UNKNOWN | KILLED
>> KILLED might be site-specific, although I could argue that it can be
>> generally defined as "killed by the test runner for any reason
>> (timeout, etc.)"
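The status vocabulary quoted above could be sketched as a small validated set; the helper name below is my own, not part of any agreed protocol:

```python
# The set of values comes from the proposal above; the validator is
# an illustrative addition.
PASS, FAIL, ERROR, SKIP, UNKNOWN, KILLED = (
    "PASS", "FAIL", "ERROR", "SKIP", "UNKNOWN", "KILLED")

VALID_RESULTS = frozenset({PASS, FAIL, ERROR, SKIP, UNKNOWN, KILLED})

def validate_result(value):
    """Reject any status outside the agreed vocabulary."""
    if value not in VALID_RESULTS:
        raise ValueError("unknown result status: %r" % (value,))
    return value
```

A consumer that validates up front can fail loudly on a site-specific status it does not understand, instead of silently misclassifying it.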
> I must have missed your other followup.
> We have to define the meanings, but I think we are largely in agreement.
> Your definition for killed looks good. Examples could be timeout, resources
> unavailable, whatever -- the significance of killed is 1) we tried to run
> it, 2) we couldn't finish for reasons that may have nothing to do with the
> actual test.
> I must have missed how you view SKIP. One definition that has been
> discussed is "the test execution environment decided not to run (or finish)
> this test because it is not relevant to the environment we are executing in".
> Is that what you have in mind here?
> I have another status DISABLED which means "the user directed the test
> system not to run this test".
SKIPPED and DISABLED are relatively close to one another. In my case,
if a resource is missing, the tests may self-SKIP; they could also be
forcibly skipped by the user.
> I also have MISSING, which means we did not receive a result for this test.
> Ordinarily, you would not write a test result with status=missing, but it
> appears when you process the data. And, of course, a database export could
> save this status into a result file. I think this is fundamentally
> different from UNKNOWN because unknown implies that we don't have a status,
> while MISSING means that we did not receive a test report.
That makes sense. In terms of JSON/YAML/etc., I would fill in all
fields, period - even if only with None values. That makes parsing
easier.
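A minimal sketch of that fill-every-field approach; the field names are illustrative, not an agreed schema:

```python
import json

# Fixed schema: every record carries every field, so consumers can
# read any field unconditionally, and JSON serializes absent values
# as null. Field names here are my own invention.
RESULT_FIELDS = ("id", "result", "duration", "reason")

def make_record(**values):
    """Build a result record with every schema field present."""
    return {field: values.get(field) for field in RESULT_FIELDS}

record = make_record(id="example_test", result="SKIP",
                     reason="resource missing")
print(json.dumps(record, sort_keys=True))
```

The point is that a parser never has to guard against a missing key - only against a null value.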
> I'm a little unclear on UNKNOWN. It means 1) we did run the test, 2) we do
> not know the status that resulted, 3) but not because there was an error.
> Does your system do this, or did you just include it for completeness?
The latter - I would probably never use UNKNOWN, but I can see it as a
last-ditch "wtf" versus using a None/null value.
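Pulling the thread together, here is a hedged sketch of how a processor might distinguish MISSING (no report received) from UNKNOWN (a report arrived with no usable status); all names are illustrative:

```python
# MISSING is never written by a test runner - it falls out when the
# processor compares the tests it expected against the reports it
# actually received. UNKNOWN covers a report whose status is null.
def resolve_status(expected, reports):
    """Map each expected test name to a final status string."""
    resolved = {}
    for name in expected:
        if name not in reports:
            resolved[name] = "MISSING"      # no report at all
        elif reports[name].get("result") is None:
            resolved[name] = "UNKNOWN"      # report exists, status absent
        else:
            resolved[name] = reports[name]["result"]
    return resolved
```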