[TIP] Result protocol / pass-fail-error
Mark Sienkiewicz
sienkiew at stsci.edu
Mon Apr 13 12:42:11 PDT 2009
Jesse Noller wrote:
> For *my* case, I need to be able to communicate what a result "means"
> to the system above the consumer of the result file and take action
> (not just report). I see no reason that consumers could not define
> custom codes.
>
So you have a second level of processing that understands the
significance of certain statuses. That's an interesting scenario. I
gather that your proposed list is effectively the list of meaningful
status values in your system.
> See my followup, I outlined something like this:
>
> result: STR: PASS | FAIL | ERROR | SKIP | UNKNOWN | KILLED
>
> KILLED might be site-specific, although I could argue that it can be
> generally defined as "executed by the test runner for any reason
> (timeout, etc)"
>
I must have missed your other followup.
We have to define the meanings, but I think we are largely in agreement
here.
Your definition of KILLED looks good. Examples could be timeout,
resources unavailable, whatever -- the significance of KILLED is 1) we
tried to run it, 2) we couldn't finish for reasons that may have nothing
to do with the actual test.
I must have missed how you view SKIP. One definition that has been
discussed is "the test execution environment decided not to run (or
finish) this test because it is not relevant to the environment we are
executing in". Is that what you have in mind here?
I have another status DISABLED which means "the user directed the test
system not to run this test".
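As a sketch, the vocabulary discussed so far could be captured as a
Python Enum. The status names come from the thread; the class name,
values, and layout are mine, not part of either proposal:

```python
from enum import Enum

class ResultStatus(Enum):
    # Statuses from the proposed result protocol
    PASS = "pass"          # test ran and succeeded
    FAIL = "fail"          # test ran and failed
    ERROR = "error"        # test could not produce a pass/fail result
    SKIP = "skip"          # environment decided the test is not relevant
    UNKNOWN = "unknown"    # test ran, but we have no status for it
    KILLED = "killed"      # runner stopped the test (timeout, resources, ...)
    # Additional status from my system
    DISABLED = "disabled"  # user directed the system not to run this test
```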
I also have MISSING, which means we did not receive a result for this
test. Ordinarily, you would not write a test result with
status=missing, but it appears when you process the data. And, of
course, a database export could save this status into a result file. I
think this is fundamentally different from UNKNOWN because unknown
implies that we don't have a status, while MISSING means that we did not
receive a test report.
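To illustrate the distinction: MISSING is typically synthesized by the
processor, not written by the runner. A minimal sketch, assuming a
processor that compares the list of expected tests against the results
it actually received (function and test names here are hypothetical):

```python
def add_missing(expected_tests, received):
    """Return a full result map, marking tests with no report as 'missing'."""
    results = dict(received)
    for name in expected_tests:
        if name not in results:
            # No result file ever mentioned this test
            results[name] = "missing"
    return results

# test_c produced no report at all, so the processor marks it missing
reports = {"test_a": "pass", "test_b": "killed"}
full = add_missing(["test_a", "test_b", "test_c"], reports)
```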
I'm a little unclear on UNKNOWN. It means 1) we did run the test, 2)
we do not know the status that resulted, 3) but not because there was an
error. Does your system do this, or did you just include it for
completeness?
Mark S.
More information about the testing-in-python mailing list