[TIP] Determine test status in tearDown method

Olemis Lang olemis at gmail.com
Wed Oct 21 06:06:16 PDT 2009


On 10/20/09, Robert Collins <robertc at robertcollins.net> wrote:
> On Tue, 2009-10-20 at 16:34 -0500, Olemis Lang wrote:
>
>  > > I have a draft of an API to add logging data to the failure output, but
>  > > haven't yet drafted the API for test writers to provide that data.
>  > >
>  >
>  > I think something like this would be awesome, especially for only
>  > services and hosted apps (e.g. in the cloud ;o)
>  >
>  > ... but I don't see the relationship with the original question
>
> Jesse wants to find out if a test errored, *so as to provide logging data
>  for errors*. The relationship is that if you can just provide the
>  logging data without making the test output be cluttered on success,
>  then Jesse may not care about whether the test failed or not.
>

Ah! Well, now I see. I agree on this point.

BTW, I always had the impression that something weird was happening
there. I mean, `tearDown` is just for cleaning up resources related to
the test. If somebody needs to *add extra logging to the test result*,
then I don't think that should be handled in the `tearDown` method. If
a failure is detected while executing `tearDown`, an exception should
be raised so that the test runner is able to know that something went
wrong and include an error entry in the active test result (that's the
way the standard protocol works). But anyway, I don't know the details
of this specific case.

The only thing I can mention so far is that once upon a time I needed
to keep track of some conditions that were not precisely errors, but
didn't look like total success either (a fuzzy measure of test
success, more or less ;o). I was benchmarking the support provided by
some DBC (design-by-contract) implementations for Python, and
sometimes I wanted a result like, e.g., «it is consistent with
contract semantics but doesn't support custom messages to describe the
failure». What I did was to add support for warnings in instances of
test cases. Following that model, anybody can keep track of such
warnings by issuing a call like `tc.warn(msg, ...)`, and you could do
it anywhere, even inside the `tearDown` method.
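A rough sketch of that model might look like the following (the
`WarningTestCase` class, the `test_warnings` list, and the contract
test are all my own invented names, reconstructed from the description
above, not the actual benchmark code):

```python
import unittest

class WarningTestCase(unittest.TestCase):
    """TestCase that records non-fatal warnings alongside the result."""

    def setUp(self):
        self.test_warnings = []

    def warn(self, msg, *args):
        # Record the condition without failing the test.
        self.test_warnings.append(msg % args if args else msg)

class ContractTest(WarningTestCase):
    def test_contract_semantics(self):
        # A "fuzzy" outcome: the SUT behaves correctly, but with a
        # noted limitation -- the test still passes.
        self.warn("consistent with contract semantics, but custom "
                  "failure messages are unsupported")

result = unittest.TestResult()
case = ContractTest('test_contract_semantics')
case.run(result)
print(result.wasSuccessful(), case.test_warnings)
```

The point is that the warning describes a property of the SUT, so the
test stays green while the extra information is still collected for
reporting.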

But even in this case, I don't think that querying whether a test
case has failed is the right way to emit such warnings. IMO it is
better to express such conditions using elements found in the SUT,
instead of elements found in the testing framework.

CMIIW anyway

-- 
Regards,

Olemis.

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

