[TIP] Determine test status in tearDown method

Robert Collins robertc at robertcollins.net
Tue Oct 20 14:18:03 PDT 2009


On Tue, 2009-10-20 at 09:28 -0700, Jesse Thompson wrote:
> I'm having a difficult time figuring out how to determine a test's
> status from the tearDown method, do any of you know how I might go
> about doing so?
> Example: While running a test some unhandled exception is thrown,
> which causes the test to fail, I would like to know about it in the
> tearDown method. Or if a test fails due to an assertion failure, I
> would still like to know about it in tearDown so that I can attach
> some extra logging information to the test's failure output. 

This is the sort of case I have in mind with the changes I'm calling for
in
http://rbtcollins.wordpress.com/2009/09/23/python-unittest-api-time-to-fix-it/

I have a draft of an API to add logging data to the failure output, but
haven't yet drafted the API for test writers to provide that data.

I'm guessing here: you plan on printing the extra logging data to
stdout/stderr, and then nose (or whatever you're running with) will show
it appropriately? And it's because you're sending it to stderr/stdout
*that you care* whether the test failed or not? (Because the test
framework can't hide it for you when the test succeeds.)

Now, I may be wrong, and you really do care about whether the test
failed or not - in that case it should be added to the bugs we need to
fix, because even a test decorator can't fully ascertain that (tests
can blow up in cleanups or tearDowns).

There are two interesting conversations to me, right now:
 - what you can do today
 - what we'd like to be able to do

For today:
Grab testtools; it includes a TestResultDecorator. Subclass that
and get your test runner to use YourSubClass(original_result).
When a test fails, outcome methods like addError/addFailure are called.

you want (for both of those outcomes):
def addError(self, test, error):
    try:
        log_data = test.get_log_data()
        # Do what you want here - e.g. print to stderr, mangle the error
        # (which is an exc_info tuple) to contain it...
    except AttributeError:
        pass
    self.result.addError(test, error)

This will trigger before your tearDown (unless you raise exceptions in
tearDown/cleanups).


For the future:
My current draft API changes addError() to take a details dict rather
than a simple exc_info tuple. (It also gives the details dict to
addSuccess - I know times when looking at a log on a successful test
would be really useful). 

In my blog post I also note that we really should provide access to the
result object to 'user code' - code like your tearDown.

This suggests two ways of getting the logging data - buffering/callback
on the test, and direct to the result.

buffering on the test would look something like:
def tearDown(self):
    # defaults to text/plain;charset=utf8 for unicode objects
    self.attach('log', logfile_string)

Then the test would have to accumulate all the attached things and pass
them to the result object:
def attach(self, name, content):
    if isinstance(content, unicode):
        orig_content = content
        content = Content(
            ContentType('text', 'plain', {'charset': 'utf8'}),
            lambda: [orig_content.encode('utf8')])
    self.__current_details[name] = content


Talking directly to the result object would look something like this:
def tearDown(self, result):
    result.attach_details('log', logfile_string)

with the attach_details method looking very similar; it would have to
gather up multiple attachments until the test outcome was known - and
then be able to choose whether to show them or not.

This suggests to me that making outcomes (e.g. 'addFailure') richer is
actually the wrong direction. Perhaps we should instead 'just' add the
'attach_details' API, and make sure that errors etc are not rendered
until 'stopTest' is called (thus signifying that all the attachments
possible have been attached). A separate discussion can be had about
whether we can/should get rid of the error/reason fields on many
outcomes - with the attachment facility they would be redundant and
limiting.
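A minimal sketch of that defer-until-stopTest idea, to show why it lets
attachments made in tearDown reach the output (attach_details and the
buffering behaviour here are illustrative naming, not an agreed API):

```python
import io
import sys


class DeferredRenderingResult:
    """Buffers the outcome and attachments per test; nothing is
    rendered until stopTest, so details attached after the failure
    was recorded (e.g. in tearDown) still make it into the output."""

    def __init__(self, stream=sys.stdout):
        self.stream = stream
        self._details = {}
        self._outcome = ("success", None)

    def startTest(self, test):
        self._details = {}
        self._outcome = ("success", None)

    def attach_details(self, name, content):
        self._details[name] = content

    def addFailure(self, test, err):
        # Record the outcome but render nothing yet.
        self._outcome = ("failure", err)

    def stopTest(self, test):
        status, _err = self._outcome
        if status != "success":
            # Only now render the failure plus everything attached,
            # including attachments made after addFailure was called.
            self.stream.write("FAIL: %s\n" % test)
            for name, content in self._details.items():
                self.stream.write("--- %s ---\n%s\n" % (name, content))


out = io.StringIO()
result = DeferredRenderingResult(out)
result.startTest("test_example")
result.addFailure("test_example", ("AssertionError", "boom", None))
result.attach_details("log", "attached from tearDown")  # after the failure
result.stopTest("test_example")
print(out.getvalue())
```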

-Rob


More information about the testing-in-python mailing list