[TIP] Finding a way to support test data (logs) in unittest [WAS] Determine test status in tearDown method

Olemis Lang olemis at gmail.com
Thu Oct 22 06:21:51 PDT 2009


Separate thread because this discussion goes far beyond the scope of
the original question ... ;o)

On Tue, Oct 20, 2009 at 4:18 PM, Robert Collins
<robertc at robertcollins.net> wrote:
> On Tue, 2009-10-20 at 09:28 -0700, Jesse Thompson wrote:
>
[...]
>
> In my blog post I also note that we really should provide access to the
> result object to 'user code' - code like your tearDown.
>

-1 ... you're just writing functional tests, so the goal here should
be to handle only the part of the testing process (and its artifacts)
that's related to the SUT and its expected behavior.

> This suggests two ways of getting the logging data - buffering/callback
> on the test, and direct to the result.
>
> buffering on the test would look something like:
> def tearDown(self):
>    # defaults to text/plain;charset=utf8 for unicode objects
>    self.attach('log', logfile_string)
>
> Then the test would have to accumulate all the attached things and pass
> them to the result object:
> def attach(self, name, content):
>     if isinstance(content, unicode):
>         orig_content = content
>         content = Content(
>             ContentType('text', 'plain', {'charset': 'utf8'}),
>             lambda: [orig_content.encode('utf8')])
>     self.__current_details[name] = content
>

At least for warnings you'll need to accumulate log entries (i.e.
warning messages), not overwrite the previous message.

-1 for `__current_details`; IMO `_current_details` or a shorter name
is better (unless it's a property).

;o)
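A minimal sketch of the accumulating variant (the mixin and its names
are my assumptions, not an existing unittest API): each detail name
maps to a list of contents, so a second attachment under the same name
is appended rather than overwriting the first.

```python
class DetailsMixin:
    """Hypothetical sketch: accumulate attachments per name
    instead of overwriting the previous one."""

    def __init__(self):
        # name -> list of attached contents, in attachment order
        self._current_details = {}

    def attach(self, name, content):
        # setdefault keeps every attachment, so repeated warning
        # messages under the same name are all preserved
        self._current_details.setdefault(name, []).append(content)
```

Mixed into a TestCase, tearDown could then call
`self.attach('log', ...)` several times without losing earlier entries.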

>
> Talking directly to the result object would look something like:
> def tearDown(self, result):
>    result.attach_details('log', logfile_string)
>

-1 ... because :

  - data should be bound to the test case instance involved in the test.
    sometimes this is important.
  - IMHO it's a very bad idea and a very poor design decision to allow
    access to TestResult objects in methods of TestCase

> with the attach_details method looking very similar; it would have to
> gather up multiple attachments until the test outcome was known - and
> then be able to choose whether to show them or not.
>

IMO, even if they are not shown, they should still be stored in
instances of TestCase (keep reading ;o)

> This suggests to me that making outcomes - e.g. 'addFailure' richer, is
> actually the wrong direction.

+1

> Perhaps we should instead 'just' add the
> 'attach_details' API,

The name's too long. Why not just `tc.log.xxx` or `tc.xxx`, thus
providing an interface compatible with the logging module? This
would imply that:

  - the interface is well known and standard
  - low learning curve, but ...
  - it should be compatible with TestCase semantics (e.g. tc.log.error
    should behave just like tc.fail)
  - the level of detail could be used to specify or filter the data
    stored during the test (e.g. if the log level is INFO then DEBUG
    calls will be ignored, or at least will have no impact on the
    state of the test result) ...
  - but the error level *MUST* always be reported, even if the level
    is set to CRITICAL
  - the logging module supports the GoF Command pattern, so log
    entries could be stored directly inside instances of TestResult

BTW, why reinvent the wheel?

;o)
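A rough sketch of such a `tc.log` interface under the assumptions
above (class and attribute names are mine, nothing here is an existing
unittest API): the logger's own level filtering drops DEBUG records at
INFO, stored records land on the test case instance, and ERROR behaves
like `tc.fail`.

```python
import logging
import unittest


class _StoreHandler(logging.Handler):
    """Store records on the test case; ERROR behaves like tc.fail."""

    def __init__(self, testcase):
        super().__init__()
        self._testcase = testcase

    def emit(self, record):
        # the logger has already applied level filtering, so e.g.
        # DEBUG calls are dropped when the level is INFO
        self._testcase.log_records.append(record)
        if record.levelno >= logging.ERROR:
            # match TestCase semantics: tc.log.error == tc.fail
            self._testcase.fail(record.getMessage())


class LoggingTestCase(unittest.TestCase):
    """Hypothetical base class exposing a per-test tc.log logger."""

    def setUp(self):
        super().setUp()
        self.log_records = []
        self.log = logging.Logger(self.id(), level=logging.INFO)
        self.log.addHandler(_StoreHandler(self))
```

A test body could then write `self.log.warning(...)` to attach data
and `self.log.error(...)` to fail, with the same well-known logging
semantics throughout.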

> and make sure that errors etc are not rendered
> until 'stopTest' is called (thus signifying that all the attachments
> possible have been attached).

I see no problem here. Logging calls just store the data, and
everything else happens as usual.

;o)

> A separate discussion can be had about
> whether we can/should get rid of the error/reason fields on many
> outcomes - with the attachment facility they would be redundant and
> limiting.
>

Perhaps, but maybe they are special enough to deserve separate
treatment. Nonetheless, they would still fit a uniform treatment
under the Command approach I mentioned above.

;o)

PS: Ufff ... I drank a single bottle of vodka this morning. What do
you think about its side effects?

:P

-- 
Regards,

Olemis.

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:
Looking for a technique to create flexible, graphical dashboards ...
- http://feedproxy.google.com/~r/TracGViz-full/~3/QO5N8AG0NnM/d6e3b3fd323d5b52



More information about the testing-in-python mailing list