[TIP] Finding a way to support test data (logs) in unittest [WAS] Determine test status in tearDown method

Olemis Lang olemis at gmail.com
Fri Oct 23 06:13:32 PDT 2009

On Thu, Oct 22, 2009 at 3:39 PM, Robert Collins
<robertc at robertcollins.net> wrote:
> On Thu, 2009-10-22 at 08:21 -0500, Olemis Lang wrote:
>> Separate thread because this discussion goes far beyond the scope of
>> the original question ... ;o)
>> On Tue, Oct 20, 2009 at 4:18 PM, Robert Collins
>> <robertc at robertcollins.net> wrote:
>> > On Tue, 2009-10-20 at 09:28 -0700, Jesse Thompson wrote:
>> >
>> [...]
>> >
>> > In my blog post I also note that we really should provide access to the
>> > result object to 'user code' - code like your tearDown.
>> >
>> -1 ... you're just writing functional tests so the goal here should be
>> to handle just the part of the testing (artifacts | process) that's
>> related to the SUT and its expected behavior.
> This isn't about function||unit tests, it's about the connection between
> the code of the test, and the reporting function.

At least the way I see it (and implemented it), the main concern is
annotating test results with data beyond the scope of the testing
framework. The details of the test code don't matter; for example, if
test cases are generated, then annotations should be bound to each
particular test that is run, rather than to the code of the generator
or the generic test case. Whether they get «reported» or not is a
subsequent step. (BTW, perhaps we are talking about the same subject,
and it's just a matter of vocabulary ...)
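To make this concrete, here is a minimal sketch of binding annotations to each generated test *instance* rather than to the generator. The `annotate` method and `annotations` dict are hypothetical names for illustration, not part of unittest:

```python
import unittest

class AnnotatedTestCase(unittest.TestCase):
    """Hypothetical mixin: each test instance carries its own annotations."""

    def setUp(self):
        # one dict per test *instance*, not per test class or generator
        self.annotations = {}

    def annotate(self, name, value):
        self.annotations[name] = value

# Generated test cases: annotations end up on each generated instance.
def make_case(param):
    class GeneratedCase(AnnotatedTestCase):
        def test_param(self):
            self.annotate('param', param)
            self.assertIsInstance(param, int)
    return GeneratedCase('test_param')

cases = [make_case(n) for n in (1, 2, 3)]
for case in cases:
    result = unittest.TestResult()
    case.run(result)

# Each instance keeps its own annotation, unrelated to the generator code.
print([c.annotations for c in cases])  # → [{'param': 1}, {'param': 2}, {'param': 3}]
```

Whether and how those per-instance annotations get reported is then a separate, later step, as described above.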

> At the moment the
> TestCase acts as a poor proxy: it has an unextendable interface - one
> must reimplement 'run' to change it.

That's an implementation detail; focus on the concept. TestCase
represents the minimal (mostly functional, but not necessarily ;o)
step or condition that needs to be asserted. At least in my case, I
want to bind the extra-test data to the test case instance, so as to
be able to relate the «extra data» to the test case.

Besides, other people could always add their own implementation on top
of it and make it look the way they want.

I still think that providing access to TestResult inside TestCase
methods (or anywhere else where the behavior of the SUT is to be
asserted ;o) is not a good idea ...

>> > This suggests two ways of getting the logging data - buffering/callback
>> > on the test, and direct to the result.
>> >
>> > buffering on the test would look something like:
>> > def tearDown(self):
>> >    # defaults to text/plain;charset=utf8 for unicode objects
>> >    self.attach('log', logfile_string)
>> >
>> > Then the test would have to accumulate all the attached things and pass
>> > them to the result object:
>> > def attach(self, name, content):
>> >    if isinstance(content, unicode):
>> >         orig_content = content
>> >         content = Content(
>> >             ContentType('text', 'plain', {'charset':'utf8'}),
>> >             lambda:[orig_content.encode('utf8')])
>> >    self.__current_details[name] = content
>> >
>> At least for warnings you'll need to accumulate logs (i.e. warning
>> messages), not override the previous message
>> -1 for `__current_details`. IMO `_current_details` or a shorter name is
>> better (unless it is exposed as a property)
> The __current_details indicates private:

That's exactly what I was saying. IMHO that shouldn't be a private
field but a «protected» one.
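For the accumulation point above, here is a runnable sketch of a buffering `attach` that collects repeated attachments under one name instead of overriding them. The `_current_details` dict and `attach` method are modeled on the quoted example, not a real unittest API:

```python
import unittest

class DetailedTestCase(unittest.TestCase):
    """Sketch: attach() accumulates a list of contents per name."""

    def setUp(self):
        self._current_details = {}   # «protected», not name-mangled

    def attach(self, name, content):
        # accumulate instead of overriding the previous attachment
        self._current_details.setdefault(name, []).append(content)

class Example(DetailedTestCase):
    def test_warnings(self):
        self.attach('warnings', 'first warning')
        self.attach('warnings', 'second warning')
        self.assertEqual(len(self._current_details['warnings']), 2)

case = Example('test_warnings')
result = unittest.TestResult()
case.run(result)
print(result.wasSuccessful())  # → True
```

Conceptually this matches the "logs detail that is itself an accumulator" idea: the value stored under a name is a container, so repeated warnings pile up rather than clobber each other.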

> in this example approach the
> TestCase would provide all the attached details to the TestResult at
> once - no other code would access __current_details *at all*. I
> mentioned accumulation later, but conceptually you'd attach a logs
> detail that is itself an accumulator.

Hmmmm ... I don't get it, but you're probably right.

>> > Talking directly to the result object would look something like:
>> > def tearDown(self, result):
>> >    result.attach_details('log', logfile_string)
>> >
>> -1 ... because :
>>   - data should be bound to the test case instance involved in the test.
>>     sometimes this is important.
> I don't understand this - this doesn't change the responsibility for
> data binding, it changes /how things are reported/. Currently the only
> thing reportable against a test case is the stacktrace.

Let's say that you do things this way ... How could you know which
test case (instance) was the one that appended such data in the
`tearDown` method?

>>   - IMHO it's a very bad idea and a very poor design decision to allow
>>     access to TestResult objects in methods of TestCase
> Why? Details!

I think the reasons for this can be found in the papers written by the
authors (and designers ;o) of the whole xUnit paradigm.

>> > Perhaps we should instead 'just' add the
>> > 'attach_details' API,
>> Name's toooooo long. Why not just `tc.log.xxx` or `tc.xxx`, thus
>> providing an interface compatible with the logging module?
> Because logging is one special case of the needs I've seen.
>> This would
>> imply that :
>>   - interface is well known and std
>>   - low learning curve but ...
>>   - should be compatible with TestCase semantics (e.g. tc.log.error
>>     should behave just like tc.fail)
>>   - level of detail could be used to (specify | filter) the data that
>>     will be stored during the test (e.g. if log level is INFO then
>>     DEBUG calls will be ignored or at least will have no impact on
>>     the state of the test result) ...
>>   - but error level *MUST* always be reported even if CRITICAL is set
>>   - logging module supports GoF command pattern thus log entries
>>     could be stored directly inside instances of TestResult
>> BTW, why reinvent the wheel?
> Because logging isn't sufficient,

Why ?

> and I'm trying to ensure the interface
> for *extenders* is good - as long as that is in place, we can offer a
> plethora of different interfaces on TestCase, compatibly.

I get your idea. So the logging thing would be an extension on top of
the low-level test data API?
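One way such an extension might look (all names here are assumptions, not an agreed API): a stdlib `logging.Logger` exposed on the test case, with a handler that stores records in instance-bound details, so the logging interface sits on top of the low-level data API and the level filters what gets recorded:

```python
import logging
import unittest

class _DetailsHandler(logging.Handler):
    """Store formatted log records in a test's details dict."""
    def __init__(self, details):
        super().__init__()
        self.details = details

    def emit(self, record):
        self.details.setdefault('log', []).append(self.format(record))

class LoggingTestCase(unittest.TestCase):
    """Sketch: tc.log.* as an extension over a low-level details dict."""

    def setUp(self):
        self.details = {}                       # low-level data API
        self.log = logging.Logger(self.id())    # per-instance logger
        self.log.setLevel(logging.INFO)         # DEBUG calls are filtered out
        self.log.addHandler(_DetailsHandler(self.details))

class Example(LoggingTestCase):
    def test_levels(self):
        self.log.debug('ignored')     # below INFO: no impact on details
        self.log.info('kept')
        self.assertEqual(self.details['log'], ['kept'])

case = Example('test_levels')
result = unittest.TestResult()
case.run(result)
print(result.wasSuccessful())  # → True
```

Making `tc.log.error` behave like `tc.fail`, as proposed earlier in the thread, could then be done by having the handler call `self.fail` for records at ERROR level and above.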



Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

Featured article:
Gasol-ina para España  -

More information about the testing-in-python mailing list