[TIP] decorators in testing: functional benchmarking & cloning
olemis at gmail.com
Mon Apr 6 06:33:29 PDT 2009
On Sat, Apr 4, 2009 at 7:48 AM, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> Any other useful decorators provided by the xUnit frameworks?
The second and final (at least for a while) example of using test
decorators is the one that makes the «functional benchmarking» I
mentioned in my previous message possible. I would like to tell you a
little bit more about it so that you really understand why it is
necessary in our case.
CASPy _ is a module that allows one to specify contracts to be
satisfied by Python classes. It is based on function decorators ...
However, there are other solutions out there for this, and even a
deferred PEP. At the time I was improving it and adding further
features, I was also learning how testing works in Python, so it was
something of an experiment. Beyond that, I realized that if the
library was to be seriously considered (I have even written some kind
of PEP, don't remember where it is now ...), I had to state clearly
its benefits with respect to the other «competitors». Therefore:
1- I had to identify some concrete scenarios where contracts could be
used, and they should cover relevant and representative use cases
for the different features involved in contract declarations, and
2- they could be used to test whether the implementation was right ...
3- ... but also whether other libraries satisfied contract
semantics. In the end, contract semantics are the same no matter
how the contracts are declared. But doing so ...
4- ... the module had to be impartial. I mean, perhaps I was doing
something wrong or didn't understand a specific scenario.
5- Beyond all this, I thought it would be very nice to have experts
take a look at the code and just ask them ... «do you think
that after declaring these contracts the class should do all this?».
There are a lot of these people using Java & Eiffel; I don't know
about Python. So the test suite had to be readable enough that a
person not aware of testing tricks & details could take a look
and assess whether the contracts were correctly declared and also
whether the affected classes behaved as stated in the tests.
Therefore (I mention the requirements involved in parentheses):
- I wrote some classes with contracts using the style supported by
CASPy. I started with some examples offered by JMSAssert (with
permission, of course) ... and later I included a lot more classes.
After all this was done I wrote classes _ doing the same thing, but
using other styles -PyDBC, ipdbc, PEP 316- (1,2,3,5).
- Having all this done, I wrote doctests to express how these
classes had to behave to satisfy contract semantics *DISREGARDING*
the style employed to do so. (2,3,4,5)
- There were some other test cases (a few, actually ...) written
using unittest ... so I implemented the loaders & classes in
dutest to easily retrieve all these examples and test cases at once.
- With all this done, there is a single test suite which has to be
run against the classes written for each and every style
involved. Hmmmmm? The solution is to wrap the resulting tests
using a test decorator. When the whole test is run, it behaves as
follows:
* It clones the tests (... yes !!! GoF's prototype pattern ;)
* It connects the suite with the target classes for the specific
style under test, so that the examples make use of them.
* It performs the test ...
* ... and reports a failure only if framework-specific tests have failed.
And you may be wondering ... what is all this about? Well, doing so:
- The test is completely impartial ... we are executing the same code
snippets to verify the same semantics ...
- Besides, it scales ... if another similar lib is built, you only
need to reproduce the same contract-definition scenarios in a
separate module using the new style. The expected results, and the
complexity involved in building the test suite (cloning and so on ;)
are transparent ... code reuse, separation of concerns, and similar
benefits ...
- And the only thing needed in the test code to get all this done is
to wrap the tests with an instance of the test decorator.
- ... and the test code itself scales and can be reused ... if I need
to run another test suite several times following the same
procedure, the only thing needed is to wrap it using an instance
of the test decorator.
- More ... ;)
In fact this is an example of the «advantages» of using dutest to
combine doctest & unittest.
Conclusions ... test decorators make it possible to add (certainly)
useful functionality to TestCase which is not provided by default,
without rewriting it from scratch, using the decorator pattern
instead; which in turn is a dynamic way to add functionality to a
class without needing inheritance (e.g. at run-time ;).
That's what patterns are all about ... writing great code that can
be reused ...
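To make that run-time point concrete, here is a minimal sketch of the
«functional benchmarking» idea itself: a decorator that adds timing to
an existing suite without subclassing any test. The names are my own
invention, not taken from dutest or any real library:

```python
import time
import unittest

class BenchmarkingDecorator(unittest.TestSuite):
    """Adds timing around a wrapped suite at run-time, leaving the
    tests themselves untouched (decorator pattern, no inheritance)."""

    def __init__(self, suite):
        super().__init__()
        self._suite = suite
        self.elapsed = None              # filled in after run()

    def run(self, result, debug=False):
        start = time.perf_counter()
        self._suite.run(result)          # delegate to the wrapped suite
        self.elapsed = time.perf_counter() - start
        return result

class Fast(unittest.TestCase):
    def test_noop(self):
        self.assertTrue(True)

bench = BenchmarkingDecorator(unittest.TestSuite([Fast('test_noop')]))
result = unittest.TestResult()
bench.run(result)
print(result.testsRun, bench.elapsed >= 0)  # 1 True
```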
I'm sure you can find multiple use cases for test decorators in Java.
They didn't include them in JUnit just for fun ... they must have had
very good reasons supported by strong arguments. Perhaps they are
useful in Python ... maybe not. They are for me ... ;o)
PS: I recently reread Meszaros' book and found that this use case may
be related to a few testing patterns ... what t(he)ll, I see patterns
everywhere ... Am I going crazy or something? XD
Besides, I also found this pattern:
- Setup Decorator (447): We wrap the test suite with a Decorator
that sets up the shared test fixture before running the tests and
tears it down after all the tests are done.
... which undoubtedly seems to be a test decorator ... i.e. another
(abstract) use case ... d;^)x-
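A minimal sketch of that Setup Decorator in unittest terms might look
like this (the set_up/tear_down hook names are my own, not Meszaros'
exact interface):

```python
import unittest

class SetupDecorator(unittest.TestSuite):
    """Builds the shared fixture once before the wrapped tests run,
    and tears it down once after all of them are done."""

    def __init__(self, suite, set_up, tear_down):
        super().__init__()
        self._suite = suite
        self._set_up, self._tear_down = set_up, tear_down

    def run(self, result, debug=False):
        fixture = self._set_up()          # shared fixture, built once
        try:
            self._suite.run(result)
        finally:
            self._tear_down(fixture)      # torn down even on failure
        return result

events = []
class Uses(unittest.TestCase):
    def test_fixture_ready(self):
        self.assertIn('setup', events)    # fixture exists before tests run

suite = SetupDecorator(
    unittest.TestSuite([Uses('test_fixture_ready')]),
    set_up=lambda: events.append('setup'),
    tear_down=lambda fx: events.append('teardown'),
)
result = unittest.TestResult()
suite.run(result)
print(events, result.wasSuccessful())  # ['setup', 'teardown'] True
```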
..  oop.dbc module
..  oop.test package (see modules exdbc_*.py)
Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/
I don't like Django templates ...