[TIP] hackish generative tests (was: Re: Generating tests, II (was: Re: Meta-test methods...))

holger krekel holger at merlinux.eu
Wed May 6 02:12:20 PDT 2009


Hi Doug,

thanks for the feedback - I added a reference to your mail in
the issue.  More comments inline. 

On Tue, May 05, 2009 at 18:44 -0400, Doug Philips wrote:
> On or about Saturday, May 02, 2009, at 02:39PM, holger krekel indited:
> >... Given such an additional mechanism, re-running the same test 
> >method with multiple different values for the same funcarg would
> >provide these "variations on a theme".  This issue 
> >
> >    http://bitbucket.org/hpk42/py-trunk/issue/2/next-generation-generative-tests
> >
> >is the result of a discussion with Samuele Pedroni who wants
> >to run tests with multiple different browsers and is currently
> >using the yield-way.  The issue contains a code sketch
> >how a plugin could trigger the running of the same test
> >function with multiple browsers via funcargs. 
> 
> Interesting. Almost the inverse of what I had to do in the past two days. Your funcarg'y way passes in the function to be parameterized and gets None or a sequence of functions out. What I did was just return a sequence from a test-creating method. (Details below.)
>
> 
> We've grown our testing infrastructure around unittest's setUp/test<foo>/tearDown point of view.
> We have 500+ test methods and growing, and, for better or worse, self.<kitchen-sink-of-helper-methods> available to our test writers.
> I really like the funcargs approach, and it might work in my setting. It would require a gradual transition from self.power_cycle(..) to power_device.power_cycle(), where power_device is a funcarg-provided parameter. :)
> 
> The question I have is: "when" are the funcargs processed?

funcargs are instantiated at setup time for a test function, after collection. 
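
For concreteness, a minimal sketch of a funcarg provider as py.test
supports it today, reusing the power_device name from your example
(the PowerDevice class is a made-up stand-in for whatever drives the
real hardware):

    # conftest.py
    class PowerDevice:                        # hypothetical device wrapper
        def power_cycle(self):
            pass                              # talk to the hardware here
        def release(self):
            pass                              # free the device here

    def pytest_funcarg__power_device(request):
        device = PowerDevice()                # created at setup time,
        request.addfinalizer(device.release)  # finalized after the test
        return device

    # test_power.py
    def test_power_cycle(power_device):
        power_device.power_cycle()

The provider only runs once a collected test actually names
"power_device" as an argument, i.e. after collection is done.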
 
> I needed to convert one monster-lithic test with about 200 "scenarios" into a test-creating method that returned 200 scenario-customized tests to be run.
> 
> I could not do that at module load or test discovery time because the device personality data that I need to use to create those scenarios is not available until later in the process.
> 
> (I could have redone how we spin-up our regressions so that we discover and load tests later, but that would have been a lot of work for marginal return. It reminds me way too much of my 80286 days with "load this TSR last", "No, load -this- TSR last" dependency nightmares.)
> 
> I made some very minor changes instead.
> In addition to discovering methods that have the prefix 'test', I also discover methods that have the prefix 'make_tests' (and I notice the difference between them).
> I arranged for the unittest harness to preserve and pass back method return values.
> If the test I just ran was a test-creating test, and it PASSed, then I recursively execute the tests that it returned.
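
If I understand correctly, the test-side shape is roughly this (a
sketch only - the harness changes are not shown, and the scenario
data and names below are invented):

    import unittest

    class DeviceTests(unittest.TestCase):
        def make_tests_scenarios(self):
            # discovered and run like a test method; if it PASSes,
            # the (modified) harness executes each returned callable
            # as a further test
            scenarios = [("cold_boot", 0), ("warm_boot", 1)]  # invented;
            # the real data only exists once the device is talking
            def make_one(name, mode):
                def run():
                    self.assertTrue(mode in (0, 1), name)
                return run
            return [make_one(name, mode) for name, mode in scenarios]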

do "make_tests" prepare the environment/device/perform setup so that 
your returned-back ("generated") tests can run in that given setup? 

do you have multiple "make_tests" that are specific for a
test-case / test file? 

I think it makes sense to have some "make_test"-like way of
generating tests - maybe "make_test_something" would be called
for each "test_something" and be passed the "function" object
that you can already see in the issue?  At least I'd rather
introduce a new naming convention than reuse the "test_*" naming ;) 
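
Purely as a sketch of what that could look like (nothing like this
exists yet; the addcall-style API is an assumption borrowed from the
issue's code sketch, and the browser setup is invented):

    # hypothetical - no such hook exists in py.test today
    def make_test_login(function):
        # would be called once for the collected "test_login",
        # receiving the collected function object; here it schedules
        # one extra run per browser via an assumed addcall-style API
        for name in ("firefox", "opera"):
            function.addcall(funcargs=dict(browser=name))

    def test_login(browser):
        browser.open("http://localhost/login")  # browser object is invented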

This way we'd end up with a way for plugins to generate tests
and for test code to make more tests directly.  It makes sense
to design both at the same time so they work nicely
together. 

> This is "very late dynamic creation of test methods". Which is what I need.
> However, it completely blows away the notion that you can count tests "up front". Doing that wasn't something we needed to do anyways, so no practical loss. :)

I think with the above approach one could still count tests -
although you would implicitly run any setup that "make_test_*" 
implies. 

> In my current endeavor I do not pass in a function/method to be customized; rather, I let the test-creating method simply return what it needs.
> I really like how funcargs feels for writing tests. I suspect/fear that the problem I will have with it is not the essence of the mechanism, but rather the choice of -when- the mechanism will be triggered. :)
> 
> Hope this makes sense. Now I'm off for a while (dinner, not a book :) ).

Hope you had a good dinner.  I am off to phone, doctor and lunch soon :)
cheers,
holger


> -Doug
> 

-- 
Metaprogramming, Python, Testing: http://tetamap.wordpress.com
Python, PyPy, pytest contracting: http://merlinux.eu 


