[TIP] hackish generative tests (was: Re: Generating tests, II (was: Re: Meta-test methods...))

Doug Philips dgou at mac.com
Tue May 5 15:44:49 PDT 2009


On or about Saturday, May 02, 2009, at 02:39PM, holger krekel indited:
>that [eliminating setUp/tearDown] is the ultimate goal.  there are some use cases,
>however, that still make using setup/teardown more useful.

I find funcargs interesting because it is, what do you kids call it these days?... dependency injection.
Much more of a functional-programming style than having a setUp method accrete stuff into fields of 'self'. :)
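
To make the contrast concrete, here is a tiny sketch. The first half is plain stdlib unittest; the second half leans on py.test's built-in 'tmpdir' funcarg. The test names and the config file are made up:

    import os, tempfile, unittest

    # setUp style: the resource accretes onto 'self'
    class ConfigTest(unittest.TestCase):
        def setUp(self):
            self.workdir = tempfile.mkdtemp()

        def test_writes_config(self):
            path = os.path.join(self.workdir, "app.cfg")
            with open(path, "w") as f:
                f.write("debug=1")
            self.assertTrue(os.path.exists(path))

    # funcarg style: the dependency is named in the signature and injected
    def test_writes_config_funcarg(tmpdir):
        cfg = tmpdir.join("app.cfg")
        cfg.write("debug=1")
        assert cfg.check()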
 

>When it comes to parametrizing Test methods with different
>app classes, subclassing a base set of tests and
>overriding/configuring things in a setup/teardown is
>currently more straightforward than using funcargs.

Agreed.


>... Given such an additional mechanism, re-running the same test
>method with multiple different values for the same funcarg would
>provide this "variations on a theme".  This issue
>
>    http://bitbucket.org/hpk42/py-trunk/issue/2/next-generation-generative-tests
>
>is the result of a discussion with Samuele Pedroni who wants
>to run tests with multiple different browsers and is currently
>using the yield-way.  The issue contains a code sketch
>how a plugin could trigger the running of the same test
>function with multiple browsers via funcargs. 

Interesting. Almost the inside-out of what I had to do over the past two days. Your funcarg'y way passes in the function to be parameterized and gets a None/sequence-of-functions back out. What I did was simply return a sequence from a test-creating method. (Details below.)

We've grown our testing infrastructure around unittest's setUp/test<foo>/tearDown point of view.
We have 500+ test methods and growing, and, for better or worse, self.<kitchen-sink-of-helper-methods> available to our test writers.
I really like the funcargs approach, and it might work in my setting. It would require a gradual transition from self.power_cycle(..) to power_device.power_cycle(), where power_device is a funcargs-provided parameter. :)
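
Roughly what I picture, with PowerDevice as a stand-in stub for our real power-control object (none of these names exist yet on our side), and the funcarg factory spelled the way py.test expects:

    class PowerDevice:
        """Stand-in for our real power-control object."""
        def power_cycle(self):
            self.cycled = True
        def ping(self):
            return getattr(self, "cycled", False)

    # py.test calls this to build the 'power_device' argument for any
    # test function that asks for it by name.
    def pytest_funcarg__power_device(request):
        return PowerDevice()

    # After the transition: no 'self', the device is injected.
    def test_power_cycle_recovers(power_device):
        power_device.power_cycle()
        assert power_device.ping()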

The question I have is "when" are the funcargs processed?

I needed to convert one monster-lithic test with about 200 "scenarios" into a test-creating method that returned 200 scenario-customized tests to be run.

I could not do that at module load or test discovery time because the device personality data that I need to use to create those scenarios is not available until later in the process.
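
The shape of the conversion looks roughly like this. load_personality() is a made-up stand-in for the query that only works once the device is actually up, the checks are toy ones, and in reality these are methods on our test classes rather than bare functions:

    def load_personality():
        """Stand-in: only answerable after the device is up."""
        return [{"mode": "a", "speed": 10}, {"mode": "b", "speed": 100}]

    def make_tests_scenarios():
        """Test-creating method: build one test callable per scenario."""
        tests = []
        for scenario in load_personality():
            def test_one(scenario=scenario):
                # the real scenario-specific checks go here
                assert scenario["speed"] > 0
            test_one.__name__ = "test_scenario_%(mode)s" % scenario
            tests.append(test_one)
        return tests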

(I could have redone how we spin up our regressions so that we discover and load tests later, but that would have been a lot of work for marginal return. It reminds me way too much of my 80286 days with "load this TSR last", "No, load -this- TSR last" dependency nightmares.)

I made some very minor changes instead (rough sketch below):
1. In addition to discovering methods with the prefix 'test', I also discover methods with the prefix 'make_tests' (and keep track of which kind each one is).
2. I arranged for the unittest harness to preserve and pass back method return values.
3. If the test I just ran was a test-creating test, and it PASSed, then I recursively execute the tests it returned.
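
Sketched as bare functions, using make_tests_scenarios from the sketch above (our real harness is unittest-based and has reporting, timeouts, etc., so treat this as the idea only):

    def run_tests(callables):
        """Run callables as tests; when a 'make_tests*' callable PASSes,
        recursively run whatever it returned."""
        for test in callables:
            try:
                returned = test()
            except Exception as exc:
                print("FAIL %s: %s" % (test.__name__, exc))
                continue
            print("PASS %s" % test.__name__)
            if test.__name__.startswith("make_tests") and returned:
                run_tests(returned)   # very late dynamic creation

    run_tests([make_tests_scenarios])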

This is "very late dynamic creation of test methods". Which is what I need.
However, it completely blows away the notion that you can count tests "up front". Doing that wasn't something we needed to do anyways, so no practical loss. :)
In my current endeavor I do not pass in a function/method to be customized, rather I let the test creating method simply return what it needs.
I really like how funcargs feels for writing tests. I suspect/fear that the problem I will have with it is not the essence of the mechanism, but rather the choice of -when- the mechanism will be triggered. :)

Hope this makes sense. Now I'm off for a while (dinner, not a book :) ).
-Doug



