[TIP] Meta-test methods...

Douglas Philips dgou at mac.com
Sat Apr 25 07:52:39 PDT 2009


On or about Fri, 2009-04-24 at 20:11 -0400, Victoria G. Laidler indited:
> So when I test, I want to make sure my tests probe most
> regions of the potential parameter space of input data. I use
> generated tests to run exactly the same test with varying
> inputs.

My problem space is different, but the testing approach seems similar.  
Details below.


On or about 2009 Apr 24, at 8:30 PM, Robert Collins indited:
> I use a form of dependency injection to do this: I write one test that
> takes as parameters the stuff it needs to test. With unittest, I  
> wrote a
> little library 'testscenarios' to make it nice and easy to use. Hmm,
> just remembered, I need to upload it to pypi.
>
> Anyhow - https://launchpad.net/testscenarios is the homepage, README  
> is
> http://bazaar.launchpad.net/~lifeless/testscenarios/trunk/annotate/head:/README

... <example elided -dwp>

> and so on. I think this is nicer than yielding inner functions - but
> it's arguably just a different spelling.
...

Interesting.
I'm not sure I've grokked it all, but from what I understand those
scenarios are all "predefined" before the tests themselves run?
Looking at lines 210-240 of the README, it seems that the scenario
generation can, like any Python module-level code, compute the
scenarios as the module is loaded? So it is dynamic in the sense of
querying the environment in which the test code is being loaded?
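
For my own understanding, here is a minimal sketch of how I read that
pattern. I'm assuming testscenarios' documented class-level
'scenarios' list of (name, attribute-dict) pairs and its
TestWithScenarios base class; the DEVICE_MODELS environment variable
and the model names are made up for illustration:

    import os
    import unittest

    import testscenarios

    # Computed at import time -- plain Python, so the scenario list can
    # query whatever environment the test module is loaded into.
    # (DEVICE_MODELS is a hypothetical variable, not part of testscenarios.)
    _models = os.environ.get('DEVICE_MODELS', 'small,large').split(',')

    class TestPersonality(testscenarios.TestWithScenarios):
        # Each scenario is a (name, attribute-dict) pair; the attributes
        # are set on the test instance for each generated run.
        scenarios = [(m, {'model': m}) for m in _models]

        def test_model_name_is_known(self):
            self.assertTrue(self.model in ('small', 'large'))

    if __name__ == '__main__':
        unittest.main()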

(Our Python testing environments are simpler in the sense that we  
always have the needed Python imports available.)

Our devices under test have an extensive set of potential
capabilities, each of variable capacity, which we've abstracted into
"device personalities"... i.e. table-driven test code, with lots of
tables. :)

Many of our tests are simple and don't need Scenarios.

Our most involved tests have our own "create a list of scenarios"
phase (rough sketch after this list):
     Sometimes those scenarios are predetermined, and then your
precomputable Scenarios would be a good fit.
     Sometimes those scenarios depend on the capacity of the specific
device-under-test's capabilities. Devices with a given capability can
span from small/cheap to large/expensive, so the actual range to be
tested is not known until the device itself can be interrogated. The
most "interesting" tests involve interactions between the different
capabilities along the varying ranges of those capabilities.
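
Here is a rough sketch of what that phase looks like, with a
hypothetical 'device' object standing in for our real interrogation
API; only the shape of the code matters:

    import itertools

    def sample_levels(low, high):
        # Hypothetical sampling policy: endpoints plus the midpoint.
        return sorted(set([low, (low + high) // 2, high]))

    def build_scenarios(device):
        """Build (name, attribute-dict) scenarios from a live device."""
        scenarios = []
        for capability in device.capabilities():
            # The usable range is only known once the device answers.
            low, high = device.capacity_range(capability)
            for level in sample_levels(low, high):
                scenarios.append(('%s-%d' % (capability, level),
                                  {'capability': capability,
                                   'level': level}))
        return scenarios

    def cross_scenarios(device):
        """The "interesting" tests: interactions between capabilities."""
        per_capability = [
            [(cap, level)
             for level in sample_levels(*device.capacity_range(cap))]
            for cap in device.capabilities()]
        # Cross product of the per-capability settings.
        return list(itertools.product(*per_capability))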

Ok, enough background... :)

The problem I need to solve is that a test method only reports one
status. In the context of one test method with hundreds of scenarios,
unless every scenario passes, we lose information about the nature,
severity, and extent of the problems/failures. Maybe the first few
devices we tested had dozens of scenario failures in a particular
test. A new version of the device might fix some issues and have
fewer (scenario) failures, yet that test's results won't look any
different. The worst case is when only some of the original issues
are fixed and new ones are introduced, so even the total number of
scenario failures stays the same. For these kinds of tests we have to
turn on tracing and manually check the results. :(
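
To make the shape of that concrete, a caricature (the scenario list
and the device check are made up, and half of them "fail" on purpose):

    import unittest

    # Hypothetical stand-ins for the real device interrogation code.
    SCENARIOS = [('chan-%d' % n, {'channels': n}) for n in range(1, 201)]

    def device_handles(**params):
        return params['channels'] < 100  # pretend half the scenarios fail

    class TestCapabilityMatrix(unittest.TestCase):

        def test_all_scenarios(self):
            # Two hundred scenarios, one reported status: the loop stops
            # at the first failing assert, so the run shows a single
            # failure no matter how many scenarios are actually broken,
            # or which ones changed since the last device revision.
            for name, params in SCENARIOS:
                self.assertTrue(device_handles(**params), name)

    if __name__ == '__main__':
        unittest.main()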

I doubt that Titus or Vicki has this exact problem :). I think Vicki
has a similar problem, if I understood her... I'm not sure I
understand Titus' use cases, nor do I know what drove py.test and
nose to add their support for dynamic test method creation.

I remain hopeful that we have a common enough problem that we can  
share a common solution. :)

-Doug
