[TIP] Meta-test methods...

Douglas Philips dgou at mac.com
Mon Apr 27 05:48:17 PDT 2009


On or about 2009 Apr 26, at 10:42 AM, Scott David Daniels indited:
> Douglas Philips wrote:
>> Ok, enough background... :)
>> In the context of one test method with hundreds of scenarios,
>> unless every scenario passes, we lose information about the nature,
>> severity, and extent of the problems/failures. Maybe the first few
>> devices we tested had dozens of scenario failures in a particular
>> test. A new version of the device might fix some issues and have
>> fewer (scenario) failures. However, that test's results won't look
>> any different. Worst case is if only some of the original issues are
>> fixed and new issues are introduced, and so even the total number of
>> scenario failures might remain the same. For these kinds of tests we
>> have to turn on tracing and manually check the results. :(
>>
> Sounds like the "test" is really a test class, while the scenario is
> a test case. A test should test one thing and be "independent."
> Naming the scenarios separately, so they can be reported separately,
> seems part of the issue here.  Is there a way to run only a few of the
> scenario elements without the entire set?

Currently we have no way to run just a few scenarios from a test
method; that's another aspect of the problem we're trying to solve...

This gets into questions about what distinguishes one test case from  
the next. :) The simplest thing is to have one result per test method.  
That seems good. Corollary: If you want more independent results, have  
more test methods.

That is, to me, the genesis for wanting to create test methods  
dynamically.
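
To make that concrete, here is a rough sketch of stamping out one
unittest method per scenario; run_scenario and the SCENARIOS data are
just placeholders, not our real harness:

import unittest

def run_scenario(reset_action, voltage_level):
    return True  # placeholder for the real device interaction

# Placeholder scenario data; the real values come from the device model.
SCENARIOS = [("hard_reset", "3v3"), ("hard_reset", "1v8"), ("soft_reset", "3v3")]

class DeviceTests(unittest.TestCase):
    pass

def _make_test(reset_action, voltage_level):
    def test(self):
        # One generated method per scenario, so each scenario gets its
        # own pass/fail result in the report.
        self.assertTrue(run_scenario(reset_action, voltage_level))
    return test

for reset_action, voltage_level in SCENARIOS:
    name = "test_reset_%s_at_%s" % (reset_action, voltage_level)
    setattr(DeviceTests, name, _make_test(reset_action, voltage_level))

if __name__ == "__main__":
    unittest.main()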

I don't know what py.test's and nose's use cases were, nor what anyone  
else's are...
Our scenarios are "variations on a theme". We would like to be able to  
report very fine-grained failures; in the event that a test causes a  
catastrophic device failure (i.e. failing the test turns the device  
into a brick), we would like the ability to disable running of certain  
scenarios individually.

However, right now we have something akin to:

def reset_and_voltage_test(self, reset_action, voltage_level):
    # Return a function that will test the interactions of the given
    # reset_action and voltage_level combination.
    def scenario():
        pass  # the actual device interaction goes here
    return scenario

# Using a new naming convention for meta-test-methods:
def make_tests_for_reset_and_voltage_interactions(self):
    return [self.reset_and_voltage_test(reset_action, voltage_level)
            for reset_action in self.reset_actions()
            for voltage_level in self.voltage_levels()]
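
The collection side isn't settled either; this is only a rough sketch
of what I imagine a runner doing with the make_tests_for_* convention
(collect_and_run and the naming scheme are placeholders):

def collect_and_run(test_case):
    # Expand each make_tests_for_* method on an already-constructed
    # test case instance and record one status per returned scenario.
    results = {}
    for attr in dir(test_case):
        if not attr.startswith("make_tests_for_"):
            continue
        for index, scenario in enumerate(getattr(test_case, attr)()):
            name = "%s[%d]" % (attr, index)
            try:
                scenario()
            except AssertionError as exc:
                results[name] = "FAIL: %s" % exc
            except Exception as exc:
                results[name] = "ERROR: %s" % exc
            else:
                results[name] = "ok"
    return results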

In some of our tests, the values to be tested could be known at test  
load time or test method discovery time.
In some of our tests, the values to be tested aren't known until the  
test runs and does some additional device operations (usually  
inquiries).
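
For the run-time case, nose's test generators might already be close
to what we need, since each yielded (callable, args) tuple is reported
as its own test. A rough sketch, with the discover_* and
exercise_device names standing in for the real device inquiries and
checks:

# Sketch assuming nose-style test generators; discover_reset_actions,
# discover_voltage_levels, and exercise_device are placeholders.
def test_reset_and_voltage_interactions():
    for reset_action in discover_reset_actions():
        for voltage_level in discover_voltage_levels():
            yield check_reset_and_voltage, reset_action, voltage_level

def check_reset_and_voltage(reset_action, voltage_level):
    # One assertion per scenario, so each failure is reported separately.
    assert exercise_device(reset_action, voltage_level)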

I'm open to ideas.
One thing I've thought of over the weekend is allowing one test method
to report several test statuses... that solves the "need to create new
test methods on the fly" but complicates the ability to run just a few
specific scenarios...
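
Roughly, I picture it as collecting the per-scenario outcomes inside
the one method and failing at the end with the whole breakdown. Just a
fragment of a TestCase; check_reset_and_voltage is a stand-in for the
real per-scenario check:

def test_reset_and_voltage_interactions(self):
    # Collect every scenario's outcome, then report one aggregate
    # failure that still carries the fine-grained detail.
    failures = []
    for reset_action in self.reset_actions():
        for voltage_level in self.voltage_levels():
            try:
                self.check_reset_and_voltage(reset_action, voltage_level)
            except AssertionError as exc:
                failures.append((reset_action, voltage_level, str(exc)))
    if failures:
        self.fail("%d scenario(s) failed: %r" % (len(failures), failures))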


-Doug



