[TIP] Meta-test methods...

Scott David Daniels Scott.Daniels at Acm.Org
Sun Apr 26 07:42:51 PDT 2009


Douglas Philips wrote:
> Ok, enough background... :)
> In the context of one test method with hundreds of scenarios,  
> unless every scenario passes, we lose information about the nature,  
> severity, and extent of the problems/failures. Maybe the first few  
> devices we tested had dozens of scenario failures in a particular  
> test. A new version of the device might fix some issues and have fewer  
> (scenario) failures. However, that test's results won't look any  
> different. The worst case is if only some of the original issues  
> are fixed and new issues are introduced, so that even the total  
> number of scenario failures might remain the same. For these kinds  
> of tests we have to  
> turn on tracing and manually check the results. :(
>   
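
For concreteness, the pattern being described is roughly the
following (a minimal sketch with made-up scenario data; the real
suites drive devices, not arithmetic):

    import unittest

    # Hypothetical scenario table; the real one has hundreds of rows.
    SCENARIOS = [
        ("scenario_a", 1, 1),
        ("scenario_b", 2, 4),
        ("scenario_c", 3, 9),
    ]

    class DeviceTest(unittest.TestCase):
        def test_all_scenarios(self):
            # The first failing assert aborts the loop, so later
            # scenarios never even run, and the one method just reads
            # FAIL no matter how many scenarios are actually broken.
            for name, given, expected in SCENARIOS:
                self.assertEqual(given * given, expected,
                                 "scenario %s failed" % name)

    if __name__ == "__main__":
        unittest.main()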
Sounds like the "test" is really a test class, while each scenario is
a test case. A test should test one thing and be "independent."
Naming the scenarios separately, so they can be reported separately,
seems to be part of the issue here.  Is there a way to run only a few
of the scenario elements without running the entire set?
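
One way to get that with plain unittest is to generate one named
test method per scenario from a table, so each scenario passes or
fails on its own in the report. A rough sketch (scenario table and
names made up):

    import unittest

    # Hypothetical scenario table standing in for the real device
    # scenarios.
    SCENARIOS = [
        ("scenario_a", 1, 1),
        ("scenario_b", 2, 4),
        ("scenario_c", 3, 9),
    ]

    class DeviceTest(unittest.TestCase):
        pass

    def _make_test(given, expected):
        # Factory function so each generated method captures its own
        # scenario values rather than the loop variables.
        def test(self):
            self.assertEqual(given * given, expected)
        return test

    # Attach one independently named, independently reported test
    # method per scenario.
    for _name, _given, _expected in SCENARIOS:
        setattr(DeviceTest, "test_%s" % _name,
                _make_test(_given, _expected))

    if __name__ == "__main__":
        unittest.main()

And since every scenario then has its own name, unittest.main()'s
command line can select a subset, e.g. (module name hypothetical):

    python device_tests.py DeviceTest.test_scenario_b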

--Scott David Daniels
Scott.Daniels at Acm.Org



