[TIP] Result protocol / problems being solved
laidler at stsci.edu
Mon Apr 13 19:18:58 PDT 2009
I'm going to take a step yet further back and ask, what are the problems people are trying to solve here?
Here are some of the problems I've been hearing in the background and prologue to this discussion:
1. I have a skizillion tests running in varied environments and I don't want to miss small variations in the sea of output:
1A. I expected N tests to run on machine Foo but only N-1 tests ran
1B. I expected m tests to pass in package Bar, but only m-1 tests passed
Implications for result protocol: Should contain enough information so that something downstream can collect tests into buckets of interest (package/project, host/execution environment) and do the appropriate accounting to catch the small variations.
Alternatively/additionally: the protocol itself should support aggregate results, as well as single test results.
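To make the accounting concrete, here is a minimal sketch (field names are illustrative, not from any existing protocol) of per-test records that carry enough context to be bucketed by project and host, so a downstream collector can catch the "N-1 tests ran on machine Foo" case:

```python
# Hypothetical result records carrying project/host context so a
# collector can bucket them and compare counts against expectations.
from collections import Counter

results = [
    {"test": "test_a", "project": "Bar", "host": "Foo", "status": "pass"},
    {"test": "test_b", "project": "Bar", "host": "Foo", "status": "fail"},
    {"test": "test_c", "project": "Baz", "host": "Qux", "status": "pass"},
]

# Count results per (project, host) bucket.
counts = Counter((r["project"], r["host"]) for r in results)

# Expected counts per bucket; here we expected 2 tests on host Qux.
expected = {("Bar", "Foo"): 2, ("Baz", "Qux"): 2}

# Any bucket that came up short is a small variation worth flagging.
missing = {bucket: n - counts.get(bucket, 0)
           for bucket, n in expected.items()
           if counts.get(bucket, 0) < n}
print(missing)  # → {('Baz', 'Qux'): 1}
```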
2. I want to be able to easily distinguish the following:
2A. Broken software
2B. Broken tests
2C. Broken execution environments
and distinguish them from deliberately (by human or machine) omitted tests.
Implications for result protocol: Status should distinguish between Fail (implies broken software) and Error (implies broken test or execution environment), and possibly further between "error during test" and "error attempting to set up test". The latter distinction might separate a broken test from a broken environment, although the environment-related bookkeeping from case 1 might suffice for that.
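One way to picture that status vocabulary is as a small enumeration; the names here are a sketch of the distinctions above, not a proposal for the actual protocol tokens:

```python
# Hypothetical status taxonomy separating broken software (FAIL) from
# broken tests (ERROR), broken environments (SETUP_ERROR), and
# deliberate omission (OMITTED).
from enum import Enum

class Status(Enum):
    PASS = "pass"                # software behaved as expected
    FAIL = "fail"                # assertion failed: implies broken software
    ERROR = "error"              # exception during the test: likely broken test
    SETUP_ERROR = "setup_error"  # could not even start: likely broken environment
    OMITTED = "omitted"          # deliberately skipped by human or machine

def implies_broken_software(s):
    """Only a genuine Fail points at the software under test."""
    return s is Status.FAIL
```

The payoff of keeping these distinct is that a dashboard can route FAIL to developers, ERROR to test authors, and SETUP_ERROR to whoever owns the build machines.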
3. I want to handle all my tests (unit->acceptance) for all my projects, written in any language and using any test framework, with the same result protocol.
Implications: Think language-agnostically and outside the unittest box.
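One language-agnostic option is a line-oriented text encoding that any framework in any language can emit and parse. JSON-per-line is shown here purely as a sketch; this is not a claim about what Pandokia or any existing protocol actually uses:

```python
# Sketch: one result per line, serialized as JSON. Any language with a
# JSON library can produce or consume this; field names are illustrative.
import json

record = {"test_name": "pkg.mod.test_x", "status": "pass",
          "host": "builder1", "duration": 0.02}

# Emit one line per result...
line = json.dumps(record, sort_keys=True)

# ...and any consumer, in any language, can round-trip it.
parsed = json.loads(line)
```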
4. I want to be able to find any stuff associated with a given test.
Implications: allow a way to associate stuff with a test result.
5. I want to be able to do statistics on my test results.
Implications: include start and stop times along with host/execution environment; allow a way to associate arbitrary quantities of interest with a test result.
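With start/stop timestamps on each record, simple statistics (durations, slowest test, total wall time) fall out directly. A sketch, with illustrative field names and timestamp format:

```python
# Sketch: derive per-test durations from start/stop timestamps carried
# in each result record, then pick out the slowest test.
from datetime import datetime

results = [
    {"test": "test_fast", "start": "2009-04-13T19:00:00", "stop": "2009-04-13T19:00:01"},
    {"test": "test_slow", "start": "2009-04-13T19:00:01", "stop": "2009-04-13T19:00:05"},
]

FMT = "%Y-%m-%dT%H:%M:%S"

def duration(r):
    """Wall-clock seconds between a record's start and stop times."""
    return (datetime.strptime(r["stop"], FMT)
            - datetime.strptime(r["start"], FMT)).total_seconds()

durations = {r["test"]: duration(r) for r in results}
slowest = max(durations, key=durations.get)
print(slowest, durations[slowest])  # → test_slow 4.0
```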
6. I want to be able to figure out why this bunch of tests failed.
Implications: Allow a way to associate arbitrary descriptive information with test results. (This is the case Pandokia's test definition, result, & configuration attributes are trying to solve.)
7. I want to be able to figure out why this specific test failed.
Implications: Allow a way to associate stdout/stderr with the test result.
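A sketch of what "associate stdout/stderr with the result" can look like from the producing side: run the test with its output captured, and attach whatever it printed (plus the traceback on failure) to the record. The record shape is hypothetical; the capture mechanism is standard Python:

```python
# Sketch: capture a test's stdout/stderr and attach them to its result
# record, so a human debugging one specific failure can see the output.
import io
import contextlib
import traceback

def run_and_record(test_fn, name):
    out, err = io.StringIO(), io.StringIO()
    result = {"test": name, "status": "pass"}
    with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
        try:
            test_fn()
        except AssertionError:
            result["status"] = "fail"
            result["traceback"] = traceback.format_exc()
    result["stdout"] = out.getvalue()
    result["stderr"] = err.getvalue()
    return result

def failing_test():
    print("about to check the answer")
    assert 1 + 1 == 3

rec = run_and_record(failing_test, "failing_test")
```

Here `rec` ends up with status "fail", the captured print output, and the assertion traceback, all in one record.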
Did I miss anything?