[TIP] Result protocol / problems being solved
jnoller at gmail.com
Tue Apr 14 06:45:20 PDT 2009
On Mon, Apr 13, 2009 at 10:18 PM, <laidler at stsci.edu> wrote:
> 1. I have a skizillion tests running in varied environments and I don't want to miss small variations in the sea of output:
> 1A. I expected N tests to run on machine Foo but only N-1 tests ran
> 1B. I expected m tests to pass in package Bar, but only m-1 tests passed
> Implications for result protocol: Should contain enough information so that something downstream can collect tests into buckets of interest (package/project, host/execution environment) and do the appropriate accounting to catch the small variations.
> Alternatively/additionally: the protocol itself should support aggregate results, as well as single test results.
> Implications for result protocol: Status should distinguish between Fail (implies broken software) and Error (implies broken test or execution environment); and possibly further distinguish between "error during test" and "error attempting to set up test" (might distinguish between broken test and broken environment, although it might be sufficient to use the environment-related bookkeeping from case 1 for this).
Actually, we need only define basic "result codes" and allow users to
add their own as needed. That way they can do the hair-splitting
themselves and have their internal bikeshed arguments.
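As a rough sketch of that idea (the field names and code set here are illustrative assumptions, not anything the thread has agreed on), a small fixed set of base codes with a free extension slot might look like:

```python
import json

# Illustrative baseline result codes; anything finer-grained is a
# user-defined extension layered on top (hypothetical, not a spec).
BASE_CODES = {"pass", "fail", "error", "skip"}

def make_result(test_id, code, extended_code=None):
    """Build one result record. Extended codes must still declare a
    base code so downstream tools can do plain pass/fail accounting."""
    if code not in BASE_CODES:
        raise ValueError("unknown base code: %r" % code)
    record = {"id": test_id, "status": code}
    if extended_code is not None:
        # e.g. distinguishing "error during setup" from "error in test"
        record["status_extended"] = extended_code
    return record

print(json.dumps(make_result("pkg.mod.test_foo", "error", "error:setup")))
```

The point of the design is that a dumb consumer only ever reads `status`, while a fancier one can split hairs on `status_extended` without breaking anyone else.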
> 3. I want to handle all my tests (unit->acceptance) for all my projects written in any languages and using any test framework with the same result protocol.
> Implications: Think language-agnostically and outside the unittest box.
Correct. I couldn't care less about the unittest result object (at
this moment in time) as I do not use unittest for execution.
> 6. I want to be able to figure out why this bunch of tests failed.
> Implications: Allow a way to associate arbitrary descriptive information with test results. (This is the case Pandokia's test definition, result, & configuration attributes are trying to solve.)
This is the additional: field in my JSON/YAML definition. Tests and
executors can jam in as much as they like.
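To make that concrete, here is a hedged sketch of such a record (every field name except the free-form additional mapping is an assumption of mine, and the values are invented):

```python
import json

# Hypothetical result record; the point is the free-form "additional"
# mapping, into which tests and executors can jam arbitrary
# descriptive key/value pairs for later bucketing and debugging.
result = {
    "id": "stsci.pandokia.test_wcs",
    "status": "fail",
    "additional": {
        "host": "build-machine-3",       # execution environment
        "project": "pandokia",           # bucket for accounting
        "config": "python2.6-numpy1.3",  # arbitrary descriptive info
    },
}
print(json.dumps(result, sort_keys=True))
```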
> 7. I want to be able to figure out why this specific test failed.
> Implications: Allow a way to associate stdout/stderr with the test result.
> Did I miss anything?
8. This should not require a custom wire format or a custom parser. It
should use something in widespread use (e.g. XML, JSON), it should
have cross-language support, and it should be human readable.
Other than that, the basic outline is correct. Let me also point out
that some people involved have a use case that requires parsing
results directly from a stream, such as stdout, so the protocol must
be parseable incrementally as messages arrive.
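One common way to satisfy that requirement (my assumption, not something the thread has settled on) is to emit one JSON object per line, so a consumer can pick result messages out of ordinary stdout noise as they arrive:

```python
import io
import json

def parse_result_stream(stream):
    """Yield result records from newline-delimited JSON, skipping
    interleaved non-protocol output (e.g. prints from the tests)."""
    for line in stream:
        line = line.strip()
        if not line.startswith("{"):
            continue  # ordinary stdout chatter, not a result message
        try:
            yield json.loads(line)
        except ValueError:
            continue  # partially written or corrupt line

# Simulated stdout mixing test chatter with result messages:
fake_stdout = io.StringIO(
    "setting up fixtures...\n"
    '{"id": "test_a", "status": "pass"}\n'
    "some debug print\n"
    '{"id": "test_b", "status": "fail"}\n'
)
for record in parse_result_stream(fake_stdout):
    print(record["id"], record["status"])
```

A real protocol would want a more robust framing rule than "line starts with a brace", but the sketch shows why line-oriented JSON keeps stream parsing trivial in any language.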