[TIP] test results tracking, etc...

Douglas Philips dgou at mac.com
Sun Apr 5 08:40:45 PDT 2009

On or about 2009 Apr 5, at 10:38 AM, Michael Foord indited:
> We use a custom TestResult with unittest to push results into a  
> database whilst the tests are still running. Once the results are in  
> a database building custom reporting tools is much easier.

Understood. From the discussions at PyCon there seemed to be interest  
in having a common results format across many testing tools so that  
"we" could build up a suite/eco-system of result processing tools that  
would be decoupled from the particular tool(s) being used to generate  
the results.

I've thought about using a database in the manner you suggest, but I  
have to admit some concern over the coupling there too. Not just in  
terms of "SQL" noise (to be fast and loose with terminology), but also  
in the potential fragility of what would happen to my test runner if  
the reporting mechanism were to hang or break.
Our current test log output format was designed to be easily  
flat-file-processable after the fact. Reality is that we haven't (yet)  
written any of those tools. My instinct for simplicity says that I want  
to keep the regression test runner simple, and be able to plug it into  
any number of back ends. One of the things that I don't like about  
unittest is that (at least in 2.4.4) there is a lot of intertangled  
hair between the components; the text test runner does too much. (I've  
heard rumors that there is unittest work on the cusp of being  
released, but I don't (yet) know what the result of that work is.)

My impulse is to see if there is leverage in having a simple (TAP-  
like?) output format which can then be plugged into test result  
processing pipelines. One of those pipelines could start with  
injecting the results into a database, but I'm leery of having that be  
part and parcel of the test running infrastructure itself.
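To make the pipeline idea concrete, here is a toy sketch of a TAP-like emitter and one downstream consumer. The line format loosely follows TAP ("1..N" plan, then "ok"/"not ok" lines); the function names and the summary shape are my own assumptions, not a proposal for the actual format.

```python
def emit(results):
    """Render (name, passed) pairs as TAP-ish lines the runner would print."""
    lines = ["1..%d" % len(results)]  # TAP plan line: how many tests follow
    for i, (name, passed) in enumerate(results, 1):
        lines.append("%s %d - %s" % ("ok" if passed else "not ok", i, name))
    return lines


def consume(lines):
    """One possible pipeline stage: fold TAP-ish lines into a pass/fail tally.

    Another stage could just as easily insert each line into a database;
    the runner neither knows nor cares which consumers are attached.
    """
    summary = {"pass": 0, "fail": 0}
    for line in lines:
        if line.startswith("ok "):
            summary["pass"] += 1
        elif line.startswith("not ok "):
            summary["fail"] += 1
    return summary


tap = emit([("test_parse", True), ("test_render", False)])
summary = consume(tap)
```

The point is the decoupling: the runner only ever writes flat lines, and if a downstream consumer hangs or breaks, the runner itself is unaffected.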


More information about the testing-in-python mailing list