[TIP] test results tracking, etc...

Michael Foord fuzzyman at voidspace.org.uk
Sun Apr 5 07:38:38 PDT 2009


Douglas Philips wrote:
> On or about 2009 Apr 5, at 3:08 AM, Robert Collins indited:
>   
>> You may find subunit (http://launchpad.net/subunit) useful then. It
>> doesn't support inconclusive, but there are a few ways you could get it
>> to do so, for instance by using the 'raise a special exception and have
>> your result object handle it specially' pattern.
>>
>
> Ok, I'm a bit confused. subunit seems to be all about using more
> processes to run the tests, which seems like overkill just to get a few
> new status codes.
> One bit I didn't mention in my background notes: we link with some C++
> glue code to talk to the custom drivers needed to talk to our hardware.
> For various technical reasons it would be painful to have subprocesses
> (even just one) because of initial startup discovery. I don't like it,
> but it is not a part of our testing environment over which we can exert
> control.
>
> ...
>
>   
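To make Robert's "special exception" suggestion concrete, here is a minimal
sketch. The names InconclusiveTest and InconclusiveAwareResult are
hypothetical, not part of unittest or subunit; the only real machinery used
is unittest's addError hook.

    import unittest

    class InconclusiveTest(Exception):
        """Raised by a test to signal an inconclusive outcome (hypothetical)."""

    class InconclusiveAwareResult(unittest.TestResult):
        """Result object that diverts the special exception into its own bucket."""

        def __init__(self):
            unittest.TestResult.__init__(self)
            self.inconclusive = []

        def addError(self, test, err):
            exc_type, exc_value, _ = err
            if issubclass(exc_type, InconclusiveTest):
                # Record inconclusive tests separately rather than as errors.
                self.inconclusive.append((test, str(exc_value)))
            else:
                unittest.TestResult.addError(self, test, err)

A test would then raise InconclusiveTest("device not ready") and the runner
would pass an instance of this result class to the suite's run() method.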

We use a custom TestResult with unittest to push results into a database
whilst the tests are still running. Once the results are in a database,
building custom reporting tools is much easier.
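For anyone wanting to try the same approach, here is a minimal sketch of
such a result class. The thread doesn't describe the actual schema or
database, so sqlite3 and the single results table below are purely
illustrative assumptions.

    import sqlite3
    import time
    import traceback
    import unittest

    class DatabaseResult(unittest.TestResult):
        """Push each test outcome into a database as soon as it is known."""

        def __init__(self, db_path="results.db"):
            unittest.TestResult.__init__(self)
            self.conn = sqlite3.connect(db_path)
            self.conn.execute(
                "CREATE TABLE IF NOT EXISTS results "
                "(test TEXT, status TEXT, detail TEXT, finished REAL)")

        def _record(self, test, status, detail=""):
            self.conn.execute(
                "INSERT INTO results VALUES (?, ?, ?, ?)",
                (str(test), status, detail, time.time()))
            self.conn.commit()  # commit per test so reports see live data

        def addSuccess(self, test):
            unittest.TestResult.addSuccess(self, test)
            self._record(test, "pass")

        def addFailure(self, test, err):
            unittest.TestResult.addFailure(self, test, err)
            self._record(test, "fail", "".join(traceback.format_exception(*err)))

        def addError(self, test, err):
            unittest.TestResult.addError(self, test, err)
            self._record(test, "error", "".join(traceback.format_exception(*err)))

Running a suite with it is just suite.run(DatabaseResult()); reporting tools
can then query the table while the run is still in progress.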

Michael

>> I think this should be done purely in Python, with the use of subunit
>> or something like it to get external results into Python, where we can
>> use nice things like any of the existing GUI runners to report on
>> results.
>>
>
> Well, our tests themselves and all of our framework are written in
> Python, so I'm not sure what you're suggesting here.
>
> A bit more background. We started out our testing effort on just one
> product and have grown to support more. As Titus mentions in his talk,
> this kind of organic growth has its issues. In agile-speak, we've done
> the simplest things that could possibly work. What we haven't always
> done is refactor, so we've built up technical debt. One of those debts,
> the itch-to-be-scratched du jour, is test results reporting. We're in
> the process of dehairballing some of the existing code.
>
> For example, when we started out, the devices we were testing were
> brand new, and when some of our tests failed, they would put the
> devices into an unrecoverable state. Since that pretty much torpedoes a
> regression run, we had to adopt a defensive strategy: if a test is a
> known failure, don't run it, but do report it with an expected-failure
> status. Now that we're testing mature devices with a programmatic way
> of resetting to a "factory state", we are free to run all the tests all
> the time. Hence our current itch: yes, we have 10 test failures on this
> run, but no, we are no longer tracking, in the testing infrastructure,
> which of those are expected to fail on this device. We want to
> "separate concerns" and decouple the test status tracking from the test
> regression running.
>
> -Doug
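One way to get the separation Doug is after, sketched with made-up names:
keep the per-device known-failures list outside the test code entirely and
apply it only at reporting time, so the regression run always executes
everything. The test name and loading details below are invented for
illustration.

    # Hypothetical reporting step: the run records raw pass/fail results
    # (e.g. via a result object like the ones above); a per-device list of
    # known failures, maintained separately, is consulted only when reporting.

    KNOWN_FAILURES = set([        # would normally be loaded from a file or DB
        "test_firmware_upgrade (regression.device.UpgradeTests)",
    ])

    def classify(test_name, raw_status):
        """Map a raw status to a reporting status using the known-failures list."""
        if raw_status == "fail" and test_name in KNOWN_FAILURES:
            return "expected-failure"
        if raw_status == "pass" and test_name in KNOWN_FAILURES:
            return "unexpected-pass"   # a known failure that now passes
        return raw_status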


-- 
http://www.ironpythoninaction.com/
http://www.voidspace.org.uk/blog




