[TIP] test results tracking, etc...

Douglas Philips dgou at mac.com
Sun Apr 5 07:03:15 PDT 2009


On or about 2009 Apr 5, at 3:08 AM, Robert Collins indited:
> You may find subunit (http://launchpad.net/subunit) useful then. It
> doesn't support inconclusive, but there are a few ways you could get
> it to do so, for instance by using the 'raise a special exception and
> have your result object handle it specially' pattern.
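
If I'm reading that right, the exception half of the suggestion would be
something like this rough sketch (plain unittest only; the exception and
class names here are mine, not anything subunit defines):

    import unittest

    class InconclusiveTest(Exception):
        """Raised by a test that can neither pass nor fail conclusively."""

    class InconclusiveAwareResult(unittest.TestResult):
        """Report InconclusiveTest as its own status instead of an error."""

        def __init__(self):
            unittest.TestResult.__init__(self)
            self.inconclusive = []

        def addError(self, test, err):
            # err is the (exc_type, exc_value, traceback) triple unittest
            # hands us for any non-assertion exception a test raises.
            if issubclass(err[0], InconclusiveTest):
                self.inconclusive.append((test, err))
            else:
                unittest.TestResult.addError(self, test, err)

A test would then just raise InconclusiveTest("device never reached a
testable state"), and whatever drives this result class can report the
inconclusive bucket however it likes.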

Ok, I'm a bit confused. subunit seems to be all about using more
processes to run the tests, which seems like overkill just to get a
few new status codes.
One bit I didn't mention in my background notes: we link with some
C++ glue code to talk to the custom drivers our hardware needs. For
various technical reasons, subprocesses (even just one) would be
painful because of initial startup discovery. I don't like it, but it
is not a part of our testing environment over which we can exert
control.

...

> I think this should be done purely in python, with the use of subunit
> or something like it to get external results into python, where we can
> use nice things like any of the existing gui runners to report on
> results.

Well, our tests themselves and all of our framework are written in  
Python, so I'm not sure what you're suggesting here.
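
My best guess at the "get external results into python" part, as a toy
sketch: something reads whatever an external process emits and replays
it onto a standard TestResult, so the usual Python reporters can show
it. The "name: status" line format below is made up purely for
illustration; subunit's actual protocol is richer than this.

    import unittest

    class ExternalTest(unittest.TestCase):
        """Stand-in TestCase that only carries the external test's name."""
        def __init__(self, name):
            unittest.TestCase.__init__(self, 'runTest')
            self._name = name
        def runTest(self):
            pass
        def __str__(self):
            return self._name

    def replay(lines, result):
        """Replay 'name: ok' / 'name: <reason>' lines onto a TestResult."""
        for line in lines:
            name, _, status = line.partition(':')
            test = ExternalTest(name.strip())
            result.startTest(test)
            if status.strip() == 'ok':
                result.addSuccess(test)
            else:
                # No traceback is available from the external run.
                err = (AssertionError, AssertionError(status.strip()), None)
                result.addFailure(test, err)
            result.stopTest(test)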

A bit more background. We started out our testing effort on just one
product and have grown to support more. As Titus mentions in his talk,
this kind of organic growth has its issues. In agile-speak, we've done
the simplest things that could possibly work. What we haven't always
done is refactor, so we've built up technical debt. One of those
debts, the itch-to-be-scratched du jour, is test results reporting.
We're in the process of dehairballing some of the existing code. For
example, when we started out, the devices we were testing were brand
new, and when some of our tests failed, they would put the devices
into an unrecoverable state. Since that pretty much torpedoes a
regression run, we had to adopt a defensive strategy: if a test is a
known failure, don't run it, but do report it with an expected-failure
status.

Now that we're testing mature devices with a programmatic way of
resetting them to a "factory state", we are free to run all the tests
all the time. Hence our current itch: yes, we have 10 test failures on
this run, but no, we are no longer tracking, in the testing
infrastructure, which tests are expected to fail on this device. We
want to "separate concerns" and decouple test status tracking from
test regression running.
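
One way that decoupling could look, just as a rough sketch (the file
format and all the names here are made up; it's plain unittest): keep
the per-device list of known failures outside the test code, and let
the result object rebucket matching failures at reporting time instead
of skipping them at run time.

    import unittest

    def load_known_failures(path):
        # One test id per line; lines starting with '#' are comments.
        with open(path) as f:
            return set(line.strip() for line in f
                       if line.strip() and not line.startswith('#'))

    class TrackingResult(unittest.TestResult):
        """Split failures into expected vs. unexpected at reporting time,
        so the tests themselves never carry per-device status."""

        def __init__(self, known_failures):
            unittest.TestResult.__init__(self)
            self.known_failures = known_failures   # e.g. loaded per device
            self.expected_failures = []

        def addFailure(self, test, err):
            if test.id() in self.known_failures:
                # Known to fail on this device: record it, don't fail the run.
                self.expected_failures.append((test, err))
            else:
                unittest.TestResult.addFailure(self, test, err)

That way the regression run just runs everything, and the "which of
these 10 failures are news" question is answered by a per-device list
maintained separately.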

-Doug



