[TIP] Result protocol / problems being solved

Robert Collins robertc at robertcollins.net
Tue Apr 14 03:28:44 PDT 2009

On Tue, 2009-04-14 at 11:13 +0100, Michael Foord wrote:

> It looks good to me. I'm a new committer so I still need the nod from 
> another developer. Once I have pushed out fresh Mock and ConfigObj 
> releases I'll return to unittest issues.
> I'd like to just apply:
> * startTestRun / stopTestRun
> * a keyword argument to main (TestProgram) *not* to call sys.exit
> * addCleanup (with the fix to use while / pop instead of iterating 
> through reversed, and with a final resolution to the order debate)
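The while / pop approach mentioned in that list item can be sketched roughly
as below. This is an illustrative stand-in, not the actual patch: popping
from the list (rather than iterating over reversed()) means cleanups
registered *during* cleanup still get run, and everything runs in LIFO order.

```python
class TestCaseWithCleanups:
    """Illustrative sketch of addCleanup semantics (not the stdlib code)."""

    def __init__(self):
        self._cleanups = []

    def addCleanup(self, function, *args, **kwargs):
        # Register a function to run after the test body finishes.
        self._cleanups.append((function, args, kwargs))

    def doCleanups(self):
        # Pop rather than iterate over reversed(): cleanups appended while
        # this loop is running are still picked up, in LIFO order.
        while self._cleanups:
            function, args, kwargs = self._cleanups.pop()
            function(*args, **kwargs)
```

So registering cleanups A then B runs B first, then A.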

Cool. I'm still strongly on 'run after', but honestly, make a decision
and land it.

> I'd also like to review the 'removeTest' method patch. I like the 
> functionality but am not sure about the patch.

I haven't looked at this - I have no strong opinion about it. Would it
be useful for me to review it?

> After this I'd like to do a proper implementation of the discovery 
> stuff. There is still one decision that needs to be made (not discussed 
> so far) - should the top level directory be treated as a package? At the 
> moment it is treated as a directory and all modules loaded separately. 
> It should probably be checked for an __init__.py first and treated as a 
> package if it exists.

Perhaps we can recast it slightly - if it is 'discover from a name' then
the question of the top level being a directory or a package is moot:

    import name
    start_dirs = name.__path__
    (loop here)

That seems to remove the need to take such a decision at all.
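Concretely, the recast above might look something like this. A sketch only,
using importlib.import_module for illustration (the function name and shape
here are made up, not a proposed API): a dotted name that imports as a
package supplies its __path__ entries as the discovery start directories,
while a plain module has no __path__ and is simply loaded as-is.

```python
import importlib


def start_dirs_for(name):
    """Resolve a dotted name to discovery start directories.

    Packages expose __path__ (a list of directories to walk);
    plain modules do not, so they yield no start dirs and would
    just be loaded directly by the discovery loop.
    """
    module = importlib.import_module(name)
    return list(getattr(module, '__path__', []))
```

Discovery then loops over whatever directories come back, and the
top-level-package question never arises.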

> After that I would like to turn unittest and its tests into a package. 
> I'm hoping to do all of this in time to get it into 2.7.

I'd like to clean up the skip support somewhat; the ClassTestSuite thing
is odd - do you (or anyone here) know what the motivation behind it is?
AFAICT it is just to report 'a skipped module' as one skip rather than
one per test (which is IMO a misfeature, as test totals won't add up
properly then).

> One other issue - at the moment the only way to use a custom TestResult 
> is to override the '_makeResult' method (off the top of my head - may 
> not be quite the correct method name) of TestRunners. Would it make 
> sense for the runTests method to take an optional result argument?

Well, TextTestRunner requires a TextTestResult, which is a little more
than a TestResult.

I'd like to see a runner that uses only the methods the top level
TestResult interface provides, and have more of the output live on the
TestResult. This would make supplying a runner, or a runner factory,
more useful and flexible.

