[TIP] adding custom tests to a unittest test run

holger krekel holger at merlinux.eu
Thu May 17 00:00:47 PDT 2012


On Wed, May 16, 2012 at 20:08 -0700, Chris Jerdonek wrote:
> On Wed, May 16, 2012 at 6:27 AM, Marius Gedminas <marius at gedmin.as> wrote:
> > On Tue, May 15, 2012 at 09:25:28PM -0700, Chris Jerdonek wrote:
> >> I often have the need to add extra tests to my unittest.main() test
> >> runs.  The tests I need to add are tests that depend on data obtained
> >> at run-time, e.g. user-provided command-line options.
> >
> > I'm used to the following idiom for this:
> >
> >    def test_suite():
> >        tests = unittest.TestSuite(...)
> >        if some_condition:
> >            tests.addTests(...)
> >        if some_condition:
> >            tests.addTests(...)
> >        return tests
> >
> >    if __name__ == '__main__':
> >        unittest.main(defaultTest='test_suite')
> >
> > Although in real life I'll use a test runner (such as zope.testrunner)
> > that will add up all the tests returned by test_suite() in all of my
> > test modules, instead of relying on unittest.main().
> 
> I'm interested in solutions that don't require setting (and then
> reading) some kind of global state prior to loading tests.
> 
> In the solution above, the test loader will invoke test_suite()
> without arguments, so there is no way to pass data to the function
> without relying on global state (e.g. setting a variable on a module).
> 
> I see now that the load_tests protocol might be another possibility.
> unittest.main()'s calls to load_tests pass the arguments (loader,
> tests, pattern), where loader is a TestLoader instance, so perhaps the
> needed data could be attached to a loader, and the loader passed as
> the testLoader argument to unittest.main().  That solution doesn't
> seem very elegant, but it looks like it would do the job.
> 
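For what it's worth, a minimal sketch of that loader idea might look like
the following (the "extra_data" attribute and the ParamTest class are
made-up names for illustration, not part of unittest itself):

    import unittest

    class ParamTest(unittest.TestCase):
        # Filled in by load_tests() below before the tests run.
        data = None

        def test_data_present(self):
            self.assertIsNotNone(self.data)

    def load_tests(loader, tests, pattern):
        # unittest.main() calls this with the loader passed in below,
        # so anything attached to that loader is visible here.
        ParamTest.data = getattr(loader, "extra_data", None)
        return tests

    if __name__ == '__main__':
        loader = unittest.TestLoader()
        # In real use this would come from parsing the command line
        # before unittest.main() sees sys.argv; hard-coded here to keep
        # the sketch self-contained.
        loader.extra_data = "run-time value"
        unittest.main(testLoader=loader)
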
> >> So far, the best I've come up with is the below.  Is there a better
> >> solution?  What do others recommend?
> >>
> >> [Also see: https://gist.github.com/2707352 ]
> >
> > I find this code difficult to understand.
> 
> That's why I'm e-mailing this list. :) I'd like to know if there is a
> simpler, more straightforward solution I'm missing.

I think it's generally a better idea to skip tests when a configuration
or command line option is missing, rather than not collecting them at all.
IOW, it's a good idea to keep the overall number of tests stable.
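
With plain unittest that could look roughly like this (the HAVE_NETWORK
flag and the environment variable are made-up stand-ins for whatever
run-time condition you actually have):

    import os
    import unittest

    # Made-up stand-in for a run-time condition, e.g. a parsed command
    # line option or a configuration value.
    HAVE_NETWORK = os.environ.get("HAVE_NETWORK") == "1"

    class NetworkTests(unittest.TestCase):

        @unittest.skipUnless(HAVE_NETWORK, "network access not enabled")
        def test_remote_call(self):
            # Still collected and reported (as a skip) when the
            # condition is missing, so the test count stays stable.
            self.assertTrue(True)

    if __name__ == '__main__':
        unittest.main()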

Not sure whether this is of interest to you, but with the pytest runner
there is an easy way to skip tests based on command line options:

http://pytest.org/latest/example/simple.html#control-skipping-of-tests-according-to-command-line-option
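
The pattern described there boils down to a small conftest.py roughly
along these lines (the "--runslow" option and the "slow" marker are
just illustrative names):

    # conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         help="run tests marked as slow")

    def pytest_runtest_setup(item):
        # Marked tests are still collected, but get skipped unless the
        # option was given, so the overall test count stays stable.
        if 'slow' in item.keywords and not item.config.getoption("--runslow"):
            pytest.skip("need --runslow option to run")

A test - a plain function or a unittest.TestCase method - then simply
carries a @pytest.mark.slow marker.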

This also works if your tests are written using the unittest.TestCase
subclassing mechanism, since pytest's markers can be applied to TestCase
methods as well.  You can then make data available globally through the
"pytest_configure" hook or - using a pytest-specific paradigm of writing
tests - pass data locally as parameters to test functions; see this
basic example:

http://pytest.org/latest/example/simple.html#pass-different-values-to-a-test-function-depending-on-command-line-options
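
Stripped down, that example is essentially a command line option plus a
fixture handing the value to the test; the "--cmdopt" name and the
cmdopt fixture below are illustrative, and the @pytest.fixture decorator
is the spelling used by newer pytest versions:

    # conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
                         help="my option: type1 or type2")

    @pytest.fixture
    def cmdopt(request):
        # Hand the command line value to any test that asks for "cmdopt".
        return request.config.getoption("--cmdopt")

    # test_sample.py
    def test_answer(cmdopt):
        # The run-time value arrives as a plain parameter; no global
        # state involved.
        assert cmdopt in ("type1", "type2")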

best,
holger