[TIP] including (or not) tests within your package

Fernando Perez fperez.net at gmail.com
Mon Aug 9 23:54:31 PDT 2010


On Mon, Aug 9, 2010 at 10:53 PM, Andrew Bennetts <andrew at bemusement.org> wrote:
> There's an assumption here that running ‘python ./setup.py test’ will give
> as accurate results as running tests installed on the system with the rest
> of the package.  I can think of some^Wseveral^Wmany ways that might not be
> true.  Perhaps I'm overly paranoid.

You're not; you're just being minimally cautious about disasters that
are easy to encounter.  Two common scenarios justify having tests
available beyond the setup.py stage:

1. Users without root access on a shared cluster are seeing odd
behavior.  You want to know whether or not there's a problem with
their installation.  You can ask them to run foo.test() as an
unprivileged user (a sketch of such an entry point is below), but you
perhaps can't ask them to download or install anything else.  And you
want to know *on that system*, not in some other environment they may
have root access to, so being able to test in place is critical.

2. The package is installed and its tests pass; all is fine.  Months
later, odd behavior appears.  It turns out that some obscure
dependency had a run-time library updated, and there's now a subtle,
hard-to-trigger bug in it.  You want to be able to re-run the test
suite now to reproduce this bug.

The fact that the tests passed at install time but fail now is a
clear indicator that (assuming your package wasn't updated) the
problem lies in a run-time dependency.  Imagine a bug like the one I
linked above, which was in SSE assembly in ATLAS: that's the kind of
thing for which you want a nice, clear yes/no way of knowing whether
the bug is in your camp or someone else's, because it can be a bear
to track down (just compiling ATLAS alone is a PITA, in case you're
not familiar with it, since it does extensive auto-tuning and code
auto-generation at build time).
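
To make this concrete, here's a minimal sketch of the kind of test()
entry point I mean, living in a package's __init__.py and using plain
unittest discovery (the package name 'foo' and the tests/ subdirectory
layout are assumptions on my part; numpy and scipy expose a similar
entry point, though theirs was built on nose):

    # foo/__init__.py -- hypothetical package with an in-place test runner
    import os
    import unittest

    def test(verbosity=2):
        """Run the test suite bundled with the installed package."""
        # Discover tests next to this very file, so we exercise exactly
        # the copy of the package the user is importing, not some
        # source checkout elsewhere on the machine.
        pkg_dir = os.path.dirname(os.path.abspath(__file__))
        suite = unittest.defaultTestLoader.discover(
            start_dir=os.path.join(pkg_dir, 'tests'),
            top_level_dir=pkg_dir)
        return unittest.TextTestRunner(verbosity=verbosity).run(suite)

The user on the cluster in scenario 1 then needs nothing beyond a
Python prompt:

    >>> import foo
    >>> foo.test()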


So yes, there are plenty of good reasons to make sure that a normal
user, post-install, can easily and cleanly run your tests and
copy/paste back a reasonably informative report.  We see lots of
those on the numpy/scipy lists all the time:

http://www.google.com/search?hl=en&lr=&safe=off&as_qdr=all&ie=ISO-8859-1&oe=ISO-8859-1&q=numpy.test+site%3Ahttp%3A%2F%2Fmail.scipy.org%2Fpipermail&aq=f&aqi=&aql=&oq=&gs_rfai=

http://www.google.com/search?hl=en&safe=off&ie=ISO-8859-1&oe=ISO-8859-1&q=scipy.test+site%3Ahttp%3A%2F%2Fmail.scipy.org%2Fpipermail&aq=f&aqi=&aql=&oq=&gs_rfai=

and they do help tremendously in tracking down bugs that would
otherwise be a nightmare even to know existed (as they may depend on
obscure interactions with proprietary C/Fortran compilers, for
example).  There are lots of weird software/hardware combinations out
there, so having these kinds of 'probes' into the user's system is
absolutely critical for many projects.
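
One small thing that makes those pasted reports far more useful: have
the runner print a short environment summary before the results.  A
hypothetical sketch (the function name and fields here are my own
invention; numpy.test() prints version and platform lines in the same
spirit):

    import platform
    import sys

    def report_environment():
        # Identify the interpreter and platform up front, so a pasted
        # test log can be matched to the system that produced it.
        print("Python:   %s" % sys.version.replace("\n", " "))
        print("Platform: %s" % platform.platform())
        print("Machine:  %s" % platform.machine())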

Cheers,

f


