[TIP] including (or not) tests within your package
alfredodeza at gmail.com
Tue Aug 10 06:23:00 PDT 2010
On Tue, Aug 10, 2010 at 8:21 AM, Olemis Lang <olemis at gmail.com> wrote:
> On Mon, Aug 9, 2010 at 10:37 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> > On Fri, Jul 30, 2010 at 5:01 AM, Olemis Lang <olemis at gmail.com> wrote:
> >> Probably this is a matter of flavors. The fact is that I've not seen
> >> yet (or probably skipped ...) a strong reason (besides my current
> >> laziness ;o) in favor of *always* including tests inside a package
> >> prepared to run in production systems.
> > This bug in the production, official Debian/Ubuntu delivered version of ATLAS,
> > https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510
> > was detected because numpy ships its test suite along with it, and someone ran
> > import numpy
> > numpy.test()
> > on a production system, and boom.
> I'm not saying that testing code in production systems is a crazy idea,
> I'm just saying that, if that's needed, tests can be installed at any
> time and, once done with testing, removed again.
> > Yes, it could have been found
> > otherwise, but it was very, very useful for the numpy developers to
> > know this, and also to be able to ask users of other similar systems
> > to also run numpy.test() until we could understand where it was coming
> > from.
> > Production systems are the ones where you want to know that things
> > work correctly, right? So they should be the ones where you actually
> > test your code.
> Sorry, but I don't agree. I'd rather say «they should be the ones
> where you actually test your code *once something is going wrong & the
> problem cannot be identified in a test environment*».
> > We *always* ship the tests with all projects I am involved with, and
> > work very hard to provide really easy ways for users to run the full
> > test suite and provide us with useful feedback when something goes
> > wrong.
> You see? «when something goes wrong.»
> > I can't count the number of bugs we've found in this manner, simply
> > because they only appear with some combination of compiler, hardware
> > endianness, library versions, etc, which developers would *never* have
> > access to, but some user at some big government lab happens to run on.
> In theory, that should be tackled by setting up a slave or VM with
> similar characteristics in a CI environment. If you don't have the
> means to do so (which I understand perfectly ;o) & the user is really
> interested in confirming that everything is OK with future versions,
> then IMO the right thing to do should be making (him | her) part of
> the (testing | integration | release) process by setting up a testing
> environment as similar as possible to the production env so as to
> test'em (e.g. more or less that's what we do with Trac XmlRpcPlugin
> ;o).
You can't recommend setting up a VM to be 'as similar as possible to the
production environment'. If you have a server A, model 1 with OS B version 1,
then a VM *will not* be able to replicate *fully* that environment.
Having an environment for testing that differs from production *is not OK*.
There is no way to 'have the means to' replicate every single combination of
hardware and operating systems out there, hence the need to include tests.
What you can do/have is a testing environment with VMs all over, which is
fine. But that doesn't mean you will be able to cover all your bases.
I fully agree with Fernando, Jesse, Michael and all the others that
recommended including tests with your package/module
and making it easy for end users to be able to run your tests.
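One common way to make that easy is to expose a package-level test() entry point, the way numpy.test() does, so a user on any production box can run the bundled suite with one call. Here's a minimal sketch; the package and test names are hypothetical, and a real package would typically use unittest discovery over its shipped tests/ directory rather than an inline TestCase:

```python
import unittest


# Stand-in for a test module that would normally ship inside the
# package's "tests" subpackage (e.g. a hypothetical "mypkg.tests").
class TestBasicInvariants(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)


def test(verbosity=1):
    """Run the package's bundled tests; return the unittest result object.

    In a real package you would build the suite with
    unittest.defaultTestLoader.discover("path/to/mypkg/tests") instead
    of loading a single TestCase.
    """
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestBasicInvariants)
    runner = unittest.TextTestRunner(verbosity=verbosity)
    return runner.run(suite)
```

An end user then only needs `import mypkg; mypkg.test()` to give the developers the kind of field report Fernando describes, and the returned result object lets scripts check `result.wasSuccessful()` programmatically.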
More information about the testing-in-python mailing list