[TIP] including (or not) tests within your package

Olemis Lang olemis at gmail.com
Tue Aug 10 05:21:36 PDT 2010


On Mon, Aug 9, 2010 at 10:37 PM, Fernando Perez <fperez.net at gmail.com> wrote:
> On Fri, Jul 30, 2010 at 5:01 AM, Olemis Lang <olemis at gmail.com> wrote:
>> Probably this is a matter of flavors. The fact is that I've not seen
>> yet (or probably skipped ...) a strong reason (besides my current
>> laziness ;o) in favor of *always* including tests inside a package
>> prepared to run in production systems.
>
> This bug in the production, official Debian/Ubuntu delivered version of ATLAS:
>
> https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510
>
> was detected because numpy ships its test suite along with it, and someone ran
>
> import numpy
> numpy.test()
>
> on a production system, and boom.

I'm not saying that testing code on production systems is a crazy
idea; I'm just saying that, if that's needed, tests can be installed
at any time and, once the testing is done, removed again.
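For instance, a package can still offer a numpy-style `test()` entry
point while keeping the suite itself optional and removable. A minimal
sketch (the `mypkg` name and the separately installable `mypkg.tests`
subpackage are hypothetical, not how numpy is actually laid out):

```python
# mypkg/__init__.py sketch: expose a test() hook like numpy's, but
# degrade gracefully when the optional test subpackage has been
# uninstalled from the production box.
import importlib
import unittest


def test(verbosity=1):
    """Run the bundled suite if mypkg.tests is installed, else say so."""
    try:
        tests = importlib.import_module("mypkg.tests")
    except ImportError:
        print("mypkg.tests is not installed; install the tests "
              "package on this machine to run the suite.")
        return None
    suite = unittest.defaultTestLoader.loadTestsFromModule(tests)
    return unittest.TextTestRunner(verbosity=verbosity).run(suite)
```

An admin could then install the test subpackage only while chasing a
problem, run `mypkg.test()`, and remove it afterwards.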

> Yes, it could have been found
> otherwise, but it was very, very useful for the numpy developers to
> know this, and also to be able to ask users of other similar systems
> to also run numpy.test() until we could understand where it was coming
> from.
>

+1

> Production systems are the ones where you want to know that things
> work correctly, right? So they should be the ones where you actually
> test your code.
>

Sorry, but I don't agree. I'd rather say «they should be the ones
where you actually test your code *once something is going wrong and
the problem cannot be identified in a test environment*».

> We *always* ship the tests with all projects I am involved with, and
> work very hard to provide really easy ways for users to run the full
> test suite and provide us with useful feedback when something goes
> wrong.
>

You see? «when something goes wrong.»
;o)

> I can't count the number of bugs we've found in this manner, simply
> because they only appear with some combination of compiler, hardware
> endianness, library versions, etc, which developers would *never* have
> access to, but some user at some big government lab happens to run on.
>

In theory, that should be tackled by setting up a slave or VM with
similar characteristics in a CI environment. If you don't have the
means to do so (which I understand perfectly ;o) and the user is
really interested in confirming that everything is OK with future
versions, then IMO the right thing to do is to make him or her part
of the (testing | integration | release) process by setting up a
testing environment as similar as possible to the production one
(e.g. that's more or less what we do with the Trac XmlRpcPlugin ;o).

-- 
Regards,

Olemis.

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/


More information about the testing-in-python mailing list