[TIP] pytest and ipynb

Hans Fangohr fangohr.hans at gmail.com
Thu Jun 16 07:20:25 PDT 2016


Hi Pete and all,

some comments on the nbval tool (https://github.com/computationalmodelling/nbval):

We developed nbval with a particular use case in mind, which I'll try to outline here. We use the Jupyter Notebook to create documentation for simulation software that is implemented in Python packages. One example of such documentation (created in the notebook) is visible here: http://fidimag.readthedocs.io/en/latest/ipynb/tutorial-basics.html

Writing the documentation in the notebook is extremely convenient (certainly in comparison to writing reStructuredText for Sphinx, say, where we would have to manually include the right code examples and the figures they produce).

As we tend to work on research codes that change quickly (in this example it is this package: http://computationalmodelling.github.io/fidimag/), we also need to keep the documentation up to date, in particular when functionality or interfaces change.

The nbval tool re-executes a saved notebook, and then (i) reports whether that execution raises any exceptions and (ii) compares the output from that execution with the output stored in the notebook, reporting a failure if the two deviate.
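
For completeness: once nbval is installed as a py.test plugin, checking a notebook is a one-liner; if I remember the option name correctly it is something like

    py.test --nbval ipynb/tutorial-basics.ipynb

(the notebook path here is just an illustration; any saved .ipynb file works).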

As Pete correctly identifies: nbval is like a regression testing tool for Jupyter. It compares outputs with a previous run.

A minimal example is shown here: https://github.com/computationalmodelling/nbval/blob/master/tests/minimal_example.ipynb
where the print("Hello world!") command produces the output "Hello world!", and nbval will compare the "Hello world!" that is stored in the saved .ipynb file with the "Hello world!" that is produced when the print command is executed at test run time.
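
To illustrate the idea only (this is not nbval's actual implementation, which executes the notebook through a Jupyter kernel; I am also assuming the print cell is the first cell of an nbformat-4 notebook), the essence of the check is roughly:

    import io
    import json
    from contextlib import redirect_stdout

    # Load the saved notebook; in nbformat 4 the cells live under "cells".
    with open("tests/minimal_example.ipynb") as f:
        nb = json.load(f)

    cell = nb["cells"][0]                         # assumption: the print(...) cell
    stored = "".join(cell["outputs"][0]["text"])  # output saved with the notebook

    # Re-run the cell's source and capture what it prints now.
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec("".join(cell["source"]))
    fresh = buf.getvalue()

    # nbval's job, in essence: fail the test if the two differ.
    assert fresh == stored, "notebook output has changed"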

So the nbval plugin for py.test does several things for us:

- tests whether code that is used in the notebook still executes without raising any exceptions

- tests whether the behaviour has changed (this is like a suite of system tests; see the sketch just below for cells whose output is expected to vary between runs)
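
One practical point on the output comparison: some cells produce output that legitimately changes from run to run (timings, memory addresses, random numbers, plot handles). nbval lets you mark such cells with a special comment so that they are still executed but their output is not compared; if I remember the marker name correctly, such a cell looks roughly like this:

    # NBVAL_IGNORE_OUTPUT
    # (comment understood by nbval: execute the cell, but do not compare its output)
    import random
    print(random.random())   # different on every run, so comparing would always fail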


What is nice for our context (software in computational science) is that

- we are notified if we need to update the documentation (because nbval reports failures)

- we don’t have to invest any additional effort to treat the notebooks as tests (nbval does that for us)

- the notebooks (may) increase test coverage of the code (a particular benefit if we start from a code base that hasn’t got systematic unit tests). While these tests tend to be high level (and thus not as useful as fine-grained unit tests), it is a lot better to have them than not to have them.


The tool cannot link to coverage.py yet, which would be nice; that is the content of this feature request: https://github.com/computationalmodelling/nbval/issues/7


I hope the above is useful; best wishes, 

Hans



> On 13 Jun 2016, at 20:02, Pete Forman <petef4+usenet at gmail.com> wrote:
> 
> In TDD I write test_foo.py so that I can verify foo.py for validity and
> coverage. Recently I have been using Jupyter / IPython where the code
> and docs are in an ipynb notebook.
> 
> What is the recommended way to run pytest on a test_foo.ipynb?
> 
> I've found a couple of solutions but to slightly different questions.
> 
> https://pypi.python.org/pypi/pytest-ipynb enables you to write tests in
> an ipynb. That enables you to build tests into a notebook but does not
> separate the tests from the product.
> 
> https://github.com/computationalmodelling/nbval is a regression testing
> tool for Jupyter. It compares outputs with a previous run.
> 
> Am I asking the right question? What is best practice for testing ipynb?
> 
> -- 
> Pete Forman
> 
> 



