[TIP] Comparing coverage between two runs of unit tests

Marius Gedminas marius at gedmin.as
Wed Dec 11 23:37:44 PST 2013


On Wed, Dec 11, 2013 at 08:46:55PM +0000, Masiar, Peter (PMASIAR) wrote:
> I am thinking about creating an additional layer on top of coverage
> which would compare summary coverage from two different runs of the
> unit tests (say, a week apart) and check that coverage did not
> decrease, and, if it did, which programs have decreased coverage.
> The idea is to give developers a hint if some new code sneaked in
> that is not covered by unit tests.  We have hundreds of program files
> in our system, and sorting them by coverage percentage is not going
> to help much.

I used to have daily builds that warned me about coverage regressions in
one particular work project.

People seemed to ignore them (myself included).  :-(
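
That said, if you do want to build that layer, here's a rough, untested
sketch of the summary-comparison part.  It assumes each test run produces
a Cobertura-style XML report (coverage xml -o coverage-old.xml, then
coverage xml -o coverage-new.xml a week later); the file names and the 1%
threshold are just placeholders:

import sys
import xml.etree.ElementTree as ET

def line_rates(xml_path):
    """Map filename -> line-rate (0.0..1.0) from a Cobertura XML report."""
    tree = ET.parse(xml_path)
    return dict((cls.get('filename'), float(cls.get('line-rate', 0)))
                for cls in tree.iter('class'))

def main(old_xml, new_xml, threshold=0.01):
    old = line_rates(old_xml)
    new = line_rates(new_xml)
    # Flag files whose per-file coverage dropped by more than the threshold.
    worse = [(fn, old[fn], rate) for fn, rate in sorted(new.items())
             if fn in old and old[fn] - rate > threshold]
    for fn, before, after in worse:
        print("%-50s %5.1f%% -> %5.1f%%" % (fn, before * 100, after * 100))
    return 1 if worse else 0

if __name__ == '__main__':
    sys.exit(main(sys.argv[1], sys.argv[2]))

Invoked with the two XML files as arguments, it exits non-zero when
anything regressed, so it's easy to make a build fail on it.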

> I was considering using branch coverage.  If a few lines are added to
> a branch which we know is not covered by unit tests, reporting that
> would be a false alarm IMHO.
> 
> Is it a good idea?

Maybe.

> Is something like this available?

The 'coveragediff' script in https://pypi.python.org/pypi/z3c.coverage

Unfortunately, it only works with annotated source files in the format
produced by the stdlib's trace.py.  Nobody has extended it to support
diffing two .coverage files yet.  A pull request would be very welcome:
https://github.com/zopefoundation/z3c.coverage
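
If anyone wants a starting point, the core of such a diff might look
roughly like this (untested, written against the CoverageData API in
recent coverage.py releases; the data format and API have changed over
the years, so adjust to your version, and the file names are just
examples):

from coverage import CoverageData

def load(path):
    data = CoverageData(basename=path)
    data.read()
    return data

old = load('.coverage.old')
new = load('.coverage.new')

for filename in sorted(new.measured_files()):
    old_lines = set(old.lines(filename) or ())
    new_lines = set(new.lines(filename) or ())
    # Lines that were executed in the old run but not in the new one.
    lost = old_lines - new_lines
    if lost:
        print("%s: %d line(s) no longer covered" % (filename, len(lost)))

A real implementation would also have to cope with source files that
changed between the two runs: a line number that disappeared is not the
same thing as a line that lost coverage.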

Marius Gedminas
-- 
> find /lib/modules/2.4.17-expt/kernel/ -type f|while read i; do insmod $i; done
You're sick.  I like you.
        -- Andrew Morton on lkml