[TIP] Comparing coverage between two runs of unit tests
fuzzyman at voidspace.org.uk
Thu Dec 12 03:31:05 PST 2013
On 12 Dec 2013, at 07:37, Marius Gedminas <marius at gedmin.as> wrote:
> On Wed, Dec 11, 2013 at 08:46:55PM +0000, Masiar, Peter (PMASIAR) wrote:
>> I am thinking about creating an additional layer on top of coverage
>> which would compare summary coverage from two different runs of the
>> unit tests (say, a week apart) and check that coverage did not
>> decrease, and if it did, report which programs have decreased
>> coverage. The idea is to give developers a hint if some new code
>> sneaked in which is not covered by unit tests. We have hundreds of
>> program files in our system, and sorting them by coverage percentage
>> is not going to help much.
> I used to have daily builds that warned me about coverage regressions
> in one particular work project.
> People seemed to ignore them (myself included). :-(
Use a pre-commit hook that runs the tests, and reject commits/merges that have failing tests or reduce overall test coverage.
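A minimal sketch of such a gate, assuming coverage.py is installed, that the suite runs under `coverage run`, and that the previous total is kept in a `.coverage_baseline` file (the test command, file name, and report parsing are all my assumptions, not anything coverage.py provides):

```python
"""Sketch of a pre-commit coverage gate.  The baseline-file convention
and the parsing of 'coverage report' output are assumptions."""


def parse_total_percent(report_text):
    """Pull the final percent column from the TOTAL line that
    'coverage report' prints."""
    for line in report_text.splitlines():
        if line.startswith("TOTAL"):
            return float(line.split()[-1].rstrip("%"))
    raise ValueError("no TOTAL line in coverage report output")


def coverage_dropped(current, baseline):
    """True if overall coverage went down since the recorded baseline."""
    return current < baseline


# In the actual hook script you would wire this up roughly as:
#
#   subprocess.check_call(["coverage", "run", "-m", "pytest"])
#   report = subprocess.check_output(["coverage", "report"], text=True)
#   current = parse_total_percent(report)
#   ... read the old total from .coverage_baseline, exit(1) if
#   coverage_dropped(current, baseline), otherwise write the new
#   total back as the next baseline.
```

Rejecting at commit time (rather than in a daily build) makes the regression visible to the person who caused it, while the change is still fresh.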
>> I was considering using branch coverage. If a few lines are added to
>> a branch which we know is not covered by a unit test, reporting it
>> would be a false alarm IMHO.
>> Is it good idea?
>> Is something like this available?
> The 'coveragediff' script in https://pypi.python.org/pypi/z3c.coverage
> does something like this. Unfortunately, it only works with annotated
> source files in the format produced by the stdlib's trace.py. Nobody
> has extended it to support diffing two .coverage files yet. A pull
> request would be very welcome.
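Lacking that, here is one way the comparison itself might look once per-file percentages have been extracted, e.g. from the JSON report that newer coverage.py versions can emit with `coverage json` (the `files`/`summary`/`percent_covered` keys follow that report format and should be treated as an assumption):

```python
"""Sketch of diffing two coverage summaries, a week apart."""
import json


def load_summary(json_path):
    """Map filename -> percent covered, read from a 'coverage json'
    report (key names assumed from newer coverage.py versions)."""
    with open(json_path) as f:
        data = json.load(f)
    return {name: info["summary"]["percent_covered"]
            for name, info in data["files"].items()}


def coverage_regressions(old, new):
    """Return {filename: (old_pct, new_pct)} for every file whose
    coverage decreased between the two summaries."""
    return {name: (old[name], new[name])
            for name in old
            if name in new and new[name] < old[name]}
```

Reporting only the regressed files, rather than a full sorted listing, addresses the "hundreds of program files" problem from the original question.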
> Marius Gedminas
May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
-- the sqlite blessing