[TIP] Comparing coverage between two runs of unit tests

Ben Finney ben+python at benfinney.id.au
Wed Dec 11 14:17:40 PST 2013


"Masiar, Peter (PMASIAR)" <PMASIAR at arinc.com> writes:

> I am thinking about creating an additional layer on top of coverage
> which would compare summary coverage from two different runs of unit
> tests (say a week apart) and check that coverage did not decrease,
> and, if it did, which programs have decreased coverage.

This is a good idea.
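To make the comparison concrete, here is a minimal sketch of that extra layer. It assumes the JSON report format of a modern coverage.py (`coverage json`, added in version 5.0, well after this thread), and the file names "old.json" and "new.json" are placeholders, not anything from the original discussion.

```python
# Sketch: compare per-file coverage between two "coverage json" reports
# and list the files whose coverage decreased.
# Assumes coverage.py 5.0+ JSON layout: {"files": {path: {"summary":
# {"percent_covered": ...}}}}. Report file names are placeholders.
import json


def file_percentages(report_path):
    """Map each source file in a coverage JSON report to its percent covered."""
    with open(report_path) as f:
        report = json.load(f)
    return {name: data["summary"]["percent_covered"]
            for name, data in report["files"].items()}


def regressions(old_path, new_path):
    """Return files whose coverage dropped, as {file: (old_pct, new_pct)}."""
    old = file_percentages(old_path)
    new = file_percentages(new_path)
    return {name: (old[name], pct)
            for name, pct in new.items()
            if name in old and pct < old[name]}
```

A CI job could run `regressions("old.json", "new.json")` after each build and fail (or warn) when the result is non-empty, which answers both questions at once: whether coverage decreased, and in which programs.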

There are automated build systems which will run a battery of commands
on your current code base, report on the results, and allow comparison
of reports over time. A test run and a test coverage report are just two
of the many useful things you can do with such an automated build
system.

Popular free-software automated build platforms include Buildbot
<URL:http://buildbot.net/> and Jenkins <URL:http://jenkins-ci.org/>.

> The idea is to give developers a hint if some new code sneaked in
> which is not covered by unit tests. We have hundreds of program files
> in our system, and sorting them by coverage percentage is not going to
> help much.

Use an existing automated build reporting system to graph the coverage
level over time, allowing trends and anomalies to be seen more quickly.
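For the graphing step, each build only needs to record one number. A small sketch, again assuming the `coverage json` report format of a modern coverage.py; the report path and CSV name are placeholders:

```python
# Sketch: append the run's overall coverage percentage, with a date
# stamp, to a CSV that a build dashboard can graph over time.
# Assumes coverage.py 5.0+ JSON layout: {"totals": {"percent_covered": ...}}.
import csv
import json
from datetime import date


def append_total(report_path, csv_path):
    """Read the total percent covered from a coverage JSON report
    and append it to the trend CSV with today's date."""
    with open(report_path) as f:
        total = json.load(f)["totals"]["percent_covered"]
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), total])
    return total
```

Run once per build, the CSV becomes the time series the build system plots, making a slow downward drift visible long before any single commit looks alarming.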

> Is it a good idea? Is something like this available? Because in my
> experience, every project/tool idea I have ever had was already solved
> by at least one open source project. :-)

Yes :-) Hopefully you can make use of one of these automated build
platforms (also known as “continuous integration” platforms).

-- 
 \           “If [a technology company] has confidence in their future |
  `\      ability to innovate, the importance they place on protecting |
_o__)     their past innovations really should decline.” —Gary Barnett |
Ben Finney




More information about the testing-in-python mailing list