[TIP] Comparing coverage between two runs of unit tests
ned at nedbatchelder.com
Thu Dec 12 03:22:37 PST 2013
On 12/11/13 3:46 PM, Masiar, Peter (PMASIAR) wrote:
> I decided to split a different question into a separate thread.
> I am thinking about creating an additional layer on top of coverage which would compare summary coverage from two different runs of unit tests (say, a week apart) and check that coverage did not decrease, and if it did, report which programs have decreased coverage. The idea is to give developers a hint if some new code sneaked in which is not covered by unit tests. We have hundreds of program files in our system, and sorting them by coverage percentage is not going to help much.
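That per-file comparison can be sketched in a few lines. This is a hypothetical helper, not part of coverage.py; it assumes you have already extracted a {filename: percent_covered} mapping from each run (how you build those mappings from your coverage reports is up to you):

```python
def coverage_regressions(old, new):
    """Compare two {filename: percent_covered} mappings and return
    the files whose coverage went down, worst regression first.

    `old` and `new` are plain dicts; extracting them from a real
    coverage report is left out of this sketch.
    """
    regressions = []
    for filename, new_pct in new.items():
        old_pct = old.get(filename)
        if old_pct is not None and new_pct < old_pct:
            regressions.append((filename, old_pct, new_pct))
    # Sort by the size of the drop, largest first.
    regressions.sort(key=lambda r: r[1] - r[2], reverse=True)
    return regressions

# Example with made-up numbers:
old_run = {"billing.py": 90.0, "util.py": 75.0, "new_code.py": 100.0}
new_run = {"billing.py": 90.0, "util.py": 60.0, "new_code.py": 80.0}
for name, before, after in coverage_regressions(old_run, new_run):
    print(f"{name}: {before:.1f}% -> {after:.1f}%")
```

Running this prints new_code.py first (a 20-point drop) and then util.py (15 points), while billing.py, which held steady, is not reported.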
At edX, we use diff-cover: https://github.com/edx/diff-cover . It's a
little different from your approach: it uses git to find which lines you
are actually changing, and subsets the full coverage report to report on
only those changed lines. It also does the same for pep8 and pylint.
It gives you quality metrics for your commit, not for the full repo
after your commit.
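The core of that approach can be sketched without any diff-cover internals. This is an illustrative fragment, not diff-cover's actual code: it assumes you have already parsed the git diff into a set of changed line numbers per file, and the coverage report into a set of missed line numbers per file:

```python
def uncovered_changes(changed_lines, missed_lines):
    """Report which changed lines are not covered by tests.

    `changed_lines` and `missed_lines` both map filename -> set of
    line numbers; building them from `git diff` and a coverage
    report is left out of this sketch.
    """
    report = {}
    for filename, changed in changed_lines.items():
        # Intersect the lines this commit touched with the lines
        # the coverage run reported as missed.
        missed = changed & missed_lines.get(filename, set())
        if missed:
            report[filename] = sorted(missed)
    return report

# Example: lines 10-12 of util.py changed; line 11 is untested.
changed = {"util.py": {10, 11, 12}, "billing.py": {3}}
missed = {"util.py": {11, 40}}
print(uncovered_changes(changed, missed))  # {'util.py': [11]}
```

Because only the intersection is reported, old uncovered code that the commit never touched (like line 40 here) stays out of the result, which is what keeps the signal focused on the commit.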
Ironically, I've had nothing to do with its development; it's the
brainchild of our QA genius, Will Daly. :)
> I was considering using coverage of branches. If a few lines are added into a branch which we know is not covered by unit tests, the report would be a false alarm, IMHO.
Do you mean "branch" as in "branch coverage" or as in "git branch"?
> Is this a good idea? Is something like this already available? Because in my experience, every project/tool idea I have ever had was already solved by at least one open source project. :-)
> Is someone working on it or thinking about it? I would prefer to contribute to common efforts and not reinvent the wheel if possible. What would be a good forum for further discussion? Here or elsewhere? Or maybe it is a bad idea, and someone tried it and has battle scars?
> Thank you for any insight.
> Peter Masiar
> ARINC Direct │ SW Engineering
> testing-in-python mailing list
> testing-in-python at lists.idyll.org