[TIP] Comparing coverage between two runs of unit tests

Masiar, Peter (PMASIAR) PMASIAR at arinc.com
Wed Dec 11 12:46:55 PST 2013


Folks,
I decided to split a different question into a separate thread.

I am thinking about creating an additional layer on top of coverage which would compare the summary coverage from two different runs of the unit tests (say, a week apart), check that coverage did not decrease, and if it did, report which programs have decreased coverage. The idea is to give developers a hint that some new code sneaked in which is not covered by unit tests. We have hundreds of program files in our system, and sorting them by coverage percentage is not going to help much.
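To make it concrete, something like the rough sketch below is what I have in mind. It assumes each run is exported with "coverage json -o report.json" from a recent coverage.py; the key names I use (files / summary / percent_covered) are my reading of the JSON report and may need adjusting, so check one report by hand first:

    #!/usr/bin/env python
    """Sketch: compare per-file coverage between two runs of the test suite.

    Assumes both runs were exported with "coverage json -o <report>.json";
    the key names below (files / summary / percent_covered) follow the
    coverage.py JSON report and may differ in your version.
    """
    import json
    import sys


    def load_percentages(path):
        """Map each measured source file to its percent-covered figure."""
        with open(path) as fh:
            report = json.load(fh)
        return dict(
            (filename, data["summary"]["percent_covered"])
            for filename, data in report["files"].items()
        )


    def regressions(old, new, tolerance=0.0):
        """Yield (filename, old_pct, new_pct) for files whose coverage dropped."""
        for filename in sorted(set(old) & set(new)):
            if new[filename] < old[filename] - tolerance:
                yield filename, old[filename], new[filename]


    if __name__ == "__main__":
        old_report, new_report = sys.argv[1:3]
        dropped = list(regressions(load_percentages(old_report),
                                   load_percentages(new_report)))
        for filename, old_pct, new_pct in dropped:
            print("%-60s %6.1f%% -> %6.1f%%" % (filename, old_pct, new_pct))
        sys.exit(1 if dropped else 0)  # non-zero exit so a build job can flag it

Files that exist in only one of the two runs are simply ignored here and would need separate handling, but the basic diff is only a few lines.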

I was considering using branch coverage for this. If a few lines are added inside a branch which we already know is not covered by the unit tests, reporting that as a regression would be a false alarm IMHO.
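Concretely, I would turn branch measurement on for both runs, e.g. via .coveragerc:

    # .coveragerc -- enable branch measurement for every run
    [run]
    branch = True

(or with "coverage run --branch ..."); as far as I can tell the per-file summary then also reflects branch coverage, so the comparison above could key off branch counts instead of raw line percentages.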

Is this a good idea? Is something like it already available? Because in my experience, every project/tool idea I have ever had was already solved by at least one open source project. :-)

Is someone working on this, or thinking about it? I would prefer to contribute to a common effort rather than reinvent the wheel if possible. What would be a good forum for further discussion? Here, or elsewhere? Or maybe it is a bad idea, and someone has tried it and has the battle scars?

Thank you for any insight. 

──────
Peter Masiar
ARINC Direct  │ SW Engineering




