[TIP] detecting when a changeset is not well tested

Augie Fackler lists at durin42.com
Tue Aug 23 19:15:02 PDT 2011


On Aug 23, 2011, at 7:47 PM, Alfredo Deza wrote:
> 
> On Tue, Aug 23, 2011 at 8:40 PM, Phlip <phlip2005 at gmail.com> wrote:
>> Tarek Ziadé wrote:
>> 
>>> So, I can build this in a script or... ask here if something exists already :)
>>> 
>>> I am looking for a script that can:
>>> 
>>> 1/ keep a history of the test coverage for every module of my project
>>> 2/ on a commit, detect when the coverage for a module lowers
>>> 3/ send a warning and point the part that's not covered
>> 
>> Speaking as a TDD zealot, your question is the equivalent of "We are
>> not doing TDD, so how can I work extra hard to get just one of the
>> many benefits that TDD provides as a by-product of extraordinarily
>> rapid & safe development?"
> 
> I like TDD a lot, but you are not always in environments where the codebase
> was TDD'd, or where it was actually possible to TDD the product.
> 
> When a large codebase is already in place but you need granular
> coverage metrics, Tarek's idea is *very* useful.

This is also the kind of thing I'd love to see integrated with a code review tool, FWIW.
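The first two of Tarek's steps can be sketched in a few lines of Python: keep a per-module history and flag any module whose coverage dropped since the last run. This is only a sketch under my own assumptions; the `coverage_history.json` file, the function names, and the shape of the `current` mapping (module name to percentage, e.g. scraped from `coverage report` output) are all invented here, and wiring it into a commit hook or review tool is left out:

```python
import json
import os

HISTORY_FILE = "coverage_history.json"  # hypothetical location for the history


def load_history(path=HISTORY_FILE):
    """Return the stored module -> percentage mapping, or {} on first run."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}


def check_coverage(current, path=HISTORY_FILE):
    """Compare per-module coverage against the stored history.

    `current` maps module names to coverage percentages. Returns a list
    of warning strings for every module whose coverage went down, then
    records the new numbers so the next run compares against them.
    """
    history = load_history(path)
    warnings = []
    for module, pct in current.items():
        previous = history.get(module)
        if previous is not None and pct < previous:
            warnings.append(
                "%s: coverage dropped from %.1f%% to %.1f%%"
                % (module, previous, pct)
            )
    history.update(current)  # record the new numbers for the next comparison
    with open(path, "w") as f:
        json.dump(history, f)
    return warnings
```

A pre-commit or changegroup hook could call `check_coverage()` after the test run and mail (or just print) the returned warnings; pointing at the exact uncovered lines (step 3) would additionally need the line data coverage.py records, not just the percentages.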

> 
>> 
>> Use TDD, and you have saturation coverage automatically.
> 
> Right. But that is if you start and continue with TDD.
>> 
>> --
>>   Phlip
>>   http://c2.com/cgi/wiki?ZeekLand
>> 
>> _______________________________________________
>> testing-in-python mailing list
>> testing-in-python at lists.idyll.org
>> http://lists.idyll.org/listinfo/testing-in-python
>> 