[TIP] detecting when a changeset is not well tested

Tarek Ziadé ziade.tarek at gmail.com
Wed Aug 24 01:19:13 PDT 2011


On Wed, Aug 24, 2011 at 4:15 AM, Augie Fackler <lists at durin42.com> wrote:
...
>>> Speaking as a TDD zealot, your question is the equivalent of "We are
>>> not doing TDD, so how can I work extra hard to get just one of the
>>> many benefits that TDD provides as a by-product of extraordinarily
>>> rapid & safe development?"
>>
>> I like TDD a lot, but you are not always in environments where the codebase
>> was TDD'd, or where it was actually possible to TDD the product.
>>
>> When a large codebase is already in place but you need granular
>> coverage metrics, Tarek's idea is *very* useful

Yeah, that's one reason.

The other one is continuously improving your tests -- sometimes the
tests you write don't cover the code base well enough, and I think
keeping an eye on the coverage trend is a good way to improve them
(like what Jenkins offers with the XML output)
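To make the trend idea concrete, here's a minimal sketch of comparing two runs, assuming Cobertura-style XML like what coverage.py's `coverage xml` command emits (the file names and the `tolerance` parameter are hypothetical):

```python
# Sketch: flag a coverage drop between two runs, assuming
# Cobertura-style XML reports ("coverage xml" output).
import xml.etree.ElementTree as ET

def line_rate(path):
    # The Cobertura root <coverage> element carries a line-rate
    # attribute, a fraction between 0.0 and 1.0.
    return float(ET.parse(path).getroot().get("line-rate"))

def coverage_dropped(previous_xml, current_xml, tolerance=0.0):
    # True when the new run covers a smaller fraction of lines
    # than the old one (minus an optional allowed tolerance).
    return line_rate(current_xml) < line_rate(previous_xml) - tolerance
```

A hook or CLI tool could then keep the last report around and compare each new run against it.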

But having this without depending on Jenkins gives me the opportunity
to build Mercurial hooks, dev-friendly command-line tools, etc.
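As a sketch of what such a hook could look like: an in-process Mercurial pretxncommit hook (configured in hgrc as e.g. `pretxncommit.coverage = python:coverhook.check` -- the module name and the 80% threshold are hypothetical), where a truthy return value aborts the commit. The test runner invocation is also an assumption; swap in whatever drives your suite.

```python
# Sketch of a Mercurial pretxncommit hook (hypothetical module
# "coverhook"); returning a truthy value aborts the commit.
import subprocess

THRESHOLD = 0.80  # assumed minimum acceptable line rate

def parse_total_rate(report_text):
    # The last line of "coverage report" output ends with the total
    # percentage, e.g. "TOTAL    120    24    80%".
    percent = report_text.strip().splitlines()[-1].split()[-1]
    return int(percent.rstrip("%")) / 100.0

def check(ui, repo, **kwargs):
    # Run the suite under coverage.py, then read the total back.
    subprocess.check_call(["coverage", "run", "-m", "pytest"])
    report = subprocess.check_output(["coverage", "report"]).decode()
    rate = parse_total_rate(report)
    if rate < THRESHOLD:
        ui.warn("coverage %.0f%% is below the %.0f%% threshold\n"
                % (rate * 100, THRESHOLD * 100))
        return True  # abort the commit
    return False
```

The same decision logic would work in a standalone command-line tool that exits nonzero, which keeps it usable outside Mercurial too.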


> This is also the kind of thing I'd love to see integrated with a code review tool, FWIW.

That's a good idea. Can you run the tests from a code review tool with
the patch applied?

Cheers
Tarek
-- 
Tarek Ziadé | http://ziade.org



More information about the testing-in-python mailing list