[TIP] Testing Hierarchy

Mark Roddy markroddy at gmail.com
Mon Jul 20 20:08:56 PDT 2009


"One can't simply say '100% coverage' or other similar rules"

I think part of the problem is that people want quantifiable goals to
reach rather than an abstract set of qualifiers such as "good tests".
Every quantifiable goal I've seen suggested (such as high coverage) is
in actuality a side effect of good tests and not a reliable indicator
of them.  Most good sets of tests I've seen have had high coverage
(high being a relative term), though not 100% across the board, but
I've also seen plenty of bad tests (fragile/hard to maintain,
ambiguous as to why they fail when they do, etc.) that did have high
coverage.
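
To make that concrete (a hypothetical sketch, not from the thread --
the function and test names below are invented): a test can execute
every line of the code under test, earning 100% line coverage from a
tool like coverage.py, while pinning down almost none of its behavior.

```python
import unittest

# Hypothetical code under test (names invented for illustration).
def apply_discount(price, rate):
    """Return price reduced by rate (e.g. 0.1 for 10% off)."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

class TestApplyDiscount(unittest.TestCase):
    def test_high_coverage_weak_assertions(self):
        # Exercises both the happy path and the error path, so a
        # coverage tool reports every line of apply_discount as run...
        result = apply_discount(100.0, 0.1)
        self.assertIsNotNone(result)  # ...but a wrong formula still passes.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 2.0)

    def test_same_coverage_real_assertion(self):
        # Identical coverage, but this test pins the behavior down:
        # break the formula and it fails, telling you exactly why.
        self.assertAlmostEqual(apply_discount(100.0, 0.1), 90.0)
```

Both tests report identical coverage numbers; only the second would
catch a bug in the discount formula, which is the difference the
coverage metric can't see.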

I think there's an analogy here with object oriented design and the
S.O.L.I.D. folks.  The ideas and practices they promote are very
difficult to quantify as far as their benefits are concerned, and
almost impossible to quantify in the short run.  How do you measure
the time saved because it was easy to reuse/customize a class and a
new one wasn't needed?

I think the same holds true for good test suites.  The benefits are
seen X months down the line, when you can add a new feature in a
shorter amount of time with confidence that you haven't broken the
existing features, and can readily tell what you've broken when you
do.  How do you measure the development time saved?  I'm not sure
it's even possible, though I know from my own experience that it's
real when I'm able to quickly add features.

As for the original post in this thread on categorizing tests, I'd
suggest checking out Misko Hevery's recent blog post 'Software Testing
Categorization':
http://misko.hevery.com/2009/07/14/software-testing-categorization/

He's a software testing advocate at Google and can often be found
cross-posting on the official Google Testing Blog.

-Mark


On Mon, Jul 20, 2009 at 6:02 PM, Robert
Collins<robertc at robertcollins.net> wrote:
> On Mon, 2009-07-20 at 11:24 -0700, Noah Gift wrote:
>>
>> It is very easy to say all code should have 100% unit test coverage,
>> have integration tests, functional tests, etc.  What I haven't seen
>> someone talk about yet, is when that is appropriate in the real world
>> and when it isn't, and an honest assessment of the tradeoff.  I am
>> very sold on testing code, but how much depends on the situation I am
>> in.
>
> In previous debates I've had with people about this, I've had people
> argue that because their time is limited, they should do less or no
> automated testing, because testing is pure waste.
>
> Now, I don't think that argument is being made here :). However, the
> rebuttal to that point can provide some clues for choosing how much
> testing is enough.
>
> And my rebuttal of choice is: testing, and automated testing
> specifically, is part of development - not a completely separate thing.
> So the time for testing comes from the same time allowance that lets
> you sit in front of a function thinking 'does that do what I want?'.
>
> And from there it should be obvious: whenever spending time on testing
> will help you be more confident that the system does what you want, and
> you'd be willing to stare/think/write code to increase your confidence,
> you should be willing to increase test coverage too. I know this is
> facile, but I find it really does work well as a rule of thumb. After
> all, you don't stop coding a function until you're confident it does
> what you want, though you may stop while you still have some doubts, if
> the risk is felt to be low/not worth reducing further.
>
> For a given project, the amount of testing needed to get to this point
> where you feel more testing isn't worth it will vary - and this is the
> hard bit, I think. One can't simply say '100% coverage' or other similar
> rules: they can at most be possible goals to aim for. As tests have
> costs (time to maintain, time to execute) exactly the same as any new
> functions do, you can often get to a situation where the cost of a new
> test isn't worth the increased confidence it will add.
>
> -Rob
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
>
>


