[TIP] Fwd: Functional Testing of Desktop Applications

Nate Lowrie solodex2151 at gmail.com
Mon Mar 5 08:14:18 PST 2007


I keep forgetting to do a list reply......

---------- Forwarded message ----------
From: Nate Lowrie <solodex2151 at gmail.com>
Date: Mar 6, 2007 9:12 AM
Subject: Re: [TIP] Functional Testing of Desktop Applications
To: Michael Foord <fuzzyman at voidspace.org.uk>


On 3/6/07, Michael Foord <fuzzyman at voidspace.org.uk> wrote:
> Laura Creighton wrote:
> > In a message of Mon, 05 Mar 2007 00:01:35 GMT, Michael Foord writes:
> > <snip>
> >
> >> This leads us into a bit of a debate about how much to mock out in our
> >> unit tests. The most extreme testing doctrine says that you should mock
> >> out *all* your dependencies when testing a unit - even dependencies
> >> within the same object (if you are testing method 'a' and it contains a
> >> call to function or method 'b' you should mock 'b').
> >>
> >> When you have a short function that has several dependencies this can
> >> make writing the tests a real chore. It *also* means that you don't
> >> catch errors at a higher level - where you mock you test adequately what
> >> you *think* the wiring between your units is doing, but your mental
> >> model may be just as buggy as your code.
> >>
> >
> > A different, related problem.  The idea is that your tests stay live
> > and you run them all periodically to make sure nothing breaks.  Do
> > this properly, and write a thing like pypy (and py.test) and you
> > will find that a run of all your tests takes 3-4 hours. :-(  Thus
> > the most extreme doctrine above only works on "small-enough" projects.
> >
> > Deciding what to do when your project is no longer "small enough" is
> > hard.
> >
> >
> "Thus the most extreme doctrine above only works on "small-enough"
> projects."
>
> I don't understand what you mean: mocking makes your tests faster - not
> slower. Do you mean that having full test coverage is only possible /
> worthwhile on smaller projects ?
>

In this case, the speedup comes from using test doubles to stub out
methods and return predictable results for certain objects.  If you
are connecting to a database, you substitute a fake database with
predictable results that lives entirely in memory, and the I/O
overhead disappears.  A test that took 2 seconds to run now takes less
than 10 milliseconds.  Other resources that are worth mocking for a
speedup are filesystems, sockets, email servers, and almost any
network resource.
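A minimal sketch of what such a fake might look like.  The UserStore
interface and all the names here are hypothetical, just to show the
shape of the technique:

```python
class FakeUserStore:
    """In-memory stand-in for a database-backed user store."""
    def __init__(self):
        self._rows = {}

    def save(self, user_id, name):
        # A real store would INSERT over a connection; we just dict it.
        self._rows[user_id] = name

    def load(self, user_id):
        return self._rows[user_id]

def greet(store, user_id):
    # Production code takes the store as a dependency, so tests can
    # inject the fake instead of a real database connection.
    return "Hello, %s" % store.load(user_id)

store = FakeUserStore()
store.save(1, "Laura")
print(greet(store, 1))   # -> Hello, Laura
```

The key design point is that `greet` receives the store as a
parameter, so swapping the real database for the fake requires no
changes to production code.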

In addition to the speedup, you gain predictable results and the
ability to reproduce errors very easily.

> Well, our project is a lot smaller than PyPy, but still around 20k
> production lines and growing.
>
> We have 2300 tests (unit and functional), and they take one and a half
> hours to run. Over 2/3 of this time is functional tests.

Wow..... that is a lot of time for only 2300 tests.  Then again, the
functional tests might be the cause of that.  It sounds to me like you
have several Slow Test smells in your suite.  I would look at your
tests, identify the ones taking the longest, and refactor or mock them
to speed things up.

I was working on a C# project not long ago where we had over 6000 unit
tests that were compiled and run every 45 minutes by CruiseControl.
Yes, I know C# is not the same as Python, but Python also has no
compile step eating 5 minutes of each run.

>
> We are just switching over to say that we only need to run unit tests
> before checking in.

If you use a layered architecture, you should be able to get away with
running only the unit tests for the layer you changed before checking
in, and let the Continuous Integration server run the full suite to
check the rest.  The only risk here is that functional tests that
cross layers might fail.
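One way to sketch that split with the standard library: keep the fast,
layer-local unit tests in their own suite that developers run before
checkin, and leave the slow cross-layer functional tests to CI.  The
test case names here are made up for illustration:

```python
import unittest

class StorageUnitTests(unittest.TestCase):
    """Fast, layer-local tests -- run before every checkin."""
    def test_roundtrip(self):
        self.assertEqual(int("42"), 42)

class CrossLayerFunctionalTests(unittest.TestCase):
    """Slow, cross-layer tests -- left to the CI server."""
    def test_end_to_end(self):
        self.assertTrue(True)

def precheckin_suite():
    # Only the unit tests for the layer being changed.
    return unittest.TestLoader().loadTestsFromTestCase(StorageUnitTests)

result = unittest.TextTestRunner(verbosity=0).run(precheckin_suite())
```

The CI server would build a suite from both classes (plus the rest of
the functional tests) and run everything on each integration.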

>
> What I'm saying certainly seems to be 'very extreme' for most Pythoneers
> that I've talked to (and those on the list here) - but it really is
> pretty standard XP, and the sort of stuff being explored by the agile
> crowd. I have to say that it works *very* well for us, and provides a
> great environment to develop in.

I would agree.  My biggest argument for blanket mock coverage is that
it eliminates the retesting of functions and methods.  Suppose
function A calls function B to perform functionality X.  If you want
to test that A can do X, you either craft input that leads execution
into and out of B and check the resulting state, or you mock B.  Since
B already has its own unit tests, the first option is a waste of time,
especially if B calls C and C calls D, etc.  If you mock B and trust
that its behaviour is verified by its own unit tests, you are not
testing code twice, and the process speeds up (by a varying amount).
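A hand-rolled sketch of that A-calls-B case.  The class and method
names are invented for the example; the point is only that B gets
replaced with a canned result so the test exercises A's logic alone:

```python
class OrderTotaler:
    def total_with_shipping(self, items):
        # "Function A": its own logic is adding the flat shipping fee.
        return self.sum_items(items) + 5

    def sum_items(self, items):
        # "Function B": already covered by its own unit tests.
        return sum(items)

def test_total_with_shipping_in_isolation():
    totaler = OrderTotaler()
    # Stub out B with a predictable canned result, so this test
    # cannot fail because of a bug in sum_items.
    totaler.sum_items = lambda items: 100
    assert totaler.total_with_shipping([1, 2, 3]) == 105

test_total_with_shipping_in_isolation()
```

If sum_items later grows its own bugs, only its own unit tests fail,
which points you straight at the broken unit.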

It may be extreme, but I think it avoids Duplicated Test smells and is
also necessary to avoid Slow Test smells.

>
> All the best,
>
> Fuzzyman
> http://www.voidspace.org.uk/python/articles.shtml
>
> > Laura Creighton
> >
> >
>
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
>


