[TIP] Interface vs Implementation Testing

Kumar McMillan kumar.mcmillan at gmail.com
Tue Apr 17 11:19:06 PDT 2007


I'm with Michal in that the interface test ensures that your
implementation works (as long as you write the interface test well).

Here is a major problem with testing implementation: when the
implementation changes your tests will break.  Broken tests should
indicate the failure of a component, but in this case they would not.
You might even say to someone else on your team: "The tests are broken
but it still works, pay no attention."  In fact, you may even hold
off on updating your tests until you "get around to it" since you
kinda know things still work.  You see where I'm going with that.  I
tell developers on my team to **only** test interfaces and **never**
test implementation, and will slap wrists if I have to :)

As for TDD, everyone has their own style, but to answer your question
in the way I would do it...

1. write functional tests at the highest level for all the features
you want to release [1] so that they are all failing.  The highest
level might be accessing the queue interface (is it a web app?) adding
an item, seeing that it shows up in the list view, deleting, etc.
2. implement just enough code so that they pass.
3. At this point you may want to experiment with a few better
implementations so keep coding
4. decide on an implementation and write unit tests for all the units involved
5. start back at number 1 with new features you want to add.  Add to
any existing unit tests if the feature calls for the modification of
those units
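
To make steps 1 and 2 concrete, here is a rough sketch in Python.  The
names are stand-ins, not code from the thread: RequestQueue is a
hypothetical class exposing the interface Raphael described, and the
in-memory body is the "just enough" implementation from step 2 (write
the tests first, watch them fail, then fill this in):

```python
import unittest

class RequestQueue:
    """Hypothetical queue with the interface from the original question:
    add(request), delete(request), list(), next(), destroy().
    In-memory body: just enough implementation to make the tests pass."""
    def __init__(self):
        self._items = []

    def add(self, request):
        self._items.append(request)

    def delete(self, request):
        self._items.remove(request)

    def list(self):
        return list(self._items)

    def next(self):
        return self._items[0]

    def destroy(self):
        self._items = []

class TestQueueFeatures(unittest.TestCase):
    """Step 1: highest-level functional tests, written purely against
    the interface -- no knowledge of how items are stored."""
    def setUp(self):
        self.queue = RequestQueue()

    def test_added_request_shows_up_in_list(self):
        self.queue.add('req-1')
        self.assertTrue('req-1' in self.queue.list())

    def test_deleted_request_disappears_from_list(self):
        self.queue.add('req-1')
        self.queue.delete('req-1')
        self.assertTrue('req-1' not in self.queue.list())

    def test_next_returns_oldest_request(self):
        self.queue.add('req-1')
        self.queue.add('req-2')
        self.assertEqual(self.queue.next(), 'req-1')
```

Nothing in those tests would break if the storage moved from a list to
files on disk.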

Number 4 sounds like a contradiction because it suggests that you are
testing the implementation of your overall functionality.  Yes you
are, and this makes those tests somewhat fragile, but the tests
themselves would only exercise the interface of the units (say, a
Queue manager class).
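
A sketch of what a step-4 unit test could look like.  FileQueue is a
hypothetical file-per-request implementation like the one Raphael
described; the point is that the test only touches its public
interface, so swapping in a database-backed queue later would only
mean changing setUp():

```python
import os
import tempfile
import unittest

class FileQueue:
    """Hypothetical file-system-backed queue: one file per request,
    named by an increasing counter so ordering is preserved."""
    def __init__(self, directory):
        self.directory = directory
        self._counter = 0

    def add(self, request):
        path = os.path.join(self.directory, '%06d' % self._counter)
        with open(path, 'w') as f:
            f.write(request)
        self._counter += 1

    def list(self):
        requests = []
        for name in sorted(os.listdir(self.directory)):
            with open(os.path.join(self.directory, name)) as f:
                requests.append(f.read())
        return requests

    def next(self):
        return self.list()[0]

class TestFileQueueInterface(unittest.TestCase):
    """Unit tests for one unit, still written against its interface
    only -- no peeking at file names or directory layout."""
    def setUp(self):
        self.queue = FileQueue(tempfile.mkdtemp())

    def test_add_and_next(self):
        self.queue.add('first')
        self.queue.add('second')
        self.assertEqual(self.queue.next(), 'first')
```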

In other words, "never test your implementation" needs a grain of
salt.  The downside to working only with functional tests is that
when a test fails you can't tell from the test alone which component
is at fault.  The upside is that your implementation can change freely
while you still maintain a well-tested suite of software.

[1] also in my case this would be a scrum sprint:
http://www.codeproject.com/gen/design/scrum.asp#SprintCycle4



On 4/16/07, Michał Kwiatkowski <constant.beta at gmail.com> wrote:
> On 4/16/07, Raphael Marvie <raphael.marvie at lifl.fr> wrote:
> > I have a queue of request providing the functions: add(request),
> > delete(request), list(), next(), and destroy(). I am implementing
> > using TDD, so tests are written before implementation. Then, would
> > you favour tests to be interface-based or implementation-based?
> >
> > As a first implementation the queue persistency is to be managed
> > using the file system (a folder for the queue, a file per request --
> > containing the request details). But, another implementation may be
> > chosen later on, so tests should be re-usable.
> >
> > Would you:
> >
> >   a. write interface-based tests and implementation specific ones
> > (the first ones could then be re-used later on),
> >
> >   b. write interface-based tests only (but you cannot completely be
> > sure your implementation works fine),
>
> I'm gonna challenge your belief and say that with proper test cases
> you *can* be sure your implementation works fine. Every one of your
> tests should verify that your specific implementation adheres to some
> specification. If you cover your specification with tests completely
> you can with much confidence say that your implementation does the
> right thing. If you're not sure that implementation works fine, you
> should revise your tests.
>
> Having said that, if your implementation is complicated enough, you
> should have separate tests for parts of it. *But*, those tests will
> check for different things than the first set of tests. It's just a matter
> of modularizing your code. On the lower level you have functions which
> operate on files, while on higher level you think about queues and
> requests. Your tests should reflect those differences. To sum up, do
> black-box testing of your interfaces, but on many levels, which
> correspond to the granularity of your code. Once you decide to
> replace one part of the implementation with another, depending on its
> level you will have to replace the dependent tests, while
> higher-level tests should be left intact.
>
> And remember that unless you want functional tests you should mock as
> much as you can, checking only for specific parts of behaviour at a
> time. With mocks you can also simulate errors and special conditions,
> which makes your test cases even richer.
>
> > ps for the moment, I would say a.1 in python and a.2 in Java.
>
> Your testing method should not depend on a language, but on behaviour
> you want from your program. What language it is written in is just a
> detail. ;)
>
> Cheers,
> mk
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
>
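
To illustrate the mocking advice above, a minimal sketch.  The names
(QueueManager, a storage object with a write() method) are made up for
the example, and it uses unittest.mock from the modern standard
library; the idea is testing the higher-level queue logic with the
lower-level storage replaced by a mock, including an error condition
that would be awkward to trigger with real files:

```python
from unittest import mock

class QueueManager:
    """Hypothetical higher-level unit: queue logic that delegates
    persistence to a lower-level storage object."""
    def __init__(self, storage):
        self.storage = storage

    def add(self, request):
        try:
            self.storage.write(request)
        except IOError:
            return False  # report storage failure instead of crashing
        return True

def test_add_reports_storage_failure():
    storage = mock.Mock()
    storage.write.side_effect = IOError('disk full')  # simulated error
    manager = QueueManager(storage)
    assert manager.add('req-1') is False
    storage.write.assert_called_once_with('req-1')

test_add_reports_storage_failure()
```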

