[TIP] Interface vs Implementation Testing

Raphael Marvie raphael.marvie at lifl.fr
Wed Apr 18 04:17:18 PDT 2007

I do agree on testing at the functional level; however, when I do  
TDD, I sometimes find that tests help me write the proper  
implementation (cf. your remark on telling where the tests fail). So,  
I tend to write some implementation-specific tests that I remove from  
my test suite (while keeping the test code in my test modules) to  
keep only interface-based ones once the code is written properly (but  
I find it strange not to keep all the tests running all the time).
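To make that concrete, here is a minimal sketch of what I mean (all names, like FileQueue, are hypothetical): the interface-based test only exercises the public methods, while the implementation-specific test peeks at the file layout. Marking the latter with unittest's skip decorator is one way to keep the code in the module while keeping it out of the running suite.

```python
import os
import tempfile
import unittest

# Hypothetical minimal queue persisted as one file per request.
class FileQueue:
    def __init__(self, folder):
        self.folder = folder

    def add(self, request):
        with open(os.path.join(self.folder, request), "w") as f:
            f.write(request)

    def list(self):
        return sorted(os.listdir(self.folder))


class InterfaceTests(unittest.TestCase):
    """Kept running forever: only touches the public interface."""

    def test_added_request_is_listed(self):
        queue = FileQueue(tempfile.mkdtemp())
        queue.add("req-1")
        self.assertIn("req-1", queue.list())


@unittest.skip("implementation detail; kept for reference, not in the suite")
class ImplementationTests(unittest.TestCase):
    """Peeks at the storage layout, so it breaks when persistence changes."""

    def test_request_stored_as_file(self):
        folder = tempfile.mkdtemp()
        FileQueue(folder).add("req-1")
        self.assertTrue(os.path.isfile(os.path.join(folder, "req-1")))
```

If the persistence later moves to a database, only the skipped class has to go; the interface tests are reused as-is.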

I do agree with all the other remarks:
  - yet it is a web app written following a scrum / XP path, and we  
go function by function;
  - we want functional tests for the moment, but we also use mocks as  
much as we can;
  - my tests may not be the best ones :-) [they sound OK and the  
implementation works, but they do not always help me write the  
implementation].
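On the "use mocks as much as we can" point, here is a small sketch of the kind of thing I mean (drain and the mock setup are illustrative, not our actual code): a mocked queue and a handler whose side_effect simulates an I/O error, so the error path gets tested without touching the file system.

```python
from unittest import mock

# Hypothetical consumer that processes each listed request,
# stopping at the first I/O failure.
def drain(queue, handler):
    for request in queue.list():
        try:
            handler(request)
        except IOError:
            return False  # report that draining was interrupted
    return True

# A mocked queue: no folder, no files, just the interface.
queue = mock.Mock()
queue.list.return_value = ["req-1", "req-2"]

# side_effect makes the handler raise, simulating e.g. a full disk.
handler = mock.Mock(side_effect=IOError("disk full"))
assert drain(queue, handler) is False
handler.assert_called_once_with("req-1")
```

The same mock lets us check the happy path too, simply by dropping the side_effect.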

Michal said: """Your testing method should not depend on a language,  
but on behaviour
you want from your program. What language it is written in is just a  
detail. ;)"""

I do agree on the testing method but not on the testing means, in  
that we do not have the same artefacts in each language: a.1 was  
different from a.2 in that it used doctest instead of unittest for  
the interface-based tests.


On Apr 17, 2007, at 8:19 PM, Kumar McMillan wrote:

> I'm with Michal in that the interface test ensures that your
> implementation works (as long as you write the interface test well).
> Here is a major problem with testing implementation: when the
> implementation changes your tests will break.  Broken tests should
> indicate the failure of a component, but in this case it would not.
> You might even say to someone else on your team: "The tests are broken
> but it still works, pay no attention."  In fact, you may even hold
> off on updating your tests until you "get around to it" since you
> kinda know things still work.  You see where I'm going with that.  I
> tell developers on my team to **only** test interfaces and **never**
> test implementation and will slap wrists if I have to :)
> In other words, the advice to never test your implementation needs a
> grain of salt.  But the downside to working only with functional
> tests is that when a test fails you can't tell from the test which
> exact component is at fault.  The upside is that your implementation
> can freely change while still maintaining a well tested suite of
> software.
> [1] also in my case this would be a scrum sprint:
> http://www.codeproject.com/gen/design/scrum.asp#SprintCycle4
> On 4/16/07, Michał Kwiatkowski <constant.beta at gmail.com> wrote:
>> On 4/16/07, Raphael Marvie <raphael.marvie at lifl.fr> wrote:
>>> I have a queue of request providing the functions: add(request),
>>> delete(request), list(), next(), and destroy(). I am implementing
>>> using TDD, so tests are written before implementation. Then, would
>>> you favour tests to be interface-based or implementation-based?
>>> As a first implementation the queue persistency is to be managed
>>> using the file system (a folder for the queue, a file per request --
>>> containing the request details). But, another implementation may be
>>> chosen later on, so tests should be re-usable.
>>> Would you:
>>>   a. write interface-based tests and implementation specific ones
>>> (the first ones could then be re-used later on),
>>>   b. write interface-based tests only (but you cannot completely be
>>> sure your implementation works fine),
>> I'm gonna challenge your belief and say that with proper test cases
>> you *can* be sure your implementation works fine. Every one of your
>> tests should verify that your specific implementation adheres to some
>> specification. If you cover your specification with tests completely
>> you can with much confidence say that your implementation does the
>> right thing. If you're not sure that implementation works fine, you
>> should revise your tests.
>> Having said that, if your implementation is complicated enough, you
>> should have separate tests for parts of it. *But*, those tests will
>> check for other things than the first set of tests. It's just a  
>> matter
>> of modularizing your code. On the lower level you have functions  
>> which
>> operate on files, while on higher level you think about queues and
>> requests. Your tests should reflect those differences. To sum up, do
>> black-box testing of your interfaces, but on many levels, which
>> correspond to granularity of your code. Once you decide to replace  
>> one
>> part of implementation with another, depending on its level you will
>> have to replace dependent tests, while higher-level tests should be
>> left intact.
>> And remember that unless you want functional tests you should mock as
>> much as you can, checking only for specific parts of behaviour at
>> time. With mocks you can also simulate errors and special conditions,
>> which makes your test cases even richer.
>>> ps for the moment, I would say a.1 in python and a.2 in Java.
>> Your testing method should not depend on a language, but on behaviour
>> you want from your program. What language it is written in is just a
>> detail. ;)
>> Cheers,
>> mk
>> _______________________________________________
>> testing-in-python mailing list
>> testing-in-python at lists.idyll.org
>> http://lists.idyll.org/listinfo/testing-in-python

Raphael Marvie, PhD                http://www.lifl.fr/~marvie/
Maître de Conférences / Associate Professor  @  LIFL -- IRCICA
Directeur du Master Informatique Professionnel spécialité IAGL
Head of Master's in Software Engineering     +33 3 28 77 85 83
