[TIP] the best time to start doing a test?

Alfredo Deza arufuredosan at gmail.com
Mon Sep 29 18:25:39 PDT 2008

On Mon, Sep 29, 2008 at 8:56 PM, Ben Finney
<ben+python at benfinney.id.au> wrote:

> "Alfredo Deza" writes:
> > A Python developer and friend of mine told me to start in the good
> > habit of writing tests for my code.
> I encourage you to continue getting advice from this friend :-)
> > Although I find this as good practice, I was wondering when exactly
> > is a good moment (in the coding process) to start writing a test for
> > it.
> I find that the best way to know *how* to write the code is to think
> about how the code is supposed to *behave*.
> Writing a test for the code encourages me to state in unambiguous
> "true or false" statements exactly what the code will do, in terms of
> its observable behaviour on entities outside itself — i.e.,
> designing the API (including function signature, return values,
> access to libraries, etc.).
> This is what I like to call "behaviour-driven development"
> <URL:http://behaviour-driven.org/>; some speak of "test-driven
> development", but I prefer the term "behaviour driven development"
> since the focus is on the immediate goal: The next change I want to
> see in the behaviour of the program.
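
For example, a single true-or-false assertion about observable behaviour might look like this (a minimal sketch using the standard unittest module; the `slugify` function and its behaviour are hypothetical, not from the thread):

```python
import unittest

def slugify(title):
    # Hypothetical function under test: lower-case the title and
    # join its words with hyphens.
    return "-".join(title.lower().split())

class SlugifyBehaviour(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        # Exactly one true-or-false assertion, stated in terms of
        # observable behaviour (the return value), not implementation.
        self.assertEqual(slugify("Hello World"), "hello-world")
```

Run with a single command, e.g. `python -m unittest`, so the whole suite is exercised and reported at once.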
> * Begin with a project under version control, with no outstanding
>  changes against the latest revision, and an automated unit test
>  suite that currently has no failing tests. If you have no tests
>  yet, this is easy :-) but you should be able to add new tests
>  easily and then run a single command to get all of them run and
>  reported.
> * State, out loud either to your programming partner or to your rubber
>  duck <URL:http://c2.com/cgi/wiki?RubberDucking>, the next change in
>  behaviour you want to see in the program. This needs to be a very
>  simple change that can nevertheless be detected from outside the
>  code module by making true-or-false assertions about what the code
>  *does*.
> * Write a new test that exercises the behaviour and makes *exactly
>  one* true-or-false assertion about the behaviour. Exercising the
>  behaviour might require setting up some initial state and/or
>  providing input parameters; the assertion might be about the return
>  value, or about the change in some state, or about the fact that a
>  particular library function was called with specific values.
> * Run the entire test suite and watch the new test fail. If the test
>  doesn't fail, make sure your test is actually testing something of
>  interest! A failure from the new test at this point gives you
>  confidence that, should the behaviour regress, your unit test suite
>  will catch it.
> * Change the program in the simplest, laziest way you can think of to
>  make the entire test suite, including the new test, pass. Don't
>  worry about whether it's repetitive or ugly at this point; just do
>  the simplest thing that could possibly work
>  <URL:http://www.xprogramming.com/Practices/PracSimplest.html>.
> * Run the entire test suite and watch all of them pass. If they don't
>  all pass, you need to repeat changing the code or, sometimes, the
>  test (if e.g. your design is flawed, or your test is testing the
>  wrong thing). I like to have the test suite running continuously in
>  the background, so that as soon as I've made a change in my code I
>  can switch to the test suite window and very quickly see the result
>  without having to run any command.
> * Once the entire test suite is passing, *now* you go back to your
>  code and clean it up. Don't restrict yourself only to the new code
>  you just wrote; if some other area would benefit from cleanup at the
>  same time, and that code is covered by unit tests, now is the time
>  to clean it up. Do *not* change how the code behaves; refactoring is
>  strictly a change in the implementation, not behaviour.
> * Run the entire test suite again, to know that the refactoring didn't
>  break any tests. If any tests fail, you know it was caused by the
>  refactoring work; go back and fix the code (or the tests) until the
>  entire test suite passes.
> * Commit the project to your version control system, describing the
>  change in behaviour you just implemented.
> * Repeat from the top, by either writing a new test to make another
>  specific assertion about the new behaviour, or stating another
>  change in behaviour to be satisfied.
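
Put together, one pass through the cycle above might look like this (a minimal sketch; the `Wallet` class and test names are hypothetical illustrations, not code from the thread):

```python
import unittest

# Step 1: write the new test first, with exactly one assertion.
# Running the suite now fails with a NameError, since Wallet does
# not exist yet -- that failure is the "red" step.
class WalletBehaviour(unittest.TestCase):
    def test_deposit_increases_balance(self):
        wallet = Wallet()
        wallet.deposit(10)
        self.assertEqual(wallet.balance, 10)

# Step 2: the simplest, laziest change that makes the whole suite
# pass (the "green" step). Refactoring, if any, comes only after
# the suite passes, and must not change this observable behaviour.
class Wallet:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount
```

Re-run the entire suite after each change (e.g. `python -m unittest`), then commit, then state the next change in behaviour.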
> --
>  \       "He was the mildest-mannered man / That ever scuttled ship or |
>  `\       cut a throat." —"Lord" George Gordon Noel Byron, _Don Juan_ |
> _o__)                                                                  |
> Ben Finney
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
Ben,

Thanks for the thorough answer and guidance; this is greatly
appreciated. You can only imagine that, since I am just starting out,
this is a lot to take in.
