<div dir="ltr"><br>
<br><br><div class="gmail_quote">On Mon, Sep 29, 2008 at 8:56 PM, Ben Finney <span dir="ltr"><<a href="mailto:ben%2Bpython@benfinney.id.au">ben+python@benfinney.id.au</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="Ih2E3d">"Alfredo Deza" writes:<br>
<br>
> A Python developer and friend of mine told me to start in the good<br>
> habit of writing tests for my code.<br>
<br>
</div>I encourage you to continue getting advice from this friend :-)<br>
<div class="Ih2E3d"><br>
> Although I find this as good practice, I was wondering when exactly<br>
> is a good moment (in the coding process) to start writing a test for<br>
> it.<br>
<br>
</div>I find that the best way to know *how* to write the code is to think<br>
about how the code is supposed to *behave*.<br>
<br>
Writing a test for the code encourages me to state in unambiguous<br>
"true or false" statements exactly what the code will do, in terms of<br>
its observable behaviour on entities outside itself — i.e.,<br>
designing the API (including function signature, return values,<br>
access to libraries, etc.)<br>
<br>
This is what I like to call "behaviour-driven development"<br>
<URL:<a href="http://behaviour-driven.org/" target="_blank">http://behaviour-driven.org/</a>>; some speak of "test-driven<br>
development", but I prefer the term "behaviour-driven development"<br>
since the focus is on the immediate goal: The next change I want to<br>
see in the behaviour of the program.<br>
<br>
* Begin with a project under version control, with no outstanding<br>
changes against the latest revision, and an automated unit test<br>
suite that currently has no failing tests. If you have no tests<br>
yet, this is easy :-) but you should still be able to add new<br>
tests easily and then run a single command to get all of them run<br>
and reported.<br>
<br>
* State, out loud either to your programming partner or to your rubber<br>
duck <URL:<a href="http://c2.com/cgi/wiki?RubberDucking" target="_blank">http://c2.com/cgi/wiki?RubberDucking</a>>, the next change in<br>
behaviour you want to see in the program. This needs to be a very<br>
simple change that can nevertheless be detected from outside the<br>
code module by making true-or-false assertions about what the code<br>
*does*.<br>
<br>
* Write a new test that exercises the behaviour and makes *exactly<br>
one* true-or-false assertion about the behaviour. Exercising the<br>
behaviour might require setting up some initial state and/or<br>
providing input parameters; the assertion might be about the return<br>
value, or about the change in some state, or about the fact that a<br>
particular library function was called with specific values.<br>
<br>
* Run the entire test suite and watch the new test fail. If the test<br>
doesn't fail, make sure your test is actually testing something of<br>
interest! A failure from the new test at this point gives you<br>
confidence that, should the behaviour regress, your unit test suite<br>
will catch it.<br>
<br>
* Change the program in the simplest, laziest way you can think of to<br>
make the entire test suite, including the new test, pass. Don't<br>
worry about whether it's repetitive or ugly at this point; just do<br>
the simplest thing that could possibly work<br>
<URL:<a href="http://www.xprogramming.com/Practices/PracSimplest.html" target="_blank">http://www.xprogramming.com/Practices/PracSimplest.html</a>>.<br>
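As a hypothetical sketch (the `multiply` function is an invented example, not from this thread): if the failing test asserts that `multiply(2, 3)` returns `6`, the simplest lazy change might be:

```python
import unittest

def multiply(a, b):
    # The simplest, laziest change that satisfies the one assertion
    # below; for this single test, even a hard-coded "return 6"
    # would count as "the simplest thing that could possibly work".
    return a * b

class TestMultiply(unittest.TestCase):
    def test_multiply_two_by_three(self):
        self.assertEqual(multiply(2, 3), 6)
```

With this change in place, re-running the suite should show the previously failing test now passing.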
<br>
* Run the entire test suite and watch all of them pass. If they don't<br>
all pass, you need to repeat changing the code or, sometimes, the<br>
test (if e.g. your design is flawed, or your test is testing the<br>
wrong thing). I like to have the test suite running continuously in<br>
the background, so that as soon as I've made a change in my code I<br>
can switch to the test suite window and very quickly see the result<br>
without having to run any command.<br>
<br>
* Once the entire test suite is passing, *now* you go back to your<br>
code and clean it up. Don't restrict yourself only to the new code<br>
you just wrote; if some other area would benefit from cleanup at the<br>
same time, and that code is covered by unit tests, now is the time<br>
to clean it up. Do *not* change how the code behaves; refactoring is<br>
strictly a change in the implementation, not behaviour.<br>
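As an illustrative sketch of that distinction (again with an invented `multiply` example, not from this thread): the implementation changes during refactoring, but the observable behaviour and the assertion covering it do not:

```python
import unittest

# Before cleanup: a clumsy but working implementation, already
# covered by the test below.
def multiply_before(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

# After cleanup: same observable behaviour, simpler implementation.
def multiply_after(a, b):
    return a * b

class TestMultiply(unittest.TestCase):
    def test_multiply_two_by_three(self):
        # The same assertion must hold before and after refactoring,
        # since refactoring changes implementation, not behaviour.
        self.assertEqual(multiply_before(2, 3), 6)
        self.assertEqual(multiply_after(2, 3), 6)
```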
<br>
* Run the entire test suite again, to know that the refactoring didn't<br>
break any tests. If any tests fail, you know it was caused by the<br>
refactoring work; go back and fix the code (or the tests) until the<br>
entire test suite passes.<br>
<br>
* Commit the project to your version control system, describing the<br>
change in behaviour you just implemented.<br>
<br>
* Repeat from the top, by either writing a new test to make another<br>
specific assertion about the new behaviour, or stating another<br>
change in behaviour to be satisfied.<br>
<br>
--<br>
\ "He was the mildest-mannered man / That ever scuttled ship or |<br>
`\ cut a throat." —"Lord" George Gordon Noel Byron, _Don Juan_ |<br>
_o__) |<br>
<font color="#888888">Ben Finney<br>
<br>
<br>
</font><br>_______________________________________________<br>
testing-in-python mailing list<br>
<a href="mailto:testing-in-python@lists.idyll.org">testing-in-python@lists.idyll.org</a><br>
<a href="http://lists.idyll.org/listinfo/testing-in-python" target="_blank">http://lists.idyll.org/listinfo/testing-in-python</a><br>
<br></blockquote></div>Ben,<br><br>Thanks for the thorough answer + guidance. <br><br>This is greatly appreciated.<br><br>You can only imagine that since I am starting out this is a lot to digest...!<br><br><br><br>alfredo <br>
</div>