[TIP] A rare philosophical thought
jnoller at gmail.com
Fri Aug 1 11:21:26 PDT 2008
On Fri, Aug 1, 2008 at 1:28 PM, C. Titus Brown <ctb at msu.edu> wrote:
> So, I just posted a reply to someone new (or returning) to Python,
> and one of his complaints was this:
> The Python gurus recommend unit testing to make sure code is solid.
> That's great. If I wanted to write dozens of lines of boilerplate
> code in order to make sure stuff worked, I'd have stuck with C++. I want
> to write less code and be confident in the belief that that code is
> correct and error free.
> My response was this:
> Your take on unit testing seems just plain wrong. I know of no useful
> language that can prevent the majority of programming errors without
> some form of actually running the code, a.k.a. "testing". You might
> think YMMV, but you're almost certainly wrong.
> For some reason, this was the first time I'd really thought of things
> this way: "testing" is really just "running the code", under actual or
> likely-to-be actual circumstances. The quality of your test effort can
> be measured by how reflective it is of the actual circumstances under
> which the code will be used, and the cost of the mismatches.
> C. Titus Brown, ctb at msu.edu
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
I forgot to reply-all :)
I'd tend to strongly agree: which is why I commonly make the mental
distinction that unit tests are there to save me time/bellyache as a
*developer* - but as a tester - tests which don't reflect actual
usage/behavior of the product as a whole are not as useful.
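The developer-facing kind of test I mean might look like this minimal
sketch (parse_record is a hypothetical function, not from anyone's
actual code):

```python
import unittest

# Hypothetical unit under test -- stands in for any small piece of code.
def parse_record(line):
    """Split a 'key=value' line into a (key, value) tuple."""
    key, _, value = line.partition("=")
    return key.strip(), value.strip()

class ParseRecordTest(unittest.TestCase):
    # Developer-facing unit tests: fast, isolated, and there to catch
    # regressions while the code is being changed.
    def test_simple_pair(self):
        self.assertEqual(parse_record("name = titus"), ("name", "titus"))

    def test_missing_separator(self):
        # partition() leaves the value empty when '=' is absent.
        self.assertEqual(parse_record("name"), ("name", ""))

if __name__ == "__main__":
    unittest.main()
```

Cheap to write and run, which is exactly the point - they pay the
developer back, whether or not they say much about the shipped product.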
It's a partial overlap with the white vs. black box distinction: I can
test the methods within the code (unit tests, white box), I can test
the compiled binary against simulated or spoofed input/output (black
box), or I can test the entire system as a whole in as close to a
real-world environment as possible.
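The middle, black-box kind drives the program purely through its
external interface; a minimal sketch, using the Python interpreter
itself as a stand-in for whatever binary actually ships:

```python
import subprocess
import sys

def run_program(args):
    """Run the program under test as an external process and capture
    its output -- no knowledge of its internals is used."""
    # sys.executable plus a tiny -c script stands in for the real binary.
    result = subprocess.run(
        [sys.executable, "-c",
         "import sys; print(sum(int(a) for a in sys.argv[1:]))"] + args,
        capture_output=True, text=True,
    )
    return result.returncode, result.stdout.strip()

# Feed it spoofed input and check only the externally observable output.
code, out = run_program(["2", "3", "5"])
assert code == 0 and out == "10"
```

The test only sees exit codes and output streams, which is what makes
it black box - swap in the real executable and nothing else changes.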
I personally put more "mental value" on the last one - you're not
shipping a component, you're shipping a bunch of components which
have to interact with one another in sometimes "interesting" ways.
Lack of focus on running your code in this real-world, user-style
system means that when you do ship, you'll tend to find painfully
obvious bugs caused by interactions you never checked for.
This is also why I tend to put a lot of stock in "stress" style tests:
don't just test to the level you *think* a user will push the
application to - push it far beyond those limits. Users don't care
that you didn't test your application to 1.2 billion records or
messages/second; they just care that when they pushed it that far, it
broke.
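A stress test along those lines deliberately overshoots the expected
load; a minimal sketch against a hypothetical in-memory message queue
(the 10,000-message "expected peak" is an assumed figure, not anyone's
real limit):

```python
from collections import deque
import time

def stress_queue(n_messages):
    """Push far more messages through a queue than any 'reasonable'
    user would, and check nothing is lost along the way."""
    q = deque()
    start = time.perf_counter()
    for i in range(n_messages):
        q.append(i)
    drained = 0
    while q:
        q.popleft()
        drained += 1
    elapsed = time.perf_counter() - start
    return drained, elapsed

# Overshoot the expected load rather than stopping at it.
expected_peak = 10_000
drained, elapsed = stress_queue(expected_peak * 100)
assert drained == expected_peak * 100  # nothing dropped on the floor
print(f"{drained} messages in {elapsed:.2f}s")
```

The interesting output isn't the pass/fail - it's where the numbers
start to bend as you keep multiplying the load.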
Users are tricky creatures :) That's why, as testers, we need to push
the boundaries and prove out the usage patterns, even the tight corner
cases.