[TIP] how to test "impossible" events
gheorghe_gheorghiu at yahoo.com
Mon Sep 29 19:30:04 PDT 2008
--- On Mon, 9/29/08, Andrew Dalke <dalke at dalkescientific.com> wrote:
> I'm talking to a Tomcat server. The back-end Java servlet was using
> the wrong port number to speak to another server, which caused a
> NullPointerException. This was converted into a SOAP error response,
> encoded as an MTOM message and sent with an HTTP status of
> 500 - "Internal Server Error".
> Using the server you just pointed out:
>
>   >>> import urllib2
>   >>> f =
>   Traceback (most recent call last):
>     File "<stdin>", line 1, in
>     python2.5/urllib2.py", line 353, in _call_chain
>       result = func(*args)
>     python2.5/urllib2.py", line 499, in http_error_default
>       raise HTTPError(req.get_full_url(), code, msg, hdrs,
>   urllib2.HTTPError: HTTP Error 500: Internal Server Error
> I couldn't figure out how to suppress that error message - urllib/
> urllib2 is a confusing mess of a module - so I ended up writing to
> the httplib module, which meant using some of the low-level
> Since I had to write my own (limited) MTOM parser, it also ended up
> being easier with httplib than urllib2.
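(As an aside: the HTTPError that urllib2 raises is itself a file-like response object, so you can catch it and read the 500 body - the SOAP fault - rather than suppressing it. A sketch, with a fake opener standing in for urllib2.urlopen so nothing here touches the network:)

```python
import io

try:
    from urllib2 import HTTPError          # Python 2
except ImportError:
    from urllib.error import HTTPError     # Python 3

# HTTPError doubles as a file-like response, so catching it gives you
# the body of the 500 reply instead of just the status line.
def read_even_on_500(opener, url):
    try:
        return opener(url).read()
    except HTTPError as err:
        return err.read()

# Fake opener in place of urllib2.urlopen, raising a canned 500 whose
# body is a (made-up) SOAP fault:
def fake_opener(url):
    raise HTTPError(url, 500, "Internal Server Error", {},
                    io.BytesIO(b"<soap:Fault/>"))

body = read_even_on_500(fake_opener, "http://example.com/service")
```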
Well, as I said, what you're doing here is testing the remote Tomcat server, and not the code you wrote. To be more precise, you're testing the *integration* of your code with the Tomcat server. It's a valuable test, but it's not a unit test for your code. Doesn't mean that you shouldn't write and run it, but you would probably run it much less often than your code's unit tests.
> > A more interesting test is what happens at the higher-level, i.e.
> > with the actual payload/data that your code receives. At that level
> > you can write unit tests against corner cases and see how your code
> > reacts to various abnormal data sets.
> I can, but that's my question. It seems to take a lot more work to
> set up automated tests for those not-known-to-exist corner cases than
> doing the manual tests that just check to see that the "raise
> AssertionError('something bad happened')"-type code path doesn't
> have a typo.
Just having 2 unit tests per piece of functionality, one that tests your code in the presence of valid data, and one that tests it in the presence of invalid/malformed data, would go a long way IMO in making sure your code is fairly solid. You can then go to more extremes in terms of corner cases if you want.
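As a concrete sketch of that valid/malformed pair (parse_status and the "STATUS value" line format are made up for illustration):

```python
import unittest

# Hypothetical function under test: parses a "STATUS value" line.
def parse_status(line):
    parts = line.split()
    if len(parts) != 2 or parts[0] != "STATUS":
        raise ValueError("malformed status line: %r" % line)
    return parts[1]

class ParseStatusTests(unittest.TestCase):
    def test_valid_data(self):
        self.assertEqual(parse_status("STATUS ok"), "ok")

    def test_malformed_data(self):
        self.assertRaises(ValueError, parse_status, "garbage")
```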
> > A better test in this case would be to verify that the
> > document that you generate contains the data you
> Yeah, I haven't gotten to testing the Excel results yet, other than
> visual inspection. How do I make sure that the text in column D
> which contains more than 5 characters is rotated
> There are ways to do it, and perhaps this is simply bad habits of
> mine learned over long years of BASIC programming or some other
> reason, but it feels easier to check the output visually than trying
> to figure out some automated system for this. In any case, that's
> diverging from my question.
> > Just verifying an exit code will not tell you anything
> > exciting about the correctness of your code.
I wasn't necessarily thinking about making sure a cell is rotated properly. I was thinking about testing the contents of each cell at least. Having *some* automated tests is much better than relying on eyeballing your Excel spreadsheet.
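A minimal sketch of cell-level checking, assuming you can read the generated sheet back into a list of rows (e.g. with the third-party xlrd package; cell formatting such as rotation is out of scope here, and the data is made up):

```python
# Compare a sheet (list of rows) cell by cell against expected data and
# report every mismatch as (row, col, got, expected).
def diff_cells(rows, expected_rows):
    mismatches = []
    for r, (row, want_row) in enumerate(zip(rows, expected_rows)):
        for c, (got, want) in enumerate(zip(row, want_row)):
            if got != want:
                mismatches.append((r, c, got, want))
    return mismatches

# Example: one wrong cell in row 1, column 0.
sheet    = [["name", "count"], ["widget", 3]]
expected = [["name", "count"], ["gadget", 3]]
problems = diff_cells(sheet, expected)
```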
> Checking the error code has proved useful before:
>   - paths change and directories get reorganized. The exit code
>     tells me if the program didn't start.
A better test would be to have 2 unit tests for the method of your code which deals with those paths/directories. One unit test for a correct path, and one for an incorrect path.
But maybe you're talking about a piece of code you didn't write, and that you're not able to unit test. In this case, I agree, checking the command-line script black-box style, by looking at its output, may be the only way to go.
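For instance (find_program and the error it raises are hypothetical names), that pair of tests could look like:

```python
import os
import unittest

# Hypothetical helper that the script uses to locate the external
# program before launching it.
def find_program(path):
    if not os.path.exists(path):
        raise RuntimeError("program not found: %s" % path)
    return path

class FindProgramTests(unittest.TestCase):
    def test_existing_path(self):
        # os.__file__ is just a path known to exist on any installation.
        self.assertEqual(find_program(os.__file__), os.__file__)

    def test_missing_path(self):
        self.assertRaises(RuntimeError, find_program, "/no/such/dir/prog")
```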
> Perhaps I can rephrase my question. I know how to write mock systems
> for all of the things I've talked about. The question I've been
> thinking about as I write this code is the trade-off:
>   - do I write code to test for cases that aren't known to occur,
>     based on experience that other similar code has done
>     things during rare circumstances?
>   - if so, do I test the code paths? Should I
>     - do manual tests that the code path executed
>       (The tests are always similar to:
>           if impossible_condition:
>               raise AssertionError("message: %s" % value)
>       so there's very little that can go wrong with
>     - or should I automate those tests?
>   - How much time should I spend setting up mock tests for
>     something that isn't known to ever occur?
> I know that, for example, SQLite has a filesystem isolation layer
> designed to test rare cases. I'm quite impressed with the code
> coverage. But at least some of those rare tests are still for errors
> that may occur and have occurred on some hardware. I'm talking
> about code where there's no evidence that a given failure mode will
> ever occur.
I would say that the best way to figure out the answer to these questions is to run and observe the code in circumstances as close to production as possible, and do exploratory manual testing. If you do cause such an exceptional condition to occur, and the code bombs badly, then it's time to add an automated test for that condition, and fix your code at the same time. Otherwise, YAGNI.
More information about the testing-in-python mailing list