[twill] Testing a web API with twill?

Titus Brown titus at caltech.edu
Mon Apr 9 20:36:55 PDT 2007


-> I'm trying to find a web unit and load testing tool that will let me
-> test URL-based APIs. This is basically a parameterized URL that will
-> return JSON-encoded results. I would like to be able to easily set up
-> 100+ test cases that all test the same API, but with different URL
-> parameter values and different expected results. I would like to be able to
-> flexibly configure these test cases, ideally as a text table or other
-> simple-to-create-and-parse data structure, and generate a report of
-> pass/fail cases for each test. 
-> 
-> I would really like to use something like twill, but I'm getting stuck
-> on 2 fronts:
-> 
-> 1.	How to configure a single test case. The key here is that to
-> check for valid results I need to deserialize the returned JSON
-> result. Twill obviously doesn't have this built in, so I assume I will
-> have to call a twill script from Python and then record its results for
-> subsequent processing. Is that right? Should the calling code itself
-> then be part of a unit testing framework?
-> 2.	How to configure the series of test cases. I would like to avoid
-> writing a separate test class / function for each case, and instead
-> somehow iterate over the param/result pairs. I can't figure out how to
-> do this without having the entire suite abort after the first test
-> failure. I would like to run all 100 tests, and have it report back
-> which ones succeeded and which ones failed.

Hey Ramon,

I think a combination of twill and nose would work quite well.

First, you would need to write some JSON parsing code; I don't know much
about JSON, but I'm sure there are modules out there.  However, twill is
written in Python, so it should be completely trivial to write a master
script that uses twill to fetch the JSON result and then parses it with
a Python function.
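As a minimal sketch of that idea: the checking itself is just a
deserialize-and-compare step.  (The URL and the twill calls in the
comment are illustrative; the stdlib 'json' module stands in for
whatever JSON library you pick.)

```python
import json


def check_json_response(raw, expected):
    """Deserialize a raw JSON response body and compare to the
    expected Python value."""
    return json.loads(raw) == expected


# In a real test the raw body would come from twill, along the lines of
# (untested sketch; URL is hypothetical):
#
#   import twill.commands
#   twill.commands.go("http://example.com/api?foo=bar")
#   raw = twill.get_browser().get_html()

print(check_json_response('{"status": "ok", "count": 3}',
                          {"status": "ok", "count": 3}))  # True
```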

Second, to configure a series of test cases, you can use the 'yield'
trick in nose.  If you write something like this:

def repeat_fn(params, expected_result):
    result = do_something(params)
    assert result == expected_result

def test_function():
    # nose treats a test function that yields as a generator of test cases
    for (params, result) in test_cases:
        yield repeat_fn, params, result

nose will understand that test_function is a test case generator that is
yielding test cases, and run 'repeat_fn(params, result)' for each
yielded tuple.  Each yielded case passes or fails independently, so one
failure doesn't abort the rest of the suite.

You can take a look at

	http://darcs.idyll.org/~t/projects/figleaf/tests/__init__.py

to see an actual test that uses this to run through a bunch of files and
compare expected results to actual results.
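To tie the two halves together, here's one way the "text table" of cases
could feed the generator.  This is just a sketch under my own
assumptions: a tab-separated table with one "params TAB expected-JSON"
line per case, and hypothetical helper names (load_cases, check_case).

```python
import json

# Hypothetical table of test cases; in practice this would live in a
# plain text file alongside the tests.  Format: params <TAB> expected JSON.
CASES = """\
foo=1\t{"result": 1}
foo=2\t{"result": 2}
"""


def load_cases(text):
    """Parse the tab-separated table into (params, expected) pairs,
    deserializing the expected-result column from JSON."""
    cases = []
    for line in text.splitlines():
        if not line.strip():
            continue
        params, expected = line.split("\t", 1)
        cases.append((params, json.loads(expected)))
    return cases


def check_case(params, expected):
    # In a real run: build the URL from params, fetch it with twill,
    # json-decode the body, and assert it equals 'expected'.
    pass


def test_all_cases():
    # nose runs each yielded tuple as a separate test, so all 100+
    # cases run and get reported individually.
    for params, expected in load_cases(CASES):
        yield check_case, params, expected
```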

Does this make any sense?

cheers,
--titus

p.s. 'nose' is a unit test discovery and execution framework for Python.
