[TIP] testing against HTTP services

holger krekel holger at merlinux.eu
Sun Dec 9 12:26:54 PST 2012


On Sun, Dec 09, 2012 at 14:49 -0500, Alfredo Deza wrote:
> For the longest time, I've been working with gigantic applications where
> testing was slow because everything depended on the whole application
> running.
> 
> Even if you had an isolated, granular test that didn't need a database, a
> database would be created for you. And you would have to wait for it.
> 
> Right now things are different in my current environment. A few of the core
> functionalities of this application have been decoupled and are now
> independent HTTP services. This is great for modularity, but poses a
> problem for testing.
> 
> The current approach for running a test in application Foo that talks to
> HTTP service Bar is to run the Bar HTTP service as part of some setup.
> 
> This seems straightforward, it is only an HTTP service after all, but what
> happens when that service becomes expensive to run? What if it needs a
> database and other things in order to run properly? What if it gets to the
> point where you need to wait for several seconds just to bring it up for
> your test that takes 0.01 seconds to complete?
> 
> If Bar keeps getting updated, having Foo test Bar against a number of
> versions is a pain multiplier.
> 
> I've heard a few ideas, but I am interested in what you guys have
> experienced as a good approach. Here is the list of things I've heard,
> some of which I am already using:
> 
> 1. Bring up the real HTTP service with whatever dependencies it needs, and
> test against it.
> 2. Mock everything against the HTTP service with canned responses.
> 3. Use #1 once to record answers, then run subsequent tests against
> the recorded responses.
> 4. Have "official" mocks from the HTTP service where behavior is maintained
> but setup is minimal.
> 
> All of the above have some caveat I don't like: #1 is slow and painful to
> deal with; #2 will keep your tests passing even if the HTTP service
> changes behavior; #3 only works in certain situations, and you still need
> at least a few slow runs plus a proper setup of the whole HTTP
> service; #4 is a problem if the HTTP service itself wasn't crafted with 3rd
> party testing in mind, so that mocks can be provided without reproducing
> the actual app again.
> 
> Any ideas or suggestions are greatly appreciated!

Here are my 2c - though I am also interested in suggestions and comments :)

In theory, the #3 "run functional tests, record request/reply traffic" idea
looks promising.  I am afraid it's not really practical, though. For example,
a previously recorded traffic pattern probably does not help much if I
am writing or modifying tests - and that's something I often do.  If that
still requires going through the full functional setup most of the time,
nothing is won.
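
As a concrete sketch of the record/replay idea in Python, the vcrpy
library supports exactly this cassette-style workflow; the cassette path,
URL and test below are invented for illustration:

    import vcr
    import requests

    # First run: records Bar's real responses into the cassette file.
    # Later runs: replays them without contacting the service at all.
    # The cassette path and URL are placeholders for this sketch.
    @vcr.use_cassette("fixtures/bar_users.yaml")
    def test_foo_lists_users():
        resp = requests.get("http://bar.example.com/users")
        assert resp.status_code == 200
        assert "users" in resp.json()

Note that the staleness problem remains: a brand-new or modified test has
no cassette yet, so you are back to running the real service for it.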

#4 - having "official mocks" would be great.  It is always a big
plus if a service/framework thinks about providing nice testing
facilities and keeping them in sync.  That is often not the case, I am afraid.
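
Purely for illustration, such an official mock might be shipped and used
like this - the bar.testing module and the FakeBar class are entirely
invented names:

    # Entirely hypothetical: an in-process fake that the Bar project
    # itself would ship and keep in sync with the real service.
    from bar.testing import FakeBar

    def test_foo_handles_unknown_user():
        bar = FakeBar()                     # no database, no network
        bar.add_user("alice")
        assert bar.get_user("bob") is None  # same semantics as real Bar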

#2 - mocking is something I really only like to do in a very limited manner.
Otherwise I end up having to maintain my tests as much as the actual
application structure, and refactoring becomes much less fun.  Most of
the time I just don't want to spend my hacking time this way.
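
For comparison, a minimal sketch of that limited kind of mock, using
unittest.mock (the foo.client module and its fetch_users function are
invented names for a module under test):

    from unittest import mock  # the standalone "mock" package before 3.3

    import foo.client  # hypothetical module under test

    def test_fetch_users_survives_bar_outage():
        # Patch only the single outgoing call.  The canned response
        # keeps the test fast, but it will keep passing even if the
        # real Bar changes behavior - the caveat mentioned above.
        canned = mock.Mock(status_code=503)
        with mock.patch("foo.client.requests.get", return_value=canned):
            assert foo.client.fetch_users() == []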

This leaves #1 - living with setting up the stuff and optimizing that.
If each test is slow, I usually try to use distributed testing.  But if just
the setup is slow, as in your case, I'd probably try to keep
processes/services alive _between_ test runs and have my test runner /
configuration evolve to manage that.  Something like "start this
service, remember that it's started, kill it in 10 minutes if it's not used
anymore; at the start of the next test run, check whether the long-running
fixture is already available".  If I have a slow many-second service
startup needed for the tests, and the tests themselves are very fast,
I'd go down this route.
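
A rough sketch of that idea as a pytest session fixture - the port, the
service command line and the readiness probe are all assumptions, and the
delayed "kill after 10 idle minutes" part would need a separate watchdog
not shown here:

    import socket
    import subprocess
    import time

    import pytest

    BAR_PORT = 8531                  # arbitrary port for this sketch
    BAR_CMD = ["bar-service", "--port", str(BAR_PORT)]  # invented command

    def _bar_is_up(port):
        # Cheap readiness probe: can we open a TCP connection?
        try:
            socket.create_connection(("127.0.0.1", port), timeout=0.2).close()
            return True
        except OSError:
            return False

    @pytest.fixture(scope="session")
    def bar_service():
        # Reuse a Bar left alive by a previous test run; only pay the
        # many-second startup when nothing is listening.  There is
        # deliberately no teardown - killing the service after ten
        # idle minutes would be the job of a separate watchdog.
        if not _bar_is_up(BAR_PORT):
            subprocess.Popen(BAR_CMD)
            for _ in range(100):     # poll up to ~10s for readiness
                if _bar_is_up(BAR_PORT):
                    break
                time.sleep(0.1)
        return "http://127.0.0.1:%d" % BAR_PORT

Because the fixture does not kill what it finds already running, repeated
test runs after the first one only pay the cheap port probe.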

best,
holger


