[TIP] testing against HTTP services
sienkiew at stsci.edu
Mon Dec 10 14:59:19 PST 2012
On 12/09/12 14:49, Alfredo Deza wrote:
> This seems straightforward, it is only an HTTP service after all, but what
> happens when that service becomes expensive to run? What if it needs a
> database and other things in order to run properly? What if it gets to the
> point where you need to wait for several seconds just to bring it up for
> your test that takes 0.01 seconds to complete?
Your phrasing suggests to me that you want to set up the HTTP service and tear it down for every test. I would not have thought of doing that.
In one project, I set up the web server once for an entire test run. I have ~22k tests, though only ~3k actually talk to the web server. The startup time of the web server is small compared to the total elapsed time. I have a test database that already exists, so I have no overhead for that. I don't need to initialize the database before the test, because the tests are looking at database records that are created as part of the test scenario.
In another project, I use multiple stages of testing, where one stage inserts data (testing some aspects of collecting the data) and another tests that the correct data is found in the database (testing both that the correct data was collected and that it was stored properly). This project is not as fully developed, but it basically starts with either a new database or a series of "delete from tablename" statements, depending on which database engine is in use. (sqlite3 databases are _very_ easy to create.) Again, the time to start the server and initialize the database is small compared to the actual testing.
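As a sketch of how cheap that setup can be with sqlite3 (the table and column names here are made up for illustration, not taken from the project above):

```python
import os
import sqlite3
import tempfile

# Creating a brand-new sqlite3 test database is a single call: the
# database is just a file, no server process to manage.
db_path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE observations (id INTEGER PRIMARY KEY, value TEXT)")
conn.execute("INSERT INTO observations (value) VALUES ('stale row')")
conn.commit()

# For engines where one database persists between runs, the equivalent
# reset is just a DELETE per table before the tests start.
conn.execute("DELETE FROM observations")
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM observations").fetchone()[0]
print(count)  # 0 -- a clean slate for the next stage of tests
conn.close()
```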
In both cases, I find it useful to run many tests concurrently.
Running each transaction in its own instance of the web server is good for isolating the effects of one test from another, but in a web app I am usually more interested in what happens with lots of transactions. e.g. Are there memory leaks? Does it get unreasonably slow under heavy load? One part of my load testing is just to run many instances of the test suite at the same time.
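That kind of crude load test can be as simple as launching several copies of the suite at once and waiting for all of them. The command below is a placeholder (a no-op Python process) standing in for whatever actually runs your HTTP tests:

```python
import subprocess
import sys

def run_suites_concurrently(cmd, copies=4):
    # Launch `copies` identical processes, then wait for all of them and
    # collect their exit codes. Nonzero codes mean a copy failed.
    procs = [subprocess.Popen(cmd) for _ in range(copies)]
    return [p.wait() for p in procs]

# Placeholder "suite": each copy exits successfully without doing work.
codes = run_suites_concurrently([sys.executable, "-c", "pass"], copies=4)
print(codes)  # [0, 0, 0, 0] when every copy passes
```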
I do not use nose or py.test setup/teardown features to set up my database and start my web server. I have a shell script that creates the test environment, then uses various test frameworks to run the tests. It does not automatically shut down the web server or database after the test, so that the test environment is still available for diagnosis. (It does shut down a web server that the continuous integration system finds still running from a previous run.)
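A minimal sketch of that driver, in Python rather than shell so it is self-contained here; the server and test commands are stand-ins, not the real project's:

```python
import subprocess
import sys

# Bring up the "service" (here: a process that just sleeps, standing in
# for a real web server).
server = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])

# Run the "tests" (a no-op placeholder) against the running service.
test_result = subprocess.run([sys.executable, "-c", "pass"]).returncode

# Deliberately no server.terminate() after the tests: the environment
# stays up so a failed run can be inspected. A later CI run would notice
# the leftover process and kill it before starting fresh.
still_running = server.poll() is None
print(test_result, still_running)  # 0 True

server.terminate()  # only so this illustration doesn't leak a process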
I've never really considered mocking or record/playback of the HTTP streams, but that is usually because part of the code that I want to test is actually IN the web server. There are tests of that same code that call directly into the computational code, but I'm still interested in whether I can send a transaction to the web interface and get a good answer.
In principle, you can stub out the web transaction (what we called "mocking" before it got a fancy new name). I could make a web server that has the compute parts stubbed/mocked, but sometimes it is more work to use a stub/mock than to use the real thing.
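A minimal sketch of such a stubbed web server using only the standard library: the HTTP plumbing is real, but the compute part is replaced by a canned answer. The endpoint path and payload are invented for the example:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Canned answer in place of the real computation.
        body = b'{"result": 42}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the test output quiet

# Port 0 asks the OS for any free port; serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/compute" % server.server_port
reply = urllib.request.urlopen(url).read()
print(reply)  # b'{"result": 42}'
server.shutdown()
```

Whether this is worth it depends on the case: as noted above, sometimes wiring up the stub is more work than just running the real thing.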