I think there is someone working on #4 -> https://github.com/gabrielfalcao/HTTPretty/issues/10

2012/12/9 holger krekel <holger@merlinux.eu>:

On Sun, Dec 09, 2012 at 14:49 -0500, Alfredo Deza wrote:
> For the longest time, I've been working with gigantic applications where
> testing was slow because everything depended on the whole application
> running.
>
> Even if you had an isolated, granular test that didn't need a database, a
> database would be created for you. And you would have to wait for it.
>
> Right now things are different in my current environment. A few of the core
> functionalities of this application have been decoupled and are now
> independent HTTP services. This is great for modularity, but poses a
> problem for testing.
>
> The current approach for running a test in application Foo that talks to
> HTTP service Bar is to run the Bar HTTP service as part of some setup.
>
> This seems straightforward, it is only an HTTP service after all, but what
> happens when that service becomes expensive to run? What if it needs a
> database and other things in order to run properly? What if it gets to the
> point where you need to wait for several seconds just to bring it up for
> your test that takes 0.01 seconds to complete?
>
> If Bar keeps getting updated, having Foo test Bar against a number of
> versions is a pain multiplier.
>
> I've heard a few ideas, but I am interested in what you guys have
> experienced as a good approach. Here is the list of things I've heard,
> including some that I am using:
>
> 1. Bring up the real HTTP service with whatever dependencies it needs and
> test against it.
> 2. Mock everything against the HTTP service with canned responses.
> 3. Use #1 for recording answers and run subsequent tests against the
> recorded responses.
> 4. Have "official" mocks from the HTTP service where behavior is maintained
> but setup is minimal.
>
> All of the above have some caveat I don't like: #1 is slow and painful to
> deal with; #2 will always make your tests pass even if the HTTP service
> changes behavior; #3 only works in certain situations, and you still need
> at least a few slow setups plus a proper setup of the whole HTTP service;
> #4 is a problem if the HTTP service itself wasn't crafted with 3rd-party
> testing in mind, so that mocks can be provided without reproducing the
> actual app again.
>
> Any ideas or suggestions are greatly appreciated!

Here are my 2c - I am also interested in suggestions and comments, however :)

In theory, the #3 "run functional tests, record request/reply traffic" idea
looks promising. Not really practical, I am afraid, though. For example, a
previously recorded traffic pattern probably does not help much if I am
writing or modifying tests - and that's something I often do. If that
requires going functional most of the time, nothing is won.
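
That said, the mechanism is simple enough to sketch. A rough cut, with a
made-up cassette path and fetch boundary (dedicated record/replay tools
automate roughly this):

    # Hypothetical record/replay helper: the first run hits the real Bar
    # service and records the response; later runs replay the recording
    # without needing Bar at all.
    import json
    import os
    import urllib.request

    def recorded_get(url, cassette):
        if os.path.exists(cassette):
            # replay: no running service required
            with open(cassette) as f:
                return json.load(f)["body"]
        # record: requires the real service once
        body = urllib.request.urlopen(url).read().decode()
        with open(cassette, "w") as f:
            json.dump({"url": url, "body": body}, f)
        return body

The os.path.exists branch is exactly the objection above: a new or modified
test has no recording yet, so you are back to running the real thing.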

#4 - having "official mocks" would be great. It is always a big plus when a
service/framework thinks about providing nice testing facilities and keeps
them in sync. That is often not the case, I am afraid.
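
When a project does provide this, it can be as small as an in-process fake
shipped next to the real client. A sketch - every name below is invented:

    # Hypothetical "official mock": the Bar project ships a fake that
    # preserves the documented behavior but needs no database or network.
    class FakeBarClient:
        """In-memory stand-in for the real Bar HTTP client."""

        def __init__(self):
            self._items = {}

        def put(self, key, value):
            # same interface as the real client
            self._items[key] = value

        def get(self, key):
            if key not in self._items:
                # mirror the real service's 404 behavior
                raise KeyError(key)
            return self._items[key]

The catch is keeping FakeBarClient's behavior in sync with the real service,
and that is the part that tends not to happen.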

#2 - mocking is something I really only like to do myself in a very limited
manner. Otherwise I end up having to maintain my tests as much as the actual
application structure, and refactoring becomes much less fun. Most of the
time I just don't want to spend my hacking time this way.
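
For what it's worth, with HTTPretty from the link above a canned response is
only a few lines - the endpoint and payload below are invented, and every
such body is something you now maintain by hand:

    # Canned-response mocking with HTTPretty (socket-level interception).
    import httpretty
    import requests

    @httpretty.activate
    def test_foo_reads_bar_status():
        httpretty.register_uri(httpretty.GET,
                               "http://bar.example.com/status",
                               body='{"ok": true}',
                               content_type="application/json")
        assert requests.get("http://bar.example.com/status").json() == {"ok": True}

This is also Alfredo's #2 caveat in code form: the test keeps passing even
after the real Bar changes its payload.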

This leaves #1 - living with setting up the stuff and optimizing that. If
each test is slow, I usually try to use distributed testing. But if just the
setup is slow, as in your case, I'd probably try to keep processes/services
alive _between_ test runs and have my test runner / configuration evolve to
manage that. Something like: start this service, memorize that it's started,
kill it in 10 minutes if it's not used anymore; on the next test run's
startup, check whether the long-running fixture is already running. If I had
a slow many-second service startup needed for the tests, and my tests were
very fast, I'd go down this route.
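
In pytest terms a first cut could look like the following - the command,
port, and timings are placeholders, and the "kill it in 10 minutes" reaper
would have to live outside the test process:

    # Sketch of a "reuse it if it's already up" session fixture.
    import socket
    import subprocess
    import time

    import pytest

    BAR_PORT = 8532  # placeholder port for the Bar service

    def _port_open(port):
        with socket.socket() as s:
            return s.connect_ex(("127.0.0.1", port)) == 0

    @pytest.fixture(scope="session")
    def bar_service():
        if not _port_open(BAR_PORT):
            # not running yet: start it and leave it running afterwards
            subprocess.Popen(["bar-serve", "--port", str(BAR_PORT)])
            for _ in range(100):  # wait until it accepts connections
                if _port_open(BAR_PORT):
                    break
                time.sleep(0.1)
        # deliberately no teardown: the service outlives the test run
        return f"http://127.0.0.1:{BAR_PORT}"

Subsequent runs find the port already open and skip the many-second startup
entirely.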

best,
holger

_______________________________________________
testing-in-python mailing list
testing-in-python@lists.idyll.org
http://lists.idyll.org/listinfo/testing-in-python

-- 
Vanderson Mota dos Santos