No subject

Fri Mar 25 01:15:04 PDT 2011

objects to record.  It uses a proxy by default but you can turn it off
with a flag to operate on the real object.  I've never tried to use
this feature though.

> As for the webservice, would it not save some time and effort to have
> the real test and the mock test be the same test, and the mock
> information updated directly from the "real test"?

It would be hard and perhaps impossible to test destructive web
service operations like deleting or changing data.  Even if the web
service were in a sandbox, you'd have to rely on the state of the
external sandbox.  Additionally, it's hard to reproduce exception
handling -- i.e. how your code handles exceptions returned from the
web service.
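
That exception path is exactly where a mock earns its keep.  As a
sketch (the client class and exception name here are hypothetical, not
from any real service library), you can force the failure with the
stdlib's unittest.mock and assert on how your code reacts:

```python
# Hypothetical example: ServiceError and the client's delete() method
# are made up for illustration; substitute your real client's API.
from unittest import mock


class ServiceError(Exception):
    """Stand-in for an exception the web service client might raise."""


def delete_user(client, user_id):
    """Code under test: report failure instead of propagating it."""
    try:
        client.delete(user_id)
        return True
    except ServiceError:
        return False


def test_delete_user_handles_service_error():
    client = mock.Mock()
    # Make the fake service blow up, which is hard to arrange on demand
    # against a real (even sandboxed) endpoint.
    client.delete.side_effect = ServiceError("boom")
    assert delete_user(client, 42) is False
```

No sandbox state to manage, and the failure is reproduced on every run.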

But, of course, it always depends on the situation.  I worked once on
a thin wrapper around the Google AdWords API.  For something like
this, it made more sense to run all tests against the real web service
(in a sandbox).  We had to manage data setup and teardown against the
API and thus the tests were very slow.  I wonder if it would have been
helpful to occasionally switch into mock mode for speed.  Maybe.


> Regards,
> Geoff
> On Thu, Mar 24, 2011 at 11:15 PM, Kumar McMillan
> <kumar.mcmillan at> wrote:
>> Hi Geoff.
>> Mocker already does record/playback for this same reason (and there
>> are probably others that do it since it's an old pattern from Java):
>> I never found much value in record/playback because I tend to use
>> mocks sparingly and only for hard-to-reproduce situations.  For
>> example, I tend to mock out web services and add a few smoke tests
>> against the real service for good measure.  The majority of tests
>> could not run against the real web service so it would be hard to use
>> record/playback.
>> Trying to keep mocks in sync with real objects is an interesting idea
>> though.  I believe some libs (mock maybe?) let you generate a mock
>> using the interface of a real object.  This would at least raise
>> errors when a method you stubbed out no longer exists.  It wouldn't
>> cover changed function behaviors though.
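
[That interface check is what the stdlib's unittest.mock does with its
spec argument: attribute access outside the real object's interface
raises AttributeError.  A small sketch, where RealClient is a made-up
class standing in for the real object:]

```python
from unittest import mock


class RealClient:
    """Made-up stand-in for a real object whose interface we mirror."""

    def fetch(self, url):
        raise NotImplementedError


# The stub only permits attributes that RealClient actually defines.
stub = mock.Mock(spec=RealClient)
stub.fetch("http://example.com")  # fine: fetch() exists on RealClient

try:
    stub.fecth("http://example.com")  # typo, or a since-removed method
except AttributeError:
    print("stale stub caught")
```

As the post says, this catches renamed or removed methods but not
changed behaviour.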
>> "Keeping Mocks In Sync With Real Objects" would be a good section to
>> add to the python mock comparison.  For Fudge, nothing like this has
>> been implemented.
>> K
>> On Thu, Mar 24, 2011 at 4:14 PM, Geoff Bache <geoff.bache at> wrote:
>>> Hi all,
>>> I can hear the groans already that here comes Python Mocking Framework
>>> Number 15 with new and improved syntax on the other 14, but please hear
>>> me out for a paragraph or two! I've been following the Python mocking
>>> framework discussions with interest and wondered about joining in with
>>> my own contribution, but as this one is a bit different and is trying
>>> to solve a slightly different problem I've stayed out of it so far.
>>> The problem I'm (primarily) addressing is mocking out substantial
>>> subsystems and third-party modules when doing functional tests, and
>>> being easily able to handle the situation when the mocked subsystem
>>> changes its behaviour. I've found in the past that it's easy for mocks
>>> to diverge from what the real system does, and continue to pass even
>>> when the real code will break. Where other mocking frameworks
>>> primarily focus on mocking code that you own and write yourself,
>>> CaptureMock is more about mocking out code that you don't own.
>>> Anyway, I've created a mock framework that captures interaction with
>>> any given module, attribute or function, and stores the results in a
>>> text file. Tests using it can then be run in either record or
>>> replay mode each time they are run. An obvious setup is therefore to
>>> ordinarily use replay mode but to re-record if anything in the
>>> interaction changes. And perhaps to have a CI setup somewhere that
>>> always uses record mode and hence verifies that the integration still
>>> works.
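
[The record/replay idea Geoff describes can be sketched in a few
lines.  This is only an illustration of the pattern -- nothing like
CaptureMock's actual implementation or file format -- recording a
callable's results to a JSON file and replaying them later without
touching the real implementation:]

```python
# Minimal record/replay sketch: in "record" mode, calls go through to
# the real function and are logged to a JSON file; in "replay" mode,
# logged results are served back without calling the real function.
import json


def record_replay(func, store, mode):
    """mode is "record" or "replay"; store is a JSON file path."""
    if mode == "record":
        calls = []

        def recorder(*args):
            result = func(*args)
            calls.append({"args": list(args), "result": result})
            with open(store, "w") as f:
                json.dump(calls, f)
            return result

        return recorder

    # Replay mode: serve previously recorded results.
    with open(store) as f:
        calls = json.load(f)

    def replayer(*args):
        for call in calls:
            if call["args"] == list(args):
                return call["result"]
        raise KeyError("no recorded call for %r" % (args,))

    return replayer
```

Re-recording after the mocked subsystem changes behaviour is then just
a matter of flipping the mode back, much as the post describes.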
>>> Using it doesn't really involve writing mock code like existing mock
>>> tools do. It just requires saying what you want to capture:
>>> from capturemock import capturemock
>>>
>>> @capturemock("smtplib")
>>> def test_something_sending_email():
>>>     # etc
>>>
>>> @capturemock("")
>>> def test_some_real_time_code():
>>>     # etc
>>>
>>> @capturemock("matplotlib")
>>> def test_my_graph_stuff():
>>>     # etc
>>> For more details (including what the recorded format looks like), see
>>> and obviously install from PyPI with "pip install capturemock" or similar.
>>> Any feedback much appreciated. I've been using it for some time --
>>> it's been hardwired into my TextTest tool until now -- but it should
>>> be regarded as Alpha outside that context for obvious reasons...
>>> Regards,
>>> Geoff Bache
>>> _______________________________________________
>>> testing-in-python mailing list
>>> testing-in-python at
