[TIP] [ANN] CaptureMock 0.1 - honestly not just yet another mocking framework!

Herman Sheremetyev herman at swebpage.com
Thu Mar 24 18:11:06 PDT 2011


Interesting idea, I like the notion of writing the same set of tests
that can be run as either unit tests or large integration tests with
just a command-line switch. Though to be fair, I think this looks like
YAPTR* rather than YAPML**, and the use of the word mock in the title
and docs is a bit misleading, as it's treading heavily in both mocking
library and test runner territory.

I haven't looked at your code or how much you've already invested,
but as you're still in alpha I would suggest taking a look at the
available mocking libraries, many of which are already good at the
non-trivial task of doing the mocking bit. See if you can integrate
with one of them rather than trying to write both a new mocking
library and what is essentially going to boil down to a test runner
at the same time.

Just off the top of my head, I can imagine an implementation that
would take an existing mocking library and, via an environment
variable or something similar, make it ignore hard-coded mocks in
favor of making all the real calls. You could also have another env
variable that would make the mocking library store the results of its
capture stage, giving you the same kind of
replay-from-previously-good-state capability.
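
Just to sketch what I mean (the env variable name and the helper are
made up, and this assumes Michael Foord's mock library):

import functools
import os

import mock  # pip install mock

def maybe_mock(target, **patch_kwargs):
    """Patch 'target' in unit-test mode, but leave the real call in
    place when MOCKS_OFF is set, turning the same test into an
    integration test."""
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            if os.environ.get("MOCKS_OFF"):
                # Integration mode: talk to the real implementation.
                return test_func(*args, **kwargs)
            # Unit mode: replace the target with a mock.
            with mock.patch(target, **patch_kwargs):
                return test_func(*args, **kwargs)
        return wrapper
    return decorator

The second env variable would then tell a wrapper like this to record
what the real calls returned, so the mock could replay them later.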

Cheers,

-Herman

* yet another Python test runner
** yet another Python mocking library

On Fri, Mar 25, 2011 at 7:29 AM, Geoff Bache <geoff.bache at gmail.com> wrote:
> Hi Kumar,
>
> I should have addressed this in my message... The record/playback in
> CaptureMock is not at all the same as that in Mocker. From my
> documentation front page:
>
> "Note that there are other mocking frameworks that say they have a
> capture-replay approach which do not work like this, so the term can
> be a bit confusing if you have used other frameworks. These tools
> generally "record" some behaviour in the first half of the test based
> on some code you write, and then switch to "replay" halfway through
> the test to actually execute the code under test. They do not offer
> two ways to run the same test, nor the "connection to the real world"
> that goes with it."
>
> I too have never seen much point in Mocker-style Capture/Replay. It
> just seems like a more convoluted way of coding up some expectations.
> CaptureMock instead captures those expectations from what actually
> happens and records them in a separate file, meaning you don't have
> to code them at all.
>
> As for the web service, would it not save some time and effort to
> have the real test and the mock test be the same test, with the mock
> information updated directly from the "real test"?
>
> Regards,
> Geoff
>
> On Thu, Mar 24, 2011 at 11:15 PM, Kumar McMillan
> <kumar.mcmillan at gmail.com> wrote:
>> Hi Geoff.
>> Mocker already does record/playback for this same reason (and there
>> are probably others that do it since it's an old pattern from Java):
>> http://labix.org/mocker#head-a6bf69501920fd40fe30e73e414444cc533ba13d
>>
>> I never found much value in record/playback because I tend to use
>> mocks sparingly and only for hard-to-reproduce situations.  For
>> example, I tend to mock out web services and add a few smoke tests
>> against the real service for good measure.  The majority of tests
>> could not run against the real web service so it would be hard to use
>> record/playback.
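>>
>> Roughly like this, say (the service module and its functions are
>> made up for illustration):
>>
>> import os
>> import unittest
>>
>> import mock  # Michael Foord's mock library
>>
>> import weather_client  # hypothetical wrapper around the web service
>>
>> class TestForecast(unittest.TestCase):
>>
>>     @mock.patch("weather_client.get_forecast")
>>     def test_handles_timeout(self, fake_get):
>>         # Hard-to-reproduce situation: the service times out.
>>         fake_get.side_effect = IOError("timed out")
>>         self.assertEqual(weather_client.forecast_or_none("Oslo"), None)
>>
>>     @unittest.skipUnless(os.environ.get("SMOKE"),
>>                          "needs network access to the real service")
>>     def test_real_service_smoke(self):
>>         # One of the few smoke tests against the real service.
>>         self.assertTrue(weather_client.get_forecast("Oslo"))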
>>
>> Trying to keep mocks in sync with real objects is an interesting idea
>> though.  I believe some libs (mock maybe?) let you generate a mock
>> using the interface of a real object.  This would at least raise
>> errors when a method you stubbed out no longer exists.  It wouldn't
>> cover changed function behaviors though.
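>>
>> Something like this with mock's spec argument, if I remember the
>> API right (it checks attribute names, not behavior):
>>
>> import smtplib
>>
>> import mock
>>
>> # The spec restricts the mock to the real class's interface, so a
>> # stubbed-out method that has since been renamed or removed fails
>> # loudly instead of silently passing.
>> server = mock.Mock(spec=smtplib.SMTP)
>> server.sendmail("me@a.com", ["you@b.com"], "hi")  # fine, SMTP has it
>> server.sendmial("me@a.com", ["you@b.com"], "hi")  # raises AttributeError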
>>
>> "Keeping Mocks In Sync With Real Objects" would be a good section to
>> add to the python mock comparison.  For Fudge, nothing like this has
>> been implemented.
>>
>> K
>>
>> On Thu, Mar 24, 2011 at 4:14 PM, Geoff Bache <geoff.bache at gmail.com> wrote:
>>> Hi all,
>>>
>>> I can hear the groans already that here comes Python Mocking Framework
>>> Number 15 with new and improved syntax over the other 14, but please hear
>>> me out for a paragraph or two! I've been following the Python mocking
>>> framework discussions with interest and wondered about joining in with
>>> my own contribution, but as this one is a bit different and is trying
>>> to solve a slightly different problem I've stayed out of it so far.
>>>
>>> The problem I'm (primarily) addressing is mocking out substantial
>>> subsystems and third-party modules when doing functional tests, and
>>> being easily able to handle the situation when the mocked subsystem
>>> changes its behaviour. I've found in the past that it's easy for mocks
>>> to diverge from what the real system does, and continue to pass even
>>> when the real code will break. Where other mocking frameworks
>>> primarily focus on mocking code that you own and write yourself,
>>> CaptureMock is more about mocking out code that you don't own.
>>>
>>> Anyway, I've created a mock framework that captures interaction with
>>> any given module, attribute or function, and stores the results in a
>>> text file. Any tests using it can then be run in either record or
>>> replay mode. An obvious setup is therefore to ordinarily use replay
>>> mode but to re-record if anything in the interaction changes, and
>>> perhaps to have a CI setup somewhere that always uses record mode
>>> and hence verifies that the integration still works.
>>>
>>> Using it doesn't really involve writing mock code like existing mock
>>> tools do. It just requires saying what you want to capture:
>>>
>>> from capturemock import capturemock
>>>
>>> @capturemock("smtplib")
>>> def test_something_sending_email():
>>>     # etc
>>>
>>> @capturemock("datetime.datetime.now")
>>> def test_some_real_time_code():
>>>     # etc
>>>
>>> @capturemock("matplotlib")
>>> def test_my_graph_stuff():
>>>     # etc
>>>
>>> For more details (including what the recorded format looks like), see
>>> http://www.texttest.org/index.php?page=capturemock
>>> and obviously install from PyPI with "pip install capturemock" or similar.
>>>
>>> Any feedback much appreciated. I've been using it for some time,
>>> though until now it's been hardwired into my TextTest tool, so it
>>> should be regarded as alpha outside that context for obvious
>>> reasons...
>>>
>>> Regards,
>>> Geoff Bache
>>>