[TIP] [ANN] CaptureMock 0.1 - honestly not just yet another mocking framework!

Kumar McMillan kumar.mcmillan at gmail.com
Thu Mar 24 15:15:21 PDT 2011


Hi Geoff.
Mocker already does record/playback for this same reason (and there
are probably others that do it since it's an old pattern from Java):
http://labix.org/mocker#head-a6bf69501920fd40fe30e73e414444cc533ba13d
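
Roughly, recording and replaying expectations with Mocker looks like
this (from memory of its docs, so details may be off):

    from mocker import Mocker

    mocker = Mocker()
    obj = mocker.mock()
    obj.hello()                 # record the expected call
    mocker.result("world")      # and the canned return value
    mocker.replay()

    assert obj.hello() == "world"

    mocker.verify()
    mocker.restore()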

I never found much value in record/playback because I tend to use
mocks sparingly and only for hard-to-reproduce situations.  For
example, I tend to mock out web services and add a few smoke tests
against the real service for good measure.  The majority of tests
could not run against the real web service, so it would be hard to use
record/playback.
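
For illustration, the split tends to look something like this (a
sketch with the mock library; myapp, api.get and fetch_profile are
made-up names):

    import os
    import unittest
    from mock import patch

    from myapp.profiles import fetch_profile  # hypothetical code under test

    class TestFetchProfile(unittest.TestCase):

        @patch('myapp.api.get')  # most tests fake the web service
        def test_parses_response(self, fake_get):
            fake_get.return_value = {'name': 'kumar'}
            self.assertEqual(fetch_profile('kumar')['name'], 'kumar')

    @unittest.skipUnless(os.environ.get('RUN_SMOKE'),
                         'smoke tests are opt-in')
    class TestFetchProfileSmoke(unittest.TestCase):

        def test_against_real_service(self):
            # hits the live service; run now and then for good measure
            self.assertTrue(fetch_profile('kumar'))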

Trying to keep mocks in sync with real objects is an interesting idea
though.  I believe some libs (mock maybe?) let you generate a mock
using the interface of a real object.  This would at least raise
errors when a method you stubbed out no longer exists.  It wouldn't
cover changed function behaviors though.
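
With mock that would be something like this (a sketch using the spec
argument):

    from mock import Mock
    import smtplib

    # The mock takes its interface from the real class, so accessing a
    # method that doesn't exist on smtplib.SMTP raises AttributeError
    # instead of silently passing.
    smtp = Mock(spec=smtplib.SMTP)
    smtp.sendmail('me@example.com', ['you@example.com'], 'hi')  # fine
    smtp.send_html_mail('hi')  # AttributeError: not on the real SMTP

Note that spec only checks attribute names, not what the real methods
actually do when called.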

"Keeping Mocks In Sync With Real Objects" would be a good section to
add to the python mock comparison.  For Fudge, nothing like this has
been implemented.

K

On Thu, Mar 24, 2011 at 4:14 PM, Geoff Bache <geoff.bache at gmail.com> wrote:
> Hi all,
>
> I can hear the groans already that here comes Python Mocking Framework
> Number 15 with new and improved syntax over the other 14, but please hear
> me out for a paragraph or two! I've been following the Python mocking
> framework discussions with interest and wondered about joining in with
> my own contribution, but as this one is a bit different and is trying
> to solve a slightly different problem, I've stayed out of it so far.
>
> The problem I'm (primarily) addressing is mocking out substantial
> subsystems and third-party modules when doing functional tests, and
> being easily able to handle the situation when the mocked subsystem
> changes its behaviour. I've found in the past that it's easy for mocks
> to diverge from what the real system does, and continue to pass even
> when the real code will break. Where other mocking frameworks
> primarily focus on mocking code that you own and write yourself,
> CaptureMock is more about mocking out code that you don't own.
>
> Anyway, I've created a mock framework that captures interaction with
> any given module, attribute or function, and stores the results in a
> text file. Any tests using it can then be run in either record or
> replay mode on any given run. An obvious setup is therefore to
> ordinarily use replay mode but to re-record if anything in the
> interaction changes. And perhaps to have a CI setup somewhere that
> always uses record mode and hence verifies that the integration still
> works.
>
> Using it doesn't really involve writing mock code like existing mock
> tools do. It just requires saying what you want to capture:
>
> from capturemock import capturemock
>
> @capturemock("smtplib")
> def test_something_sending_email():
>     # etc
>
> @capturemock("datetime.datetime.now")
> def test_some_real_time_code():
>     # etc
>
> @capturemock("matplotlib")
> def test_my_graph_stuff():
>     # etc
>
> For more details (including what the recorded format looks like), see
> http://www.texttest.org/index.php?page=capturemock
> and obviously install from PyPI with "pip install capturemock" or similar.
>
> Any feedback much appreciated. I've been using it for some time; it's
> been hardwired into my TextTest tool until now, but it should be
> regarded as Alpha outside that context for obvious reasons...
>
> Regards,
> Geoff Bache
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
>


