[TIP] [ANN] CaptureMock 0.1 - honestly not just yet another mocking framework!
geoff.bache at gmail.com
Fri Mar 25 13:15:00 PDT 2011
>> I should have addressed this in my message... The record/playback in
>> CaptureMock is not at all the same as that in Mocker.
> From the page I linked to in the docs, Mocker does indeed use real
> objects to record. It uses a proxy by default but you can turn it off
> with a flag to operate on the real object. I've never tried to use
> this feature though.
As far as I can see from those docs, it uses real objects to perform
specification checking, which isn't the same as capturing behaviour
from them. The docs are somewhat terse, though, so it's possible I'm
missing something: I don't really follow what he means by "these
checks are performed by default with proxies and patched objects".
The record/replay aspect of it just seems to be this idea of switching
to "replay mode" halfway through executing a test after encoding all
your expectations. It's an implementation detail rather than an
externally observable behaviour. CaptureMock is record/replay in the
sense that GUI test tools are, with the important difference that
"record" doesn't need a user to press all the buttons again :)
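To make that analogy concrete, here is a minimal sketch of the record/replay idea. This is not CaptureMock's actual API; the class name, the "mode" argument, and the storage format are all invented for illustration:

```python
class RecordReplayProxy:
    """Wrap a real object: record its return values, or replay them later
    without the real object being present at all."""

    def __init__(self, real_obj, mode, store=None):
        self._real = real_obj
        self._mode = mode            # "record" or "replay"
        self._store = store if store is not None else []
        self._pos = 0                # position in the store during replay

    def __getattr__(self, name):
        def call(*args, **kwargs):
            if self._mode == "record":
                # Delegate to the real object and capture what it returned.
                result = getattr(self._real, name)(*args, **kwargs)
                self._store.append({"call": name, "result": result})
                return result
            # Replay mode: serve the stored result; the real object (and
            # any external service behind it) is never touched.
            entry = self._store[self._pos]
            self._pos += 1
            assert entry["call"] == name, "unexpected call during replay"
            return entry["result"]
        return call
```

The point of the sketch is that "record" runs against the real thing with no user intervention, and the captured store can then be replayed with `real_obj=None`, which is why re-recording is cheap compared with GUI capture/replay tools.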
>> As for the webservice, would it not save some time and effort to have
>> the real test and the mock test be the same test, and the mock
>> information updated directly from the "real test"?
> It would be hard and perhaps impossible to test destructive web
> service operations like deleting or changing data. Even if the web
> service were in a sandbox, you'd have to rely on the state of the
> external sandbox. Additionally, it's hard to reproduce exception
> handling -- i.e. how your code handles exceptions returned from the
> web service.
In a situation like this, I would set up the state I want once,
capture the behaviour using it, and then roll back the changes.
It basically doesn't matter if recording (using the real service) is
very slow because I don't need to do it very often, and don't use it
when testing interactively. It matters a bit more if it can't be fully
automated, though even if I have to mess around a bit by hand when I
have to record, it's still useful to be able to update the mock from
the real behaviour.
That said, I've often edited captured mock information by hand in
order to simulate error conditions, because setting up e.g. network
errors for real is just not worth the effort. Then you don't have the
benefit of easy re-record, but it is at least possible to create such
tests from other captured ones without too much effort.
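A hand-edited capture for error simulation might look something like this. The file format and names below are invented for the example, not CaptureMock's real format; the idea is just that one recorded response is replaced by an error marker so that replay raises instead of returning data:

```python
# A captured interaction log, with one entry edited by hand.
recorded = [
    {"call": "fetch_user", "result": {"id": 1, "name": "alice"}},
    # Edited by hand: the real result was replaced with an error marker,
    # so the replayed "web service" now fails on this call.
    {"call": "fetch_orders", "raise": "ConnectionError"},
]


def replay(entry):
    """Serve one recorded interaction, raising if it was edited to an error."""
    if "raise" in entry:
        raise ConnectionError("simulated: " + entry["raise"])
    return entry["result"]
```

This loses the easy re-record (the edited entry would be overwritten by a fresh recording), but it lets you derive error-handling tests from existing captures without provoking real network failures.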
I'm just writing my first presentation about CaptureMock (yes, more
conference-driven development...). In it, I suggested that it's good
to create mocks if running with the real code is:
1) too slow
2) too hard to analyse the results automatically (e.g. plotting graphs)
3) non-deterministic (so you may get different results on each run)
4) too hard to set up
5) not written yet
Being able to re-record mocks from real code is very useful in the
first 2 cases, useful with care for (3) because you may not get
exactly the same test on re-recording, possibly useful for (4),
depending a bit on the situation, and isn't usable in case (5). I'd say
we're discussing case (4) here.
More information about the testing-in-python mailing list