[TIP] mocking a file in /proc

Free Ekanayaka free at ekanayaka.io
Wed May 10 23:58:19 PDT 2017


Robert Collins <robertc at robertcollins.net> writes:

> I'm just going to leave this here.
>
> https://pypi.python.org/pypi/effect
>
> :)

+1

Make your side-effects part of your API contract, and possibly provide
fakes for them. Do that for all your code except the very boundaries of
the program (network, filesystem, processes).
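
In concrete terms, that can look like the following Python sketch (the
names RealFilesystem, FakeFilesystem and uptime_seconds are illustrative,
not part of systemfixtures or any library):

```python
class RealFilesystem:
    """Production implementation: the only place that touches the disk."""
    def read(self, path):
        with open(path) as f:
            return f.read()

class FakeFilesystem:
    """In-memory fake honoring the same contract, for unit tests."""
    def __init__(self, files):
        self.files = dict(files)
    def read(self, path):
        return self.files[path]

def uptime_seconds(fs):
    """Domain logic: depends only on the filesystem contract."""
    # /proc/uptime looks like "12345.67 98765.43"
    return float(fs.read("/proc/uptime").split()[0])

# In a test, inject the fake instead of touching the real /proc:
fake = FakeFilesystem({"/proc/uptime": "42.5 100.0\n"})
assert uptime_seconds(fake) == 42.5
```

The domain logic never learns whether it talked to the real disk or to a
dict, which is exactly what makes it trivially testable.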

To then test the little code that deals with the boundaries, use
systemfixtures or similar techniques. I consider this still TDD: there's
really no dogma that says a unit test can't open a file or spawn a
process (typically a fast fake version of the real executable), and if
you do that properly there's no performance hit (modulo milliseconds)
that matters for the TDD cycle.
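
For that boundary layer even the standard library alone will do; a
minimal sketch faking a /proc read with unittest.mock (read_loadavg is an
invented helper, not from any library):

```python
from unittest import mock

def read_loadavg(path="/proc/loadavg"):
    """Boundary code: the thin layer that actually opens a file."""
    with open(path) as f:
        return float(f.read().split()[0])

# In the test, patch the builtin open with canned /proc content;
# the real parsing logic still runs, in well under a millisecond.
fake = mock.mock_open(read_data="0.42 0.30 0.25 1/123 4567\n")
with mock.patch("builtins.open", fake):
    assert read_loadavg() == 0.42
```

This keeps the boundary function honest (it really calls open()) while
the test stays fast and hermetic.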

The whole test suite of systemfixtures itself, which opens files and
spawns subprocesses quite a lot, takes 1.190s on Travis, which is not a
particularly fast substrate:

https://travis-ci.org/testing-cabal/systemfixtures/jobs/229254446

To create fake executables, there's:

http://systemfixtures.readthedocs.io/en/latest/?badge=latest#executables

Or you can run the real ones if that's fast/convenient enough.
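
The idea behind fake executables can also be done by hand; a rough
POSIX-only sketch of the technique (not the systemfixtures API itself):

```python
import os
import stat
import subprocess
import tempfile

# Drop a tiny fake executable into a temp dir and put it first on PATH,
# so subprocess picks it up instead of the real binary.
tmpdir = tempfile.mkdtemp()
script = os.path.join(tmpdir, "uname")
with open(script, "w") as f:
    f.write("#!/bin/sh\necho FakeOS\n")
os.chmod(script, os.stat(script).st_mode | stat.S_IEXEC)

env = dict(os.environ, PATH=tmpdir + os.pathsep + os.environ["PATH"])
out = subprocess.run(["uname"], env=env, capture_output=True, text=True)
assert out.stdout.strip() == "FakeOS"
```

systemfixtures automates this (plus recording the calls the fake
received), but the mechanism underneath is just PATH manipulation.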

As already mentioned in this thread, after all of that you write
integration tests for the very few remaining parts that still rely on
fakes making assumptions about the real system you are interacting with,
if that system is not expected to have stable APIs (a case-by-case,
tradeoff-based decision).

Hope it helps.

>
> On 10 May 2017 at 06:02, Michael Williamson <mike at zwobble.org> wrote:
>> I tend to view TDD as orthogonal to whether you're doing functional,
>> integration or unit testing. In fact, I often write a failing
>> functional test to start, and then write a failing test at a lower
>> level.
>>
>> As to whether functional tests are necessary: whenever you write a unit
>> test that mocks out interaction with an external system (whether using
>> actual mocks or just sample data), you're making an assumption about
>> how that external system will behave (in this case, the files
>> under /proc). The degree of functional testing necessary depends on how
>> confident you are in those assumptions. If you know that the values you
>> read from /proc will always be the same, then it probably suffices to
>> put those specific values in the tests without actually testing against
>> the real thing. On the other hand, if it's likely that different
>> systems will have different values in those files, or that those values
>> might change over time (say, due to different versions of the operating
>> system), then functional testing is useful to ensure that your
>> assumptions are still valid.
>>
>> What I find most useful in that situation is trying to split the
>> tests: one set of potentially slow and hard-to-run tests that test the
>> assumptions you've made about the external systems, one set of fast
>> unit tests that uses those assumptions, and a small number of end-to-end
>> tests to make sure that you've wired the two parts of the system
>> correctly.
>>
>> Michael
>>
>> On Tue, 9 May 2017 19:18:30 +0200
>> Gregory Salvan <apieum at gmail.com> wrote:
>>
>>> 2) I would like to learn what is the standard way to deal with such
>>> situations. If there is a "standard way" at all... I mean, what am I
>>> supposed to do from the point of view of TDD?
>>>
>>>
>>> I sense some confusion about what I've said.
>>> Let me be crystal clear: from the point of view of TDD *you never
>>> open a real file to test something in Python*! Never.
>>> That relies on the Python implementation, and your aim is not to
>>> test Python.
>>>
>>> The common way to do this in TDD is to use dependency injection and
>>> probably you'll see a strategy pattern emerging (or a partial one).
>>>
>>>
>>> Out of TDD, you test what you want, it's a different practice with a
>>> different purpose.
>>>
>>>
>>>
>>> 2017-05-09 16:36 GMT+02:00 Tim Ottinger <tottinge at gmail.com>:
>>>
>>> > I think that functional tests are fine, but they don't replace
>>> > microtests.
>>> >
>>> Microtests are fine, too, but they don't replace functional tests.
>>> > Both of those are great, but don't replace humans when it comes to
>>> > matters of taste, experience, and judgment. ;-)
>>> >
>>> >
>>> > On Tue, May 9, 2017 at 9:35 AM Tim Ottinger <tottinge at gmail.com>
>>> > wrote:
>>> >
>>> >> Assuming you are doing a unit test (aka microtest):
>>> >>
>>> >> One easy way to avoid file system dependencies is to separate out a
>>> >> function that opens the file you specify.
>>> >> It becomes a very trivial function, of course.
>>> >>
>>> >> In the test, you stub that function to return a file-like object
>>> >> of the fake content you need in order for the test to work.
>>> >> This works just fine whether reading or writing.
>>> >>
>>> >> Of course, if you want to fake a higher level, you can split out a
>>> >> function that opens the file and parses the data -- then mock that
>>> >> function out.
>>> >>
>>> >> If nothing else is easy/quick/handy, you can always extract and
>>> >> mock your way out of (nearly) any situation.
>>> >>
>>> >>
>>> >>
>>> >> On Tue, May 9, 2017 at 8:34 AM Gregory Salvan <apieum at gmail.com>
>>> >> wrote:
>>> >>
>>> >>> I'm not advocating against functional tests in my answer, but if
>>> >>> you want my opinion about them it's best explained here:
>>> >>> http://blog.thecodewhisperer.com/permalink/integrated-tests-are-a-scam
>>> >>>
>>> >>> Then I've answered for TDD and BDD, not for a real live system;
>>> >>> the purpose is different. By the way, if you have cases where
>>> >>> it's relevant to test open() in a real live system, I'm curious.
>>> >>> Once you've tested that open() is called correctly (by injecting
>>> >>> a mock of open()), what's the point of testing whether it works
>>> >>> in a real live system, other than testing whether the open()
>>> >>> implementation itself works correctly?
>>> >>>
>>> >>> 2017-05-09 14:31 GMT+02:00 Matt Wheeler <m at funkyhat.org>:
>>> >>>
>>> >>>> On Tue, 9 May 2017, 11:50 Gregory Salvan, <apieum at gmail.com>
>>> >>>> wrote:
>>> >>>>
>>> >>>>> It's too generic to have complex examples, but you really can
>>> >>>>> test everything without writing data to files, and it's not
>>> >>>>> really relevant to test whether data are really written, as
>>> >>>>> that's the same as testing that there are no errors in Python's
>>> >>>>> open() implementation. I think we can trust the Python
>>> >>>>> developers for that ;)
>>> >>>>>
>>> >>>>
>>> >>>> No, it's not just testing that open() works correctly, it's also
>>> >>>> testing that it's being called correctly in a real* live system.
>>> >>>> You seem to be advocating against functional tests as a concept?
>>> >>>>
>>> >>>> *certainly more real than a system involving mocking open() in a
>>> >>>> unit test.
>>> >>>>
>>> >>>>> --
>>> >>>>
>>> >>>> --
>>> >>>> Matt Wheeler
>>> >>>> http://funkyh.at
>>> >>>>
>>> >>>
>>> >>> _______________________________________________
>>> >>> testing-in-python mailing list
>>> >>> testing-in-python at lists.idyll.org
>>> >>> http://lists.idyll.org/listinfo/testing-in-python
>>> >>>
>>> >>
>>
>


