[TIP] Best practice for failing tests

Chris Jerdonek chris.jerdonek at gmail.com
Thu Mar 22 12:26:01 PDT 2018


On Thu, Mar 22, 2018 at 6:26 AM, Bruno Oliveira <nicoddemus at gmail.com> wrote:
> Hi James,
>
> On Thu, Mar 22, 2018 at 9:20 AM James Cooke <me at jamescooke.info> wrote:
>>
>> In terms of best practice (what you would like to see if you were reading
>> my test suite), what would you recommend for this test and why? I can see
>> some options...
>>
>> * Mark the test as xfail?
>>
>> * Adjust the test to pass when stale data is returned?
>>
>> * Remove the test from the suite and say the functionality is untested?
>
>
> IMHO the correct approach is marking the test as
> `@pytest.mark.xfail(strict=True, reason='TODO: fix fake, issue #1234')`. One
> of the reasons `@pytest.mark.xfail` exists is to convey the meaning that
> "this test is failing right now for known reasons, but we want to fix it
> eventually", as opposed to `@pytest.mark.skipif`, which means "we should
> skip this test because of <reasons>", with "reasons" usually being something
> environment-related, like an unsupported OS or missing libraries; that is
> OK and not something that will eventually be fixed.
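
To make that concrete, here's a rough sketch of what Bruno's suggestion
could look like (the fake helper and test name below are made up for
illustration; the reason string is from his example):

    import pytest

    # Hypothetical stand-in for the fake under discussion; today it
    # returns stale data.
    def get_data_from_fake():
        return 'stale'

    # Fails today for a known reason.  With strict=True, if the fake is
    # ever fixed and this test starts passing, the suite fails, which
    # prompts removing the marker.
    @pytest.mark.xfail(strict=True, reason='TODO: fix fake, issue #1234')
    def test_fake_returns_fresh_data():
        assert get_data_from_fake() == 'fresh'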

Note that in James's case the alternative would be the unconditional
@pytest.mark.skip() rather than the conditional skipif(), because there
is no condition under which the test would pass. The pytest docs give
"no way of currently testing this" as a sample reason for skipping a
test:
https://docs.pytest.org/en/documentation-restructure/how-to/skipping.html#skipping-test-functions
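
For example (the test names and reasons below are hypothetical, just to
show the difference between the two markers):

    import sys

    import pytest

    # Unconditional skip: there is currently no way to test this at all.
    @pytest.mark.skip(reason='no way of currently testing this')
    def test_stale_data_is_rejected():
        ...

    # Conditional skip: only skipped in a particular environment.
    @pytest.mark.skipif(sys.platform == 'win32',
                        reason='requires a POSIX-only dependency')
    def test_posix_specific_behavior():
        ...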

Unconditionally skipping tests is also useful when a test isn't
implemented yet but you do eventually want to implement it (e.g. an
empty or incomplete test function that might not necessarily raise an
exception in its current state). Another case is flaky tests that
sometimes fail and sometimes succeed, so xfail() wouldn't work as a
substitute.
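
A couple of hypothetical sketches of those two cases:

    import pytest

    # Planned but not written yet: skipping is clearer than an empty
    # test that silently "passes".
    @pytest.mark.skip(reason='not implemented yet')
    def test_cache_invalidation():
        pass

    # Flaky: it sometimes passes, so xfail(strict=True) would fail the
    # suite on the runs where it happens to pass.
    @pytest.mark.skip(reason='flaky, needs investigation')
    def test_intermittent_timeout():
        ...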

--Chris


>
> `strict=True` means that, if the fake is fixed (either by you or someone
> else), then the test suite will *fail*, which will alert you to the fact
> so that you can remove the `xfail` marker. :)
>
> Best Regards,
> Bruno.
>


