[TIP] Best practice for failing tests

Bruno Oliveira nicoddemus at gmail.com
Thu Mar 22 06:26:41 PDT 2018


Hi James,

On Thu, Mar 22, 2018 at 9:20 AM James Cooke <me at jamescooke.info> wrote:

> In terms of best practice (what you would like to see if you were reading
> my test suite), what would you recommend for this test and why? I can see
> some options...
>
> * Mark the test as xfail?
>
> * Adjust the test to pass when stale data is returned?
>
> * Remove the test from the suite and say the functionality is untested?
>

IMHO the correct approach is marking the test as
`@pytest.mark.xfail(strict=True, reason='TODO: fix fake, issue #1234')`.
One of the reasons `@pytest.mark.xfail` exists is to convey the meaning
"this test is failing right now for known reasons, but we want to fix
it eventually". `@pytest.mark.skipif`, by contrast, means "we should
skip this test because of <reasons>", where the reasons are usually
environmental (unsupported OS, missing libraries, etc.): something that
is expected and OK, not something that will eventually be fixed.

`strict=True` means that if the fake is fixed (either by you or by
someone else), the test suite will *fail*, alerting you to the fact so
you can remove the `xfail` marker. :)
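
If you want that behavior for every `xfail` in the suite by default,
pytest also has an ini option for it (a sketch, assuming an ini-style
config file at the project root):

    # pytest.ini -- make all xfail markers strict unless overridden
    [pytest]
    xfail_strict = true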

Best Regards,
Bruno.