[TIP] Best practice for failing tests

Bruno Oliveira nicoddemus at gmail.com
Thu Mar 22 12:34:34 PDT 2018

Hi Chris,

On Thu, Mar 22, 2018 at 4:26 PM Chris Jerdonek <chris.jerdonek at gmail.com> wrote:

> Note that in James's case the alternative would be the unconditional
> @pytest.mark.skip() rather than the conditional skipif() because there
> aren't cases where the test would be passing. The pytest docs give "no
> way of currently testing this" as a sample reason for skipping a test:
> https://docs.pytest.org/en/documentation-restructure/how-to/skipping.html#skipping-test-functions

Thanks for pointing that out, we probably should update the reason to
something more meaningful.

The previous section of those docs shows a clearer distinction between
`skip` and `xfail`.
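For instance, a skip reason that states the actual obstacle is much more
useful in a test report than a generic one. A sketch (the test name and
reason string here are made up for illustration):

```python
import pytest

# Hypothetical placeholder: skipped unconditionally, with a reason that
# tells the reader *why* there is currently no way to test this.
@pytest.mark.skip(reason="requires hardware we don't have in CI")
def test_hardware_integration():
    ...
```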

> Unconditionally skipping tests is also useful in cases where the test
> isn't implemented yet, but you eventually do want to implement it
> (e.g. an empty or incomplete test function that might not necessarily
> raise an exception in its current state).


> Another reason is flaky
> tests that sometimes fail and sometimes succeed, so that xfail()
> wouldn't work as a substitute.

Indeed, but in James' case I think `xfail(strict=True)` is the way to go:
it will *fail* the test if it suddenly starts passing (say, because the
fake now works as expected), which is a good thing, because that is the
opportunity to remove the marker. A skip will stay there forever until
someone notices it by some other means.
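Sketching what I mean (the test and the fake are hypothetical):

```python
import pytest

def fake_fetch():
    # Stand-in for the not-yet-working fake: returns None instead of data.
    return None

# strict=True turns an unexpected pass into a failure, so the marker
# cannot silently outlive the bug it documents.
@pytest.mark.xfail(strict=True, reason="fake does not return data yet")
def test_fake_returns_data():
    assert fake_fetch() is not None
```

Today this is reported as XFAIL; the moment `fake_fetch` starts working,
the run fails, prompting whoever sees it to delete the marker.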


More information about the testing-in-python mailing list