[TIP] Best practice for failing tests
nicoddemus at gmail.com
Thu Mar 22 12:34:34 PDT 2018
On Thu, Mar 22, 2018 at 4:26 PM Chris Jerdonek <chris.jerdonek at gmail.com> wrote:
> Note that in James's case the alternative would be the unconditional
> @pytest.mark.skip() rather than the conditional skipif() because there
> aren't cases where the test would be passing. The pytest docs give "no
> way of currently testing this" as a sample reason for skipping a test:
Thanks for pointing that out; we should probably update that sample reason to
something more meaningful.
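For illustration, here is a minimal sketch of an unconditional skip with a meaningful reason string (the test name and reason are hypothetical, not from the docs):

```python
import pytest

# Unconditional skip: there is no condition under which this test can
# currently pass, so @pytest.mark.skip (not skipif) is the right marker.
@pytest.mark.skip(reason="fake object does not support transactions yet")
def test_transaction_rollback():
    # Body intentionally left unimplemented; pytest never executes it.
    assert False
```

The `reason` shows up in the test report (with `-rs`), so a descriptive string makes it much easier to audit lingering skips later.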
The previous section shows a clearer distinction between `skip` and `xfail`.
> Unconditionally skipping tests is also useful in cases where the test
> isn't implemented yet, but you eventually do want to implement it
> (e.g. an empty or incomplete test function that might not necessarily
> raise an exception in its current state).
> Another reason is flaky
> tests that sometimes fail and sometimes succeed, so that xfail()
> wouldn't work as a substitute.
Indeed, but in James's case I think `xfail(strict=True)` is the way to go:
it will *fail* the test if it suddenly starts passing because the fake now
works as expected. That is a good thing, because the opportunity can then be
used to remove the marker. A skip would stay there forever until someone
noticed it by some other means.
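A minimal sketch of that pattern (the fake and test names are made up for illustration):

```python
import pytest

# Hypothetical fake that does not yet implement the behavior under test.
def fake_lookup(key):
    raise NotImplementedError("fake does not implement lookup yet")

# strict=True turns an unexpected pass (XPASS) into a hard failure, so the
# marker cannot silently outlive the limitation it documents: once the fake
# starts working, the suite fails until someone removes the marker.
@pytest.mark.xfail(strict=True, reason="fake_lookup is not implemented yet")
def test_lookup_returns_value():
    assert fake_lookup("answer") == 42
```

With plain `xfail` (the default `strict=False`), an unexpected pass is merely reported as XPASS and the run stays green, which is exactly the "stays in there forever" problem a skip has.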