[TIP] Best practice for failing tests

Chris Jerdonek chris.jerdonek at gmail.com
Thu Mar 22 05:39:42 PDT 2018


On Thu, Mar 22, 2018 at 5:16 AM, James Cooke <me at jamescooke.info> wrote:
> I have a failing test where the test adds new content and then queries for
> all content with the "no stale" flag passed, but the test fails because the
> new content is not returned. I know this works in production against a full
> cluster; it's a limitation of the fake that breaks the test.
>
> In terms of best practice (what you would like to see if you were reading my
> test suite), what would you recommend for this test and why? I can see some
> options...
>
> * Mark the test as xfail?
>
> * Adjust the test to pass when stale data is returned?
>
> * Remove the test from the suite and say the functionality is untested?

What occurs to me first is to write the test as you eventually want it
to pass, but mark it as skipped (e.g. using @unittest.skip) with a
TODO note.
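
Something like this, for example (just a sketch -- the client and fake
helper names here are made up, but the test body is written the way you'd
eventually want it to pass):

    import unittest

    class TestFreshQuery(unittest.TestCase):

        # TODO: remove the skip once the fake honors the no-stale flag.
        @unittest.skip("fake does not yet honor the no-stale flag; "
                       "the query works against a real cluster")
        def test_new_content_returned_when_not_stale(self):
            client = make_client_backed_by_fake()  # hypothetical helper
            client.add_content({"id": 1, "body": "hello"})
            results = client.query_all(stale=False)
            self.assertIn(1, [doc["id"] for doc in results])

That way the test shows up as skipped (with the reason) on every run,
instead of silently disappearing from the suite.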

The reason is that the correct solution IMO is to extend the fake to
cover this new functionality, but that's something you haven't
implemented yet.

Strictly speaking, I wouldn't consider it an "expected" failure, because in
the end you really want and expect it to pass.
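
(If you did want the xfail semantics for comparison, it would look roughly
like this with pytest -- but note that an xfail stays "green" while it
fails, which to me sends the wrong message here:)

    import pytest

    # xfail means "we expect this to fail": a failing run is reported as
    # XFAIL and does not break the build; it is only reported as XPASS
    # if it starts passing.
    @pytest.mark.xfail(reason="fake does not honor the no-stale flag")
    def test_new_content_returned_when_not_stale():
        ...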

--Chris


> Or maybe something else?
>
> Thanks for any suggestions.
>
> James
>
> ps. Ideally I'd be able to extend the fake to cover this new functionality
> but my Go skills are just not good enough :(