[TIP] Best practice for failing tests
me at jamescooke.info
Thu Mar 22 05:16:34 PDT 2018
A small (hopefully simple) question about best practice for failing tests.
I'm building a library that communicates with an external service. That
external service is run in production as a cluster with multiple nodes
taking various responsibilities. As a result of this complication, the
developer of the external service provides a fake of the edge of the
service that I'm testing against. (That's a "fake" as per the TotT
definition.)
However, the fake is not perfect - one tiny piece of functionality is
missing in terms of how it keeps data up to date when new content is
added. There is a flag provided in one of the API endpoints that allows
for no stale data to be returned, but the fake does not implement this -
it returns stale data even when that flag is passed.
So my question is how should I test against this in pytest?
I have a failing test: it adds new content and then queries for all
content with the no-stale flag passed, but the test fails because
the new content is not returned. I know that this works in
production against a full cluster; it's the limitation of the fake
that breaks the test.
In terms of best practice (what you would like to see if you were
reading my test suite), what would you recommend for this test and why?
I can see some options...
* Mark the test as xfail?
* Adjust the test to pass when stale data is returned?
* Remove the test from the suite and say the functionality is untested?
Or maybe something else?
Thanks for any suggestions.
ps. Ideally I'd be able to extend the fake to cover this new
functionality, but my Go skill is just not good enough :(
More information about the testing-in-python mailing list