[TIP] decorators in testing : warning support for TestCase
olemis at gmail.com
Mon Apr 6 08:50:43 PDT 2009
On Mon, Apr 6, 2009 at 10:17 AM, Douglas Philips <dgou at mac.com> wrote:
> On or about 2009 Apr 6, at 9:21 AM, Olemis Lang indited:
>> - The recent thread about highlighting the tests that were previously
>> run but didn't show up at testing time ... this is not precisely an
>> error, but perhaps it means that something weird was going on, or
>> maybe not ;).
> Perhaps, but I don't think that should be part of the regression
> runner itself.
ok, just an example ;)
>> - The same use case we are talking about ... if setUp fails there are
>> mainly three options to consider :
>> * signal a failed test case ... but IMHO this *SHOULD NOT* be used
>> since the test itself did not fail ... it just couldn't be
>> performed ... so a better approach is ...
> Agreed that is not an option.
>> * ... to signal a test failure, indicating that the test code was
>> not accurate and needs to be fixed. But maybe this is not the
>> case. Perhaps under certain circumstances the lack of resources
>> is not a test failure at all, but a consequence of a specific
>> config ... and in this case maybe it is better ...
> Yes, and we use "Skip" in this case, though in our version of the
> unittest framework we use an eligibility test before even getting to
> setUp, and that catches most of these "skip in setUp" kinds of cases.
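For readers following along, a minimal sketch of what such a "skip in setUp" check might look like, using the `skipTest` API that unittest grew in Python 2.7 / unittest2 (`HAS_NETWORK` is a hypothetical stand-in for whatever resource probe your suite needs):

```python
import unittest

# Hypothetical resource probe; stands in for any environment check.
HAS_NETWORK = False

class NetworkTest(unittest.TestCase):
    def setUp(self):
        # Skip (rather than fail) when the required resource is missing:
        # the test was not exercised, but nothing in the SUT is broken.
        if not HAS_NETWORK:
            self.skipTest("network resource unavailable")
        self.conn = object()  # placeholder for real resource setup

    def test_something(self):
        self.assertTrue(True)
```

Run under a TestResult, the test is counted as run but recorded in `result.skipped`, so the report can surface exactly the "skipped in setUp" cases being discussed here.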
That's valuable; however, I think Titus's request was about being aware
of these "skip in setUp" and other kinds of cases, and maybe
>> * ... to log a warning message ... ;)
and the question is ...
> One other thing we do in setUp is protect our code with asserts to
> indicate that we need to go back and write more test code. For
> example, if the product specs says: "Device can do X", we need to test
> X. However, if the current hardware can't do X, we assert that to
> protect our test from producing a false-positive/PASS. Sure, we could
> speculatively write code to test X, but if we can't run that code, it
> is highly suspect and likely to require bug fixes when X is actually
> made available. Instead of wasting time on code we can't run, we
> protect our tests with asserts. Our philosophy over all is that a test
> FAILure is a bug in the device-under-test. A test ERRor is a bug in
> the test itself, not the device.
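A minimal sketch of that "assert in setUp" guard (`DEVICE_CAN_DO_X` is a hypothetical capability flag; note that whether a failed assert in setUp surfaces as an ERRor or a FAILure depends on the unittest version in use, so the only guarantee is a non-PASS):

```python
import unittest

# Hypothetical capability flag for the device under test.
DEVICE_CAN_DO_X = False

class FeatureXTest(unittest.TestCase):
    def setUp(self):
        # Guard: if the device can't do X yet, refuse to run and surface
        # a non-PASS result instead of a silent false-positive PASS.
        assert DEVICE_CAN_DO_X, "test code for X is not runnable yet"

    def test_x(self):
        pass  # speculative test body for X would go here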
However, some people under certain circumstances might consider that a
test error is an error in the test code itself, a failure is an unmet
condition with respect to the SUT, and a warning is something you
might want to know about even if the test does not actually fail.
In fact, in the specific example I told you about, one such warning is :
«PEP 306 does not support string interpolation in error messages»
However, the contract semantics work just fine, and therefore there's
no error or failure at all.
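A sketch of that third option, logging a warning without affecting the verdict (`check_contract` is a hypothetical checker standing in for the real contract machinery; it uses the stdlib `warnings` module):

```python
import unittest
import warnings

def check_contract(func):
    # Hypothetical contract checker: it notes a cosmetic limitation,
    # but the contract semantics themselves hold.
    warnings.warn("error messages do not support string interpolation")
    return True

class ContractTest(unittest.TestCase):
    def test_contract_holds(self):
        with warnings.catch_warnings(record=True) as caught:
            warnings.simplefilter("always")
            self.assertTrue(check_contract(lambda: None))
        # The warning is informational only: the test still passes.
        self.assertEqual(len(caught), 1)
```

The test PASSes; the warning is something a richer test runner could collect and report alongside the usual pass/fail/error counts.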
PS: I've just eaten my tasty ellipsis ... crunch, crunch ... oops!
there they are, back again.
Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/
The BDFL retires ... The end of Python 3k?
More information about the testing-in-python mailing list