[TIP] LNCS: Accuracy of test cases generated by pythoscope WAS: Test coverage and parsing.
constant.beta at gmail.com
Mon Oct 12 11:17:57 PDT 2009
On Mon, Oct 12, 2009 at 6:14 PM, Olemis Lang <olemis at gmail.com> wrote:
> The workflow I envisioned to use `pythoscope` was to follow the
> aforementioned steps but, once tests are generated, review all the
> test cases in order to confirm that the assertions in there are really
> correct. Otherwise, a bug has been found and the test case has to be
> modified somehow in order to meet the requirements.
You can certainly do that. Unit tests, by their very nature, are
simpler to read than the system code, so it may be easier to spot bugs
there. By virtue of the whole process being automated, a test
generator can also point you to corner cases you didn't think about.
For example, if you see a test that contains:
assertRaises(BadException, lambda: your_function("some bad value"))
you know there's something wrong with your_function (or not, if
raising BadException was intentional).
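To make that concrete, here is a minimal runnable sketch of the review workflow. The function and test names are hypothetical, not actual pythoscope output; the point is that a generator records what the code *currently does*, so reviewing the generated assertions is where a human decides whether the recorded behavior is intentional or a bug:

```python
import unittest

def parse_port(value):
    # Hypothetical system-under-test: naive parsing with no range check.
    return int(value)

class TestParsePort(unittest.TestCase):
    # A generator observing parse_port("some bad value") would record the
    # ValueError it raised and emit an assertion like this one. On review,
    # you confirm the exception is intentional here.
    def test_parse_port_raises_on_bad_value(self):
        self.assertRaises(ValueError, lambda: parse_port("some bad value"))

    # This recorded behavior (accepting a negative port) passes, because it
    # matches what the code currently does -- but a reviewer should flag it
    # as a bug and fix parse_port rather than keep the assertion.
    def test_parse_port_accepts_negative(self):
        self.assertEqual(parse_port("-1"), -1)

if __name__ == "__main__":
    unittest.main()
```

Both tests pass against the code as written; only the human review step tells you that the second one is documenting a bug rather than a requirement.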
> - To what extent does `pythoscope` help with test coverage?
It should make it higher, or at least keep it at the same level. ;-) I
think Ryan's answer sums it up nicely.