[TIP] Accuracy of test cases generated by pythoscope WAS: Test coverage and parsing.

Ryan Freckleton ryan.freckleton at gmail.com
Mon Oct 12 10:22:42 PDT 2009

On Mon, Oct 12, 2009 at 10:14 AM, Olemis Lang <olemis at gmail.com> wrote:
> @Ryan : I'm posting your message to the list. Hope you don't mind ...

Not at all, I *meant* to send it to the list, just pressed the wrong button :)


>> Assuming that your legacy code is perfect (hah!) then
>> pythoscope will generate executable tests that match this model.
> That's the good one! You'll receive tests that check that your legacy
> code does exactly what the code currently does. I mean, the software
> does the following:
>  - Builds a model of the computations performed by the target software
>  - Infers some rules and conditions that *SHOULD* be met, derived
>    from that same code
>  - Generates test cases to check that the inferred conditions are
>    really satisfied.
> Isn't it obvious that tests should always pass? (pls, CMIIW ... this
> is just my naive idea ...) Besides, if tests fail, then the only thing
> you can ensure is that the test generator has failed or (if newer or
> modified versions of the legacy code are in use ;o) that the new system
> behaves somewhat differently ...

Correct, or the behavior of the system is non-deterministic in a way
that isn't detectable by the test generator (e.g. the software runs
into deadlock conditions that cause the test to fail). I wouldn't
categorize this as a "test generator failure" per se.
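As a rough sketch of that generation model (the function and test below are hypothetical illustrations, not actual pythoscope output), a generated characterization test simply records whatever the legacy code currently returns, so it passes by construction against the code it was derived from:

```python
import unittest

# Hypothetical legacy function the generator has observed running.
def normalize_price(text):
    return float(text.lstrip("$"))

class TestNormalizePrice(unittest.TestCase):
    def test_normalize_price(self):
        # The expected value was captured from an actual run, so this
        # assertion holds for the code exactly as it stands today.
        self.assertEqual(normalize_price("$19.99"), 19.99)

if __name__ == "__main__":
    unittest.main()
```

Such a test only starts failing when the behavior changes between runs, which is exactly the regression signal you want from legacy code.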

> The workflow I envisioned for using `pythoscope` was to follow the
> aforementioned steps but, once tests are generated, review all the
> test cases to confirm that the assertions in them are really
> correct. Otherwise, a bug has been found, and the test case has to
> be modified to match the requirements.
> BTW this is much better than writing test cases manually
> :o)

Definitely, in this case you'll be guaranteeing what is sometimes
called "structural" coverage. That is, you have test cases that
exercise each unit of code.

> Another Q:
>  - At what extent `pythoscope` helps with test coverage ?

It helps dramatically with physical coverage and indirectly with
functional coverage (depending on how good your entry points are and
how well you elaborate on the stubs). These are the two aspects of
coverage that are the closest to developers and therefore the most
directly useful.
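As a minimal sketch, physical coverage of this kind can be checked with nothing more than the standard library's `trace` module (the function here is invented for illustration):

```python
import trace

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

# Count which lines execute; trace=False suppresses line-by-line printing.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)   # exercises only the non-negative branch
results = tracer.results()
# results.counts maps (filename, line_number) -> hit count; the line
# returning "negative" never appears, revealing a physical-coverage gap.
```

In practice a dedicated tool such as coverage.py gives friendlier reports, but the principle is the same: which statements did the generated tests actually reach?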

Other aspects of coverage add to or detract from the software's value,
but aren't automatically addressed when you use Pythoscope (or any
testing tool, for that matter).

You also have to cover the possible data inputs, the platforms that
the software will run on or interact with, and the software's operation.

The software's operation includes such characteristics as:
- Capability
- Reliability
- Usability
- Security
- Scalability
- Performance
- Installability
- Compatibility
- Supportability
- Testability
- Maintainability
- Portability
- Localizability

All of these things become easier to guarantee once you have excellent
physical and functional coverage, though.

--Ryan E. Freckleton
