[TIP] LNCS: Accuracy of test cases generated by pythoscope WAS: Test coverage and parsing.

Olemis Lang olemis at gmail.com
Mon Oct 12 09:14:21 PDT 2009


@Ryan : I'm posting your message to the list. Hope you don't mind ...

On Mon, Oct 12, 2009 at 10:32 AM, Ryan Freckleton
<ryan.freckleton at gmail.com> wrote:
> 2009/10/12 Olemis Lang <olemis at gmail.com>:
> <snip>
>>
>> And here is where the meteorite falls ! ( especially because maybe I
>> don't fully understand how things work ;o)
>>
>> `pythoscope` may be considered a model-based (MB) test generator that
>> takes the (legacy) code itself as input (... or an equivalent
>> representation like ASTs ;o) and employs that model to generate test
>> cases. If this is correct (CMIIW) then users are facing the situation
>> mentioned above: «tests and the implementation are not sufficiently
>> independent» and «any errors in that model (i.e. legacy code) will
>> generate incorrect implementation code (i.e. they are both the same
>> thing), but the tests will contain the same errors, so no test
>> failures will be found»
>>
>> So ...
>>
>> Q:
>>    - Is it possible that something like that is happening
>>      when using `pythoscope` ?
>>    - If so ... Is there any way (provided by pythoscope or not ;o)
>>      to overcome this ?
>>
[...]
>
> Looking on the wikipedia page for Model-Based-Testing, it looks like
> the test cases are created from a model of the system's desired
> behavior.

Well, they *SHOULD* be created from a model of the system's desired
behavior; but sometimes that's not what happens, and to some extent
this is the case here ... ;-/

The system's desired behavior may not be the software's real behavior.
Otherwise, why are we testing it anyway ?

> Assuming that your legacy code is perfect (hah!) then
> pythoscope will generate executable tests that match this model.
>

That's the best case ! You'll receive tests that check that your legacy
code does exactly what it already does. I mean the tool does the
following (see the sketch below for a toy illustration) :

  - Builds a model of the computations performed by the target software
  - Infers some rules and conditions that *SHOULD* be met, but
    derived from that same code
  - Generates test cases to check that the inferred conditions are
    really satisfied.
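
To make that concrete, here is a minimal, made-up sketch. It is *not*
actual pythoscope output; the function, module names and expected
values are invented, but it shows the shape of a characterization test
derived from buggy legacy code:

# legacy.py -- hypothetical legacy code; the "model" the generator works from
def full_price(price, surcharge):
    # Intended behavior: price + surcharge, but the legacy code
    # accidentally doubles the surcharge (a latent bug).
    return price + 2 * surcharge


# test_legacy.py -- a characterization test in the spirit of what such a
# generator derives from the code above.  The expected value is whatever
# the legacy code currently returns, so the bug is baked into the
# assertion and the test passes.
import unittest

class TestFullPrice(unittest.TestCase):
    def test_full_price(self):
        # 100 + 2 * 5 == 110 is the *observed* behavior, not the intended one
        self.assertEqual(full_price(100, 5), 110)

if __name__ == "__main__":
    unittest.main()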

Isn't it obvious that such tests should always pass ? (pls, CMIIW ...
this is just my naive idea ... ) Besides, if tests fail then the only
thing you can conclude is that the test generator has failed or (if
newer or modified versions of the legacy code are in use ;o) that the
new system behaves somewhat differently ...
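
Continuing the hypothetical sketch above, that second case would look
like this:

# A later "fix" to the legacy code changes its behavior
def full_price(price, surcharge):
    return price + surcharge          # the surcharge is no longer doubled

# The previously generated characterization test still expects 110, so it
# now fails (105 != 110).  The failure flags a behavior change; by itself
# it cannot tell you whether the old or the new behavior is the right one.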

> In either case unit tests are still very valuable as developer tools
> and bug detectors, even if you don't assume that your code is perfect.
> They allow refactoring, and the detection of bugs within single units
> of code.
>

... just like in this case, but note that the outcomes are only more or
less accurate

The workflow I envisioned for using `pythoscope` was to follow the
aforementioned steps but, once the tests are generated, review all the
test cases in order to confirm that the assertions in there are really
correct. If an assertion is not correct, a bug has been found, and the
test case has to be modified so that it reflects the actual
requirements.
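
Sticking with the same hypothetical sketch (invented names, not literal
pythoscope output), the review step would look roughly like this:

import unittest
from legacy import full_price   # hypothetical legacy module from the sketch above

class TestFullPrice(unittest.TestCase):
    # As generated, the assertion mirrored the current (buggy) behavior:
    #     self.assertEqual(full_price(100, 5), 110)
    # After review, the expected value is corrected to the *required*
    # behavior, so the test now fails against the legacy code and
    # documents the bug until the code is fixed.
    def test_full_price(self):
        self.assertEqual(full_price(100, 5), 105)

if __name__ == "__main__":
    unittest.main()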

BTW this is much better than manual TCs
:o)

> The remaining question is "how do you verify that your legacy code
> matches your model of desired behavior?" There isn't a way to do this
> automatically :(, you have to determine what's valuable to your
> stakeholders and develop tests that will give you information about
> whether those valuable characteristics are present in the software.
>

Ahhh ! There we have it.

My conclusion : `pythoscope` is very useful, but you'd better take care
of what you're doing ;o)

Another Q:
  - To what extent does `pythoscope` help with test coverage ?
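
FWIW, one way to answer that for a given project is to measure it
directly, e.g. with a recent coverage.py (not a pythoscope feature; the
package and directory names below are made up):

import unittest
import coverage

# Measure which lines of the legacy code the generated tests actually execute.
cov = coverage.Coverage(source=["legacy"])     # hypothetical package under test
cov.start()

suite = unittest.defaultTestLoader.discover("tests")  # pythoscope-generated tests
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()
cov.report(show_missing=True)   # lists the lines never exercised by the tests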

-- 
Regards,

Olemis.

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/
