[TIP] LNCS: Accuracy of test cases generated by pythoscope WAS: Test coverage and parsing.

Olemis Lang olemis at gmail.com
Mon Oct 12 06:34:38 PDT 2009


This is a fork ... and is related to these threads:

  - «Test coverage and parsing.»
  - ... another thread about unreliable test code and test suites that
get mad ...

2009/10/9 Michał Kwiatkowski <constant.beta at gmail.com>:
> 2009/10/6 Olemis Lang <olemis at gmail.com>:
>
[...]
>
>> 1- It seems that CI environments are compulsory after adopting this
>> approach. I mean, how could you know if test data is good enough to
>> cover all relevant scenarios? What better tool than CI systems to
>> measure such metrics ;o) ?
>
> It's a general statement about all tests: you can't use them to prove
> your code works, no matter how many of them you have. Still, IMHO
> having 1000 diverse test cases auto-generated from a declarative spec
> (and different set of 1000 each time!) is better than having a few
> written manually. That's true for at least those programs that are
> mostly pure (i.e. don't use side effects) and parsers and compilers
> are a great example of that category.
>
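
Just to make that point concrete, here is roughly what «1000 diverse
test cases generated from a declarative spec» can look like for a pure
function. Everything below (the toy `normalize_spaces` SUT, the input
generator, the properties) is invented for illustration, not taken from
any real project:

{{{
import random
import string

def normalize_spaces(text):
    # Toy pure SUT: collapse runs of whitespace into single spaces.
    return ' '.join(text.split())

def random_text(rng):
    # Declarative description of the input domain: 0-10 words of
    # 1-8 lowercase letters, joined by runs of 1-3 spaces.
    words = [''.join(rng.choice(string.ascii_lowercase)
                     for _ in range(rng.randint(1, 8)))
             for _ in range(rng.randint(0, 10))]
    return (' ' * rng.randint(1, 3)).join(words)

def test_normalize_spaces_properties():
    rng = random.Random()           # a different set of cases each run
    for _ in range(1000):           # 1000 generated cases
        text = random_text(rng)
        out = normalize_spaces(text)
        assert '  ' not in out                  # no double spaces remain
        assert out.split() == text.split()      # words preserved, in order
        assert normalize_spaces(out) == out     # idempotent
}}}

Of course this only says something about the inputs that happen to be
drawn, which is exactly why coverage metrics from a CI run are useful
to check whether the generated data actually reaches the interesting
paths.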

BTW & OT : This is something I've wanted to ask for a long time,
but ... you know ... I'm shy

It's about `pythoscope`. I'll start by assuming that everything written
in section «2.3 Models: build or borrow?» of Mark Utting's book [1]_
is valid to some extent (something that could be reconsidered, of
course ;o). In particular I'll focus on these suggestions and comments
the authors make:

{{{
By now, you have seen that a good model is the most essential thing to have
before we can start generating tests using model-based testing.
}}}

+1 ... IMHO

{{{
... does the testing team have
to build their own model for testing purposes, or can they borrow (reuse) the
models that the development team has already developed ?
}}}

This is the general question introducing the whole section.

{{{
At first glance, it seems very attractive to just reuse the development
model, without change, as the input for the model-based testing process.
This would give us 100 percent reuse and would obviously save time.
However, this level of reuse is rarely possible, and when it is possible,
it is often a very bad idea.
}}}

+1 ... IMHO

They also mention that (and this is the most important point ;o)

{{{
Probably the only people who write development models that describe
the full dynamic behavior of the SUT are those who intend to generate code
automatically from the model rather than develop the code by hand
(...) In this case, using the same model for test generation would be
a very bad idea because it would mean that the tests and the
implementation of the SUT are both being derived from the same source.
So any errors in that model will generate incorrect implementation
code, but the tests will contain the same errors, so no test failures
will be found. The problem is that the tests and the implementation
are not sufficiently independent.
}}}

And here is where the meteorite falls! (especially because maybe I
don't fully understand how things work ;o)

`pythoscope` may be considered an MB test generator that takes as
input the (legacy) code itself (... or an equivalent representation
like ASTs ;o) and employs that model to generate test cases. If this
is correct (CMIIW), then users are facing the situation mentioned
above: «tests and the implementation are not sufficiently independent»
and «any errors in that model (i.e. the legacy code) will generate
incorrect implementation code (i.e. they are both the same thing), but
the tests will contain the same errors, so no test failures will be
found».
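
To make the failure mode explicit, here is a hand-written sketch (not
actual `pythoscope` output; the toy `clean_name` function and the
recorded value are invented for illustration). A test whose expected
value is derived from the implementation's own observed behaviour will
happily encode the implementation's bugs:

{{{
import unittest

def clean_name(name):
    # Legacy SUT with a bug: it should strip whitespace on both sides,
    # but it only strips the left side.
    return name.lstrip()        # BUG: should be name.strip()

class TestCleanName(unittest.TestCase):
    def test_clean_name_with_padded_input(self):
        # Expected value recorded from the buggy implementation, so the
        # trailing spaces are baked into the assertion and the test passes.
        self.assertEqual(clean_name('  guido  '), 'guido  ')

if __name__ == '__main__':
    unittest.main()
}}}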

So ...

Q:
    - Is it possible that something like this is happening
      when using `pythoscope`?
    - If so ... is there any way (provided by pythoscope or not ;o)
      to overcome this?

Well, that's enough for now: Britney is calling!
C'u soon :o)

.. [1] Practical Model-Based Testing: A Tools Approach, Mark Utting,
       Bruno Legeard. (ISBN-13: 978-0-12-372501-1)

-- 
Regards,

Olemis.

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

