[TIP] Test coverage and parsing.

Olemis Lang olemis at gmail.com
Mon Oct 12 06:10:20 PDT 2009


2009/10/9 Michał Kwiatkowski <constant.beta at gmail.com>:
> Hi again,
>

:)

> Sorry for the late reply, it's a crazy start of the semester.
>

Don't worry. I will never forgive you!

XD

> 2009/10/6 Olemis Lang <olemis at gmail.com>:
>
>> So I suppose that I'd need to implement
>> one such model for SQL (isn't it?) so that SQL statements are
>> generated instead of unstructured data.
>
> That's right, and you can build your generators on top of existing
> ones, provided by QuickCheck, or in Python's case: PeckCheck. Some
> built-in generators include things like a_boolean or an_int (Arbitrary
> Bool and Arbitrary Integer respectively for Haskell). You can also
> build new generators out of existing ones using combinator functions,
> like a_list or a_choice (called listOf and oneof in Haskell).
>

Well, I think I got the idea. It seems I'll only need those
combinator functions for now.
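Just to check my understanding, here's a minimal sketch of that
combinator style in plain Python (the names echo the ones above, but
this is *not* PeckCheck's actual API; here a generator is just a
zero-argument callable):

import random

def an_int(lo=-100, hi=100):
    # elementary generator: a random integer in [lo, hi]
    return lambda: random.randint(lo, hi)

def a_boolean():
    return lambda: random.choice([True, False])

def a_list(gen, max_len=10):
    # combinator: lists whose items come from `gen`
    return lambda: [gen() for _ in range(random.randint(0, max_len))]

def a_choice(*gens):
    # combinator: pick one of the given generators at random
    return lambda: random.choice(gens)()

mixed = a_list(a_choice(an_int(), a_boolean()))
print(mixed())    # e.g. [True, -42, 7]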

>>  - Since I've found nothing about that, and you mentioned composable
>>    operators and other features: Is it part of QuickCheck or is it an
>>    extension? If so, which one?
>>  - Is it available in the Py version too?
>
> If you look at the actual code for PeckCheck, it's a little more
> than 200 lines.

Well, I just skipped the part where you illustrated that in your
previous example ... sorry.

> I think the idea is more important here than the actual
> implementation. And the idea is this: Given that you have code written
> in a functional style (i.e. without side effects) you can express
> properties of the code in a mathematical-like notation using data
> generators for doing actual checking.
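
To make sure I follow, such a property might be sketched like this in
plain Python (the generator and the driver loop are just hypothetical
stand-ins for what the framework would provide):

import random

def a_list_of_ints():
    return [random.randint(-100, 100)
            for _ in range(random.randint(0, 20))]

# Property in mathematical-like style: reversing a list twice yields
# the original list, for *any* generated input.
def prop_reverse_involutive(xs):
    return list(reversed(list(reversed(xs)))) == xs

# The framework's job boils down to a loop like this one; the range
# is the "scale" knob mentioned below.
for _ in range(100):
    xs = a_list_of_ints()
    assert prop_reverse_involutive(xs), xs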

The only warning I want to make about this is that I don't like the
workflow where people create models and then generate both code and
tests from them ...

> The produced test suite will be
> closer to the textual specification and you can adjust the scale of
> your tests more easily (by which I mean how many test cases for each
> spec you want to run). It's kind of like fuzzy testing, but with a bit
> less fuzz. ;-)
>

Well, but special attention must be given to coverage and regression
testing. I found some gaps there.

> Now, the idea is simple, but you need a good framework to execute it.
> Here's where all the versions differ. For example, Haskell leverages
> its knowledge of types, making specifications less verbose.
>
>> 1- It seems that CI environments are compulsory after adopting this
>> approach. I mean, how could you know if test data is good enough to
>> cover all relevant scenarios? What better tool than CI systems to
>> measure such metrics? ;o)
>
> It's a general statement about all tests: you can't use them to prove
> your code works, no matter how many of them you have.

Yes, that's right. But when tests are written manually there's a
*good* human judgment about whether test cases are useful & correct
or not. OTOH, if they are generated, you just can't say ...

«The software is rock solid, I generated 1,000,000 tests and they all passed»

... since you don't actually control all the details of the
generator: what if 83% of your tests run on irrelevant or redundant
data? That's why I mentioned CI environments; they seem perfect for
displaying and tracking reports about the distribution of the test
data created by the generator (e.g. QuickCheck's `collect` function,
AFAICR). But shouldn't schemas (better yet, a single language-agnostic
schema) be available, so that test generators & CI tools can exchange
that data, track generator coverage and build reports out of it?

I think that would be nice ...
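
Something like this toy version of `collect` is what I have in mind;
its counters are what a CI tool could chart from build to build (a
hand-rolled sketch, not QuickCheck's actual API nor any real CI
integration):

from collections import Counter
import random

def a_list_of_ints():
    return [random.randint(-100, 100)
            for _ in range(random.randint(0, 20))]

distribution = Counter()

def collect(label, test_case):
    # tag each generated case so its distribution can be reported
    # later, in the spirit of QuickCheck's `collect`
    distribution[label] += 1
    return test_case

N = 1000
for _ in range(N):
    xs = a_list_of_ints()
    collect('empty' if not xs else 'short' if len(xs) < 5 else 'long',
            xs)
    assert list(reversed(list(reversed(xs)))) == xs

# the report a CI tool could track across builds:
for label, count in distribution.most_common():
    print('%5.1f%%  %s' % (100.0 * count / N, label))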

> Still, IMHO
> having 1000 diverse test cases auto-generated from a declarative spec
> (and different set of 1000 each time!) is better than having a few
> written manually.

Certainly ... but IMHO only if the test data is good enough. Besides,
there's an open subject: regression testing.

> In the particular case of the QuickCheck-like frameworks, right
> distribution of generated values must be taken into consideration.
> There are always some corner cases and you want them to be tested
> reliably, not randomly (say, in a tenth of all test runs). That's another
> area where a good framework can help the programmer express the
> specifications right.
>

;o)
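
FWIW, the obvious hand-rolled way to get that reliability is to always
run the known corner cases before the random bulk (a sketch, not any
framework's actual API):

import random

CORNER_CASES = [[], [0], [-1, 1]]    # deterministic: tested every run

def a_list_of_ints():
    return [random.randint(-100, 100)
            for _ in range(random.randint(0, 20))]

def test_cases(n=100):
    # corner cases first, then the random bulk
    for xs in CORNER_CASES:
        yield xs
    for _ in range(n - len(CORNER_CASES)):
        yield a_list_of_ints()

for xs in test_cases():
    assert list(reversed(list(reversed(xs)))) == xs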

>> 2- This approach and these libs are both very useful. Shouldn't we
>> build at least a few of them and enhance support for this in CI tools?
>
> I agree. Specs for things like XML, SQL or JSON are there, but having
> executable specs would be even better. :-)

:)

> Seriously though, right now
> it sounds like a pipe dream

Why exactly? (I'm a neophyte; I can't see the light at the end of the
tunnel :-/ )

> to have full executable specs, but we (as
> a more general "development community") could do better.
>

+0.8

> One example of that, from the Java world, is the Technology
> Compatibility Kit - a comprehensive test suite used for checking
> compatibility of different JVM implementations.
>

Didn't know about it.

>> I like it ! Damn it ! I couldn't sleep last night :P
>>
>> Thnx very, very, very much !
>
> The pleasure is mine. It's always nice to meet another test infected person. :-)
>

YES!
It's the T-virus and it spreads fast as hell ... Take care!

-- 
Regards,

Olemis.

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/

