[TIP] - acceptance test framework?

Laurent Ploix laurent.ploix at gmail.com
Thu Sep 10 16:06:20 PDT 2009

2009/9/11 Pekka Klärck <peke at iki.fi>

> 2009/9/10 Laurent Ploix <laurent.ploix at gmail.com>:
> > 2009/9/10 Pekka Klärck <peke at iki.fi>
> >> 2009/7/29 Laurent Ploix <laurent.ploix at gmail.com>:
> >> >
> >> > As far as I understand:
> >> > - Robot will take a number of scenarios, and for each of them, it
> >> > will stop at the first error/failure.
> >>
> >> It's true that the framework stops executing a single test when a
> >> keyword it executes fails. These keywords can, however, internally do
> >> as many checks as they want and decide when to stop. We are also
> >> planning to add support for continuing execution after failures in
> >> the future [1].
> >>
> > Well, I find this quite annoying. I prefer the fixture (library) to be
> > in charge of accessing the tested software (doing something, extracting
> > data from it), and the framework to be in charge of checking the
> > validity of the returned data.
> How can any generic framework know what data is valid without someone
> telling it?

Because, in the scenario that you describe, you tell the framework which
data you expect. For example:


fixture Add
op1 | op2 | result?
  1 |   2 |        3
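The table above can be read by a small, generic runner in the style Laurent describes: the fixture only computes, and the runner compares each row's actual result against the expected "result?" column. This is a minimal sketch; the class and function names are illustrative, not part of any real framework.

```python
class AddFixture:
    """Fixture: performs the operation, does no verification."""
    def run(self, op1, op2):
        return op1 + op2

def run_table(fixture, rows):
    """Generic runner: matches actual vs expected, returns a report.

    Each row is (op1, op2, expected); the report records the actual
    value and whether it matched, without stopping at the first failure.
    """
    report = []
    for op1, op2, expected in rows:
        actual = fixture.run(op1, op2)
        report.append((op1, op2, expected, actual, actual == expected))
    return report

rows = [(1, 2, 3), (2, 2, 5)]  # second row is deliberately wrong
for op1, op2, expected, actual, ok in run_table(AddFixture(), rows):
    status = "pass" if ok else f"FAIL (got {actual})"
    print(f"{op1} | {op2} | {expected} -> {status}")
```

This is essentially what FitNesse-style column fixtures do: the verification logic lives in the runner, not in the fixture.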

> > In other terms, I would like to do something like:
> > - Do something
> > - Extract data x, y, z and check that x=1, y=2, z=3
> > ... with the scenario describing the expected data, the fixture
> > extracting data x, y, z, and the framework telling me whether the data
> > is correct or not (with a report).
> You can do all this with Robot Framework, but the actual verification
> is always done in libraries. The framework has many built-in keywords
> for verification, but even when you use them the framework is still
> just executing keywords and catching possible exceptions.
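To make Pekka's point concrete: a Robot Framework test library is just a Python class whose methods become keywords, and "verification" means a method raising an exception, which the framework catches and reports as a test failure. The sketch below uses made-up keyword names; only the exception-based mechanism is the real convention.

```python
class CalculatorLibrary:
    """A minimal Robot-style test library (names are illustrative)."""

    def __init__(self):
        self._result = None

    def add_numbers(self, op1, op2):
        # An action keyword: does something, verifies nothing.
        self._result = int(op1) + int(op2)

    def result_should_be(self, expected):
        # A verification keyword: the check lives in the library.
        # The framework only sees success (no exception) or failure
        # (an exception was raised).
        if self._result != int(expected):
            raise AssertionError(
                f"Expected {expected}, but got {self._result}")
```

In a test case these would be called as `Add Numbers    1    2` followed by `Result Should Be    3`, since Robot maps keyword names to method names.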

Understood. But I don't find that convenient.

Let me take an example.

Suppose you test software that calculates lots of data, and you want to
see which data is wrong.

I find it much more convenient to have:
- a fixture that extracts the calculated data (but does no verification),
- a scenario that tells you which data to extract and what results you
expect,
- a framework that matches the two and creates a report.

Then you get many good things for free:
- You don't have to code any verification logic in the fixture. Fixtures
tend to be simpler (i.e., get the input, do the action, get the outputs).
- You can see how many failures you get, which ones, etc. (It's meaningful
to see how many failures you have: one wrong value and all of them wrong
do not call for the same kind of debugging.)
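The three-part design above can be sketched in a few lines. Here the framework checks every expected value, never stops at the first mismatch, and the report shows how many values (and which ones) were wrong. The `extract_data` fixture is a hypothetical stand-in for real data extraction.

```python
def extract_data():
    # Fixture: pulls calculated values out of the tested software.
    # Hard-coded here as a stand-in; "z" is deliberately wrong.
    return {"x": 1, "y": 2, "z": 7}

def verify(expected, actual):
    """Framework: match expected vs actual, collecting ALL mismatches."""
    return [(name, want, actual.get(name))
            for name, want in expected.items()
            if actual.get(name) != want]

expected = {"x": 1, "y": 2, "z": 3}   # from the scenario
failures = verify(expected, extract_data())
print(f"{len(failures)} of {len(expected)} values wrong")
for name, want, got in failures:
    print(f"  {name}: expected {want}, got {got}")
```

Because the fixture returns plain data, the same verification and reporting code is reused for every scenario.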

My 2 cents

> Cheers,
>    .peke
> --
> Agile Tester/Developer/Consultant :: http://eliga.fi
> Lead Developer of Robot Framework :: http://robotframework.org

Laurent Ploix
