[TIP] - acceptance test framework?

Laurent Ploix laurent.ploix at gmail.com
Fri Sep 11 02:19:12 PDT 2009


2009/9/11 Pekka Klärck <peke at iki.fi>

> 2009/9/11 Laurent Ploix <laurent.ploix at gmail.com>:
> > 2009/9/11 Pekka Klärck <peke at iki.fi>
> >> 2009/9/10 Laurent Ploix <laurent.ploix at gmail.com>:
> >> > 2009/9/10 Pekka Klärck <peke at iki.fi>
> >> >> 2009/7/29 Laurent Ploix <laurent.ploix at gmail.com>:
> >> >> >
> >> >> > As far as I understand:
> >> >> > - Robot will take a number of scenarios, and for each of them,
> >> >> > it will stop at the first error/failure.
> >> >>
> >> >> It's true that the framework stops executing a single test when a
> >> >> keyword it executes fails. These keywords can, however, internally
> >> >> do as many checks as they want and decide when to stop. We are also
> >> >> planning to add support for continuing execution after failures in
> >> >> the future [1].
> >> >>
> >> > Well, for me, this looks like a very annoying thing. I prefer to
> >> > see the fixture (library) being in charge of accessing the tested
> >> > software (doing something, extracting data from it), and the
> >> > framework being in charge of checking the validity of the returned
> >> > data.
> >>
> >> How can any generic framework know what data is valid without someone
> >> telling it?
> >
> >
> > Because, in the scenario that you describe, you tell the framework
> > which data you expect, like:
> >
> >   fixture Add
> >   op1 | op2 | result?
> >   1   | 2   | 3
>
> Even here you tell the framework that the correct result is 3.
> Personally I don't see the benefit of the framework itself verifying
> the result compared to it just calling a keyword in a library/fixture
> that verifies it. The biggest benefit of the latter approach is that
> the test data is always in the same format regardless of the test
> style (workflow, data-driven, BDD), which makes it a lot easier to
> create other tools that understand the data. I believe the biggest
> reason there is still no generic test data editor for Fitnesse is
> that the data can be in so many formats. Tool support is really
> important when you start having more tests, and that's why we are
> currently working actively to make RIDE [1] better.
>
> [1] http://code.google.com/p/robotframework-ride
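[Editor's note: the table-driven style Laurent sketches above can be illustrated with a minimal Python sketch. All names here (`AddFixture`, `run_table`, the row dictionaries) are invented for illustration and are not part of FitNesse or Robot Framework; the point is only the division of labour: the fixture computes, the framework compares and reports.]

```python
# Hypothetical column-fixture sketch: the fixture only performs the
# action and returns data; it contains no verification logic.
class AddFixture:
    def __init__(self):
        self.op1 = 0
        self.op2 = 0

    def result(self):
        # Extract/compute the data under test; no checking here.
        return self.op1 + self.op2


def run_table(fixture_cls, rows):
    """Hypothetical framework loop: for each table row, feed the inputs
    to the fixture, compare the extracted value against the expected
    "result?" column, and collect a pass/fail report."""
    report = []
    for row in rows:
        fixture = fixture_cls()
        fixture.op1 = row["op1"]
        fixture.op2 = row["op2"]
        actual = fixture.result()
        report.append((row, actual == row["result?"]))
    return report


# Two rows like the "fixture Add" table above; the second is wrong on
# purpose so the report shows one pass and one failure.
rows = [{"op1": 1, "op2": 2, "result?": 3},
        {"op1": 2, "op2": 2, "result?": 5}]
report = run_table(AddFixture, rows)
```

This makes Laurent's "good things for free" concrete: the fixture stays trivial, and the report naturally counts how many rows failed rather than stopping at the first mismatch.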
>
> >> > In other terms, I would like to do something like:
> >> > - Do something
> >> > - Extract data x, y, z and check that x=1, y=2, z=3
> >> > ... with the scenario describing the expected data, the fixture
> >> > extracting data x, y, z, and the framework telling me whether the
> >> > data is correct or not (with a report).
> >>
> >> You can do all this with Robot Framework, but the actual
> >> verification is always done in libraries. The framework has many
> >> built-in keywords for verifications, but even when you use them the
> >> framework is still just executing keywords and catching possible
> >> exceptions.
> >
> > Understood. But I don't find that convenient.
> > Let me take an example.
> > You test software that calculates lots of data. You want to see
> > which data is wrong.
> > I find it much more convenient to have
> > - a fixture that extracts calculated data (but does no verification),
> > - a scenario that tells you what data you want to extract and what
> > result you expect
> > - a framework that matches both and creates a report
> > Then, you get many good things for free:
> > - You don't have to code any verification logic in the fixture.
> > Fixtures tend to be simpler (i.e., get the input, do the action, get
> > the outputs)
> > - You can see how many failures you get, which ones, etc. (it's
> > meaningful to see how many failures you have; having one wrong value
> > or all of them wrong does not involve the same type of debugging)
>
> We seem to have different preferences on what's important. I don't
> care too much about adding the verification to the fixture, as it's
> normally enough to do something like "if result != expected: raise
> Exception(error_message)". On the other hand, I consider the fact
> that the fixture is strongly coupled to the test data format a really
> big problem. It's not just that creating editors for the data is
> harder when there are many data formats; it also means that you
> cannot reuse fixtures as freely because they are compatible with only
> one kind of test.
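[Editor's note: the one-line verification Pekka mentions can be sketched as a plain Python library keyword. This is illustrative only and is not Robot Framework's actual built-in keyword API; the name `result_should_be` is invented. It shows the keyword-based style: the library raises on mismatch, and the framework's only job is to run keywords and catch exceptions.]

```python
# Sketch of a verification keyword: the check lives in the library,
# and a failure is signalled by raising an exception.
def result_should_be(actual, expected, error_message=None):
    if actual != expected:
        raise AssertionError(
            error_message or "%r != %r" % (actual, expected))

# A workflow-style test is then just a sequence of keyword calls;
# the framework marks the test failed on the first raised exception.
result_should_be(1 + 2, 3)  # passes: returns quietly
```

The design trade-off discussed in the thread is visible here: the keyword works with any test style (workflow, data-driven, BDD) because it is decoupled from the data format, at the cost of stopping the test at the first failed check.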
>
> I get the feeling that without concrete examples this discussion is
> not going to get much further. I'm happy to agree to disagree on
> what's important in framework design, and I also believe it's simply
> a good thing that there are different frameworks with different
> approaches.
>
> Cheers,
>    .peke
>

Yes, I agree to disagree :-) and it's good to have different tools for
different problems.
I wish Robot Framework every success.

I hope to come back with a proposal some day, if I can open-source an
internal development.

-- 
Laurent Ploix

http://lauploix.blogspot.com/


More information about the testing-in-python mailing list