<br><br><div class="gmail_quote">2009/9/11 Pekka Klärck <span dir="ltr"><<a href="mailto:peke@iki.fi">peke@iki.fi</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
2009/9/11 Laurent Ploix <<a href="mailto:laurent.ploix@gmail.com">laurent.ploix@gmail.com</a>>:<br>
<div><div></div><div class="h5">> 2009/9/11 Pekka Klärck <<a href="mailto:peke@iki.fi">peke@iki.fi</a>><br>
>> 2009/9/10 Laurent Ploix <<a href="mailto:laurent.ploix@gmail.com">laurent.ploix@gmail.com</a>>:<br>
>> > 2009/9/10 Pekka Klärck <<a href="mailto:peke@iki.fi">peke@iki.fi</a>><br>
>> >> 2009/7/29 Laurent Ploix <<a href="mailto:laurent.ploix@gmail.com">laurent.ploix@gmail.com</a>>:<br>
>> >> ><br>
>> >> > As far as I understand:<br>
>> >> > - Robot will take a number of scenarios, and for each of them, it<br>
>> >> > will stop at the first error/failure.<br>
>> >><br>
>> >> It's true that the framework stops executing a single test when a<br>
>> >> keyword it executes fails. These keywords can, however, internally<br>
>> >> do as many checks as they want and decide when to stop. We are also<br>
>> >> planning to add support for continuing execution after failures in<br>
>> >> the future [1].<br>
>> >><br>
>> > Well, for me, this looks like a very annoying thing. I prefer to see<br>
>> > the fixture (library) being in charge of accessing the tested software<br>
>> > (doing something, extracting data from it), and the framework being in<br>
>> > charge of checking the validity of the returned data.<br>
>><br>
>> How can any generic framework know what data is valid without someone<br>
>> telling it?<br>
><br>
><br>
> Because, in the scenario that you describe, you tell the framework which<br>
> data you expect.<br>
> like<br>
> fixture Add<br>
> op1 | op2 | result?<br>
> 1 | 2 | 3<br>
<br>
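The FitNesse-style column-fixture table above could be sketched roughly as follows (the class and field names mirror the table, but the code itself is hypothetical, not from this thread). In this style the fixture only exposes inputs and an output; the framework compares the `result?` column against the returned value:

```python
# Rough sketch of a FitNesse-style column fixture for the "Add" table.
# The fixture holds the input columns and computes the output column;
# the framework (not the fixture) checks the computed value against
# the expected "result?" cell from the table.
class Add:
    def __init__(self):
        self.op1 = 0
        self.op2 = 0

    def result(self):
        # Returned to the framework, which compares it to the table cell.
        return self.op1 + self.op2

# What the framework effectively does for the row "1 | 2 | 3":
fixture = Add()
fixture.op1, fixture.op2 = 1, 2
assert fixture.result() == 3
```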
</div></div>Even here you tell the framework that the correct result is 3.<br>
Personally I don't see the benefit of the framework itself<br>
verifying the result compared to it just calling a keyword in a<br>
library/fixture that verifies it. The biggest benefit of the latter<br>
approach is that the test data is always in the same format regardless<br>
of the test style (workflow, data-driven, BDD), which makes it a lot<br>
easier to create other tools that understand the data. I believe the<br>
biggest reason there is still no generic test data editor for Fitnesse<br>
is that the data can be in so many formats. Tool support is really<br>
important when you start having more tests, and that's why we are<br>
currently working actively to make RIDE [1] better.<br>
<br>
[1] <a href="http://code.google.com/p/robotframework-ride" target="_blank">http://code.google.com/p/robotframework-ride</a><br>
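A minimal sketch of the keyword-based approach described above (the library and keyword names here are illustrative assumptions, not from this thread): the library both drives the system and verifies results, so the framework only calls keywords and catches exceptions on failure.

```python
# Hypothetical Robot Framework-style test library: each public method
# becomes a keyword. The verification keyword raises on mismatch; the
# framework just catches the exception and reports the failure.
class CalculatorLibrary:
    def __init__(self):
        self._state = {}  # stand-in for the system under test

    def add_numbers(self, op1, op2):
        # Drive the system under test (here simulated in-process).
        self._state['result'] = int(op1) + int(op2)

    def result_should_be(self, expected):
        # Verification lives in the library, not in the framework.
        actual = self._state.get('result')
        if actual != int(expected):
            raise AssertionError(
                'Expected %s but got %s' % (expected, actual))
```

The same `Result Should Be` keyword can then be used unchanged from a workflow test, a data-driven test, or a BDD-style step, which is the single-data-format point made above.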
<div><div></div><div class="h5"><br>
>> > In other words, I would like to do something like:<br>
>> > - Do something<br>
>> > - Extract data x, y, z and check that x=1, y=2, z=3<br>
>> > ... with the scenario describing the expected data, the fixture<br>
>> > extracting<br>
>> > data x, y, z, and the framework telling me if that's correct data or not<br>
>> > (with a report).<br>
>><br>
>> You can do all this with Robot Framework but the actual verification<br>
>> is always done on libraries. The framework has many built-in keywords<br>
>> for verifications, but even when you use them the framework is still<br>
>> just executing keywords and catching possible exceptions.<br>
><br>
> Understood. But I don't find that convenient.<br>
> Let me take an example.<br>
> You test software that calculates lots of data. You want to see which<br>
> data is wrong.<br>
> I find it much more convenient to have:<br>
> - a fixture that extracts calculated data (but does no verification),<br>
> - a scenario that tells you what data you want to extract and what<br>
> result you expect,<br>
> - a framework that matches both and creates a report.<br>
> Then, you get many good things for free:<br>
> - You don't have to code any verification logic in the fixture.<br>
> Fixtures tend to be simpler (i.e., get the input, do the action, get<br>
> the outputs).<br>
> - You can see how many failures you get, which ones, etc. (it's<br>
> meaningful to see how many failures you have; having one wrong value<br>
> or all of them wrong does not involve the same type of debugging).<br>
<br>
</div></div>We seem to have different preferences on what's important. I don't<br>
care too much about adding the verification to the fixture, as it's<br>
normally enough to do something like "if result != expected: raise<br>
Exception(error_message)". On the other hand, I consider the fact that<br>
the fixture is strongly coupled to the test data format a really big<br>
problem. It's not just that creating editors for the data is harder<br>
when there are many data formats; it also means that you cannot<br>
reuse fixtures as freely, because they are compatible with only one<br>
kind of test.<br>
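The one-line verification pattern mentioned above can be sketched as a small helper, plus a variant that collects every mismatch before failing once (helper names are hypothetical); the second form also yields the "how many failures, and which ones" report asked for earlier in the thread:

```python
# Sketch of the "if result != expected: raise" verification pattern.
def should_be_equal(actual, expected, message=None):
    # Fail immediately on the first mismatch.
    if actual != expected:
        raise AssertionError(
            message or 'Expected %r but got %r' % (expected, actual))

def verify_all(checks):
    # checks: iterable of (name, actual, expected) triples.
    # Collect every mismatch first, then fail once with a combined
    # message, so the report shows how many values were wrong.
    errors = ['%s: expected %r, got %r' % (name, exp, act)
              for name, act, exp in checks if act != exp]
    if errors:
        raise AssertionError(
            '%d failure(s): %s' % (len(errors), '; '.join(errors)))
```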
<br>
I get the feeling that without concrete examples this discussion is<br>
not going to get much further. I agree to disagree on what's<br>
important in framework design, and also believe it's just a good thing<br>
that there are different frameworks with different approaches.<br>
<br>
Cheers,<br>
.peke<br>
</blockquote></div><br>Yes, I agree to disagree :-) and it's good to have different tools for different problems.<div><br></div><div>I wish Robot Framework every success.</div><div><br></div><div>I hope to come back with a proposal some day, if I can open-source an internal development.<br clear="all">
<br>-- <br>Laurent Ploix<br><br><a href="http://lauploix.blogspot.com/">http://lauploix.blogspot.com/</a><br>
</div>