[TIP] - acceptance test framework?

yoav glazner yoavglazner at gmail.com
Tue Jul 28 01:23:59 PDT 2009


On Tue, Jul 28, 2009 at 12:10 AM, Laurent Ploix <laurent.ploix at gmail.com> wrote:

>
> Could you explain what made you consider pyFit and Robot an "overhead
> compare to the benefit" ? That may help people suggest other alternatives...
> (and maybe consider building another framework if they agree)
>
> laurent
>

I'll start by listing the benefits:
* Easy-to-read tests that anyone can understand (better in pyFit, less so in
Robot)
* Integration with Hudson (Continuous Integration) - good in PyFit, Robot,
and Nose
* Fancy reports on the tests (this is not that important to us)
* Connecting a text/table-based test case to the real test code.
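To make that last point concrete: a FIT-style column fixture binds each table column to a fixture attribute or method. Here is a rough sketch of the idea in plain Python - this mimics the concept only, it is not the real PyFit API, and all names are made up:

```python
class DivisionFixture:
    """FIT-style column fixture: input columns ('numerator',
    'denominator') set attributes; an output column named with
    trailing parentheses ('quotient()') calls the matching method
    and compares the result to the table cell."""

    def quotient(self):
        return self.numerator / self.denominator


def run_row(fixture_cls, row):
    """Tiny table runner mimicking how FIT binds one table row
    to a fixture instance. 'row' maps column name -> cell value."""
    fixture = fixture_cls()
    for name, value in row.items():
        if name.endswith("()"):
            # Output column: call the method and check the cell.
            actual = getattr(fixture, name[:-2])()
            yield name, actual == value
        else:
            # Input column: assign the cell to an attribute.
            setattr(fixture, name, value)
```

For example, `run_row(DivisionFixture, {"numerator": 10, "denominator": 2, "quotient()": 5})` yields a pass for the `quotient()` column - the same "table cell drives code" connection the frameworks give you.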

Overhead:
* When a test fails in PyFit it's not clear why; if it failed with an
exception, the output is very hard to read.
* PyFit requires annoying metadata to be written on each table.
* Robot does provide better failure analysis, but doesn't provide PyFit's
clear test case definition.
* Mapping keywords in Robot is not hard, but making the test case read like
a test case written in Word requires a lot of keywords and work.
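For what it's worth, mapping a Robot keyword usually just means writing a Python method whose name matches the keyword; the work is in how many of these you end up needing. A minimal sketch of such a keyword library (not using the real Robot API, and all names here are made up):

```python
class LoginKeywords:
    """Hypothetical Robot-style keyword library: a test line such as
    'Open Login Page' would map to open_login_page() below. The
    methods just record state instead of driving a real GUI."""

    def __init__(self):
        self.current_page = None
        self.logged_in_user = None
        self._user = None
        self._password = None

    def open_login_page(self):
        self.current_page = "login"

    def enter_credentials(self, user, password):
        # A real library would type into the GUI; we only record.
        self._user, self._password = user, password

    def submit_login(self):
        if self._password == "secret":
            self.logged_in_user = self._user
            self.current_page = "home"

    def page_should_be(self, expected):
        assert self.current_page == expected, (
            f"expected page {expected!r}, got {self.current_page!r}")
```

Each readable test step costs one method like these - multiply that by everything a Word-style test case mentions and you see where the work goes.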

None of the frameworks provides any kind of stub object to help with testing
without the GUI layer.
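What I mean by a stub object is something like the following hand-rolled sketch, assuming the application talks to the GUI through a small interface (all names here are hypothetical):

```python
class GuiStub:
    """Stands in for the GUI layer: records what the application
    shows and returns canned answers to prompts, so acceptance
    fixtures can run headless."""

    def __init__(self):
        self.shown_messages = []
        self.prompts = {}  # prompt text -> canned user answer

    def show_message(self, text):
        self.shown_messages.append(text)

    def ask_user(self, prompt):
        return self.prompts.get(prompt, "")


def checkout(gui, total):
    """Hypothetical application code that talks to the GUI layer."""
    if gui.ask_user("Confirm purchase?") == "yes":
        gui.show_message(f"Charged {total}")
        return True
    gui.show_message("Cancelled")
    return False
```

A test then preloads `gui.prompts`, runs the application code, and asserts on `gui.shown_messages` - no GUI needed. That is the kind of thing I'd have liked the frameworks to ship.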

Thanks,
Yoav.


More information about the testing-in-python mailing list