[TIP] - acceptance test framework?
yoavglazner at gmail.com
Tue Jul 28 01:23:59 PDT 2009
On Tue, Jul 28, 2009 at 12:10 AM, Laurent Ploix <laurent.ploix at gmail.com> wrote:
> Could you explain what made you consider PyFit and Robot an "overhead
> compared to the benefit"? That may help people suggest other alternatives...
> (and maybe consider building another framework if they agree)
I'll start by listing the benefits:
* Easy-to-read tests that anyone can understand (better in PyFit, less so in Robot).
* Integration with Hudson (Continuous Integration) - (good in both PyFit and Robot).
* Fancy reports on the tests (this is not that important to us).
* Connecting a text/table-based test case to the real test code.
Now the drawbacks:
* When a test fails in PyFit it's not clear why; if it failed with an exception,
the output is very hard to read.
* PyFit requires annoying metadata to be written on each table.
* Robot does provide better failure analysis, but doesn't provide PyFit's clear
test case definition.
* Mapping keywords in Robot is not hard, but making the test case look like a
test case written in Word requires a lot of keywords and work.
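To illustrate what keyword mapping in Robot involves, here is a minimal sketch of a keyword library written as a plain Python class; Robot turns each public method into a keyword ("Open Account", "Deposit", and so on). All class and method names below are hypothetical examples, not from our actual test suite:

```python
# Sketch of a Robot Framework keyword library: Robot maps the keyword
# "Open Account" to open_account, "Deposit" to deposit, etc.
# Names are hypothetical examples.

class AccountKeywords:
    """Each public method becomes one Robot keyword."""

    def __init__(self):
        self.balance = 0

    def open_account(self):
        self.balance = 0

    def deposit(self, amount):
        # Robot passes keyword arguments as strings, so convert explicitly.
        self.balance += int(amount)

    def balance_should_be(self, expected):
        if self.balance != int(expected):
            raise AssertionError(
                "expected balance %s, got %s" % (expected, self.balance))

# The same methods can be driven directly from plain Python, which is
# handy when debugging a failing keyword outside of Robot:
kw = AccountKeywords()
kw.open_account()
kw.deposit("100")
kw.balance_should_be("100")
```

The point is that each row of a Word-style test case needs a keyword like these behind it, which is where the work adds up.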
None of the frameworks provided any kind of stub object to help with testing
without the GUI layer.
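For what it's worth, here is a minimal sketch of the kind of stub we had in mind: a hand-rolled stand-in for the GUI layer so acceptance tests can drive the application logic directly. The `GuiStub` and `App` classes are hypothetical stand-ins, not part of PyFit or Robot:

```python
# Sketch of a hand-rolled stub replacing the GUI layer in tests.
# GuiStub and App are hypothetical examples.

class GuiStub:
    """Records what the application asked the GUI to display."""

    def __init__(self):
        self.shown = []

    def show_message(self, text):
        self.shown.append(text)


class App:
    """Application logic that talks to the GUI through a small interface."""

    def __init__(self, gui):
        self.gui = gui

    def greet(self, name):
        self.gui.show_message("Hello, %s!" % name)


# In a test, the stub replaces the real GUI and the test inspects
# what would have been displayed:
gui = GuiStub()
App(gui).greet("world")
assert gui.shown == ["Hello, world!"]
```

Having something like this shipped with the framework would have saved us the effort of wiring it up ourselves.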
More information about the testing-in-python mailing list