[TIP] Test a system with parametrized state, parametrized function input, and results.
sdoherty at avvasi.com
Wed May 5 13:35:23 PDT 2010
Thanks a lot for your help on this. As per our conversation on IRC, I've posted a rendition of the abstract system setup to demonstrate py.test being used in the way I was hoping. If anyone has any questions or suggestions I'd be interested to hear them.
This is the link with the code:
Putting all these files in the same directory and running py.test there should run it.
From: holger krekel [holger at merlinux.eu]
Sent: April 29, 2010 6:11 PM
To: Stuart Doherty
Cc: testing-in-python at lists.idyll.org
Subject: Re: [TIP] Test a system with parametrized state, parametrized function input, and results.
On Thu, Apr 29, 2010 at 16:11 -0400, Stuart Doherty wrote:
> Hi All,
> I've been chatting with Ronnie and Holger about this on IRC and it was suggested I bring the issue up on the list.
I had the more specific py.test mailing list in mind at
http://codespeak.net/mailman/listinfo/py-dev but "testing in python"
is a fine place as well.
> I'm trying to evaluate py.test for our integration and system test needs.
> We're developing a fairly substantial QA lab to test the mobile network
> equipment we're making... but nevermind all that, we can distill things down
> to abstract software components for now.
Thanks for taking the time - much easier to understand than
trying to abstract out domain details myself :)
> So, I have a system under test (SUT). The SUT can be initialized/kicked-off with some set of config values. The state of the system can then be modified through a function call. After the state has been modified there are different functions we can call to probe the system to ask it about its state. I want to be able to conduct a collection of tests where the input to each individual test run is:
> 1. The initialization/config of the system
> 2. The argument(s) to the function(s) that change state on the system.
> 3. The values expected to be returned from the probe functions.
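These three per-test inputs can be bundled into one record per test case. A minimal sketch, with invented field names and values (the real inputs would come from the domain-specific configuration):

```python
from collections import namedtuple

# One record per test case: the init config, the state-changing call,
# and the probe values expected afterwards.
TestData = namedtuple("TestData", "config change expected")

CASES = [
    TestData(config={"threshold": 5}, change=("add_flow", 3), expected={"flows": 1}),
    TestData(config={"threshold": 5}, change=("add_flow", 9), expected={"flows": 0}),
]
```

A list like this is exactly what a parametrizing hook can iterate over to generate one test invocation per case.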
> I know I could do this kind of thing when I was doing some Django development. I was able to inherit from unittest.TestCase, and then any fixture loading I did at the class level was only done once. It would be this kind of behaviour I think I want, except the fixture to load, and the values I verify against, would be input to the test runner.
I am sure there are various ways.
> I put an example of a system:
> And how the py.test test module might look:
> Neither would actually work in py.test. It's just to give you a sense of what I'm trying to do.
Here is a quick and compact example of how this could work with py.test:
* pytest_generate_tests: for all test functions which receive 'data',
  multiple test function invocations are generated, one per data set
* pytest_funcarg__system: provides a 'system' object to all test functions
needing it. It does so by retrieving the 'data' object and calling a
defined 'setup_system' method to perform per-class system initialization
according to the data set.
* setup_system: creates a per-class system object from a data set
and modifies it.
* the test functions: receive 'system' and 'data' and can work with them
* pytest_generate_tests/pytest_funcarg__system can be defined outside
  the test module so that people writing tests can use the minimal
  remaining API ("implement setup_system(); work with 'data' and
  'system' objects; modify the domain-specific XML for configuring
  new test data sets ...") and do not need to worry about advanced
  hook details
* FWIW you could define setup_system() and the test functions at module level.
  Note that the test functions do not depend on 'self'; their parameters
  are fully passed in.
Using the module level, people would not need to know about funny things
like classes, inheritance, classmethods and 'self' arguments :)
HTH. See you here or on IRC.
> Does anybody have any thoughts on this?
> P.S. If you want some more detail on our actual application, here goes:
> -We make mobile network gear.
> -We want to have a regression system setup where we can run a series of network capture files (PCAP format) into a clean system.
> -For each PCAP we want to probe different tables on the system database, and verify that the entries are what we expect.
> -Clean out system, rinse and repeat for a series of pcap/resultant-value combinations.
> -Assertions on incorrect values, etc.
> -A subset of tests would be used as a pre-svn-commit step for developers.