[TIP] fixture ordering and dynamically adding fixtures to Testcases

Floris Bruynooghe flub at devork.be
Tue Jan 30 22:41:47 PST 2018


Hi Arun,

Sounds like you're looking to do load testing.  This is a rather
different scenario than what pytest was designed to be helpful for.  Why
don't you create a small custom module/script which does the load test
in the way you desire?  The values in the markers could be arguments to
your loadtest function.  Your "generate tests" could be a simple loop
which you break out of as soon as you get a failure.
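
For example, a minimal sketch of such a loop (the loadtest() callable
here is just a placeholder for whatever actually exercises your system):

    def ramp_loadtest(loadtest, start, end, delta):
        """Run loadtest() at increasing rates; stop at the first failure.

        Returns the last rate that passed, or None if even the start
        rate failed.
        """
        last_ok = None
        for rate in range(start, end + 1, delta):
            if not loadtest(rate):
                break
            last_ok = rate
        return last_ok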

Which properties of the unit-testing framework provided by pytest made
you try to use it for this purpose?

I must add that all of this is just my opinion based on your
description; maybe other people will see things differently.

Kind Regards,
Floris


arun kali raja <arunsep886 at gmail.com> writes:

> Hello Floris,
>
> Sorry for a much much delayed response..
>
> Thanks for clarifying on the fixture part..
>
> I am trying to use pytest for some dimensioning test cases.
>
> So I have a system under test and need to figure out the maximum
> 'number' it can handle. The 'number' can in turn be one of several
> metrics: msgs/sec, sessions, etc.
>
> And the way the test figures out the 'number' can also vary. One
> algorithm I can use is a 'ramp', where the user specifies the start,
> the end, and the delta to increase at each step. There may be other
> algorithms I can use.
>
> Now say I have specified the start as 100, the end as 1000 and the
> delta as 50, and the test fails at 650: I don't want to run the
> remaining delta increments.
>
> And also, based on the 'number' type the user wants to dimension, I
> want some preconditions to be verified before the test actually
> starts. I was planning to do these:
>
> 1. Use a marker on the test case to say which algorithm to use and
> what the parameters for the algorithm are. Another pytest marker says
> which 'number' the user is trying to dimension.
>
> 2. Use pytest_generate_tests to parametrize the test according to the
> algorithm values (see the sketch after this list).
>
> 3. Use the pytest namespace to store the result of every iteration,
> and in a function fixture decide whether to continue with the next
> iteration or not.
>
> 4. Suppose the user says to dimension based on the number of msgs: I
> want a fixture to run before the test to ensure that the data
> generation tool is actually generating that rate. If the user says
> session count, then maybe another predefined fixture will do the
> respective job.
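>
> Roughly what I have in mind for steps 1 and 2 (the marker names, the
> 'rate' argument and the run_at_rate() helper are only illustrative):
>
>     import pytest
>
>     def run_at_rate(rate):
>         # placeholder: drive the data generation tool at `rate` and
>         # return True if the system under test kept up
>         return True
>
>     @pytest.mark.dimension("msgs_per_sec")
>     @pytest.mark.ramp(start=100, end=1000, delta=50)
>     def test_capacity(rate):
>         assert run_at_rate(rate)
>
>     # in conftest.py:
>     def pytest_generate_tests(metafunc):
>         marker = getattr(metafunc.function, "ramp", None)
>         if marker is not None and "rate" in metafunc.fixturenames:
>             start = marker.kwargs["start"]
>             end = marker.kwargs["end"]
>             delta = marker.kwargs["delta"]
>             metafunc.parametrize("rate", list(range(start, end + 1, delta)))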
>
> I have one tricky use case where I can use a tree search to find the
> right value of the 'number': all candidate 'number's are stored in a
> binary tree and the test finds the right value by trying out each
> value. I was wondering how to use pytest_runtestloop for this, but
> unfortunately pytest currently needs to know the next item beforehand.
>
> Thanks
> B.Arun kali raja..
>
> On 18-Jan-2018 02:56, "Floris Bruynooghe" <flub at devork.be> wrote:
>
> Hi Arun,
>
> It sounds like you are hacking around the pytest internals; if you do
> things this way they will break for you, badly.  Maybe not in the next
> pytest release, but they will eventually.  I'd strongly advise you not
> to do the things you describe, they're all unsupported.
>
> You're also rather vague about what you're actually trying to achieve:
> why do you have to provide "chains of fixtures"?  Asking the user to
> parametrize your fixtures also sounds like the wrong approach, but we
> don't know what you're actually trying to do.  So if you described the
> original problem you're trying to solve, rather than the mechanisms
> you're thinking of using, maybe we can help you better.
>
> Regards,
> Floris
>
>
> On Tue 16 Jan 2018 at 22:10 +0530, arun kali raja wrote:
>
>> Hi Bruno/Floris,
>>
>> Thanks a lot for the insight.
>>
>> I was trying out a few things and found these:
>>
>> 1. Ordering fixtures:
>>      Suppose the user wants to impose the ordering of fixtures in a
>> certain way, then he can do it via the *pytest_generate_tests(metafunc)*
>> hook:
>>
>>     The *metafunc* object has a member variable *fixturenames*; that is
>> the list of fixtures that are going to be executed for this test.
>>
>>     The user can reorder it as he wishes.
>>
>> In my case I have some autouse fixtures, but they have some dependency
>> fixtures which are not autouse. So fixtureA (autouse) is dependent on
>> fixtureB (not autouse): only if fixtureB is available do I want to
>> include fixtureA, otherwise I don't want to use fixtureA at all. I can
>> achieve this by checking the contents of the list and making the
>> appropriate decisions to remove invalid fixtures too.
>>
>>    Now say I want to call all my session fixtures first, then module,
>> then class and then function fixtures: how do I know which is which?
>> Because the *fixturenames* variable is just a list of fixture names
>> (strings).
>>
>>    The solution is to check the fixture information from the
>> *metafunc._arg2fixturedefs* dictionary. The dictionary has the fixture
>> name as key and a tuple as value; the tuple contains FixtureDef objects
>> which carry the scope-related information.
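>>
>>    A minimal sketch of what I mean (this relies on the private
>> *_arg2fixturedefs* attribute, so it is unsupported and may break in
>> future pytest versions):
>>
>>     def pytest_generate_tests(metafunc):
>>         scope_order = {"session": 0, "module": 1, "class": 2,
>>                        "function": 3}
>>
>>         def scope_of(name):
>>             # _arg2fixturedefs maps fixture name -> tuple of FixtureDef
>>             fixturedefs = metafunc._arg2fixturedefs.get(name)
>>             if not fixturedefs:
>>                 return len(scope_order)  # unknown fixtures go last
>>             return scope_order.get(fixturedefs[-1].scope, len(scope_order))
>>
>>         metafunc.fixturenames.sort(key=scope_of)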
>>
>> Question:
>>     Is this the right way to do it, or just a hacky way to make things
>> work?
>>
>> Curious question:
>>    Why are the values in the *_arg2fixturedefs* dictionary tuples? Can a
>> fixture have more than one definition?
>>
>> 2. Adding fixtures dynamically at run time.
>>
>>     I am trying to create a library of fixtures which will provide a
>> standard set of functionality; the user has to add those to his test
>> case and then parametrize them according to his use case.
>>
>>     Instead of selecting the fixtures, I want to give the flexibility to
>> specify the use case in a marker with some parameters and then
>> dynamically add the fixtures.
>>
>>     I think here too *pytest_generate_tests* is the right place to do such
>> things...
>>
>>     I was able to read the markers using the *metafunc.function* member
>> variable and then add fixtures to the *metafunc.fixturenames* member
>> variable (see the sketch below).
>>
>>     And the fixture got called at the time of execution too..
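>>
>>    Roughly what that hook looks like (the marker name "usecase" and the
>> fixture names it carries are just examples from my own library):
>>
>>     def pytest_generate_tests(metafunc):
>>         marker = getattr(metafunc.function, "usecase", None)
>>         if marker is None:
>>             return
>>         # e.g. @pytest.mark.usecase("fixtureA", "fixtureB") on the test
>>         for fixture_name in marker.args:
>>             if fixture_name not in metafunc.fixturenames:
>>                 metafunc.fixturenames.append(fixture_name)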
>>
>>    Question:
>>         Suppose I want to create a fixture chain according to my use
>> case; I can do it by inserting the fixtures one after the other in
>> *fixturenames*. But suppose I want to access the value returned by
>> fixtureA in fixtureB: I am not sure how to do that, because in that case
>> I have to add fixtureA as a dependency of fixtureB. I haven't figured
>> that out yet.
>>
>>     Curious Question:
>>        The *metafunc._arg2fixturedefs* variable I mentioned above seems
>> to be populated by the *pytest_generate_tests* phase itself, so I am not
>> sure how appending the fixture name in *pytest_generate_tests* even
>> works. I tried putting a breakpoint in the *pytest_itemcollected()* hook,
>> but even there *_arg2fixturedefs* doesn't seem to have an entry for the
>> newly added fixture. So I am not sure yet how it works.
>>
>>
>> Regards
>> Arun Kaliraja.B
>>
>>
>> On 16 January 2018 at 02:47, Floris Bruynooghe <flub at devork.be> wrote:
>>
>>> Hi,
>>>
>>> It seems Bruno's reply got lost somewhere...
>>>
>>> On Sat 13 Jan 2018 at 22:38 +0530, arun kali raja wrote:
>>> > 1. I have a .py file where I have defined two autouse fixtures:
>>> > fixtureMod_1 is module scoped and fixtureFunc_1 is function scoped.
>>> > Now in conftest.py I have imported these two fixtures. In my
>>> > test_x.py I have another module-scoped fixture, say fixtureMod_2.
>>> > The order of execution of the fixtures seems to be fixtureMod_1,
>>> > fixtureFunc_1 and then fixtureMod_2. Aren't module-level fixtures
>>> > supposed to be executed before function-level fixtures? Isn't that
>>> > how scopes are ordered? I am able to solve the issue by giving
>>> > fixtureMod_2 as a dependency to fixtureFunc_1, but I am still trying
>>> > to understand why pytest doesn't respect the scope ordering in this
>>> > case. Does the order of sourcing the fixture override the scope-based
>>> > ordering?
>>>
>>> If I understand this correct then you have something like this:
>>>
>>> foo.py:
>>>    import pytest
>>>
>>>    @pytest.fixture(scope="module")
>>>    def fixtureMod_1():
>>>        pass
>>>
>>>    @pytest.fixture
>>>    def fixtureFunc_1():
>>>        pass
>>>
>>> conftest.py:
>>>    from foo import fixtureMod_1
>>>    from foo import fixtureFunc_1
>>>
>>> test_x.py:
>>>    import pytest
>>>
>>>    @pytest.fixture(scope="module")
>>>    def fixtureMod_2():
>>>        pass
>>>
>>>    def test_foo(fixtureMod_1, fixtureFunc_1, fixtureMod_2):
>>>        pass
>>>
>>> From pytest's point of view you have not specified any ordering here.
>>> The scopes are used to know when the setup and teardown of each fixture
>>> needs to run, e.g. with function scope the fixture will be created
>>> just before a single test which needs it and destroyed right after that
>>> test has run.
>>>
>>> For module scope the fixture only gets created once for
>>> all tests in a module.  So if multiple tests in test_x require either
>>> fixtureMod_1 or fixtureMod_2, their setup and teardown will only be
>>> executed once.
>>>
>>> So scopes have nothing to do with ordering.
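>>>
>>> For illustration, a small sketch (the fixture and test names are made
>>> up):
>>>
>>>    import pytest
>>>
>>>    @pytest.fixture(scope="module")
>>>    def db():
>>>        print("db setup")     # runs once for this module
>>>        yield
>>>        print("db teardown")  # runs once, after the last test using it
>>>
>>>    def test_one(db):
>>>        pass
>>>
>>>    def test_two(db):
>>>        pass  # reuses the same db fixture instance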
>>>
>>> As you discovered if you declare dependencies on fixtures then the
>>> ordering will start to be defined:
>>>
>>> @pytest.fixture
>>> def fixtureFunc_1(fixtureMod_2):
>>>     pass
>>>
>>> This will ensure that fixtureMod_2 is created first.  It's good that
>>> this dependency needs to be explicit in your code.  However, be careful
>>> in general not to over-use dependencies like this, e.g. if the only
>>> reason you declare fixtureMod_2 is to use it as a dependency in
>>> fixtureFunc_1, then maybe fixtureMod_2 shouldn't be a fixture at all.
>>>
>>>
>>> As another aside, btw, I always *strongly* advise against ever importing
>>> fixtures.  The semantics of it are very confusing (this was probably a
>>> design mistake), so instead always move the fixture to the right
>>> location, i.e. your conftest.py file.
>>>
>>> > 2. I have one use case where I want to dynamically create a fixture
>>> > chain. Say I mark my test case with useCaseA and that translates to
>>> > an ordering of fixtures. Can I add a fixture to a test case at run
>>> > time, in the pytest_generate_tests phase or the pytest_configure
>>> > phase? I found a method called add_marker to add a marker at run
>>> > time. Since fixtures can also be added via markers I was trying to
>>> > explore whether I can use that method, but it doesn't accept any
>>> > parameters. Is there any way I can achieve this?
>>>
>>> Here I'm somewhat lost with your example; I think you have this:
>>>
>>> @pytest.mark.useCaseA
>>> def test_foo():
>>>     pass
>>>
>>> But I'm afraid you'll have to give me more info on how that translates
>>> to fixtures in your implementation.
>>>
>>> With respect to dynamically requested fixtures, this is usually done
>>> using request.getfixturevalue() [0].  However, and this is where
>>> personal opinions differ, my advice is to avoid this if you possibly
>>> can.  And my opinion is also that this is always possible, e.g. a
>>> fixture can return instances of two different classes depending on the
>>> dynamic "thing".
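>>>
>>> A minimal sketch of the getfixturevalue() approach (the fixture names
>>> here are only illustrative):
>>>
>>>    import pytest
>>>
>>>    @pytest.fixture
>>>    def memory_storage():
>>>        return dict()          # stand-in for an in-memory backend
>>>
>>>    @pytest.fixture
>>>    def disk_storage(tmpdir):
>>>        return tmpdir          # stand-in for an on-disk backend
>>>
>>>    @pytest.fixture(params=["memory", "disk"])
>>>    def storage(request):
>>>        # Dynamically request one of the two fixtures above.
>>>        if request.param == "memory":
>>>            return request.getfixturevalue("memory_storage")
>>>        return request.getfixturevalue("disk_storage")
>>>
>>>    def test_storage(storage):
>>>        assert storage is not None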
>>>
>>>
>>> Hope this helps,
>>> Floris
>>>
>>>
>>> [0] https://docs.pytest.org/en/latest/builtin.html?highlight=getfixturevalue#_pytest.fixtures.FixtureRequest.getfixturevalue
>>>


