[TIP] Testing a daemon with coverage (nose)
markwaite at yahoo.com
Thu May 7 19:43:19 PDT 2009
Ben Finney <ben+python at benfinney.id.au> writes:
> holger krekel <holger at merlinux.eu> writes:
> > On Thu, May 07, 2009 at 10:45 +1000, Ben Finney wrote:
> > > So, if I make a change to fix one of the failures and inadvertently
> > > break a test that previously succeeded, I can go an indeterminate
> > > number of code-test cycles before I find out about it?
> > well, you will be working on a failing test the whole time - most
> > people I know like to focus on fixing one particular test and then
> > see if that broke any other bits.
> If I make a change that breaks an existing test that used to pass, then
> I need to know about that as early as possible to find a better approach.
> That implies that all the unit tests need to run when I make that
> change, not just the one I'm focussing on. The point of an automated
> test suite is that I don't need to remember which of them need to run.
In that context, I don't see the distinction between unit tests and
any other form of tests. I want to know the state of all tests on
every change. Some tests are long enough that I am not willing to
wait for those tests before I move to my next task. That is what I
want from continuous integration (run tests while I move forward).
In a perfect world, all tests would run instantly on every change and
I would not have to wait for any of the results. I don't have a
perfect world, so I seek ways to partition tests into sets which will
give me the more useful information sooner.
For me, the distinction between "unit tests", "acceptance tests",
"performance tests" and any other form of automated tests has not been
as useful as the distinction between "runs fast enough that it does
not interrupt my development work" and "runs slowly enough that it
would interrupt my development work".
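That fast/slow partition can be implemented with test attributes - nose's attrib plugin and py.test's keyword selection both support this. As a minimal, library-free sketch (the `slow` decorator and the `ExampleTests` names below are hypothetical, made up for illustration):

```python
import time
import unittest

def slow(test_func):
    """Hypothetical marker: tag a test as slow so a runner can defer it to CI."""
    test_func.slow = True
    return test_func

class ExampleTests(unittest.TestCase):
    def test_fast_parsing(self):
        # Fast enough to run on every change without interrupting work.
        self.assertEqual(int("42"), 42)

    @slow
    def test_simulated_long_run(self):
        time.sleep(0.01)  # stands in for a multi-minute test
        self.assertTrue(True)

def load_suite(include_slow=False):
    """Build a suite, filtering out tests tagged slow unless requested."""
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for name in loader.getTestCaseNames(ExampleTests):
        test_method = getattr(ExampleTests, name)
        if getattr(test_method, "slow", False) and not include_slow:
            continue
        suite.addTest(ExampleTests(name))
    return suite

if __name__ == "__main__":
    # Fast set for the edit-test loop; pass include_slow=True in CI.
    unittest.TextTestRunner(verbosity=2).run(load_suite())
```

With nose the equivalent selection would be something like `nosetests -a '!slow'`; the point is only that the partition key is "speed", not "test type".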
There have been other partitions of tests that I dreamt of using, but
never found a practical way to implement. For example, what if there
were a way to map coverage results back to the test selector, so only
those tests which executed code in the modified file were chosen to
run on a change?
> > > That seems like the canonical reason why "run the entire unit test
> > > suite every time" is good practice, and I can't see why the above
> > > behaviour would be desirable.
> > As Laura points out, if tests take longer it is nice to not have to
> > wait before seeing if the intended-to-fix test works. Btw, py.test
> > runs unit, functional and integration tests, and I think nosetests
> > also runs more than pure unit tests.
> I think you mean it *can* run all those kinds of tests; my point is that
> it can *also* be more specific.
> Acceptance, integration, performance, etc. tests are *not* unit tests,
> and unit tests need to be executable as an isolated suite precisely
> because they're the type of tests one wants to run as often as possible.
Can you clarify how you've gained value from the distinction between
test types in this context? I'd like to understand how your concepts
might apply in my environment.
(amazed at the great discussions on this list. Thanks!)
More information about the testing-in-python mailing list