[TIP] testmon - ideas to speed up execution of test suites

Ryne Everett ryneeverett at gmail.com
Mon Mar 16 12:58:09 PDT 2015


Have you seen https://github.com/eventbrite/nose-knows? I haven't tried
it or dug into the source, but the approach sounds similar.

On Mon, Mar 16, 2015 at 3:46 PM, Robert Collins <robertc at robertcollins.net>
wrote:

> On 17 March 2015 at 07:43, Thomi Richards <thomi.richards at canonical.com>
> wrote:
> > Hi,
> >
> > On Sat, Mar 14, 2015 at 3:57 AM, Tibor Arpas <tibor.arpas at infinit.sk>
> wrote:
> >>
> >> E.g. it automatically selects only tests dependent on recent changes
> >> and deselects all the others. The concept is quite general,
> >> technically feasible, and it's possible to finish this and also
> >> implement the same for other test runners if there is interest.
> >
> >
> >
> > This seems... too good to be true. Would you mind explaining a bit
> > about how it works? Off the top of my head, I can't think of any
> > reliable way to determine a list of test cases that need to be run,
> > given a diff of source code. Am I missing something?
>
> There was a thread on twitter about this recently
> https://twitter.com/ncoghlan_dev/status/566373773830397952
>
> The idea goes back years :).
>
> The basic idea is to have an oracle that maps working tree state ->
> tests to run.
>
> Some oracles:
>  - naming conventions: name tests such that you can tell which modules
>    they are relevant to (see the first sketch after this list).
>    pros: easy to maintain; reinforces the mental association between
>    tests and the layers they cover.
>    cons: very easy to fail to run tests where unexpected layer
>    associations have formed.
>  - trace based: using a debug/profiling hook, build a mapping of "test
>    X ran lines Y". Whenever you run a test again, update the database,
>    and map backwards from diffs to the db (see the second sketch after
>    this list). You can in principle also use this to decide what tests
>    need to run when changing a dependency, though I'm not aware of
>    anyone doing that yet.
>    pros: much more reliable at determining what tests to run.
>    cons: you have to build the db first, and maintain it as e.g. lines
>    move around, which makes first-time use expensive.
>  - stochastic: run some random subset of the tests, perhaps biased by
>    naming conventions or other data such as which files changed most
>    recently. Aegis uses this to select tests to run (see the third
>    sketch after this list).
>    pros: very simple.
>    cons: poor reproducibility and incomplete coverage.
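>
> A rough sketch of the naming-convention oracle in Python, assuming a
> git checkout and a tests/test_<module>.py layout (both assumptions,
> not part of any particular tool):
>
>     import os
>     import subprocess
>
>     def changed_files():
>         # Ask git which tracked files differ from HEAD.
>         out = subprocess.check_output(
>             ["git", "diff", "--name-only", "HEAD"], text=True)
>         return [f for f in out.splitlines() if f.endswith(".py")]
>
>     def tests_for(path):
>         # Convention: pkg/foo.py is covered by tests/test_foo.py.
>         module = os.path.splitext(os.path.basename(path))[0]
>         candidate = os.path.join("tests", "test_%s.py" % module)
>         return [candidate] if os.path.exists(candidate) else []
>
>     def select():
>         # Union of the conventional test files for every changed module.
>         return sorted({t for f in changed_files() for t in tests_for(f)})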
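>
> And a rough sketch of the trace-based oracle using sys.settrace; a
> real tool would persist the db between runs and cope with lines moving
> around, which this deliberately ignores:
>
>     import sys
>     from collections import defaultdict
>
>     class TraceOracle:
>         """Map each test to the set of (file, line) pairs it ran."""
>
>         def __init__(self):
>             self.db = defaultdict(set)  # test id -> {(file, line), ...}
>             self._current = None
>
>         def _trace(self, frame, event, arg):
>             if event == "line":
>                 self.db[self._current].add(
>                     (frame.f_code.co_filename, frame.f_lineno))
>             return self._trace
>
>         def record(self, test_id, test_callable):
>             # Run one test under the trace hook, updating the db.
>             self._current = test_id
>             sys.settrace(self._trace)
>             try:
>                 test_callable()
>             finally:
>                 sys.settrace(None)
>                 self._current = None
>
>         def tests_touching(self, changed_lines):
>             # changed_lines: (file, line) pairs extracted from a diff.
>             changed = set(changed_lines)
>             return {test for test, lines in self.db.items()
>                     if lines & changed}
>
> Selection is then oracle.tests_touching(lines) for the (file, line)
> pairs your diff touches; everything else can be skipped.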
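>
> Finally, a sketch of stochastic selection; seeding the RNG is one way
> to mitigate the reproducibility problem mentioned above:
>
>     import random
>
>     def stochastic_select(all_tests, fraction=0.2, seed=None):
>         # all_tests is a sequence of test ids; a fixed seed makes a
>         # given selection reproducible.
>         rng = random.Random(seed)
>         k = max(1, int(len(all_tests) * fraction))
>         return rng.sample(all_tests, k)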
>
> There was an implementation of trace-based selection put together for
> LP about 8 years ago, and I did a thing for bzr shortly after that -
> there are implementations all over the place ;). AFAIK, though, none
> in the Python world were generally reusable and consumable until
> recently.
>
> -Rob
>
> --
> Robert Collins <rbtcollins at hp.com>
> Distinguished Technologist
> HP Converged Cloud
>
> _______________________________________________
> testing-in-python mailing list
> testing-in-python at lists.idyll.org
> http://lists.idyll.org/listinfo/testing-in-python
>