[TIP] TiP BoF feedback & call for help
Mark Sienkiewicz
sienkiew at stsci.edu
Mon Mar 14 15:05:29 PDT 2011
> So who is wrong? me? us? jenkins? django? unittest ? Who to blame ?
> I'll love to run my tests more but 30 minutes feedback is just crazy.
>
You'll just have to start measuring things. Blaming django doesn't get
you anywhere even if it is accurate -- you have to understand what is
happening in your system so you can know what you can do differently.
First, how do you know there is anything wrong? Your average throughput
works out to about 1.8 seconds per test. Is it possible that 20-30
minutes is actually _fast_ for what you are doing? If not, how do you know?
Are all the tests taking 1.8 seconds +/- 2%? Or do 90% of your tests
take 0.1 seconds each and 10% of your tests take 17 seconds each? What
do the fast ones have in common? What do the slow ones have in common?
If you run a single test, how long does it take? If you run 10 tests,
does it take 10 times as long? Do 100 tests take 100 times as long?
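One way to get that per-test breakdown is a small unittest result class that stamps each test with a wall-clock time. This is a sketch -- the Demo tests are toy stand-ins, not anyone's real suite:

```python
import time
import unittest

class TimingResult(unittest.TextTestResult):
    """Record wall-clock time for each test so slow outliers stand out."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.timings = []

    def startTest(self, test):
        super().startTest(test)
        self._started = time.perf_counter()

    def stopTest(self, test):
        self.timings.append((str(test), time.perf_counter() - self._started))
        super().stopTest(test)

# Toy tests just to demonstrate the measurement.
class Demo(unittest.TestCase):
    def test_fast(self):
        pass
    def test_slow(self):
        time.sleep(0.05)   # stand-in for a genuinely slow test

runner = unittest.TextTestRunner(resultclass=TimingResult, verbosity=0)
result = runner.run(unittest.defaultTestLoader.loadTestsFromTestCase(Demo))
# Slowest first -- the distribution answers the 90%/10% question above.
for name, secs in sorted(result.timings, key=lambda t: -t[1]):
    print(f"{secs:8.3f}s  {name}")
```

Running subsets of 1, 10, and 100 real tests through the same harness tells you whether total time scales linearly or whether fixed overhead dominates.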
If you're using unittest, how long does it spend in the test method, how
long in the class setup, how long in the overhead code?
Do your tests log in to the postgres server 500 times, or only once?
And are the postgres tests faster or slower than the same tests run on
sqlite?
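For that comparison, Django lets you point the suite at an in-memory database via a settings override. A sketch only -- the module name is hypothetical, and it's only a fair comparison if your tests avoid postgres-specific SQL:

```python
# settings_test.py -- swap the database backend for test runs.
# Start from your real settings module (name is hypothetical):
# from myproject.settings import *

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",  # in-memory: no server, no login handshake
    }
}
```

If the sqlite run is dramatically faster, connection setup and server round-trips are a prime suspect.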
Do you run all your tests in a single process? If not, how many
processes and how much process creation time? (I ran truss on my python
interpreter once and saw about 3000 file open calls just to start an
interactive interpreter.)
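Interpreter startup cost is easy to measure directly; if your runner spawns a fresh process per test (or per test file), multiply this number by the count. A sketch using a do-nothing child process:

```python
import subprocess
import sys
import time

# Time N fresh python processes that do no work at all: whatever is
# left is pure interpreter startup overhead (imports, file opens, ...).
N = 3
t0 = time.perf_counter()
for _ in range(N):
    subprocess.run([sys.executable, "-c", "pass"], check=True)
per_process = (time.perf_counter() - t0) / N
print(f"~{per_process * 1000:.0f} ms of startup overhead per process")
```

On Linux, `strace` plays the role of truss if you want to see the individual file open calls.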
Once you collect some data (maybe even just a few minutes - you might
not need a whole test run), you have some clues. There is no point in
optimizing until you know what needs to be better. If you just poke
about at random, you might completely optimize something that is
responsible for 1% of your run time -- but you don't care if you save 18
seconds out of half an hour.
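Profiling even a small slice of the suite is one cheap way to collect that data. A sketch wrapping cProfile around a toy test run (the Demo test is a stand-in for a few of your real tests):

```python
import cProfile
import io
import pstats
import unittest

class Demo(unittest.TestCase):
    def test_example(self):
        sum(range(100_000))   # stand-in for real work

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Demo)

# Profile just this slice of the suite -- a few minutes of data
# is usually enough to show where the time goes.
profiler = cProfile.Profile()
profiler.enable()
unittest.TextTestRunner(verbosity=0).run(suite)
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```

The top of the cumulative listing tells you whether the hot spot is your code, the test framework, or something underneath -- which is exactly the clue you need before optimizing anything.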
Mark