[khmer] Fwd: How to speed up the filter-below-abund script ?

Eric McDonald emcd.msu at gmail.com
Fri Mar 15 12:49:15 PDT 2013


Hi Alexis,

I'm glad to hear that you were able to get past the problem. Admittedly, I
am still somewhat puzzled by it.

As far as estimating hash table size is concerned, the Bloom filter uses 4
hash tables by default. The size of each hash table is the number of bytes
specified with the "-x" parameter. So, if you chose 32e9 as the hash table
size, then the entire Bloom filter would be 128 GB plus some additional
overhead. But in most cases I don't think you need a hash table that
large. Somewhere (possibly in my inbox), Titus created a guide about what
size hash table to use with certain kinds of data. If it is not already in
the documentation, then it probably needs to be added.
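
If it helps, here is a quick back-of-the-envelope check you can run before
launching a job. (The helper name and the example values are just mine for
illustration; it only multiplies out the numbers described above and
ignores the small additional overhead.)

    # Rough memory estimate for the khmer Bloom filter / counting hash:
    # each of the N hash tables takes about x bytes (the "-x" value),
    # so the total footprint is roughly N * x.
    def estimate_bloom_memory_gb(x_bytes, n_tables=4):
        """Approximate total memory footprint in gigabytes."""
        return n_tables * x_bytes / 1e9

    print(estimate_bloom_memory_gb(32e9))  # -x 32e9 with 4 tables -> ~128 GB
    print(estimate_bloom_memory_gb(1e9))   # -x 1e9  with 4 tables -> ~4 GB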

With the scripts in the 'scripts' directory, you can use the '--help'
option to find out what arguments are supported. "-x" and "-N" are the
Bloom filter (hash tables) control parameters. (Note: your memory usage is
mostly dependent upon these and _not_ on the size of your input files.)
Your false positive rate is determined by the size of the Bloom filter and
the number of unique k-mers in your data. The number of unique k-mers
scales as a function of the size of the data set and the diversity of
k-mers within the data. As mentioned in the previous paragraph, Titus has
created a guide which accounts for part of this diversity (I believe) based
on whether something is a particular kind of genome or a metagenome. But
this is more biology than I know, and he can correct me if I misrepresented
anything. :-) So, you have controls on your memory consumption already. The
'--savehash' option works with the '-x' and '-N' options in
'normalize-by-median.py'. I hope this makes sense and addresses your
comments - please let us know if anything is still not clear.
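
To make the false positive point a bit more concrete, here is a rough
sketch of how it can be estimated. (This uses a generic
partitioned-Bloom-filter approximation rather than khmer's exact internal
formula, and the function name and example numbers are assumptions of
mine.)

    import math

    # With N tables of x slots each and n distinct k-mers inserted, each
    # table is roughly (1 - exp(-n/x)) full, and a false positive needs a
    # hit in every table.
    def estimate_false_positive_rate(n_unique_kmers, x_slots, n_tables=4):
        occupancy = 1.0 - math.exp(-float(n_unique_kmers) / x_slots)
        return occupancy ** n_tables

    # Example: 2e9 unique k-mers in 4 tables of 32e9 slots each.
    print(estimate_false_positive_rate(2e9, 32e9))  # about 1.3e-05

So the same '-x'/'-N' choice controls both the memory footprint and the
expected false positive rate, given an estimate of how many unique k-mers
your data contains.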

Good luck with the partitioning! Hopefully that will go more smoothly for
you.

Thanks for working with us to debug the problem,
  Eric



On Fri, Mar 15, 2013 at 11:05 AM, Alexis Groppi <
alexis.groppi at u-bordeaux2.fr> wrote:

>  Hi Eric,
>
> Good news: I may have found the solution to this tricky bug.
>
> The bug comes from the hash table construction with load-into-counting.py.
> We used the following parameters: load-into-counting.py -k 20 -x 32e9
> With -x 32e9, the hash table grows until it reaches the maximum RAM
> available at the moment, independently of the size of the fasta.keep file.
> But, for a reason I do not understand, this file is not correct.
> I realised this by repeating the two steps, load-into-counting.py and then
> filter-below-abund.py, on the very small subsample of 100,000 reads.
> ==> It generates a table.kh of 248.5 GB (!) and leads to the same error:
> Floating point exception (core dumped).
>
> I tried to perform these two steps on the whole data set (~2.5 million
> reads) with load-into-counting.py -k 20 -x 5e7
>
> ==> It works perfectly, but I got a warning/error in the output file:
> ** ERROR: the counting hash is too small for
> ** this data set.  Increase hashsize/num ht.
>
> Finally, I ran the two steps with load-into-counting.py -k 20 -x 1e9 ...
> and it works perfectly! In a few minutes (~6 mins), without any warning
> or error.
>
> In my opinion, it would be useful (if possible) to include a check on the
> hash table creation in the load-into-counting.py script.
> By the way, how is this managed via normalize-by-median.py and the
> --savehash option?
>
> Now shifting to the next steps (partitioning). Hopefully in an easier way ;)
>
> Thanks again for your responsiveness.
>
> Have a nice weekend
>
> Alexis
>
> On 15/03/2013 02:02, Eric McDonald wrote:
>
>  I cannot reproduce your problem with a fairly large amount of data - 5
> GB (50 million reads) of soil metagenomic data processed successfully with
> 'sandbox/filter-below-abund.py'.  (I think the characteristics of your data
> set are different though; I thought I noticed some sequences with 'N' in
> them - those would be discarded. If you have many of those then that could
> drastically reduce what is kept, which might alter the read-process-write
> "rhythm" between your threads somewhat.)
>
>  ... filtering 48400000
> done loading in sequences
> DONE writing.
> processed 48492066 / wrote 48441373 / removed 50693
> processed 3940396871 bp / wrote 3915266313 bp / removed 25130558 bp
> discarded 0.6%
>
>  When I have a fresh mind tomorrow, I will suggest some next steps. (Try
> to isolate which thread is dying, build a fresh Python 2.7 on a machine
> which has access to your data, etc....)
>
>
>
> On Thu, Mar 14, 2013 at 8:10 PM, Eric McDonald <emcd.msu at gmail.com> wrote:
>
>> Hi Alexis and Louise-Amélie,
>>
>>  Thank you both for the information. I am trying to reproduce your
>> problem with a large data set right now.
>> I agree that the problem may be a function of the amount of data.
>> However, if you were running out of memory, then I would expect to see a
>> segmentation fault rather than a FPE. I am still guessing this problem may
>> be threading-related (even if the number of workers is reduced to 1, there
>> is still the master thread which supplies the groups of sequences and the
>> writer thread which outputs the kept sequences). But, my guesses have not
>> proved to be that useful with your problem thus far, so take my latest
>> guess with a grain of salt. :-)
>>
>>  Depending on whether I am able to reproduce the problem, I have some
>> more ideas which I intend to try tomorrow. If you find anything else
>> interesting, I would like to know. But, I feel bad about how much time you
>> have wasted on this. Hopefully I will be able to reproduce the problem....
>>
>>  Thanks,
>>   Eric
>>
>>
>>
> --
>



-- 
Eric McDonald
HPC/Cloud Software Engineer
  for the Institute for Cyber-Enabled Research (iCER)
  and the Laboratory for Genomics, Evolution, and Development (GED)
Michigan State University
P: 517-355-8733