<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
</head>
<body text="#000000" bgcolor="#FFFFFF">
Hi Titus,<br>
<br>
After digital normalization and filter-below-abund, on your advice I
ran do-partition.py on two data sets (approx. 2.5 million reads of 75
nt each):<br>
<br>
<tt>/khmer-BETA/scripts/do-partition.py -k 20 -x 1e9
/ag/khmer/Sample_174/174r1_prinseq_good_bFr8.fasta.keep.below.graphbase
/ag/khmer/Sample_174/174r1_prinseq_good_bFr8.fasta.keep.below</tt><tt><br>
</tt><tt>and</tt><tt><br>
</tt><tt>/khmer-BETA/scripts/do-partition.py -k 20 -x 1e9
/ag/khmer/Sample_174/174r2_prinseq_good_1lIQ.fasta.keep.below.graphbase
/ag/khmer/Sample_174/174r2_prinseq_good_1lIQ.fasta.keep.below</tt><tt><br>
</tt><br>
For the first one I got a
174r1_prinseq_good_bFr8.fasta.keep.below.graphbase.info file with the
information: 33 subsets total.<br>
Thereafter 33 .pmap files (0.pmap to 32.pmap) were created at regular
intervals, and finally I got a single file
174r1_prinseq_good_bFr8.fasta.keep.below.part (all the .pmap files
were deleted).<br>
This run lasted approx. 56 hours.<br>
<br>
For the second set (174r2), do-partition.py has been running for 32
hours, but so far I have only got the
174r2_prinseq_good_1lIQ.fasta.keep.below.graphbase.info file with the
information: 35 subsets total.<br>
And nothing more...<br>
<br>
Is this duration "normal"?<br>
(The thread parameter is at its default of 4 threads.)<br>
Is it expected to get 33 subsets but only one file at the end?<br>
Should I stop do-partition.py on the second set and rerun it with
more threads?<br>
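If rerunning with more threads is the right move, this is what I had in mind (just a sketch; I am assuming do-partition.py accepts a --threads option and that 8 is a reasonable count for our machine — please correct me if the flag name differs):<br>
<br>

```shell
# Hypothetical rerun of the second set with more threads (not yet executed).
# NOTE: the --threads flag and the value 8 are my assumptions; check the
# script's --help output for the real option name.
THREADS=8
CMD="/khmer-BETA/scripts/do-partition.py --threads $THREADS -k 20 -x 1e9 \
/ag/khmer/Sample_174/174r2_prinseq_good_1lIQ.fasta.keep.below.graphbase \
/ag/khmer/Sample_174/174r2_prinseq_good_1lIQ.fasta.keep.below"
echo "$CMD"
```

<br>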
<br>
Thanks for your help<br>
<br>
Alexis<br>
<br>
<br>
<br>
<div class="moz-signature">-- <br>
<img src="cid:part1.00010201.08060302@u-bordeaux2.fr" border="0"></div>
</body>
</html>