[khmer] khmer on 454 metagenomics data
alexis.groppi at u-bordeaux.fr
Mon Apr 14 05:08:42 PDT 2014
Good guess! On the same files, omitting the --threads 12 option (default = 1), the script worked perfectly!
PS: I wrote a comment on GitHub for the follow-up.
On 13/04/2014 17:42, C. Titus Brown wrote:
> On Thu, Apr 10, 2014 at 11:08:03AM +0200, Alexis GROPPI wrote:
>> Thanks for this helpful advice.
>> The first step (digital normalisation) ran perfectly.
>> I'm now trying to compute the abundance histogram.
>> I launched:
>> abundance-dist-single.py -N 4 -x 48e9 --threads 12
>> But after 24 hours, the script is still running but without any results.
>> Is that normal?
> This is a bug that I ran into, too. Not sure when it popped up, my guess is
> between 0.8 and 1.0. Thanks for the report -- see
> https://github.com/ged-lab/khmer/issues/384 to track.
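The histogram being computed here is conceptually simple. Below is a minimal pure-Python sketch of what a k-mer abundance histogram is (an illustration only, not khmer's implementation: khmer counts canonical k-mers collapsed with their reverse complements and uses probabilistic counting tables, which this sketch ignores):

```python
from collections import Counter

def kmer_abundance_histogram(reads, k):
    """Count every k-mer across the reads, then histogram the counts:
    result[c] = number of distinct k-mers observed exactly c times."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return Counter(counts.values())

# Tiny demo: two overlapping reads, k = 4.
hist = kmer_abundance_histogram(["ACGTACGT", "ACGTACGA"], 4)
```

For real data the interesting read of this histogram is its shape: error k-mers pile up at abundance 1-2, while genuine coverage forms a peak at higher abundance.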
>> On 04/04/2014 17:01, C. Titus Brown wrote:
>>> On Apr 4, 2014, at 10:49 AM, Alexis Groppi <alexis.groppi at u-bordeaux.fr> wrote:
>>>> We want to analyse 454 metagenomics data (570 000 reads of ~700 nt per sample).
>>>> My questions are:
>>>> 1/ Given that khmer is rather short-read/Illumina oriented, are we mistaken to try and apply it to our long 454 reads?
>>>> 2/ Is there an actual benefit in feeding .fastq files to khmer (in our case we have separate .qual files at the moment, but that can be changed), or does it really only consider the sequence data? I.e., are the FASTA files sufficient?
>>>> How do you decide which data need pre-normalization and which can go straight to artifact removal? In the Iowa corn example, you do not start with a normalize/filter pass; how come?
>>>> 3/ Thus, should the pipeline for our data look like:
>>>> DIGINORM (normalize-by-median -- filter-abund -- normalize-by-median) -- ARTIFACT REMOVAL (load-graph -- partition-graph, ...etc)
>>>> Is the DIGINORM step useless in our case?
>>>> Thanks for your help
>>>> And thanks for your great job on khmer 1.0 !
>>> Hi Alexis,
>>> Good questions!
>>> I would suggest doing only a single pass of digital normalization, since the impact of both errors and low coverage will be different with longer reads. So, something like normalizing to a coverage of 5, in a single pass. Do not do any trimming (filter-abund) as this will potentially discard a lot of your sequences; trimming cuts off the ends of sequences, and is best applied to high-coverage short reads. Your main diagnostic tool here will be a k-mer abundance histogram after normalization: do you see that a bunch of real coverage has been lowered to 1 or 2, or are you primarily seeing normalization of coverage to an average of 5 with no increase in the number of sequences with a coverage of 1 or 2?
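The single-pass normalization described above has a simple core idea. Here is a conceptual sketch of it in pure Python, assuming exact k-mer counting (khmer itself uses fixed-size probabilistic counting tables sized by -N/-x, which this toy version does not model):

```python
from collections import Counter
from statistics import median

def normalize_by_median(reads, k=20, cutoff=5):
    """Single-pass digital normalization sketch: keep a read only if the
    median abundance of its k-mers (among reads kept so far) is below
    the cutoff; k-mers of each kept read are then added to the counts."""
    counts = Counter()
    kept = []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        if not kmers:
            continue  # read shorter than k: nothing to estimate coverage with
        if median(counts[km] for km in kmers) < cutoff:
            kept.append(read)
            counts.update(kmers)
    return kept
```

With identical reads and cutoff 5, roughly the first five copies survive and the rest are discarded, which is exactly the coverage flattening the diagnostic histogram should show.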
>>> Partitioning should work just fine, but I would not do artifact removal (filter-stoptags, etc.). If you get a big blob where everything hangs together, there are some things you can do with stoptags: briefly, run the knot finding stuff, but then feed the resulting stoptags into partition-graph with the -S parameter. This will prevent partitioning across highly connected k-mers.
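To make the stoptag idea concrete, here is a small illustrative model (not khmer's graph-based implementation): reads belong to the same partition when they share a k-mer, and stoptag k-mers are simply excluded from making connections, so a highly connected k-mer flagged as a stoptag can no longer glue unrelated partitions together.

```python
from collections import defaultdict, deque

def partition_reads(reads, k, stoptags=frozenset()):
    """Group read indices into partitions joined by shared non-stoptag
    k-mers, via breadth-first search over a k-mer -> reads index."""
    kmer_to_reads = defaultdict(list)
    for idx, read in enumerate(reads):
        for i in range(len(read) - k + 1):
            km = read[i:i + k]
            if km not in stoptags:  # stoptags never connect reads
                kmer_to_reads[km].append(idx)

    seen, partitions = set(), []
    for start in range(len(reads)):
        if start in seen:
            continue
        seen.add(start)
        part, queue = [], deque([start])
        while queue:
            idx = queue.popleft()
            part.append(idx)
            for i in range(len(reads[idx]) - k + 1):
                for other in kmer_to_reads.get(reads[idx][i:i + k], ()):
                    if other not in seen:
                        seen.add(other)
                        queue.append(other)
        partitions.append(sorted(part))
    return partitions
```

In this toy model, two reads joined only by the k-mer TTTT fall into one partition normally, but into separate partitions once TTTT is declared a stoptag, mirroring the effect of feeding stoptags to partition-graph with -S.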
>>> HTH! Please feel free to ask more ;)
>>> khmer mailing list
>>> khmer at lists.idyll.org
CBiB - Université de Bordeaux <http://www.u-bordeaux.fr>
Dr Alexis Groppi <mailto:alexis.groppi at u-bordeaux2.fr>
Deputy Director of the CBiB - CGFB Project Officer
146, rue Léo Saignat - Case 68 - 33076 Bordeaux Cedex
T. +33 5 57 57 12 18
P. +33 6 35 95 04 87