[khmer] khmer on 454 metagenomics data

Fields, Christopher J cjfields at illinois.edu
Thu Apr 10 08:21:48 PDT 2014

What's the nature of the data?  454 has known homopolymer issues; I could see that introducing a decent amount of noise.

Just curious, but is there any reason you want to use this approach?  We had a very similar data set (a 454 metagenome, though a low-diversity one, about 200K reads) and simply ran Newbler; it took a few minutes and didn't require a ton of memory.  Memory-wise I think we were under 4 GB, but I can check if you want to know.  You might need more memory/time depending on the diversity of your sample and the number of reads, but you would have an assembly in less time than it takes to run khmer.


On Apr 10, 2014, at 4:08 AM, Alexis GROPPI <alexis.groppi at u-bordeaux.fr> wrote:


Thanks for this helpful advice.
The first step (digital normalisation) ran perfectly.
I'm now trying to compute the abundance histogram.
I launched:
abundance-dist-single.py -N 4 -x 48e9 --threads 12 $PBS_O_WORKDIR/LCBL_1/LCBL_1.fasta.normalized $PBS_O_WORKDIR/LCBL_1/LCBL_1.fasta.normalized.histogram

But after 24 hours the script is still running, with no output yet.
Is that normal?

Thanks again


On 04/04/2014 at 17:01, C. Titus Brown wrote:

On Apr 4, 2014, at 10:49 AM, Alexis Groppi <alexis.groppi at u-bordeaux.fr> wrote:


We want to analyse 454 metagenomics data (570 000 reads of ~700 nt per sample).
My questions are:
1/ Given that khmer is rather short-read/Illumina oriented, are we mistaken in trying to apply it to our long 454 reads?
2/ Is there an actual benefit to feeding .fastq files to khmer (we have separate .qual files at the moment, but that can be changed), or does it really only consider the sequence data? i.e. are the FASTA files sufficient?
How do you decide which data needs pre-normalization and which data can go straight to artifact removal? In the Iowa corn example, you do not start with a normalize/filter pass; why not?
3/ Thus, should the pipeline for our data look like:
DIGINORM (normalize-by-median --  filter-abund -- normalize-by-median) -- ARTIFACT REMOVAL (load-graph -- partition-graph, ...etc)
Or is the DIGINORM step useless in our case?

Thanks for your help

And thanks for your great work on khmer 1.0!

Hi Alexis,


Good questions…

I would suggest doing only a single pass of digital normalization, since the impact of both errors and low coverage will be different with longer reads.  So: normalize to a coverage of 5, in a single pass.  Do not do any trimming (filter-abund), as this could discard a lot of your sequences; trimming cuts off the ends of reads and is best applied to high-coverage short reads.  Your main diagnostic here will be a k-mer abundance histogram after normalization: has a bunch of real coverage been lowered to 1 or 2, or do you primarily see coverage normalized to an average of 5 with no increase in the number of sequences at coverage 1 or 2?
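For concreteness, the advice above might look something like the following sketch. This is not a prescription: the k-mer size, the -N/-x table parameters, and the file names are all placeholders to adapt to your data and available memory.

```shell
# Single-pass digital normalization to coverage 5; no filter-abund trimming.
# -N and -x set the number and size of the k-mer counting tables (placeholders).
normalize-by-median.py -C 5 -k 20 -N 4 -x 16e9 LCBL_1.fasta
# By default the kept reads are written to LCBL_1.fasta.keep.

# Diagnostic: k-mer abundance histogram of the normalized reads.
abundance-dist-single.py -k 20 -N 4 -x 16e9 LCBL_1.fasta.keep LCBL_1.keep.hist
```

If the histogram shows a new pile-up of k-mers at abundance 1-2, normalization has probably eroded real coverage; if the distribution simply flattens out around 5, you are fine.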

Partitioning should work just fine, but I would not do artifact removal (filter-stoptags, etc.).  If you get a big blob where everything hangs together, there are some things you can do with stoptags: briefly, run the knot finding stuff, but then feed the resulting stoptags into partition-graph with the -S parameter.  This will prevent partitioning across highly connected k-mers.
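A hedged sketch of that stoptag workaround follows. The script names are from the khmer distribution, but the exact invocations and all parameters here are placeholders; please check them against your installed version's documentation.

```shell
# Build the graph and do an initial partitioning.
load-graph.py -k 32 -N 4 -x 16e9 LCBL_1 LCBL_1.fasta.keep
partition-graph.py --threads 4 LCBL_1

# If one huge partition ("blob") dominates, locate the highly connected
# k-mers (knots); this writes a LCBL_1.stoptags file.
find-knots.py LCBL_1

# Re-partition, preventing traversal across the stoptags via -S.
partition-graph.py -S LCBL_1.stoptags --threads 4 LCBL_1
```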

HTH!  Please feel free to ask more ;)


khmer mailing list
khmer at lists.idyll.org


Dr Alexis Groppi <alexis.groppi at u-bordeaux2.fr>
Deputy Director of the CBiB - CGFB Project Officer
146, rue Léo Saignat - Case 68 - 33076 Bordeaux Cedex
T. +33 5 57 57 12 18
P. +33 6 35 95 04 87


