Quality Control and Preprocessing Exercise

Motivation

NGS data can be affected by a range of artefacts that arise during library preparation and sequencing, including:

  • low base quality
  • contamination with adapter sequences
  • biases in base composition

Aims

In this part you will learn to:

  • assess the intrinsic quality of raw reads using metrics generated by the sequencing platform (e.g. quality scores)
  • pre-process data, i.e. trimming the poor quality bases and adapters from raw reads

You will use the following tools, which are available through the module load/unload system:

  • FastQC: http://www.bioinformatics.babraham.ac.uk/projects/fastqc/
  • Fastq-mcf, part of the ea-utils suite: https://code.google.com/p/ea-utils/wiki/FastqMcf

module load FASTQC ea-utils

The data set you'll be using is downloaded from ENA (http://www.ebi.ac.uk/ena/data/view/SRP019027).
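
If you are not familiar with the module system used above, a few optional commands for inspecting it (a minimal sketch; which modules are installed depends on the local system):

module avail              # list the modules installed on the system
module list               # show the modules currently loaded
module unload ea-utils    # unload a module you no longer need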

View data set

First we go into the appropriate directory:

cd $HOME/i2rda_data/01_Quality_Control_and_Preprocessing

Then we have a look at the first 10 lines of each read file. Note that the files are compressed, so we need zcat instead of the normal cat.

zcat Read_1.fastq.gz |head
zcat Read_2.fastq.gz |head

where:

  • zcat decompresses gzip-compressed files and writes them to the screen
  • | is the pipe operator, which passes the output of one command as the input of the next
  • head prints only the first ten lines of its input.
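
Each read occupies exactly four lines in a FASTQ file (identifier, base calls, separator, per-base qualities), so you can also count the reads directly on the command line; a minimal, optional sketch:

# number of reads = number of lines / 4
zcat Read_1.fastq.gz | wc -l | awk '{ print $1 / 4, "reads" }'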

Assessment of data quality

Run FastQC on the raw data:

fastqc --nogroup Read_1.fastq.gz Read_2.fastq.gz
firefox Read_*_fastqc.html &

where:

  • --nogroup disables FastQC's default grouping of positions for reads longer than 50 bp, so the reports show a value for every base position in the read.

Look at the FastQC results and answer the following questions:

  • What is the quality encoding?
  • How many reads are present in each fastq file?
  • What is the length of the reads?
  • Are there any adapter sequences observed?
  • Which parameters do you think should be used for trimming the reads?
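
All of these can be read from the FastQC report itself. If you want to cross-check the read length on the command line, a quick optional sketch (it only inspects the first read; raw Illumina reads are normally all the same length):

zcat Read_1.fastq.gz | awk 'NR == 2 { print length($0); exit }'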

Pre-processing of data

Trim reads using Fastq-Mcf:

fastq-mcf -o Read_1_q32l50.fastq.gz -o Read_2_q32l50.fastq.gz -q 32 -l 50 \
--qual-mean 32 adapters.fasta Read_1.fastq.gz Read_2.fastq.gz

where:

  • -o output file (given once per input file, in the same order)
  • -q quality threshold causing base removal
  • -l minimum remaining sequence length
  • --qual-mean minimum mean quality score, taking the other read of the pair into account

As you can see, fastq-mcf can process both files of a read pair in a single command.

Question:

  • How do you interpret the output of the fastq-mcf command?

Reassessment of data quality

Run FastQC on the trimmed reads:

fastqc --nogroup Read_1_q32l50.fastq.gz Read_2_q32l50.fastq.gz
firefox Read*q32l50*fastqc.html &

Look at the FastQC results and answer the following questions:

  • How many reads are present in each fastq file?
  • What is the length of the reads?
  • Did qualities improve?

A custom utility such as fqzinfo can give succinct information about fastq.gz files. To understand its output, type:

fqzinfo

Then run it again, this time specifying the fastq.gz files you are interested in, or try all of them (this will take longer, of course):

fqzinfo *.fastq.gz
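
If fqzinfo happens not to be available, a rough equivalent can be put together from standard tools; a minimal sketch (its output format will differ from fqzinfo's):

for f in *.fastq.gz; do
    zcat "$f" | awk -v file="$f" \
        'NR % 4 == 2 { reads++; bases += length($0) }
         END { printf "%s\t%d reads\t%d bases\n", file, reads, bases }'
done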

Question:

  • Did we lose much raw data in this clipping process?

If you have time to spare

  • Run fastq-mcf again, but this time using a different quality threshold, say 28 (a sketch is given after this list).
  • Run FastQC on the new fastq files and then use multiqc to compare the unfiltered pair with the two alternatively filtered pairs.
multiqc .
firefox multiqc_report.html &
  • It may be that the reduction in quality is small, but that many more reads and bases are retained, which would be good news.
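
A sketch of the extra runs described above (the q28 output names are arbitrary, and both -q and --qual-mean are lowered to 28 to mirror the earlier command; adjust to taste):

fastq-mcf -o Read_1_q28l50.fastq.gz -o Read_2_q28l50.fastq.gz -q 28 -l 50 \
--qual-mean 28 adapters.fasta Read_1.fastq.gz Read_2.fastq.gz
fastqc --nogroup Read_1_q28l50.fastq.gz Read_2_q28l50.fastq.gz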