Nextflow 2

During this day we will build more complex pipelines and separate the main code from the configuration. Then we will focus on how to reuse and share your code.


Decoupling resources, parameters and nextflow script

When making complex pipelines it is convenient to keep the definition of the required resources, the default parameters and the main script separate from each other. This can be achieved using two additional files:

  • nextflow.config

  • params.config

The nextflow.config file allows you to indicate the resources needed for each class of processes. This is achieved by labeling processes and defining per-label resources in the nextflow.config file:

includeConfig "$baseDir/params.config"

process {
    memory='0.6G'
    cpus='1'
    time='6h'

    withLabel: 'onecpu' {
        memory='0.6G'
        cpus='1'
    }

    withLabel: 'bigmem' {
        memory='0.7G'
        cpus='1'
    }
}

process.container = 'biocorecrg/c4lwg-2018:latest'
singularity.cacheDir = "$baseDir/singularity"

The first line tells Nextflow to include the information stored in the params.config file (described later). The process block then defines the default resources (memory, cpus, time) applied to every process.

The withLabel blocks override these defaults for specific classes of processes; for example, for processes labeled bigmem:

withLabel: 'bigmem' {
    memory='0.7G'
    cpus='1'
}

The script /test2/test2.nf contains two processes that run two programs:

  • fastQC - a tool that calculates a number of quality control metrics on single fastq files;

  • multiQC - an aggregator of results from bioinformatics tools and samples for generating a single html report.

#!/usr/bin/env nextflow


/* 
 * This code enables the new dsl of Nextflow. 
 */

nextflow.enable.dsl=2


/* 
 * NextFlow test pipe
 * @authors
 * Luca Cozzuto <lucacozzuto@gmail.com>
 * 
 */

/*
 * Input parameters: read pairs
 * Params are stored in the params.config file
 */

version                 = "1.0"
// this prevents a warning of undefined parameter
params.help             = false

// this prints the input parameters
log.info """
BIOCORE@CRG - N F TESTPIPE  ~  version ${version}
=============================================
reads                           : ${params.reads}
"""

// this prints the help in case you use --help parameter in the command line and it stops the pipeline
if (params.help) {
    log.info 'This is the Biocore\'s NF test pipeline'
    log.info 'Enjoy!'
    log.info '\n'
    exit 1
}

/*
 * Defining the output folders.
 */
fastqcOutputFolder    = "output_fastqc"
multiqcOutputFolder   = "output_multiQC"


/* Reading the file list and creating a "Channel": a queue that connects different processes.
 * The queue is consumed, so you cannot re-use a channel for different processes. 
 * If you need the same data for different processes you need to make more channels.
 */
 
Channel
    .fromPath( params.reads )  											 // read the files indicated by the wildcard                            
    .ifEmpty { error "Cannot find any reads matching: ${params.reads}" } // if empty, complains
    .set {reads_for_fastqc} 											 // make the channel "reads_for_fastqc"


/*
 * Process 1. Run FastQC on raw data. A process is the element for executing scripts / programs etc.
 */
process fastQC {
    publishDir fastqcOutputFolder  			// where (and whether) to publish the results
    tag { "${reads}" }  						// during the execution prints the indicated variable for follow-up
    label 'bigmem'

    input:
    path reads   							// defines the input of the process; values are taken from a channel

    output:								// defines the output of the process (i.e. files) and sends it to a new channel
    path "*_fastqc.*"

    script:								// here is where the script / program is executed; basically it is the command line
    """
        fastqc ${reads} 
    """
}

/*
 * Process 2. Run multiQC on fastQC results
 */
process multiQC {
    publishDir multiqcOutputFolder, mode: 'copy' 	// this time do not link but copy the output file

    input:
    path (inputfiles)

    output:
    path("multiqc_report.html") 					// do not send the results to any channel

    script:
    """
    multiqc .
    """
}

workflow {
	fastqc_out = fastQC(reads_for_fastqc)
	multiQC(fastqc_out.collect())
}


workflow.onComplete { 
	println ( workflow.success ? "\nDone! Open the following report in your browser --> ${multiqcOutputFolder}/multiqc_report.html\n" : "Oops .. something went wrong" )
}

You can see that the process fastQC is labeled ‘bigmem’.

The last two lines of the config file indicate which container to use. In this example (and by default, when no registry is specified) the container is pulled from Docker Hub. When using a Singularity container, you can indicate where to store the local image using the singularity.cacheDir option:

process.container = 'biocorecrg/c4lwg-2018:latest'
singularity.cacheDir = "$baseDir/singularity"

Let’s now launch the script test2.nf.

     cd test2;
     nextflow run test2.nf

     N E X T F L O W  ~  version 20.07.1
     Launching `test2.nf` [distracted_edison] - revision: e3a80b15a2
     BIOCORE@CRG - N F TESTPIPE  ~  version 1.0
     =============================================
     reads                           : /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/nextflow/test2/../testdata/*.fastq.gz
     executor >  local (2)
     [df/2c45f2] process > fastQC (B7_input_s_chr19.fastq.gz) [  0%] 0 of 2
     [-        ] process > multiQC                            -
     Error executing process > 'fastQC (B7_H3K4me1_s_chr19.fastq.gz)'

     Caused by:
       Process `fastQC (B7_H3K4me1_s_chr19.fastq.gz)` terminated with an error exit status (127)

     Command executed:

       fastqc B7_H3K4me1_s_chr19.fastq.gz

     Command exit status:
       127

     executor >  local (2)
     [df/2c45f2] process > fastQC (B7_input_s_chr19.fastq.gz) [100%] 2 of 2, failed: 2 ✘
     [-        ] process > multiQC                            -
     Error executing process > 'fastQC (B7_H3K4me1_s_chr19.fastq.gz)'

     Caused by:
       Process `fastQC (B7_H3K4me1_s_chr19.fastq.gz)` terminated with an error exit status (127)

     Command executed:

       fastqc B7_H3K4me1_s_chr19.fastq.gz

     Command exit status:
       127

     Command output:
       (empty)

     Command error:
       .command.sh: line 2: fastqc: command not found

     Work dir:
       /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/nextflow/test2/work/c5/18e76b2e6ffd64aac2b52e69bedef3

     Tip: when you have fixed the problem you can continue the execution adding the option `-resume` to the run command line

We get a number of errors because the executables cannot be found in our environment / PATH: they are stored inside our Docker image, so we have to tell Nextflow to use the Docker image via the -with-docker parameter.

     nextflow run test2.nf -with-docker

     N E X T F L O W  ~  version 20.07.1
     Launching `test2.nf` [boring_hamilton] - revision: e3a80b15a2
     BIOCORE@CRG - N F TESTPIPE  ~  version 1.0
     =============================================
     reads                           : /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/nextflow/test2/../testdata/*.fastq.gz
     executor >  local (3)
     [22/b437be] process > fastQC (B7_H3K4me1_s_chr19.fastq.gz) [100%] 2 of 2 ✔
     [1a/cfe63b] process > multiQC                              [  0%] 0 of 1
     executor >  local (3)
     [22/b437be] process > fastQC (B7_H3K4me1_s_chr19.fastq.gz) [100%] 2 of 2 ✔
     [1a/cfe63b] process > multiQC                              [100%] 1 of 1 ✔

This time it worked because Nextflow used the image specified in the nextflow.config file, which contains the executables.

Now let’s take a look at the params.config file:

params {

	reads		= "$baseDir/../../testdata/*.fastq.gz"
	email		= "myemail@google.com"

}

As you can see, we indicated two pipeline parameters, reads and email; when running the pipeline, they can be overridden using --reads and --email.
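
For example, a minimal sketch of overriding them on the command line (the path and address here are just placeholders, not files from the course data):

nextflow run test2.nf -with-docker --reads "/path/to/other/*.fastq.gz" --email "me@example.com"

Note the double dash: parameters defined in params.config are overridden with --, while Nextflow's own options (such as -with-docker) use a single dash.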

Now, let’s examine the folders generated by the pipeline.

ls -la work/2a/22e3df887b1b5ac8af4f9cd0d88ac5/

total 0
drwxrwxr-x 3 ec2-user ec2-user  26 Apr 23 13:52 .
drwxr-xr-x 2 root     root     136 Apr 23 13:51 multiqc_data
drwxrwxr-x 3 ec2-user ec2-user  44 Apr 23 13:51 ..

We observe that Docker runs as "root". This can be problematic and raises security issues. To avoid this, we can add the following line within the process section of the config file:

containerOptions = { workflow.containerEngine == "docker" ? '-u $(id -u):$(id -g)': null}

This tells Nextflow that, when running with Docker, the container should run with the current user and group IDs, so that the output files belong to that user rather than to root.
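
For reference, this is how the directive sits inside the process scope of nextflow.config (a minimal sketch that keeps the defaults shown above):

process {
    memory='0.6G'
    cpus='1'
    time='6h'

    // run the container as the current user when the engine is Docker
    containerOptions = { workflow.containerEngine == "docker" ? '-u $(id -u):$(id -g)': null }
}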

Publishing final results

The script test2.nf generates two new folders, output_fastqc and output_multiQC, that contain the results of the pipeline. We can indicate which process outputs should be considered the final output of the pipeline using the publishDir directive, which has to be specified at the beginning of a process.

In our pipeline we define these folders here:

#!/usr/bin/env nextflow


/* 
 * This code enables the new dsl of Nextflow. 
 */

nextflow.enable.dsl=2


/* 
 * NextFlow test pipe
 * @authors
 * Luca Cozzuto <lucacozzuto@gmail.com>
 * 
 */

/*
 * Input parameters: read pairs
 * Params are stored in the params.config file
 */

version                 = "1.0"
// this prevents a warning of undefined parameter
params.help             = false

// this prints the input parameters
log.info """
BIOCORE@CRG - N F TESTPIPE  ~  version ${version}
=============================================
reads                           : ${params.reads}
"""

// this prints the help in case you use --help parameter in the command line and it stops the pipeline
if (params.help) {
    log.info 'This is the Biocore\'s NF test pipeline'
    log.info 'Enjoy!'
    log.info '\n'
    exit 1
}

/*
 * Defining the output folders.
 */
fastqcOutputFolder    = "output_fastqc"
multiqcOutputFolder   = "output_multiQC"


/* Reading the file list and creating a "Channel": a queue that connects different processes.
 * The queue is consumed, so you cannot re-use a channel for different processes. 
 * If you need the same data for different processes you need to make more channels.
 */
 
Channel
    .fromPath( params.reads )  											 // read the files indicated by the wildcard                            
    .ifEmpty { error "Cannot find any reads matching: ${params.reads}" } // if empty, complains
    .set {reads_for_fastqc} 											 // make the channel "reads_for_fastqc"


/*
 * Process 1. Run FastQC on raw data. A process is the element for executing scripts / programs etc.
 */
process fastQC {
    publishDir fastqcOutputFolder  			// where (and whether) to publish the results
    tag { "${reads}" }  						// during the execution prints the indicated variable for follow-up
    label 'bigmem'

    input:
    path reads   							// defines the input of the process; values are taken from a channel

    output:								// defines the output of the process (i.e. files) and sends it to a new channel
    path "*_fastqc.*"

    script:								// here is where the script / program is executed; basically it is the command line
    """
        fastqc ${reads} 
    """
}

/*
 * Process 2. Run multiQC on fastQC results
 */
process multiQC {
    publishDir multiqcOutputFolder, mode: 'copy' 	// this time do not link but copy the output file

    input:
    path (inputfiles)

    output:
    path("multiqc_report.html") 					// do not send the results to any channel

    script:
    """
    multiqc .
    """
}

workflow {
	fastqc_out = fastQC(reads_for_fastqc)
	multiQC(fastqc_out.collect())
}


workflow.onComplete { 
	println ( workflow.success ? "\nDone! Open the following report in your browser --> ${multiqcOutputFolder}/multiqc_report.html\n" : "Oops .. something went wrong" )
}

You can see that the default publishing mode in Nextflow is soft linking (the published files are symbolic links to the files in the work directory). You can change this behaviour by specifying the mode, as indicated in the multiQC process.
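
For reference, these are the two forms used in this script: the first publishes symbolic links to the files in the work directory (the default), while mode: 'copy' publishes independent copies:

publishDir fastqcOutputFolder                   // default: symlink the results
publishDir multiqcOutputFolder, mode: 'copy'    // copy the results instead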

Note

IMPORTANT: You can also "move" the results, but this is not recommended for files that are needed by downstream processes, since moving them out of the work directory will likely disrupt your pipeline.

To access the output files via the web, they can be copied to your S3 bucket, which is mounted in /mnt:

ls /mnt

/mnt/nf-class-bucket-1

Note

In this class, each student has their own bucket, with the number corresponding to the number of the AWS instance.

Let's copy the multiqc_report.html file to the S3 bucket and change its permissions:

cp output_multiQC/multiqc_report.html /mnt/nf-class-bucket-1

sudo chmod 775 /mnt/nf-class-bucket-1/multiqc_report.html

Now you will be able to see this HTML file in the browser (change the bucket number to match your instance):

http://nf-class-bucket-1.s3.eu-central-1.amazonaws.com/multiqc_report.html

Adding help section to a pipeline

Here we describe another good practice: the use of the --help parameter. At the beginning of the pipeline we can write:

#!/usr/bin/env nextflow


/* 
 * This code enables the new dsl of Nextflow. 
 */

nextflow.enable.dsl=2


/* 
 * NextFlow test pipe
 * @authors
 * Luca Cozzuto <lucacozzuto@gmail.com>
 * 
 */

/*
 * Input parameters: read pairs
 * Params are stored in the params.config file
 */

version                 = "1.0"
// this prevents a warning of undefined parameter
params.help             = false

// this prints the input parameters
log.info """
BIOCORE@CRG - N F TESTPIPE  ~  version ${version}
=============================================
reads                           : ${params.reads}
"""

// this prints the help in case you use --help parameter in the command line and it stops the pipeline
if (params.help) {
    log.info 'This is the Biocore\'s NF test pipeline'
    log.info 'Enjoy!'
    log.info '\n'
    exit 1
}

/*
 * Defining the output folders.
 */
fastqcOutputFolder    = "output_fastqc"
multiqcOutputFolder   = "output_multiQC"


/* Reading the file list and creating a "Channel": a queue that connects different processes.
 * The queue is consumed, so you cannot re-use a channel for different processes. 
 * If you need the same data for different processes you need to make more channels.
 */
 
Channel
    .fromPath( params.reads )  											 // read the files indicated by the wildcard                            
    .ifEmpty { error "Cannot find any reads matching: ${params.reads}" } // if empty, complains
    .set {reads_for_fastqc} 											 // make the channel "reads_for_fastqc"


/*
 * Process 1. Run FastQC on raw data. A process is the element for executing scripts / programs etc.
 */
process fastQC {
    publishDir fastqcOutputFolder  			// where (and whether) to publish the results
    tag { "${reads}" }  						// during the execution prints the indicated variable for follow-up
    label 'bigmem'

    input:
    path reads   							// defines the input of the process; values are taken from a channel

    output:								// defines the output of the process (i.e. files) and sends it to a new channel
    path "*_fastqc.*"

    script:								// here is where the script / program is executed; basically it is the command line
    """
        fastqc ${reads} 
    """
}

/*
 * Process 2. Run multiQC on fastQC results
 */
process multiQC {
    publishDir multiqcOutputFolder, mode: 'copy' 	// this time do not link but copy the output file

    input:
    path (inputfiles)

    output:
    path("multiqc_report.html") 					// do not send the results to any channel

    script:
    """
    multiqc .
    """
}

workflow {
	fastqc_out = fastQC(reads_for_fastqc)
	multiQC(fastqc_out.collect())
}


workflow.onComplete { 
	println ( workflow.success ? "\nDone! Open the following report in your browser --> ${multiqcOutputFolder}/multiqc_report.html\n" : "Oops .. something went wrong" )
}

so that launching the pipeline with --help will show you just the parameters and the help.

nextflow run test2.nf --help

N E X T F L O W  ~  version 20.07.1
Launching `test2.nf` [mad_elion] - revision: e3a80b15a2
BIOCORE@CRG - N F TESTPIPE  ~  version 1.0
=============================================
reads                           : /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/nextflow/test2/../testdata/*.fastq.gz
This is the Biocore's NF test pipeline
Enjoy!
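
As a side note, the help block can be extended to document the available parameters. A minimal sketch, where the usage text is purely illustrative and not part of the original script:

if (params.help) {
    log.info """
This is the Biocore's NF test pipeline

Usage:
  nextflow run test2.nf -with-docker [--reads "<path/*.fastq.gz>"] [--email <address>]
"""
    exit 1
}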

EXERCISE

  • Look at the very last EXERCISE from the previous day. Change the script and the config file, using a label for handling failing processes.

Solution

The process should become:

#!/usr/bin/env nextflow

nextflow.enable.dsl=2

// this can be overridden by using --inputfile OTHERFILENAME
params.inputfile = "$baseDir/../../../testdata/test.fa"

// the "file method" returns a file system object given a file path string
sequences_file = file(params.inputfile)

// check if the file exists
if( !sequences_file.exists() ) exit 1, "Missing genome file: ${sequences_file}"

/*
 * Process 1 for splitting a fasta file in multiple files
 */
process splitSequences {
    input:
    path sequencesFile

    output:
    path ('seq_*')

    // simple awk command
    script:
    """
    awk '/^>/{f="seq_"++d} {print > f}' < ${sequencesFile}
    """
}

/*
 * Process 2 for reversing the sequences
 */
process reverseSequence {
    tag { "${seq}" }

    publishDir "output"
    label 'ignorefail'
    
    input:
    path seq

    output:
    path "all.rev"

    script:
    """
    	cat ${seq} | awk '{if (\$1~">") {print \$0} else system("echo " \$0 " |rev")}' > all.rev
    """
}

workflow flow1 {
    take: sequences

    main:
    splitted_seq        = splitSequences(sequences)
    rev_single_seq      = reverseSequence(splitted_seq)
}

workflow flow2 {
    take: sequences

    main:
    splitted_seq        = splitSequences(sequences).flatten()
    rev_single_seq      = reverseSequence(splitted_seq)
}

workflow {
   flow1(sequences_file)
   flow2(sequences_file)
}

and the nextflow.config file would become:

process {
    withLabel: 'ignorefail' {
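        // tasks with this label that fail are ignored instead of stopping the whole pipeline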
        errorStrategy = 'ignore'
    }
}


  • Now look at test2.nf.

Change this script and the config file, using a label to handle failing processes by retrying 3 times and increasing the allowed time at each attempt.

You can specify a very low time limit (5, 10 or 15 seconds) for the fastQC process so that it fails at the beginning.

Solution

The code should become:

#!/usr/bin/env nextflow


/* 
 * This code enables the new dsl of Nextflow. 
 */

nextflow.enable.dsl=2


/* 
 * NextFlow test pipe
 * @authors
 * Luca Cozzuto <lucacozzuto@gmail.com>
 * 
 */

/*
 * Input parameters: read pairs
 * Params are stored in the params.config file
 */

version                 = "1.0"
// this prevents a warning of undefined parameter
params.help             = false

// this prints the input parameters
log.info """
BIOCORE@CRG - N F TESTPIPE  ~  version ${version}
=============================================
reads                           : ${params.reads}
"""

// this prints the help in case you use --help parameter in the command line and it stops the pipeline
if (params.help) {
    log.info 'This is the Biocore\'s NF test pipeline'
    log.info 'Enjoy!'
    log.info '\n'
    exit 1
}

/*
 * Defining the output folders.
 */
fastqcOutputFolder    = "output_fastqc"
multiqcOutputFolder   = "output_multiQC"


/* Reading the file list and creating a "Channel": a queue that connects different processes.
 * The queue is consumed, so you cannot re-use a channel for different processes. 
 * If you need the same data for different processes you need to make more channels.
 */
 
Channel
    .fromPath( params.reads )  											 // read the files indicated by the wildcard                            
    .ifEmpty { error "Cannot find any reads matching: ${params.reads}" } // if empty, complains
    .set {reads_for_fastqc} 											 // make the channel "reads_for_fastqc"


/*
 * Process 1. Run FastQC on raw data. A process is the element for executing scripts / programs etc.
 */
process fastQC {
    publishDir fastqcOutputFolder  			// where (and whether) to publish the results
    tag { "${reads}" }  						// during the execution prints the indicated variable for follow-up
    label 'keep_trying'

    input:
    path reads   							// defines the input of the process; values are taken from a channel

    output:								// defines the output of the process (i.e. files) and sends it to a new channel
    path "*_fastqc.*"

    script:								// here is where the script / program is executed; basically it is the command line
    """
        fastqc ${reads} 
    """
}


/*
 * Process 2. Run multiQC on fastQC results
 */
process multiQC {
    publishDir multiqcOutputFolder, mode: 'copy' 	// this time do not link but copy the output file

    input:
    path (inputfiles)

    output:
    path("multiqc_report.html") 					// do not send the results to any channel

    script:
    """
    multiqc .
    """
}

workflow {
	fastqc_out = fastQC(reads_for_fastqc)
	multiQC(fastqc_out.collect())
}


workflow.onComplete { 
	println ( workflow.success ? "\nDone! Open the following report in your browser --> ${multiqcOutputFolder}/multiqc_report.html\n" : "Oops .. something went wrong" )
}

while the nextflow.config file would be:

includeConfig "$baseDir/params.config"

process {
    memory='0.6G'
    cpus='1'
    time='6h'

    withLabel: 'keep_trying' { 
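        // the allowed time grows with each attempt (10 s, 20 s, 30 s...); a failure triggers a retry, up to 3 times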
        time = { 10.second * task.attempt }
        errorStrategy = 'retry' 
        maxRetries = 3	
    } 	

}

process.container = 'biocorecrg/c4lwg-2018:latest'
singularity.cacheDir = "$baseDir/singularity"


Using public pipelines

As an example, we will use our software Master Of Pores, published in 2019 in Frontiers in Genetics.

This repository contains a collection of pipelines for processing Nanopore raw data (both cDNA and dRNA-Seq), detecting putative RNA modifications and estimating RNA polyA tail sizes.

Clone the pipeline together with the submodules. The submodules contain Nextflow modules that will be described later.

git clone --depth 1 --recurse-submodules https://github.com/biocorecrg/MOP2.git

Cloning into 'MoP2'...
remote: Enumerating objects: 113, done.
remote: Counting objects: 100% (113/113), done.
remote: Compressing objects: 100% (99/99), done.
remote: Total 113 (delta 14), reused 58 (delta 3), pack-reused 0
Receiving objects: 100% (113/113), 21.87 MiB | 5.02 MiB/s, done.
Resolving deltas: 100% (14/14), done.
Submodule 'BioNextflow' (https://github.com/biocorecrg/BioNextflow) registered for path 'BioNextflow'
Cloning into '/Users/lcozzuto/aaa/MoP2/BioNextflow'...
remote: Enumerating objects: 971, done.
remote: Counting objects: 100% (641/641), done.
remote: Compressing objects: 100% (456/456), done.
remote: Total 971 (delta 393), reused 362 (delta 166), pack-reused 330
Receiving objects: 100% (971/971), 107.51 MiB | 5.66 MiB/s, done.
Resolving deltas: 100% (560/560), done.
Submodule path 'BioNextflow': checked out '0473d7f177ce718477b852b353894b71a9a9a08b'

Let’s inspect the folder MoP2.

ls MoP2

anno           conf            deeplexicon             docs       INSTALL.sh
mop_consensus  mop_preprocess  nextflow.global.config  README.md  TODO.md
BioNextflow    data            docker                  img        local_modules.nf
mop_mod        mop_tail        outdirs.nf              terraform

There are different pipelines bundled in a single repository: mop_preprocess, mop_mod, mop_tail and mop_consensus. Let's inspect the folder mop_preprocess, which contains the Nextflow pipeline mop_preprocess.nf. This pipeline pre-processes raw fast5 files generated by Nanopore instruments. Notice the presence of the folder bin: it contains a number of custom scripts that can be used by the pipeline without storing them inside containers. This provides a practical solution for using programs with restrictive licenses that prevent code redistribution.

cd MoP2
ls mop_preprocess/bin/

RNA_to_DNA_fq.py        extract_sequence_from_fastq.py  fast5_type.py
bam2stats.py            fast5_to_fastq.py

The basecaller Guppy cannot be redistributed, so we added an INSTALL.sh script that the user has to run to download the Guppy executables and place them inside the bin folder.

sh INSTALL.sh

INSTALLING GUPPY VERSION 3.4.5
[...]
ont-guppy_3.4.5_linux64.tar. 100%[============================================>] 363,86M  5,59MB/s    in 65s

2021-11-04 18:38:58 (5,63 MB/s) - ‘ont-guppy_3.4.5_linux64.tar.gz’ saved [381538294/381538294]

x ont-guppy/bin/
x ont-guppy/bin/guppy_basecall_server
x ont-guppy/bin/guppy_basecaller
[...]

We can check what is inside bin.

cd mop_preprocess

ls bin/

MINIMAP2_LICENSE                        libboost_system.so.1.66.0
bam2stats.py                            libboost_thread.so
extract_sequence_from_fastq.py          libboost_thread.so.1.66.0
fast5_to_fastq.py                       libcrypto.so
fast5_type.py                           libcrypto.so.1.0.1e
guppy_aligner                           libcrypto.so.10
guppy_barcoder                          libcurl.so
[...]

It is always a good idea to bundle your pipeline with a small test dataset so that others can test the pipeline once it is installed. This is also useful for continuous integration (CI), where each commit to GitHub triggers a test run that alerts you in case of failure. Let's inspect the params.config file, which points to a small dataset contained in the repository (the data and anno folders):

params {
    conffile            = "final_summary_01.txt"
    fast5               = "$baseDir/../data/**/*.fast5"
    fastq               = ""

    reference           = "$baseDir/../anno/yeast_rRNA_ref.fa.gz"
    annotation          = ""
    ref_type            = "transcriptome"

    pars_tools          = "drna_tool_splice_opt.tsv"
    output              = "$baseDir/output"
    qualityqc           = 5
    granularity         = 1

    basecalling         = "guppy"
    GPU                 = "OFF"
    demultiplexing      = "NO"
    demulti_fast5       = "NO"

    filtering           = "nanoq"

    mapping             = "graphmap"
    counting            = "nanocount"
    discovery           = "NO"

    cram_conv           = "YES"
    subsampling_cram    = 50

    saveSpace           = "NO"

    email               = "lucacozzuto@crg.es"
}

Let's now run the pipeline following the instructions in the README. As you can see, we need to enter the folder of the specific pipeline we want to run.

cd mop_preprocess
nextflow run mop_preprocess.nf -with-docker -bg -profile local > log.txt

We can now inspect the log.txt file

tail -f log.txt

N E X T F L O W  ~  version 21.10.6
Launching `mop_preprocess.nf` [furious_church] - revision: bbe0976770


╔╦╗╔═╗╔═╗  ╔═╗┬─┐┌─┐┌─┐┬─┐┌─┐┌─┐┌─┐┌─┐┌─┐
║║║║ ║╠═╝  ╠═╝├┬┘├┤ ├─┘├┬┘│ ││  ├┤ └─┐└─┐
╩ ╩╚═╝╩    ╩  ┴└─└─┘┴  ┴└─└─┘└─┘└─┘└─┘└─┘

====================================================
BIOCORE@CRG Master of Pores 2. Preprocessing - N F  ~  version 2.0
====================================================

conffile.                 : final_summary_01.txt

fast5                     : /Users/lcozzuto/aaa/MOP2/mop_preprocess/../data/**/*.fast5
fastq                     :

reference                 : /Users/lcozzuto/aaa/MOP2/mop_preprocess/../anno/yeast_rRNA_ref.fa.gz
annotation                :

granularity.              : 1

ref_type                  : transcriptome
pars_tools                : drna_tool_splice_opt.tsv

output                    : /Users/lcozzuto/aaa/MOP2/mop_preprocess/output

GPU                       : OFF

basecalling               : guppy
demultiplexing            : NO
demulti_fast5             : NO

filtering                 : nanoq
mapping                   : graphmap

counting                  : nanocount
discovery                 : NO

cram_conv                 : YES
subsampling_cram          : 50


saveSpace                 : NO
email                     : lucacozzuto@crg.es

Sending the email to lucacozzuto@crg.es

----------------------CHECK TOOLS -----------------------------
basecalling : guppy
> demultiplexing will be skipped
mapping : graphmap
filtering : nanoq
counting : nanocount
> discovery will be skipped
--------------------------------------------------------------
[73/6734e3] Submitted process > preprocess_flow:checkRef (Checking yeast_rRNA_ref.fa.gz)
[a0/75728f] Submitted process > flow1:GUPPY_BASECALL:baseCall (mod---1)
[68/4836ed] Submitted process > flow1:GUPPY_BASECALL:baseCall (wt---2)
[af/1f666e] Submitted process > flow1:NANOQ_FILTER:filter (wt---2)
[eb/4163e4] Submitted process > preprocess_flow:RNA2DNA (wt---2)
[51/2c755e] Submitted process > preprocess_flow:GRAPHMAP:map (wt---2)
[f5/a236b1] Submitted process > flow1:NANOQ_FILTER:filter (mod---1)
[9a/de49df] Submitted process > preprocess_flow:MinIONQC (wt)
[23/665791] Submitted process > preprocess_flow:MinIONQC (mod)
[1b/88879b] Submitted process > preprocess_flow:RNA2DNA (mod---1)
[79/a1ee98] Submitted process > preprocess_flow:concatenateFastQFiles (wt)
[57/02c2aa] Submitted process > preprocess_flow:concatenateFastQFiles (mod)
[22/6f493a] Submitted process > preprocess_flow:FASTQC:fastQC (wt.fq.gz)
[ad/b320ed] Submitted process > preprocess_flow:GRAPHMAP:map (mod---1)
[df/38fcda] Submitted process > preprocess_flow:FASTQC:fastQC (mod.fq.gz)
[65/66ff77] Submitted process > preprocess_flow:SAMTOOLS_CAT:catAln (mod)
[7f/21426f] Submitted process > preprocess_flow:SAMTOOLS_CAT:catAln (wt)
[c9/b71a9d] Submitted process > preprocess_flow:SAMTOOLS_SORT:sortAln (mod)
[6d/8582b7] Submitted process > preprocess_flow:SAMTOOLS_SORT:sortAln (wt)
[c0/12d9d7] Submitted process > preprocess_flow:bam2stats (wt)
[0c/161864] Submitted process > preprocess_flow:NANOPLOT_QC:MOP_nanoPlot (wt)
[de/778750] Submitted process > preprocess_flow:AssignReads (wt)
[32/ea79c9] Submitted process > preprocess_flow:SAMTOOLS_INDEX:indexBam (wt)
[51/e85eb2] Submitted process > preprocess_flow:bam2stats (mod)
[16/4a17f8] Submitted process > preprocess_flow:SAMTOOLS_INDEX:indexBam (mod)
[20/e6b19f] Submitted process > preprocess_flow:AssignReads (mod)
[5b/81b33d] Submitted process > preprocess_flow:NANOPLOT_QC:MOP_nanoPlot (mod)
[8c/d3efe9] Submitted process > preprocess_flow:countStats (wt)
[0a/84b180] Submitted process > preprocess_flow:bam2Cram (wt)
[95/5fee6f] Submitted process > preprocess_flow:NANOCOUNT:nanoCount (wt)
[15/710624] Submitted process > preprocess_flow:joinAlnStats (joining aln stats)
[3c/287861] Submitted process > preprocess_flow:bam2Cram (mod)
[50/f50978] Submitted process > preprocess_flow:NANOCOUNT:nanoCount (mod)
[d4/49c944] Submitted process > preprocess_flow:countStats (mod)
[db/ec149f] Submitted process > preprocess_flow:joinCountStats (joining count stats)
[61/7c5e3d] Submitted process > preprocess_flow:MULTIQC:makeReport
Pipeline BIOCORE@CRG Master of Pore - preprocess completed!
Started at  2022-05-09T11:50:28.676+02:00
Finished at 2022-05-09T12:09:07.543+02:00
Time elapsed: 18m 39s
Execution status: OK

You may have noticed that we specified a profile here. A profile indicates where and how to launch the pipeline, and several possibilities are available (a cluster, the local computer, etc.). We will show this in detail later. If you skip it, the default configuration is used, which is likely too heavy for our simple environment.
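
As a preview, a profile is just a named set of configuration attributes that is selected at runtime with -profile. A minimal sketch of what such a block can look like in a config file (the profile names and the slurm executor here are illustrative, not MoP2's actual profiles):

profiles {
    standard {
        process.executor = 'local'     // run the tasks on the local computer
    }
    cluster {
        process.executor = 'slurm'     // submit the tasks to a SLURM cluster
    }
}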

EXERCISE

  • Look at the documentation of Master Of Pores and change the default mapper and filtering tool. Try to skip some steps.

Solution

The parameters can be overridden on the fly like this:

nextflow run mop_preprocess.nf -with-docker -bg --mapping minimap2 --filtering nanofilt  > log.txt


Using Singularity

We recommend using Singularity instead of Docker in HPC environments. This can be done using the Nextflow parameter -with-singularity, without changing the code.

Nextflow will take care of pulling, converting and storing the image for you. This will be done only once and then Nextflow will use the stored image for further executions.

Within an AWS main node both Docker and Singularity are available, while within the AWS Batch system only Docker is available.

nextflow run test2.nf -with-singularity -bg > log

tail -f log
N E X T F L O W  ~  version 20.10.0
Launching `test2.nf` [soggy_miescher] - revision: 5a0a513d38

BIOCORE@CRG - N F TESTPIPE  ~  version 1.0
=============================================
reads                           : /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/test2/../../testdata/*.fastq.gz

Pulling Singularity image docker://biocorecrg/c4lwg-2018:latest [cache /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/test2/singularity/biocorecrg-c4lwg-2018-latest.img]
[da/eb7564] Submitted process > fastQC (B7_H3K4me1_s_chr19.fastq.gz)
[f6/32dc41] Submitted process > fastQC (B7_input_s_chr19.fastq.gz)
...

Let’s inspect the folder singularity:

ls singularity/
biocorecrg-c4lwg-2018-latest.img

This Singularity image can be used to execute the code outside the pipeline in exactly the same way as inside the pipeline.
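
For example, a quick sketch of running one of the bundled tools directly, assuming you are in the test2 folder where the image was cached:

singularity exec singularity/biocorecrg-c4lwg-2018-latest.img fastqc --version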

Sometimes we may want to launch only a specific job, for example because it failed or to run a test. For that, we can go to the corresponding temporary folder; for example, one of the fastQC temporary folders:

cd work/da/eb7564*/

Inspecting the .command.run file shows us this piece of code:

...

nxf_launch() {
    set +u; env - PATH="$PATH" SINGULARITYENV_TMP="$TMP" SINGULARITYENV_TMPDIR="$TMPDIR" singularity exec /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/test2/singularity/biocorecrg-c4lwg-2018-latest.img /bin/bash -c "cd $PWD; /bin/bash -ue /home/ec2-user/git/CoursesCRG_Containers_Nextflow_May_2021/nextflow/test2/work/da/eb756433aa0881d25b20afb5b1366e/.command.sh"
}
...

This means that Nextflow is running the code by using the singularity exec command.

Thus we can launch this command outside the pipeline (locally):

bash .command.run

Started analysis of B7_H3K4me1_s_chr19.fastq.gz
Approx 5% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 10% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 15% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 20% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 25% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 30% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 35% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 40% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 45% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 50% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 55% complete for B7_H3K4me1_s_chr19.fastq.gz
Approx 60% complete for B7_H3K4me1_s_chr19.fastq.gz
...

If you have to submit a job to an HPC cluster, you need to use the corresponding submission program, such as qsub or sbatch.

qsub .command.run
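
or, on a SLURM-based cluster:

sbatch .command.run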