pull nf-core and update pmultiqc #152

Merged: 20 commits, Apr 9, 2022
6 changes: 3 additions & 3 deletions README.md
@@ -1,4 +1,4 @@
-# ![nf-core/quantms](docs/images/nf-core/quantms_logo_light.png#gh-light-mode-only) ![nf-core/quantms](docs/images/nf-core/quantms_logo_dark.png#gh-dark-mode-only)
+# ![nf-core/quantms](docs/images/nf-core-quantms_logo_light.png#gh-light-mode-only) ![nf-core/quantms](docs/images/nf-core-quantms_logo_dark.png#gh-dark-mode-only)

[![GitHub Actions CI Status](https://github.com/nf-core/quantms/workflows/nf-core%20CI/badge.svg)](https://github.com/nf-core/quantms/actions?query=workflow%3A%22nf-core+CI%22)
[![GitHub Actions Linting Status](https://github.com/nf-core/quantms/workflows/nf-core%20linting/badge.svg)](https://github.com/nf-core/quantms/actions?query=workflow%3A%22nf-core+linting%22)
@@ -77,7 +77,7 @@ DIA-LFQ:
3. Download the pipeline and test it on a minimal dataset with a single command:

```console
-nextflow run nf-core/quantms -profile test,YOURPROFILE --input project.sdrf.tsv --database protein.fasta
+nextflow run nf-core/quantms -profile test,YOURPROFILE --input project.sdrf.tsv --database protein.fasta --outdir <OUTDIR>
```

Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (`YOURPROFILE` in the example command above). You can chain multiple config profiles in a comma-separated string.
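For example, chaining the bundled test-data profile with a Docker software profile might look like the following (`test` and `docker` are illustrative here; which profiles exist depends on your setup):

```console
nextflow run nf-core/quantms -profile test,docker --input project.sdrf.tsv --database protein.fasta --outdir ./results
```

In Nextflow, profiles are applied in the order given, so a later profile in the comma-separated list can override settings from an earlier one.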
@@ -97,7 +97,7 @@ DIA-LFQ:
<!-- TODO nf-core: Update the example "typical command" below used to run the pipeline -->

```console
-nextflow run nf-core/quantms -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> --input project.sdrf.tsv --database database.fasta
+nextflow run nf-core/quantms -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> --input project.sdrf.tsv --database database.fasta --outdir <OUTDIR>
```

## Documentation
18 changes: 11 additions & 7 deletions conf/dev.config
@@ -1,11 +1,15 @@
/*
-* -------------------------------------------------
-* Nextflow config file for running with nightly dev containers
-* -------------------------------------------------
-* Only overwrites the container. See dev/ folder for building instructions.
-* Use as follows:
-* nextflow run nf-core/quantms -profile dev,<docker/singularity>
-*/
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Nextflow config file for running with nightly dev. containers (mainly for OpenMS)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Only overwrites the container. E.g. uses the OpenMS nightly executable and thirdparty
+containers. TODO Currently does nothing, we need to set it up.
+Use as follows:
+nextflow run nf-core/quantms -profile test,<docker/singularity> [--outdir <OUTDIR>]
+-------------------------------------------------------------------------------------------
+*/

params {
config_profile_name = 'Development profile'
33 changes: 9 additions & 24 deletions conf/modules.config
@@ -160,8 +160,8 @@ process {
]
}

-// EPIFILTER
-withName: 'NFCORE_QUANTMS:QUANTMS:TMT:.*:EPIFILTER' {
+// IDFILTER on PROTEIN LEVEL
+withName: 'NFCORE_QUANTMS:QUANTMS:TMT:PROTEININFERENCE:IDFILTER' {
ext.args = [
"-score:prot \"$params.protein_level_fdr_cutoff\"",
"-delete_unreferenced_peptide_hits",
@@ -175,27 +175,22 @@ process {
]
}

-//IDCONFLICTRESOLVER
-withName: 'NFCORE_QUANTMS:QUANTMS:TMT:PROTEINQUANT:IDCONFLICTRESOLVER' {
-ext.args = "-debug $params.conflict_resolver_debug"
-}
-
-//PROTEINQUANTIFIER
+// PROTEINQUANTIFIER
withName: 'NFCORE_QUANTMS:QUANTMS:TMT:PROTEINQUANT:PROTEINQUANTIFIER' {
-ext.args = "-debug $params.proteinquant_debug"
+ext.args = "-debug 0"
}

-// MSstatsConverter
+// MSSTATSCONVERTER
withName: 'NFCORE_QUANTMS:QUANTMS:TMT:PROTEINQUANT:MSSTATSCONVERTER' {
-ext.args = "-debug $params.msstatsconverter_debug"
+ext.args = "-debug 0"
}
}

-if (params.protein_inference_bayesian) {
+if (params.protein_inference_method == "bayesian") {
process {
// EPIFANY
withName: 'NFCORE_QUANTMS:QUANTMS:.*:EPIFANY' {
-ext.args = "-debug $params.protein_inference_debug"
+ext.args = "-keep_best_psm_only false -debug $params.protein_inference_debug"
publishDir = [
path: { "${params.outdir}/epifany" },
mode: params.publish_dir_mode,
@@ -221,16 +216,6 @@ if (params.protein_inference_bayesian) {

process {

-// INDEXPEPTIDES
-withName: 'NFCORE_QUANTMS:QUANTMS:.*:INDEXPEPTIDES' {
-publishDir = [
-path: { "${params.outdir}/indexpeptides" },
-mode: params.publish_dir_mode,
-pattern: '*.log',
-saveAs: { filename -> filename.equals('versions.yml') ? null : filename }
-]
-}

// IDFILTER
withName: 'NFCORE_QUANTMS:QUANTMS:.*:ID:PSMFDRCONTROL:IDFILTER' {
ext.args = "-score:pep \"$params.psm_pep_fdr_cutoff\""
@@ -244,7 +229,7 @@ process {

// PROTEOMICSLFQ
withName: 'NFCORE_QUANTMS:QUANTMS:LFQ:PROTEOMICSLFQ' {
-ext.args = "-debug $params.inf_quant_debug"
+ext.args = "-debug $params.plfq_debug"
}

// DIA-NN
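One practical consequence of the `ext.args` convention used throughout this file: a user can override any of these per-module arguments from their own config, without patching the pipeline. A minimal sketch under that assumption (the selector name is taken from the diff above; the `-debug 10` value is hypothetical):

```groovy
// my_overrides.config, supplied with:  nextflow run nf-core/quantms ... -c my_overrides.config
process {
    withName: 'NFCORE_QUANTMS:QUANTMS:LFQ:PROTEOMICSLFQ' {
        // Hypothetical example: turn verbose debug output back on for this one step
        ext.args = "-debug 10"
    }
}
```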
16 changes: 9 additions & 7 deletions conf/test.config
@@ -1,24 +1,26 @@
/*
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Nextflow config file for running minimal tests
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Nextflow config file for running minimal tests (ISO)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Defines input files and everything required to run a fast and simple pipeline test.
Use as follows:
-nextflow run nf-core/quantms -profile test,<docker/singularity>
+nextflow run nf-core/quantms -profile test,<docker/singularity> [--outdir <OUTDIR>]
-----------------------------------------------------------------------------------------
+-------------------------------------------------------------------------------------------
*/

params {
-config_profile_name = 'Test profile'
-config_profile_description = 'Minimal test dataset to check pipeline function'
+config_profile_name = 'Test profile DDA ISO'
+config_profile_description = 'Minimal test dataset to check pipeline function of the isotopic labelling branch of the pipeline'

// Limit resources so that this can run on GitHub Actions
max_cpus = 2
max_memory = '6.GB'
max_time = '6.h'

+outdir = "./results_iso"

// Input data
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/tmt_ci/PXD000001.sdrf.tsv'

24 changes: 15 additions & 9 deletions conf/test_dia.config
@@ -1,20 +1,26 @@
/*
-* -------------------------------------------------
-* Nextflow config file for running tests
-* -------------------------------------------------
-* Defines bundled input files and everything required
-* to run a fast and simple test. Use as follows:
-* nextflow run nf-core/quantms -profile test,<docker/singularity/podman>
-*/
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Nextflow config file for running minimal tests (DIA)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Defines input files and everything required to run a fast and simple test.
+Use as follows:
+nextflow run nf-core/quantms -profile test_dia,<docker/singularity> [--outdir <OUTDIR>]
+------------------------------------------------------------------------------------------------
+*/

params {
-config_profile_name = 'Test profile'
-config_profile_description = 'Minimal test dataset to check pipeline function'
+config_profile_name = 'Test profile for DIA'
+config_profile_description = 'Minimal test dataset to check pipeline function for the data-independent acquisition pipeline branch.'

// Limit resources so that this can run on GitHub Actions
max_cpus = 2
max_memory = 6.GB
max_time = 48.h

+outdir = './results_dia'

// Input data
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/lfq_ci/PXD026600/PXD026600.sdrf.tsv'
database = 'ftp://massive.ucsd.edu/MSV000087597/sequence/REF_EColi_K12_UPS1_combined.fasta'
8 changes: 5 additions & 3 deletions conf/test_full.config
@@ -5,14 +5,16 @@
Defines input files and everything required to run a full size pipeline test.
Use as follows:
-nextflow run nf-core/quantms -profile test_full,<docker/singularity> --outdir <OUTDIR>
+nextflow run nf-core/quantms -profile test_full,<docker/singularity> [--outdir <OUTDIR>]
----------------------------------------------------------------------------------------
*/

params {
-config_profile_name = 'Full test profile'
-config_profile_description = 'Full test dataset to check pipeline function and sanity of results'
+config_profile_name = 'Full test profile DDA ISO'
+config_profile_description = 'Full test dataset in isotopic labelling mode to check pipeline function and sanity of results'

+outdir = "./results_iso_full"

// Input data for full size test
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/tmt_ci/PXD000001.sdrf.tsv'
26 changes: 16 additions & 10 deletions conf/test_lfq.config
@@ -1,20 +1,26 @@
/*
-* -------------------------------------------------
-* Nextflow config file for running tests
-* -------------------------------------------------
-* Defines bundled input files and everything required
-* to run a fast and simple test. Use as follows:
-* nextflow run nf-core/quantms -profile test,<docker/singularity/podman>
-*/
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Nextflow config file for running minimal tests (LFQ)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Defines input files and everything required to run a fast and simple test.
+Use as follows:
+nextflow run nf-core/quantms -profile test_lfq,<docker/singularity> [--outdir <OUTDIR>]
+------------------------------------------------------------------------------------------------
+*/

params {
-config_profile_name = 'Test profile'
-config_profile_description = 'Minimal test dataset to check pipeline function'
+config_profile_name = 'Test profile for DDA LFQ'
+config_profile_description = 'Minimal test dataset to check pipeline function of the label-free quantification branch of the pipeline'

// Limit resources so that this can run on GitHub Actions
max_cpus = 2
max_memory = 6.GB
max_time = 48.h

+outdir = "./results_lfq"

// Input data
labelling_type = "label free sample"
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/lfq_ci/BSA/BSA_design_urls.tsv'
@@ -25,5 +31,5 @@ params {
enable_qc = true
add_triqler_output = true
protein_level_fdr_cutoff = 1.0
-acqusition_method = "dda"
+acquisition_method = "dda"
}
24 changes: 14 additions & 10 deletions conf/test_localize.config
@@ -1,22 +1,26 @@
/*
-* -------------------------------------------------
-* Nextflow config file for running tests with
-* modification localization
-* -------------------------------------------------
-* Defines bundled input files and everything required
-* to run a fast and simple test. Use as follows:
-* nextflow run nf-core/quantms -profile test_localize,<docker/singularity/podman>
-*/
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Nextflow config file for running minimal tests (LFQ) with mod. localization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Defines input files and everything required to run a fast and simple test.
+Use as follows:
+nextflow run nf-core/quantms -profile test_localize,<docker/singularity> [--outdir <OUTDIR>]
+----------------------------------------------------------------------------------------------------
+*/

params {
-config_profile_name = 'Test phospho-localization profile'
-config_profile_description = 'Minimal test dataset to check pipeline function for phospho-localization, SDRF parsing and ConsensusID.'
+config_profile_name = 'Test PTM-localization profile'
+config_profile_description = 'Minimal test dataset to check pipeline function for PTM-localization, SDRF parsing and ConsensusID.'

// Limit resources so that this can run on Travis
max_cpus = 2
max_memory = 6.GB
max_time = 1.h

+outdir = "./results_localize"

// Input data
input = 'https://raw.githubusercontent.com/nf-core/test-datasets/proteomicslfq/testdata/phospho/test_phospho.sdrf'
database = 'https://raw.githubusercontent.com/nf-core/test-datasets/quantms/testdata/lfq_ci_phospho/pools_crap_targetdecoy.fasta'
9 changes: 5 additions & 4 deletions modules/local/diannsearch/main.nf
@@ -12,9 +12,9 @@ process DIANNSEARCH {
file(diann_config)

output:
-path "report.tsv", emit: report
-path "report.stats.tsv", emit: report_stat
-path "report.log.txt", emit: log
+path "diann_report.tsv", emit: report
+path "diann_report.stats.tsv", emit: report_stat
+path "diann_report.log.txt", emit: log
path "versions.yml", emit: version
path "*.tsv"

@@ -51,13 +51,14 @@
${mbr} \\
--reannotate \\
${normalize} \\
+--out diann_report.tsv \\
--verbose $params.diann_debug \\
> diann.log
cat <<-END_VERSIONS > versions.yml
"${task.process}":
-DIA-NN: 1.8.0
+DIA-NN: \$(diann 2>&1 | grep "DIA-NN" | grep -oP "(\\d*\\.\\d+\\.\\d+)|(\\d*\\.\\d+)")
END_VERSIONS
"""
}
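The switch above from a hard-coded `DIA-NN: 1.8.0` to a command substitution means `versions.yml` now records whatever version the container actually ships. As a rough sketch of what that `grep` pipeline does, runnable outside Nextflow (the banner line below is a stand-in, not verbatim DIA-NN output; inside the Nextflow script block the backslashes are doubled for escaping):

```shell
# Stand-in for the banner DIA-NN prints at startup; not verbatim tool output.
banner="DIA-NN 1.8.0 (Data-Independent Acquisition by Neural Networks)"

# Keep lines mentioning DIA-NN, then extract the first x.y.z (or x.y) version token.
echo "$banner" | grep "DIA-NN" | grep -oP "(\d*\.\d+\.\d+)|(\d*\.\d+)"
# prints: 1.8.0
```

Note that `grep -oP` requires GNU grep built with PCRE support; `grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?'` is a close portable equivalent.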
6 changes: 3 additions & 3 deletions modules/local/diannsearch/meta.yml
@@ -30,15 +30,15 @@ output:
- report:
type: file
description: Main report file. A text table containing precursor and protein IDs, as well as plenty of associated information. Most column names are self-explanatory.
-pattern: "report.tsv"
+pattern: "diann_report.tsv"
- report_stat:
type: file
description: Contains a number of QC metrics which can be used for data filtering, e.g. to exclude failed runs, or as a readout for method optimization.
-pattern: "report.stats.tsv"
+pattern: "diann_report.stats.tsv"
- log:
type: file
description: DIA-NN log file
-pattern: "report.log.txt"
+pattern: "diann_report.log.txt"
- version:
type: file
description: File containing software version
2 changes: 1 addition & 1 deletion modules/local/librarygeneration/main.nf
@@ -51,7 +51,7 @@ process LIBRARYGENERATION {
cat <<-END_VERSIONS > versions.yml
"${task.process}":
-DIA-NN: 1.8.0
+DIA-NN: \$(diann 2>&1 | grep "DIA-NN" | grep -oP "(\\d*\\.\\d+\\.\\d+)|(\\d*\\.\\d+)")
END_VERSIONS
"""
}
2 changes: 1 addition & 1 deletion modules/local/librarygeneration/meta.yml
@@ -45,4 +45,4 @@ output:
description: File containing software version
pattern: "*.{version.txt}"
authors:
-- "@Chengxin Dai"
+- "@daichengxin"
5 changes: 3 additions & 2 deletions modules/local/openms/consensusid/main.nf
@@ -2,11 +2,12 @@ process CONSENSUSID {
label 'process_medium'
// TODO could be easily parallelized
label 'process_single_thread'
+label 'openms'

conda (params.enable_conda ? "openms::openms=2.8.0" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-'https://depot.galaxyproject.org/singularity/openms-thirdparty:2.8.0--h9ee0642_0' :
-'quay.io/biocontainers/openms-thirdparty:2.8.0--h9ee0642_0' }"
+'https://depot.galaxyproject.org/singularity/openms:2.8.0--h7ca0330_1' :
+'quay.io/biocontainers/openms:2.8.0--h7ca0330_1' }"

input:
tuple val(meta), path(id_file), val(qval_score)
6 changes: 4 additions & 2 deletions modules/local/openms/decoydatabase/main.nf
@@ -1,9 +1,11 @@
process DECOYDATABASE {
label 'process_very_low'
+label 'openms'

conda (params.enable_conda ? "openms::openms=2.8.0" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
-'https://depot.galaxyproject.org/singularity/openms-thirdparty:2.8.0--h9ee0642_0' :
-'quay.io/biocontainers/openms-thirdparty:2.8.0--h9ee0642_0' }"
+'https://depot.galaxyproject.org/singularity/openms:2.8.0--h7ca0330_1' :
+'quay.io/biocontainers/openms:2.8.0--h7ca0330_1' }"

input:
path(db_for_decoy)