Sync with nf-core #170

Merged: 9 commits, May 2, 2022
14 changes: 8 additions & 6 deletions .github/workflows/awsfulltest.yml
@@ -1,23 +1,25 @@
name: nf-core AWS full size tests
# This workflow is triggered on published releases.
# It can be additionally triggered manually with GitHub actions workflow dispatch button.
-# It runs the -profile 'test_lfq' on AWS batch
+# It runs the -profiles 'test_lfq' 'test_tmt' and 'test_dia' on AWS batch

on:
release:
types: [published]
workflow_dispatch:

jobs:
run-tower:
name: Run AWS full tests
if: github.repository == 'nf-core/quantms'
runs-on: ubuntu-latest
+# Do a full-scale run with data from each acquisition/quantification mode
+strategy:
+matrix:
+mode: ["lfq", "tmt", "dia"]
steps:
- name: Launch workflow via tower
uses: nf-core/tower-action@v3
# TODO nf-core: You can customise AWS full pipeline tests as required
# Add full size test data (but still relatively small datasets for few samples)
# on the `test_lfq.config` test runs with only one set of parameters

with:
workspace_id: ${{ secrets.TOWER_WORKSPACE_ID }}
@@ -26,9 +28,9 @@ jobs:
workdir: s3://${{ secrets.AWS_S3_BUCKET }}/work/quantms/work-${{ github.sha }}
parameters: |
{
-"outdir": "s3://${{ secrets.AWS_S3_BUCKET }}/quantms/results-${{ github.sha }}"
+"outdir": "s3://${{ secrets.AWS_S3_BUCKET }}/quantms/results-${{ github.sha }}/mode_${{ matrix.mode }}"
}
-profiles: test_lfq,aws_tower
+profiles: test_${{ matrix.mode }},aws_tower
nextflow_config: |
process.errorStrategy = 'retry'
process.maxRetries = 3
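The matrix strategy above fans the single Tower launch out into one job per acquisition mode, each with its own test profile and results subdirectory. A rough shell sketch (hypothetical, not GitHub Actions code) of how the `profiles` and `outdir` values expand:

```shell
# Hypothetical sketch: how the matrix above expands into per-mode settings.
# Each of the three jobs gets its own test profile and results subdirectory.
expand_matrix() {
    for mode in lfq tmt dia; do
        echo "profiles=test_${mode},aws_tower outdir=results/mode_${mode}"
    done
}
expand_matrix
```

Keeping each mode's results under `mode_<name>` also stops the three concurrent jobs from overwriting one another's output in the shared S3 prefix.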
30 changes: 16 additions & 14 deletions CHANGELOG.md
@@ -3,17 +3,17 @@
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-## [1.0.0] nfcore/quantms - [18/03/2022] - Havana
+## [1.0] nfcore/quantms - [05/02/2022] - Havana

Initial release of nf-core/quantms, created with the [nf-core](https://nf-co.re/) template.

### `Added`

- New pipeline for DDA-LFQ data analysis
- New pipeline for DDA-ISO data analysis
-- New datasets for DDA-LFQ and DDA-ISO data analsysis
+- New datasets for DDA-LFQ and DDA-ISO data analysis
- Documentation added for DDA pipeline
-- First pipeline for DIA-LFQ data analsysis
+- First pipeline for DIA-LFQ data analysis

### `Fixed`

@@ -23,16 +23,18 @@ Initial release of nf-core/quantms, created with the [nf-core](https://nf-co.re/

The pipeline is using Nextflow DSL2, each process will be run with its own [Biocontainer](https://biocontainers.pro/#/registry). This means that on occasion it is entirely possible for the pipeline to be using different versions of the same tool. However, the overall software dependency changes compared to the last release have been listed below for reference.

-| Dependency | Version |
-| ---------------- | ---------- |
-| `comet` | 2021010 |
-| `msgf+` | 2022.01.07 |
-| `openms` | 2.8.0 |
-| `sdrf-pipelines` | 0.0.21 |
-| `percolator` | 3.5 |
-| `pmultiqc` | 0.0.10 |
-| `luciphor` | 2020_04_03 |
-| `dia-nn` | 1.8.1 |
-| `msstats` | 4.2.0 |
+| Dependency | Version |
+| --------------------- | ---------- |
+| `thermorawfileparser` | 1.3.4 |
+| `comet` | 2021010 |
+| `msgf+` | 2022.01.07 |
+| `openms` | 2.8.0 |
+| `sdrf-pipelines` | 0.0.21 |
+| `percolator` | 3.5 |
+| `pmultiqc` | 0.0.11 |
+| `luciphor` | 2020_04_03 |
+| `dia-nn` | 1.8.1 |
+| `msstats` | 4.2.0 |
+| `msstatstmt` | 2.2.0 |

### `Deprecated`
13 changes: 11 additions & 2 deletions modules/local/preprocess_expdesign.nf
@@ -1,19 +1,28 @@
// Fixing file endings only necessary if the experimental design is user-specified
// TODO can we combine this with another step? Feels like a waste to spawn a worker for this.
// Maybe the renaming can be done in the rawfileconversion step? Or check if the OpenMS tools
// accept different file endings already?
process PREPROCESS_EXPDESIGN {
label 'process_very_low'
label 'process_single_thread'

container "frolvlad/alpine-bash"

input:
path design

output:
path "experimental_design.tsv", emit: ch_expdesign
path "process_experimental_design.tsv", emit: process_ch_expdesign
path "config.tsv", emit: ch_config

script:

"""
# since we know that we will need to convert from raw to mzML for all tools that need the design (i.e., OpenMS tools)
# we edit the design here and change the endings.
sed 's/.raw\\t/.mzML\\t/I' $design > experimental_design.tsv
a=\$(grep -n '^\$' $design | head -n1| awk -F":" '{print \$1}'); sed -e ''"\${a}"',\$d' $design > process_experimental_design.tsv

# here we extract the filenames and fake an empty config (since the config values will be deduced from the workflow params)
a=\$(grep -n '^\$' $design | head -n1| awk -F":" '{print \$1}'); sed -e ''"\${a}"',\$d' $design > config.tsv
"""
}
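The script block above leans on two one-liners: a case-insensitive `sed` substitution that rewrites `.raw` endings to `.mzML` (the trailing tab anchors the match to the end of the filename column), and a `grep -n '^$' | head -n1 | awk` pipeline that finds the number of the first blank line so that `sed '<n>,$d'` can delete everything from there to the end of the file. A standalone sketch on a made-up two-section design file (GNU sed assumed for the `I` flag):

```shell
# Build a tiny fake design: a file table, a blank line, then a second table.
printf 'Fraction_Group\tSpectra_Filepath\t\n1\tsample1.RAW\t\n\nfile\tcondition\n' > design.tsv

# 1) Swap raw endings for .mzML, case-insensitively (GNU sed 'I' flag).
sed 's/.raw\t/.mzML\t/I' design.tsv > experimental_design.tsv

# 2) Find the first blank line's number, then delete from it to the end,
#    keeping only the first table.
a=$(grep -n '^$' design.tsv | head -n1 | awk -F":" '{print $1}')
sed -e "${a},\$d" design.tsv > process_experimental_design.tsv
```

Note that in the process itself the truncation command is run twice, once for `process_experimental_design.tsv` and once for `config.tsv`; the second copy is only a placeholder whose values are later deduced from the workflow parameters.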
1 change: 1 addition & 0 deletions modules/local/thermorawfileparser/main.nf
@@ -2,6 +2,7 @@ process THERMORAWFILEPARSER {
tag "$meta.id"
label 'process_low'
label 'process_single_thread'
+label 'error_retry'

conda (params.enable_conda ? "conda-forge::mono bioconda::thermorawfileparser=1.3.4" : null)
container "${ workflow.containerEngine == 'singularity' && !task.ext.singularity_pull_docker_container ?
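The new `error_retry` label (together with the `process.errorStrategy = 'retry'` and `process.maxRetries = 3` settings passed to the AWS test workflow above) makes transient raw-file conversion failures re-run instead of killing the pipeline. A minimal shell sketch of the behaviour, with a hypothetical `retry` helper:

```shell
# Hypothetical helper: run a command, re-running it on failure,
# up to $1 attempts in total.
retry() {
    max=$1; shift
    attempt=1
    until "$@"; do
        if [ "$attempt" -ge "$max" ]; then
            return 1          # exhausted all attempts
        fi
        attempt=$((attempt + 1))
    done
}
```

In Nextflow itself the label only selects a configuration block and the retry loop lives in the executor, so this is purely an illustration of the semantics, not pipeline code.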
8 changes: 4 additions & 4 deletions subworkflows/local/create_input_channel.nf
@@ -20,14 +20,14 @@ workflow CREATE_INPUT_CHANNEL {
if (is_sdrf.toString().toLowerCase().contains("true")) {
SDRFPARSING ( ch_sdrf_or_design )
ch_versions = ch_versions.mix(SDRFPARSING.out.version)
-ch_in_design = SDRFPARSING.out.ch_sdrf_config_file
+ch_config = SDRFPARSING.out.ch_sdrf_config_file

ch_expdesign = SDRFPARSING.out.ch_expdesign
} else {
PREPROCESS_EXPDESIGN( ch_sdrf_or_design )
-ch_in_design = PREPROCESS_EXPDESIGN.out.process_ch_expdesign
+ch_config = PREPROCESS_EXPDESIGN.out.ch_config

ch_expdesign = PREPROCESS_EXPDESIGN.out.ch_expdesign
}

Set enzymes = []
@@ -38,7 +38,7 @@ workflow CREATE_INPUT_CHANNEL {
wrapper.labelling_type = ""
wrapper.acquisition_method = ""

-ch_in_design.splitCsv(header: true, sep: '\t')
+ch_config.splitCsv(header: true, sep: '\t')
.map { create_meta_channel(it, is_sdrf, enzymes, files, wrapper) }
.branch {
ch_meta_config_dia: it[0].acquisition_method.contains("dia")
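The `branch` operator above routes each parsed config row into a DIA or DDA channel according to its acquisition method. A rough shell analogue (made-up column names and data, not pipeline code) that splits a TSV the same way:

```shell
# Fake config table: a header row plus one row per input file.
printf 'URI\tAcquisitionMethod\nf1.mzML\tdda\nf2.mzML\tdia\nf3.mzML\tdda\n' > config.tsv

# Route each data row by its acquisition-method column, like the branch above.
awk -F'\t' 'NR > 1 { print $0 > ($2 ~ /dia/ ? "dia.tsv" : "dda.tsv") }' config.tsv
```

Downstream, the DIA rows go to DIA-NN and the DDA rows to the OpenMS identification subworkflow, which is why the split happens this early in channel creation.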