diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md index 0d9b7f8b..6a5ef862 100644 --- a/.github/CONTRIBUTING.md +++ b/.github/CONTRIBUTING.md @@ -1,12 +1,14 @@ -# nf-core/rnafusion Contributing Guidelines +# nf-core/rnafusion: Contributing Guidelines Hi there! Many thanks for taking an interest in improving nf-core/rnafusion. -We try to manage the required tasks for nf-core/rnafusion using GitHub issues, you probably came to this page when creating one. Please use the prefilled template to save time. +We try to manage the required tasks for nf-core/rnafusion using GitHub issues, you probably came to this page when creating one. Please use the pre-filled template to save time. However, don't be put off by this template - other more general issues and suggestions are welcome! Contributions to the code are even more welcome ;) -> If you need help using nf-core/rnafusion then the best place to go is the Gitter chatroom where you can ask us questions directly: https://gitter.im/nf-core/Lobby +> If you need help using or modifying nf-core/rnafusion then the best place to ask is on the pipeline channel on [Slack](https://nf-core-invite.herokuapp.com/). + + ## Contribution workflow If you'd like to write some code for nf-core/rnafusion, the standard workflow @@ -15,11 +17,31 @@ is as follows: 1. Check that there isn't already an issue about your idea in the [nf-core/rnafusion issues](https://github.com/nf-core/rnafusion/issues) to avoid duplicating work. - * Feel free to add a new issue here for the same reason. + * If there isn't one already, please create one so that others know you're working on this 2. Fork the [nf-core/rnafusion repository](https://github.com/nf-core/rnafusion) to your GitHub account 3. Make the necessary changes / additions within your forked repository -4. Submit a Pull Request against the master branch and wait for the code to be reviewed and merged. +4. Submit a Pull Request against the `dev` branch and wait for the code to be reviewed and merged. If you're not used to this workflow with git, you can start with some [basic docs from GitHub](https://help.github.com/articles/fork-a-repo/) or even their [excellent interactive tutorial](https://try.github.io/). -For further information/help, please consult the [nf-core/rnafusion documentation](https://github.com/nf-core/rnafusion#documentation) and don't hesitate to get in touch on [Gitter](https://gitter.im/nf-core/Lobby) + +## Tests +When you create a pull request with changes, [Travis CI](https://travis-ci.org/) will run automatic tests. +Typically, pull-requests are only fully reviewed when these tests are passing, though of course we can help out before then. + +There are typically two types of tests that run: + +### Lint Tests +nf-core has a [set of guidelines](http://nf-co.re/guidelines) which all pipelines must adhere to. +To enforce these and ensure that all pipelines stay in sync, we have developed a helper tool which runs checks on the pipeline code. This is in the [nf-core/tools repository](https://github.com/nf-core/tools) and once installed can be run locally with the `nf-core lint <pipeline-directory>` command. + +If any failures or warnings are encountered, please follow the listed URL for more documentation. + +### Pipeline Tests +Each nf-core pipeline should be set up with a minimal set of test-data. +Travis CI then runs the pipeline on this data to ensure that it exits successfully. +If there are any failures then the automated tests fail.
+These tests are run both with the latest available version of Nextflow and also the minimum required version that is stated in the pipeline code. + +## Getting help +For further information/help, please consult the [nf-core/rnafusion documentation](https://github.com/nf-core/rnafusion#documentation) and don't hesitate to get in touch on the pipeline channel on [Slack](https://nf-core-invite.herokuapp.com/). diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md new file mode 100644 index 00000000..405f88fe --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.md @@ -0,0 +1,31 @@ +Hi there! + +Thanks for telling us about a problem with the pipeline. Please delete this text and anything that's not relevant from the template below: + +#### Describe the bug +A clear and concise description of what the bug is. + +#### Steps to reproduce +Steps to reproduce the behaviour: +1. Command line: `nextflow run ...` +2. See error: _Please provide your error message_ + +#### Expected behaviour +A clear and concise description of what you expected to happen. + +#### System: + - Hardware: [e.g. HPC, Desktop, Cloud...] + - Executor: [e.g. slurm, local, awsbatch...] + - OS: [e.g. CentOS Linux, macOS, Linux Mint...] + - Version [e.g. 7, 10.13.6, 18.3...] + +#### Nextflow Installation: + - Version: [e.g. 0.31.0] + +#### Container engine: + - Engine: [e.g. Conda, Docker or Singularity] + - version: [e.g. 1.0.0] + - Image tag: [e.g. nfcore/rnafusion:1.0.0] + +#### Additional context +Add any other context about the problem here. diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md new file mode 100644 index 00000000..1f025b77 --- /dev/null +++ b/.github/ISSUE_TEMPLATE/feature_request.md @@ -0,0 +1,16 @@ +Hi there! + +Thanks for suggesting a new feature for the pipeline! Please delete this text and anything that's not relevant from the template below: + +#### Is your feature request related to a problem? Please describe. +A clear and concise description of what the problem is. +Ex. I'm always frustrated when [...] + +#### Describe the solution you'd like +A clear and concise description of what you want to happen. + +#### Describe alternatives you've considered +A clear and concise description of any alternative solutions or features you've considered. + +#### Additional context +Add any other context about the feature request here. diff --git a/.github/pull_request.md b/.github/PULL_REQUEST_TEMPLATE.md similarity index 94% rename from .github/pull_request.md rename to .github/PULL_REQUEST_TEMPLATE.md index 8f9f1202..2eb3f51f 100644 --- a/.github/pull_request.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -5,7 +5,7 @@ Please fill in the appropriate checklist below (delete whatever is not relevant) ## PR checklist - [ ] This comment contains a description of changes (with reason) - [ ] If you've fixed a bug or added code that should be tested, add tests! - - [ ] If necessary, also make a PR on the [nf-core/rnafusion branch on the nf-core/test-datasets repo]( https://github.com/nf-core/test-datasets/pull/newnf-core/rnafusion) + - [ ] If necessary, also make a PR on the [nf-core/rnafusion branch on the nf-core/test-datasets repo]( https://github.com/nf-core/test-datasets/pull/new/nf-core/rnafusion) - [ ] Ensure the test suite passes (`nextflow run . -profile test,docker`). - [ ] Make sure your code lints (`nf-core lint .`). 
- [ ] Documentation in `docs` is updated diff --git a/.github/bug_report.md b/.github/bug_report.md deleted file mode 100644 index d0405d12..00000000 --- a/.github/bug_report.md +++ /dev/null @@ -1,29 +0,0 @@ -**Describe the bug** -A clear and concise description of what the bug is. - -**To Reproduce** -Steps to reproduce the behavior: -1. Command line '...' -2. See error **Please provide your error message** - -**Expected behavior** -A clear and concise description of what you expected to happen. - -**System (please complete the following information):** - - Hardware: [e.g. HPC, Desktop, Cloud...] - - Executor: [e.g. slurm, local, awsbatch...] - - OS: [e.g. CentOS Linux, macOS, Linux Mint...] - - Version [e.g. 7, 10.13.6, 18.3...] - -**Nextflow (please complete the following information):** - - Version: [e.g. 0.31.0] - -**Container engine (please complete the following information):** - - Engine: [e.g. Conda, Docker or Singularity] - - version: [e.g. 1.0.0] - -**Container (please complete the following information):** - - tag: [e.g. 1.0.0] - -**Additional context** -Add any other context about the problem here. diff --git a/.github/feature_request.md b/.github/feature_request.md deleted file mode 100644 index 3616d75c..00000000 --- a/.github/feature_request.md +++ /dev/null @@ -1,16 +0,0 @@ -**Is your feature request related to a problem? Please describe.** - -A clear and concise description of what the problem is. -Ex. I'm always frustrated when [...] - -**Describe the solution you'd like** - -A clear and concise description of what you want to happen. - -**Describe alternatives you've considered** - -A clear and concise description of any alternative solutions or features you've considered. - -**Additional context** - -Add any other context about the feature request here. 
diff --git a/.github/markdownlint.yml b/.github/markdownlint.yml index b8881b50..e052a635 100644 --- a/.github/markdownlint.yml +++ b/.github/markdownlint.yml @@ -7,4 +7,3 @@ blanks-around-lists: false header-increment: false no-duplicate-header: siblings_only: true -no-bare-urls: false # tools only - the {{ jinja variables }} break URLs and cause this to error \ No newline at end of file diff --git a/.gitignore b/.gitignore index a3bd9df1..a0a8c962 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,7 @@ .nextflow* work/ results/ -tests/ +.DS_Store +tests/test_data +*.pyc .vscode/ \ No newline at end of file diff --git a/.travis.yml b/.travis.yml index 2f11d3b6..176e3505 100644 --- a/.travis.yml +++ b/.travis.yml @@ -1,5 +1,6 @@ sudo: required language: python +jdk: openjdk8 services: docker python: '3.6' cache: pip @@ -7,14 +8,13 @@ matrix: fast_finish: true before_install: - # PRs made to 'master' branch should always orginate from another repo or the 'dev' branch + # PRs to master are only ok if coming from dev branch - '[ $TRAVIS_PULL_REQUEST = "false" ] || [ $TRAVIS_BRANCH != "master" ] || ([ $TRAVIS_PULL_REQUEST_SLUG = $TRAVIS_REPO_SLUG ] && [ $TRAVIS_PULL_REQUEST_BRANCH = "dev" ])' + # Pull the docker image first so the test doesn't wait for this - docker pull nfcore/rnafusion:dev - - docker tag nfcore/rnafusion:dev nfcore/rnafusion:1.0.1 - -env: - - NXF_VER='0.32.0' # Specify a minimum NF version that should be tested and work - - NXF_VER='' # Plus: get the latest NF version and check, that it works + # Fake the tag locally so that the pipeline runs properly + # Looks weird when this is :dev to :dev, but makes sense when testing code for a release (:dev to :1.0.1) + - docker tag nfcore/rnafusion:dev nfcore/rnafusion:1.0.2 install: # Install Nextflow @@ -22,22 +22,25 @@ install: - wget -qO- get.nextflow.io | bash - sudo ln -s /tmp/nextflow/nextflow /usr/local/bin/nextflow # Install nf-core/tools + - pip install --upgrade pip - pip install nf-core # Install markdownlint-cli - sudo apt-get install npm && npm install -g markdownlint-cli - # Reset - - mkdir ${TRAVIS_BUILD_DIR}/tests && cd ${TRAVIS_BUILD_DIR} + # Reset + - mkdir ${TRAVIS_BUILD_DIR}/tests && cd ${TRAVIS_BUILD_DIR}/tests + +env: + - NXF_VER='0.32.0' # Specify a minimum NF version that should be tested and work + - NXF_VER='' # Plus: get the latest NF version and check that it works script: - # Create and download test data - - | - touch tests/genome.fa tests/genes.gtf - mkdir tests/star_index tests/databases - wget http://github.com/nf-core/test-datasets/raw/rnafusion/testdata/human/reads_1.fq.gz -O tests/reads_1.fq.gz - wget http://github.com/nf-core/test-datasets/raw/rnafusion/testdata/human/reads_2.fq.gz -O tests/reads_2.fq.gz # Lint the pipeline code - nf-core lint ${TRAVIS_BUILD_DIR} - # Lint markdown + # Lint the documentation - markdownlint ${TRAVIS_BUILD_DIR} -c ${TRAVIS_BUILD_DIR}/.github/markdownlint.yml - # Running the pipeline - - nextflow run ${TRAVIS_BUILD_DIR} -profile test,docker + # Test pipeline help page + - nextflow run ${TRAVIS_BUILD_DIR} --help + # Test downloading references help page + - nextflow run ${TRAVIS_BUILD_DIR}/download-references.nf --help + # Test downloading singularity images help page + - nextflow run ${TRAVIS_BUILD_DIR}/download-singularity-img.nf --help \ No newline at end of file diff --git a/CHANGELOG.md b/CHANGELOG.md index cb1115cb..a771fce8 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,17 @@ # nfcore/rnafusion +## nfcore/rnafusion version 1.0.2 - 2018/05/13 + +### 
Changed + +* Bumped nf-core template to 1.6 [#69](https://github.com/nf-core/rnafusion/pull/69) + +### Fixed + +* Fixed COSMIC parameters not wrapped in quotes [#75](https://github.com/nf-core/rnafusion/issues/75) +* Implemented output output for fusion tools [#72](https://github.com/nf-core/rnafusion/issues/72) +* Fixed reference download link for STAR-Fusion [#71](https://github.com/nf-core/rnafusion/issues/71) + ## nfcore/rnafusion version 1.0.1 - 2018/04/06 ### Added @@ -54,4 +66,4 @@ at [SciLifeLab/NGI-RNAfusion](https://github.com/SciLifeLab/NGI-RNAfusion). The * STAR-Fusion * Fusioncatcher * FusionInspector - * Custom tool for fusion comparison - generates intersection of detected fusion genes from all tools \ No newline at end of file + * Custom tool for fusion comparison - generates intersection of detected fusion genes from all tools diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index 21096193..09226d0d 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -34,7 +34,7 @@ This Code of Conduct applies both within project spaces and in public spaces whe ## Enforcement -Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team on the [Gitter channel](https://gitter.im/nf-core/Lobby). The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. +Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team on [Slack](https://nf-core-invite.herokuapp.com/). The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately. Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership. 
diff --git a/Dockerfile b/Dockerfile index fa5ceba4..a8967ec0 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,10 +1,7 @@ FROM nfcore/base - -LABEL authors="rickard.hammaren@scilifelab.se, phil.ewels@scilifelab.se, martin.proks@scilifelab.se" \ - description="Docker image containing all requirements for nfcore/rnafusion pipeline" +LABEL authors="Martin Proks " \ + description="Docker image containing all requirements for nf-core/rnafusion pipeline" COPY environment.yml / RUN conda env create -f /environment.yml && conda clean -a -ENV PATH /opt/conda/envs/nf-core-rnafusion-1.0.1/bin:$PATH - -WORKDIR / \ No newline at end of file +ENV PATH /opt/conda/envs/nf-core-rnafusion-1.0.2/bin:$PATH diff --git a/Jenkinsfile b/Jenkinsfile index 399b83c7..45bf6d3b 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -10,8 +10,7 @@ pipeline { stage('Setup environment') { steps { sh "pip install nf-core" - sh "docker pull nfcore/rnafusion:dev" - sh "docker tag nfcore/rnafusion:dev nfcore/rnafusion:1.0.1" + sh "docker pull nfcore/rnafusion:1.0.2" } } stage('Lint markdown') { @@ -22,9 +21,9 @@ pipeline { stage('Build') { steps { // sh "nextflow run kraken,jenkins nf-core/rnafusion" - sh "nextflow run nf-core/rnafusion -r dev --help" - sh "nextflow run nf-core/rnafusion/download-references.nf -r dev --help" - sh "nextflow run nf-core/rnafusion/download-singularity-img.nf -r dev --help" + sh "nextflow run nf-core/rnafusion --help" + sh "nextflow run nf-core/rnafusion/download-references.nf --help" + sh "nextflow run nf-core/rnafusion/download-singularity-img.nf --help" } } } diff --git a/LICENSE b/LICENSE index cafb2810..8295c89b 100644 --- a/LICENSE +++ b/LICENSE @@ -1,6 +1,6 @@ MIT License -Copyright (c) 2019 nf-core +Copyright (c) Rickard Hammarén, Martin Proks Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal diff --git a/README.md b/README.md index f1de2888..c275193e 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,7 @@ # ![nf-core/rnafusion](https://raw.githubusercontent.com/nf-core/rnafusion/master/docs/images/rnafusion_logo.png) +**Nextflow rnafusion analysis pipeline, part of the nf-core community.**. + [![Build Status](https://travis-ci.org/nf-core/rnafusion.svg?branch=master)](https://travis-ci.org/nf-core/rnafusion) [![Nextflow](https://img.shields.io/badge/nextflow-%E2%89%A50.32.0-brightgreen.svg)](https://www.nextflow.io/) [![DOI](https://zenodo.org/badge/151721952.svg)](https://zenodo.org/badge/latestdoi/151721952) @@ -9,15 +11,9 @@ [![install with bioconda](https://img.shields.io/badge/install%20with-bioconda-brightgreen.svg)](http://bioconda.github.io/) [![Docker](https://img.shields.io/docker/automated/nfcore/rnafusion.svg)](https://hub.docker.com/r/nfcore/rnafusion) -**nfcore/rnafusion** uses RNA-seq data to detect fusions genes. - -The workflow processes RNA-sequencing data from FastQ files. 
It runs quality control on the raw data ([FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)), detects fusion genes ([STAR-Fusion](https://github.com/STAR-Fusion/STAR-Fusion), [Fusioncatcher](https://github.com/ndaniel/fusioncatcher), [Ericscript](https://sites.google.com/site/bioericscript/), [Pizzly](https://github.com/pmelsted/pizzly), [Squid](https://github.com/Kingsford-Group/squid)), gathers information ([FusionGDB](https://ccsm.uth.edu/FusionGDB/index.html), [Mitelman](https://cgap.nci.nih.gov/Chromosomes/Mitelman), [COSMIC](https://cancer.sanger.ac.uk/cosmic/fusion)), visualizes the fusions ([FusionInspector](https://github.com/FusionInspector/FusionInspector)), performs quality-control on the results ([MultiQC](http://multiqc.info)) and finally generates custom summary report witch scored fusions ([fusion-report](https://github.com/matq007/fusion-report)). - -> Live **demo** output **[here](https://matq007.github.io/fusion-report/example/)**. - -The pipeline works with both single-end and paired-end data, though not all fusion detection tools work with single-end data (Ericscript, Pizzly, Squid and FusionInspector). +## Introduction -The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It comes with docker / singularity containers making installation trivial and results highly reproducible. +The pipeline is built using [Nextflow](https://www.nextflow.io), a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It comes with docker containers making installation trivial and results highly reproducible. | Tool | Single-end reads | CPU (recommended) | RAM (recommended) | | --------------- |:----------------:|:-----------------:|:-----------------:| @@ -38,14 +34,15 @@ nextflow run nf-core/rnafusion --help The nf-core/rnafusion pipeline comes with documentation about the pipeline, found in the `docs/` directory: -1. [Installation](docs/installation.md) +1. [Installation](https://nf-co.re/usage/installation) 2. Pipeline configuration * [Download references for tools](docs/references.md) - * [Local installation](docs/configuration/local.md) - * [Adding your own system](docs/configuration/adding_your_own.md) + * [Local installation](https://nf-co.re/usage/local_installation) + * [Adding your own system config](https://nf-co.re/usage/adding_own_config) + * [Reference genomes](https://nf-co.re/usage/reference_genomes) 3. [Running the pipeline](docs/usage.md) 4. [Output and how to interpret the results](docs/output.md) -5. [Troubleshooting](docs/troubleshooting.md) +5. [Troubleshooting](https://nf-co.re/usage/troubleshooting) Use predefined configuration for desired Institution cluster provided at [nfcore/config](https://github.com/nf-core/configs) repository. @@ -65,7 +62,7 @@ bioRxiv 120295; doi: [https://doi.org/10.1101/120295](https://doi.org/10.1101/12 Páll Melsted, Shannon Hateley, Isaac Charles Joseph, Harold Pimentel, Nicolas L Bray, Lior Pachter, bioRxiv 166322; doi: [https://doi.org/10.1101/166322](https://doi.org/10.1101/166322) * **SQUID: transcriptomic structural variation detection from RNA-seq** Cong Ma, Mingfu Shao and Carl Kingsford, Genome Biology, 2018, doi: [https://doi.org/10.1186/s13059-018-1421-5](https://doi.org/10.1186/s13059-018-1421-5) * **Fusion-Inspector** download: [https://github.com/FusionInspector](https://github.com/FusionInspector) -* Martin Proks. (2019, March 26). 
matq007/fusion-report: **fusion-report:1.0** (Version 1.0). Zenodo. http://doi.org/10.5281/zenodo.2609227 +* Martin Proks. (2019, March 26). matq007/fusion-report: **fusion-report:1.0** (Version 1.0). Zenodo. [http://doi.org/10.5281/zenodo.2609227](http://doi.org/10.5281/zenodo.2609227) * **FastQC** download: [https://www.bioinformatics.babraham.ac.uk/projects/fastqc/](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) * **MultiQC** Ewels, P., Magnusson, M., Lundin, S., & Käller, M. (2016). MultiQC: summarize analysis results for multiple tools and samples in a single report. Bioinformatics , 32(19), 3047–3048. [https://doi.org/10.1093/bioinformatics/btw354](https://doi.org/10.1093/bioinformatics/btw354) Download: [https://multiqc.info/](https://multiqc.info/) diff --git a/assets/email_template.html b/assets/email_template.html index b1f8f613..af566215 100644 --- a/assets/email_template.html +++ b/assets/email_template.html @@ -5,7 +5,7 @@ - + nf-core/rnafusion Pipeline Report diff --git a/assets/email_template.txt b/assets/email_template.txt index fab10f4d..7efad5be 100644 --- a/assets/email_template.txt +++ b/assets/email_template.txt @@ -17,23 +17,6 @@ ${errorReport} } %> -<% if (!success){ - out << """#################################################### -## nf-core/rnafusion execution completed unsuccessfully! ## -#################################################### -The exit status of the task that caused the workflow execution to fail was: $exitStatus. -The full error message was: - -${errorReport} -""" -} else { - out << "## nf-core/rnafusion execution completed successfully! ##" -} -%> - - - - The workflow was completed at $dateComplete (duration: $duration) The command used to launch the workflow was as follows: diff --git a/conf/multiqc_config.yaml b/assets/multiqc_config.yaml similarity index 95% rename from conf/multiqc_config.yaml rename to assets/multiqc_config.yaml index 441bd13c..bdd59b85 100644 --- a/conf/multiqc_config.yaml +++ b/assets/multiqc_config.yaml @@ -5,3 +5,5 @@ report_comment: > report_section_order: nf-core/rnafusion-software-versions: order: -1000 + +export_plots: true diff --git a/assets/sendmail_template.txt b/assets/sendmail_template.txt index fd1cd739..2d671220 100644 --- a/assets/sendmail_template.txt +++ b/assets/sendmail_template.txt @@ -1,11 +1,36 @@ To: $email Subject: $subject Mime-Version: 1.0 -Content-Type: multipart/related;boundary="nfmimeboundary" +Content-Type: multipart/related;boundary="nfcoremimeboundary" ---nfmimeboundary +--nfcoremimeboundary Content-Type: text/html; charset=utf-8 $email_html ---nfmimeboundary-- +<% +if (mqcFile){ +def mqcFileObj = new File("$mqcFile") +if (mqcFileObj.length() < mqcMaxSize){ +out << """ +--nfcoremimeboundary +Content-Type: text/html; name=\"multiqc_report\" +Content-Transfer-Encoding: base64 +Content-ID: +Content-Disposition: attachment; filename=\"${mqcFileObj.getName()}\" + +${mqcFileObj. + bytes. + encodeBase64(). + toString(). + tokenize( '\n' )*. + toList()*. + collate( 76 )*. + collect { it.join() }. + flatten(). 
+ join( '\n' )} +""" +}} +%> + +--nfcoremimeboundary-- diff --git a/bin/scrape_software_versions.py b/bin/scrape_software_versions.py index f5a5a011..40b0d1b7 100755 --- a/bin/scrape_software_versions.py +++ b/bin/scrape_software_versions.py @@ -38,9 +38,14 @@ if match: results[k] = "v{}".format(match.group(1)) +# Remove software set to false in results +for k in results: + if not results[k]: + del(results[k]) + # Dump to YAML print (''' -id: 'nf-core/rnafusion-software-versions' +id: 'software_versions' section_name: 'nf-core/rnafusion Software Versions' section_href: 'https://github.com/nf-core/rnafusion' plot_type: 'html' @@ -49,5 +54,10 @@
    <dl class="dl-horizontal">
''')
for k,v in results.items():
-    print("        <dt>{}</dt><dd>{}</dd>".format(k,v))
+    print("        <dt>{}</dt><dd><samp>{}</samp></dd>".format(k,v))
print ("    </dl>
") + +# Write out regexes as csv file: +with open('software_versions.csv', 'w') as f: + for k,v in results.items(): + f.write("{}\t{}\n".format(k,v)) diff --git a/conf/awsbatch.config b/conf/awsbatch.config index 79078c7b..14af5866 100644 --- a/conf/awsbatch.config +++ b/conf/awsbatch.config @@ -1,10 +1,15 @@ /* * ------------------------------------------------- - * Nextflow config file for AWS Batch + * Nextflow config file for running on AWS batch * ------------------------------------------------- - * Imported under the 'awsbatch' Nextflow profile in nextflow.config - * Uses docker for software depedencies automagically, so not specified here. + * Base config needed for running with -profile awsbatch */ +params { + config_profile_name = 'AWSBATCH' + config_profile_description = 'AWSBATCH Cloud Profile' + config_profile_contact = 'Alexander Peltzer (@apeltzer)' + config_profile_url = 'https://aws.amazon.com/de/batch/' +} aws.region = params.awsregion process.executor = 'awsbatch' diff --git a/conf/base.config b/conf/base.config index d77cf43c..ffc0f589 100644 --- a/conf/base.config +++ b/conf/base.config @@ -9,25 +9,17 @@ * run on the logged in environment. */ -params { - // Defaults only, expecting to be overwritten - max_memory = 128.GB - max_cpus = 16 - max_time = 240.h - igenomes_base = 's3://ngi-igenomes/igenomes/' -} - -process { +process{ - cpus = { check_max( 1, 'cpus' ) } + cpus = { check_max( 1 * task.attempt, 'cpus' ) } memory = { check_max( 8.GB * task.attempt, 'memory' ) } time = { check_max( 2.h * task.attempt, 'time' ) } - errorStrategy = { task.exitStatus in [143,137] ? 'retry' : 'finish' } + errorStrategy = { task.exitStatus in [143,137,104,134,139] ? 'retry' : 'finish' } maxRetries = 1 maxErrors = '-1' - // Process-specific resource requirements + // See https://www.nextflow.io/docs/latest/config.html#config-process-selectors withName: "multiqc|get_software_versions|summary" { memory = { check_max( 2.GB * task.attempt, 'memory' ) } cache = false @@ -81,3 +73,11 @@ process { container = "nfcore/rnafusion:star-fusion_v${params.star_fusion_version}" } } + +params { + // Defaults only, expecting to be overwritten + max_memory = 128.GB + max_cpus = 16 + max_time = 240.h + igenomes_base = 's3://ngi-igenomes/igenomes/' +} diff --git a/conf/test.config b/conf/test.config index 508006a7..6f076005 100644 --- a/conf/test.config +++ b/conf/test.config @@ -2,8 +2,8 @@ * ------------------------------------------------- * Nextflow config file for running tests * ------------------------------------------------- - * Testing profile for checking just the syntax - * of the pipeline. To run use: + * Defines bundled input files and everything required + * to run a fast and simple test. Use as follows: * nextflow run nf-core/rnafusion -profile test */ @@ -12,6 +12,9 @@ executor { } params { + config_profile_name = 'Test profile' + config_profile_description = 'Minimal test dataset to check pipeline function' + // Limit resources so that this can run on Travis max_cpus = 2 max_memory = 6.GB max_time = 48.h @@ -23,4 +26,4 @@ params { gtf = 'tests/genes.gtf' star_index = 'tests/star_index' databases = '/tests/databases' -} \ No newline at end of file +} diff --git a/docs/README.md b/docs/README.md index 421cf982..1a6b6ba9 100644 --- a/docs/README.md +++ b/docs/README.md @@ -2,13 +2,13 @@ The nf-core/rnafusion documentation is split into the following files: -1. [Installation](installation.md) -2. [Running the pipeline](usage.md) -3. Pipeline configuration +1. 
[Installation](https://nf-co.re/usage/installation) +2. Pipeline configuration * [Download references for tools](references.md) - * [Local installation](configuration/local.md) - * [Adding your own system](configuration/adding_your_own.md) - * [Reference genomes](configuration/reference_genomes.md) + * [Local installation](https://nf-co.re/usage/local_installation) + * [Adding your own system config](https://nf-co.re/usage/adding_own_config) + * [Reference genomes](https://nf-co.re/usage/reference_genomes) * [UPPMAX configuration](configuration/uppmax.md) +3. [Running the pipeline](usage.md) 4. [Output and how to interpret the results](output.md) -5. [Troubleshooting](troubleshooting.md) +5. [Troubleshooting](https://nf-co.re/usage/troubleshooting) diff --git a/docs/configuration/local.md b/docs/configuration/local.md index 744febf5..fad35bc0 100644 --- a/docs/configuration/local.md +++ b/docs/configuration/local.md @@ -46,4 +46,4 @@ Then transfer this file and run the pipeline with this path: ```bash nextflow run /path/to/nfcore-rnafusion -with-singularity /path/to/nfcore-rnafusion-1.0.1.img -``` \ No newline at end of file +``` diff --git a/docs/configuration/uppmax.md b/docs/configuration/uppmax.md index dbf8a4c2..1feeefc1 100644 --- a/docs/configuration/uppmax.md +++ b/docs/configuration/uppmax.md @@ -59,4 +59,4 @@ nextflow run /path/to/nfcore-rnafusion-1.0.1 -with-singularity /path/to/singular If you would prefer to use environment modules instead of singularity, you can use the old version of the configuration by specifying `-profile uppmax_modules` (we don't recommend this). -For pipeline development work on `milou`, use `-profile uppmax_devel` - this uses the milou [devel partition](http://www.uppmax.uu.se/support/user-guides/slurm-user-guide/#tocjump_030509106905141747_8) for testing the pipeline quickly. Please note that this is _not_ suitable for proper analysis runs - only tiny test datasets. \ No newline at end of file +For pipeline development work on `milou`, use `-profile uppmax_devel` - this uses the milou [devel partition](http://www.uppmax.uu.se/support/user-guides/slurm-user-guide/#tocjump_030509106905141747_8) for testing the pipeline quickly. Please note that this is _not_ suitable for proper analysis runs - only tiny test datasets. 
diff --git a/docs/references.md b/docs/references.md index b2c2de02..a6e0bffc 100644 --- a/docs/references.md +++ b/docs/references.md @@ -151,4 +151,4 @@ aws s3 --no-sign-request --region eu-west-1 sync s3://ngi-igenomes/igenomes/Homo ```bash fusion_report download --cosmic_usr --cosmic_passwd /output/databases -``` \ No newline at end of file +``` diff --git a/docs/usage.md b/docs/usage.md index 2d1b19be..60f8e044 100644 --- a/docs/usage.md +++ b/docs/usage.md @@ -2,10 +2,13 @@ ## Table of contents -* [Introduction](#general-nextflow-info) + + +* [Table of contents](#table-of-contents) +* [Introduction](#introduction) * [Running the pipeline](#running-the-pipeline) - * [using Docker](#running-the-pipeline-using-docker) - * [using Singularity](#running-the-pipeline-using-singularity) + * [Using Docker](#running-the-pipeline-using-docker) + * [Using Singularity](#running-the-pipeline-using-singularity) * [Updating the pipeline](#updating-the-pipeline) * [Reproducibility](#reproducibility) * [Main arguments](#main-arguments) @@ -16,6 +19,7 @@ * [`singularity`](#singularity) * [`test`](#test) * [`--reads`](#--reads) + * [`--singleEnd`](#--singleend) * [Tool flags](#tool-flags) * [`--star_fusion`](#--star_fusion) * [`--star_fusion_opt`](#--star_fusion_opt) @@ -28,7 +32,7 @@ * [`--debug`](#--debug) * [Visualization flags](#visualization-flags) * [`--fusion_inspector`](#--fusion_inspector) -* [References](#references) +* [Reference genomes](#reference-genomes) * [`--fasta`](#--fasta) * [`--gtf`](#--gtf) * [`--star_index`](#--star_index) @@ -37,31 +41,32 @@ * [`--ericscript_ref`](#--ericscript_ref) * [`--pizzly_fasta`](#--pizzly_fasta) * [`--pizzly_gtf`](#--pizzly_gtf) -* [Options](#options-flags) - * [`--genome`](#--genome) - * [`--read_length`](#--read_length) - * [`--singleEnd`](#--singleEnd) -* [Job Resources](#job-resources) -* [Automatic resubmission](#automatic-resubmission) -* [Custom resource requests](#custom-resource-requests) -* [AWS batch specific parameters](#aws-batch-specific-parameters) - * [`-awsbatch`](#-awsbatch) - * [`--awsqueue`](#--awsqueue) - * [`--awsregion`](#--awsregion) + * [`--genome` (using iGenomes)](#--genome-using-igenomes) + * [`--igenomesIgnore`](#--igenomesignore) +* [Job resources](#job-resources) + * [Automatic resubmission](#automatic-resubmission) + * [Custom resource requests](#custom-resource-requests) +* [AWS Batch specific parameters](#aws-batch-specific-parameters) + * [`--awsqueue`](#--awsqueue) + * [`--awsregion`](#--awsregion) * [Other command line parameters](#other-command-line-parameters) + * [`--read_length`](#--read_length) * [`--outdir`](#--outdir) * [`--email`](#--email) - * [`-name`](#-name-single-dash) - * [`-resume`](#-resume-single-dash) - * [`-c`](#-c-single-dash) + * [`-name`](#-name) + * [`-resume`](#-resume) + * [`-c`](#-c) + * [`--custom_config_version`](#--custom_config_version) + * [`--custom_config_base`](#--custom_config_base) * [`--max_memory`](#--max_memory) * [`--max_time`](#--max_time) * [`--max_cpus`](#--max_cpus) - * [`--plaintext_emails`](#--plaintext_emails) - * [`--sampleLevel`](#--sampleLevel) + * [`--plaintext_email`](#--plaintext_email) + * [`--monochrome_logs`](#--monochrome_logs) * [`--multiqc_config`](#--multiqc_config) + -## General Nextflow info +## Introduction Nextflow handles job submissions on SLURM or other environments, and supervises running the jobs. Thus the Nextflow process must run until the pipeline is finished. 
We recommend that you put the process running in the background through `screen` / `tmux` or similar tool. Alternatively you can run nextflow within a cluster job submitted your job scheduler. @@ -111,7 +116,7 @@ nextflow run nf-core/rnafusion First start by downloading singularity images. Sometimes the pipeline can crash if you are not using downloaded images (might be some network issues). ```bash -nextflow run nf-core/rnafusion/download-singularity-img.nf --all --outdir /path +nextflow run nf-core/rnafusion/download-singularity-img.nf --download_all --outdir /path # or @@ -184,25 +189,28 @@ First, go to the [nf-core/rnafusion releases page](https://github.com/nf-core/rn This version number will be logged in reports when you run the pipeline, so that you'll know what you used when you look back in the future. -## Main Arguments +## Main arguments ### `-profile` Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments. Note that multiple profiles can be loaded, for example: `-profile docker` - the order of arguments is important! +If `-profile` is not specified at all the pipeline will be run locally and expects all software to be installed and available on the `PATH`. + * `awsbatch` * A generic configuration profile to be used with AWS Batch. * `conda` * A generic configuration profile to be used with [conda](https://conda.io/docs/) + * Pulls most software from [Bioconda](https://bioconda.github.io/) * `docker` * A generic configuration profile to be used with [Docker](http://docker.com/) - * Pulls software from dockerhub: [`nfcore/rnafusion`](http://hub.docker.com/r/nfcore/rnafusion/) + * Pulls software from dockerhub: [`nfcore/rnafusion`](http://hub.docker.com/r/nfcore/rnafusion/) * `singularity` * A generic configuration profile to be used with [Singularity](http://singularity.lbl.gov/) - * Pulls software from docker-hub + * Pulls software from DockerHub: [`nfcore/rnafusion`](http://hub.docker.com/r/nfcore/rnafusion/) * `test` * A profile with a complete configuration for automated testing - * Includes links to test data so needs no other parameters + * Includes links to test data so needs no other parameters ### `--reads` @@ -222,6 +230,14 @@ If left unspecified, a default pattern is used: `data/*{1,2}.fastq.gz` It is not possible to run a mixture of single-end and paired-end files in one run. +### `--singleEnd` + +By default, the pipeline expects paired-end data. If you have single-end data, you need to specify `--singleEnd` on the command line when you launch the pipeline. A normal glob pattern, enclosed in quotation marks, can then be used for `--reads`. For example: + +```bash +--singleEnd --reads '*.fastq.gz' +``` + ## Tool flags ### `--star_fusion` @@ -271,7 +287,7 @@ nextflow run nf-core/rnafusion --reads '*_R{1,2}.fastq.gz' --genome GRCh38 -prof If enabled, executes `Fusion-Inspector` tool. -## References +## Reference genomes The pipeline config files come bundled with paths to the illumina iGenomes reference index files. If running with docker or AWS, the configuration is set up to use the [AWS-iGenomes](https://ewels.github.io/AWS-iGenomes/) resource. @@ -339,8 +355,6 @@ Required reference in order to run `Pizzly`. --pizzly_gtf '[path to Pizzly GTF annotation]' ``` -## Options - ### `--genome` (using iGenomes) There are 31 different species supported in the iGenomes references. To run the pipeline, you must specify which to use with the `--genome` flag. 
@@ -358,7 +372,7 @@ The syntax for this reference configuration is as follows: ```nextflow params { genomes { - 'GRCh37' { + 'GRCh38' { fasta = '' // Used if no star index given } // Any number of additional genomes, key is used with --genome @@ -366,19 +380,11 @@ params { } ``` -### `--read_length` - -Length is used to build a STAR index. Default is 100bp (Illumina). - -### `--singleEnd` +### `--igenomesIgnore` -By default, the pipeline expects paired-end data. If you have single-end data, you need to specify `--singleEnd` on the command line when you launch the pipeline. A normal glob pattern, enclosed in quotation marks, can then be used for `--reads`. For example: - -```bash ---singleEnd --reads '*.fastq.gz' -``` +Do not load `igenomes.config` when running the pipeline. You may choose this option if you observe clashes between custom parameters and those supplied in `igenomes.config`. -## Job Resources +## Job resources ### Automatic resubmission @@ -386,7 +392,11 @@ Each step in the pipeline has a default set of requirements for number of CPUs, ### Custom resource requests -Wherever process-specific requirements are set in the pipeline, the default value can be changed by creating a custom config file. See the files in [`conf`](../conf) for examples. +Wherever process-specific requirements are set in the pipeline, the default value can be changed by creating a custom config file. See the files hosted at [`nf-core/configs`](https://github.com/nf-core/configs/tree/master/conf) for examples. + +If you are likely to be running `nf-core` pipelines regularly it may be a good idea to request that your custom config file is uploaded to the `nf-core/configs` git repository. Before you do this please can you test that the config file works with your pipeline of choice using the `-c` parameter (see definition below). You can then create a pull request to the `nf-core/configs` repository with the addition of your config file, associated documentation file (see examples in [`nf-core/configs/docs`](https://github.com/nf-core/configs/tree/master/docs)), and amending [`nfcore_custom.config`](https://github.com/nf-core/configs/blob/master/nfcore_custom.config) to include your custom profile. + +If you have any questions or issues please send us a message on [Slack](https://nf-core-invite.herokuapp.com/). ## AWS Batch specific parameters @@ -403,13 +413,17 @@ Please make sure to also set the `-w/--work-dir` and `--outdir` parameters to a ## Other command line parameters +### `--read_length` + +Length is used to build a STAR index. Default is 100bp (Illumina). + ### `--outdir` The output directory where the results will be saved. ### `--email` -Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (`~/.nextflow/config`) then you don't need to speicfy this on the command line for every run. +Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits. If set in your user config file (`~/.nextflow/config`) then you don't need to specify this on the command line for every run. ### `-name` @@ -431,16 +445,42 @@ Specify the path to a specific config file (this is a core NextFlow command). **NB:** Single hyphen (core Nextflow option) -Note - you can use this to override defaults. For example, you can specify a config file using `-c` that contains the following: +Note - you can use this to override pipeline defaults. 
-```nextflow -process.$multiqc.module = [] +### `--custom_config_version` + +Provide git commit id for custom Institutional configs hosted at `nf-core/configs`. This was implemented for reproducibility purposes. Default is set to `master`. + +```bash +## Download and use config file with following git commid id +--custom_config_version d52db660777c4bf36546ddb188ec530c3ada1b96 +``` + +### `--custom_config_base` + +If you're running offline, nextflow will not be able to fetch the institutional config files +from the internet. If you don't need them, then this is not a problem. If you do need them, +you should download the files from the repo and tell nextflow where to find them with the +`custom_config_base` option. For example: + +```bash +## Download and unzip the config files +cd /path/to/my/configs +wget https://github.com/nf-core/configs/archive/master.zip +unzip master.zip + +## Run the pipeline +cd /path/to/my/data +nextflow run /path/to/pipeline/ --custom_config_base /path/to/my/configs/configs-master/ ``` +> Note that the nf-core/tools helper package has a `download` command to download all required pipeline +> files + singularity containers + institutional configs in one go for you, to make this process easier. + ### `--max_memory` Use to set a top-limit for the default memory requirement for each process. -Should be a string in the format integer-unit. eg. `--max_memory '8.GB'`` +Should be a string in the format integer-unit. eg. `--max_memory '8.GB'` ### `--max_time` @@ -456,6 +496,10 @@ Should be a string in the format integer-unit. eg. `--max_cpus 1` Set to receive plain-text e-mails instead of HTML formatted. +### `--monochrome_logs` + +Set to disable colourful command line output and live life in monochrome. + ### `--multiqc_config` Specify a path to a custom MultiQC configuration file. diff --git a/download-references.nf b/download-references.nf index 8034d881..4f5b14b0 100644 --- a/download-references.nf +++ b/download-references.nf @@ -9,18 +9,9 @@ ---------------------------------------------------------------------------------------- */ -nfcore_logo = """======================================================= - ,--./,-. - ___ __ __ __ ___ /,-._.--~\' - |\\ | |__ __ / ` / \\ |__) |__ } { - | \\| | \\__, \\__/ | \\ |___ \\`-._,-`-, - `._,._,\' - -nf-core/rnafusion v${workflow.manifest.version} -=======================================================""" - def helpMessage() { - nfcore_help = """ + log.info nfcoreHeader() + log.info""" Usage: The typical command for downloading references is as follows: @@ -44,7 +35,6 @@ def helpMessage() { --igenomesIgnore Download iGenome Homo Sapiens version NCBI/GRCh38. Ignored on default """.stripIndent() - log.info "${nfcore_logo}${nfcore_help}" } /* @@ -87,27 +77,28 @@ if (params.fusion_report) { } // Header log info -log.info nfcore_logo +log.info nfcoreHeader() def summary = [:] summary['Pipeline Name'] = 'nf-core/rnafusion/download-references.nf' summary['Pipeline Version'] = workflow.manifest.version summary['References'] = params.running_tools.size() == 0 ? 
'None' : params.running_tools.join(", ") -summary['Max Memory'] = params.max_memory -summary['Max CPUs'] = params.max_cpus -summary['Max Time'] = params.max_time +summary['Max Resources'] = "$params.max_memory memory, $params.max_cpus cpus, $params.max_time time per job" summary['Output dir'] = params.outdir summary['Working dir'] = workflow.workDir -summary['Current home'] = "$HOME" -summary['Current user'] = "$USER" -summary['Current path'] = "$PWD" -summary['Script dir'] = workflow.projectDir +summary['Launch dir'] = workflow.launchDir +summary['Working dir'] = workflow.workDir +summary['Script dir'] = workflow.projectDir +summary['User'] = workflow.userName summary['Config Profile'] = workflow.profile +if(params.config_profile_description) summary['Config Description'] = params.config_profile_description +if(params.config_profile_contact) summary['Config Contact'] = params.config_profile_contact +if(params.config_profile_url) summary['Config URL'] = params.config_profile_url if(workflow.profile == 'awsbatch'){ summary['AWS Region'] = params.awsregion summary['AWS Queue'] = params.awsqueue } -log.info summary.collect { k,v -> "${k.padRight(15)}: $v" }.join("\n") -log.info "=========================================" +log.info summary.collect { k,v -> "${k.padRight(18)}: $v" }.join("\n") +log.info "\033[2m----------------------------------------------------\033[0m" process download_star_fusion { publishDir "${params.outdir}/star_fusion_ref", mode: 'copy' @@ -120,7 +111,7 @@ process download_star_fusion { script: """ - wget -N https://data.broadinstitute.org/Trinity/CTAT_RESOURCE_LIB/GRCh38_v27_CTAT_lib_Feb092018.plug-n-play.tar.gz -O GRCh38_v27_CTAT_lib_Feb092018.plug-n-play.tar.gz + wget -N https://data.broadinstitute.org/Trinity/CTAT_RESOURCE_LIB/__genome_libs_StarFv1.3/GRCh38_v27_CTAT_lib_Feb092018.plug-n-play.tar.gz -O GRCh38_v27_CTAT_lib_Feb092018.plug-n-play.tar.gz tar -xvzf GRCh38_v27_CTAT_lib_Feb092018.plug-n-play.tar.gz && rm GRCh38_v27_CTAT_lib_Feb092018.plug-n-play.tar.gz """ } @@ -216,7 +207,7 @@ process download_databases { script: """ - fusion_report download --cosmic_usr ${params.cosmic_usr} --cosmic_passwd ${params.cosmic_passwd} . + fusion_report download --cosmic_usr "${params.cosmic_usr}" --cosmic_passwd "${params.cosmic_passwd}" . """ } @@ -240,4 +231,27 @@ process download_igenome { */ workflow.onComplete { log.info "[nf-core/rnafusion] Pipeline Complete" +} + +def nfcoreHeader(){ + // Log colors ANSI codes + c_reset = params.monochrome_logs ? '' : "\033[0m"; + c_dim = params.monochrome_logs ? '' : "\033[2m"; + c_black = params.monochrome_logs ? '' : "\033[0;30m"; + c_green = params.monochrome_logs ? '' : "\033[0;32m"; + c_yellow = params.monochrome_logs ? '' : "\033[0;33m"; + c_blue = params.monochrome_logs ? '' : "\033[0;34m"; + c_purple = params.monochrome_logs ? '' : "\033[0;35m"; + c_cyan = params.monochrome_logs ? '' : "\033[0;36m"; + c_white = params.monochrome_logs ? 
'' : "\033[0;37m"; + + return """ ${c_dim}----------------------------------------------------${c_reset} + ${c_green},--.${c_black}/${c_green},-.${c_reset} + ${c_blue} ___ __ __ __ ___ ${c_green}/,-._.--~\'${c_reset} + ${c_blue} |\\ | |__ __ / ` / \\ |__) |__ ${c_yellow}} {${c_reset} + ${c_blue} | \\| | \\__, \\__/ | \\ |___ ${c_green}\\`-._,-`-,${c_reset} + ${c_green}`._,._,\'${c_reset} + ${c_purple} nf-core/rnafusion v${workflow.manifest.version}${c_reset} + ${c_dim}----------------------------------------------------${c_reset} + """.stripIndent() } \ No newline at end of file diff --git a/download-singularity-img.nf b/download-singularity-img.nf index 7bbefee7..b490399d 100644 --- a/download-singularity-img.nf +++ b/download-singularity-img.nf @@ -9,18 +9,10 @@ ---------------------------------------------------------------------------------------- */ -nfcore_logo = """======================================================= - ,--./,-. - ___ __ __ __ ___ /,-._.--~\' - |\\ | |__ __ / ` / \\ |__) |__ } { - | \\| | \\__, \\__/ | \\ |___ \\`-._,-`-, - `._,._,\' - -nf-core/rnafusion v${workflow.manifest.version} -=======================================================""" - def helpMessage() { - nfcore_help = """ + log.info nfcoreHeader() + log.info""" + Usage: The typical command for downloading singularity images is as follows: @@ -43,7 +35,6 @@ def helpMessage() { --squid Download Squid image --fusion_inspector Download Fusion-Inspector image """.stripIndent() - log.info "${nfcore_logo}${nfcore_help}" } /* @@ -83,27 +74,28 @@ if (params.fusion_inspector) { } // Header log info -log.info nfcore_logo +log.info nfcoreHeader() def summary = [:] summary['Pipeline Name'] = 'nf-core/rnafusion/download-singularity-img.nf' summary['Pipeline Version'] = workflow.manifest.version summary['Tool images'] = params.running_tools.size() == 0 ? 'None' : params.running_tools.join(", ") -summary['Max Memory'] = params.max_memory -summary['Max CPUs'] = params.max_cpus -summary['Max Time'] = params.max_time +summary['Max Resources'] = "$params.max_memory memory, $params.max_cpus cpus, $params.max_time time per job" summary['Output dir'] = params.outdir summary['Working dir'] = workflow.workDir -summary['Current home'] = "$HOME" -summary['Current user'] = "$USER" -summary['Current path'] = "$PWD" -summary['Script dir'] = workflow.projectDir +summary['Launch dir'] = workflow.launchDir +summary['Working dir'] = workflow.workDir +summary['Script dir'] = workflow.projectDir +summary['User'] = workflow.userName summary['Config Profile'] = workflow.profile +if(params.config_profile_description) summary['Config Description'] = params.config_profile_description +if(params.config_profile_contact) summary['Config Contact'] = params.config_profile_contact +if(params.config_profile_url) summary['Config URL'] = params.config_profile_url if(workflow.profile == 'awsbatch'){ summary['AWS Region'] = params.awsregion summary['AWS Queue'] = params.awsqueue } -log.info summary.collect { k,v -> "${k.padRight(15)}: $v" }.join("\n") -log.info "=========================================" +log.info summary.collect { k,v -> "${k.padRight(18)}: $v" }.join("\n") +log.info "\033[2m----------------------------------------------------\033[0m" process download_base_image { publishDir "${params.outdir}", mode: 'copy' @@ -215,4 +207,27 @@ process download_fusion_inspector { */ workflow.onComplete { log.info "[nf-core/rnafusion] Pipeline Complete" +} + +def nfcoreHeader(){ + // Log colors ANSI codes + c_reset = params.monochrome_logs ? 
'' : "\033[0m"; + c_dim = params.monochrome_logs ? '' : "\033[2m"; + c_black = params.monochrome_logs ? '' : "\033[0;30m"; + c_green = params.monochrome_logs ? '' : "\033[0;32m"; + c_yellow = params.monochrome_logs ? '' : "\033[0;33m"; + c_blue = params.monochrome_logs ? '' : "\033[0;34m"; + c_purple = params.monochrome_logs ? '' : "\033[0;35m"; + c_cyan = params.monochrome_logs ? '' : "\033[0;36m"; + c_white = params.monochrome_logs ? '' : "\033[0;37m"; + + return """ ${c_dim}----------------------------------------------------${c_reset} + ${c_green},--.${c_black}/${c_green},-.${c_reset} + ${c_blue} ___ __ __ __ ___ ${c_green}/,-._.--~\'${c_reset} + ${c_blue} |\\ | |__ __ / ` / \\ |__) |__ ${c_yellow}} {${c_reset} + ${c_blue} | \\| | \\__, \\__/ | \\ |___ ${c_green}\\`-._,-`-,${c_reset} + ${c_green}`._,._,\'${c_reset} + ${c_purple} nf-core/rnafusion v${workflow.manifest.version}${c_reset} + ${c_dim}----------------------------------------------------${c_reset} + """.stripIndent() } \ No newline at end of file diff --git a/environment.yml b/environment.yml index b7157748..6a8fa42b 100644 --- a/environment.yml +++ b/environment.yml @@ -1,15 +1,17 @@ -name: nf-core-rnafusion-1.0.1 +# You can use this file to create a conda environment for this pipeline: +# conda env create -f environment.yml +name: nf-core-rnafusion-1.0.2 channels: - - bioconda - conda-forge + - bioconda - defaults dependencies: - anaconda::openjdk=8.0.152 # Needed for FastQC - conda build hangs without this - - bioconda::fastqc=0.11.8 - bioconda::star=2.6.1b # update STAR-Fusion and Fusion-Inspector - - bioconda::multiqc=1.7 - conda-forge::r-data.table=1.12.0 - conda-forge::r-gplots=3.0.1.1 - bioconda::bioconductor-edger=3.24.1 - conda-forge::r-markdown=0.9 - bioconda::fusion-report=1.0.0 + - fastqc=0.11.8 + - multiqc=1.7 diff --git a/main.nf b/main.nf index 1f13d6f6..a6a2302a 100644 --- a/main.nf +++ b/main.nf @@ -9,18 +9,10 @@ ---------------------------------------------------------------------------------------- */ -nfcore_logo = """======================================================= - ,--./,-. - ___ __ __ __ ___ /,-._.--~\' - |\\ | |__ __ / ` / \\ |__) |__ } { - | \\| | \\__, \\__/ | \\ |___ \\`-._,-`-, - `._,._,\' - -nf-core/rnafusion v${workflow.manifest.version} -=======================================================""" - def helpMessage() { - nfcore_help = """ + log.info nfcoreHeader() + log.info""" + Usage: The typical command for running the pipeline is as follows: @@ -30,7 +22,7 @@ def helpMessage() { Mandatory arguments: --reads Path to input data (must be surrounded with quotes) -profile Configuration profile to use. Can use multiple (comma separated) - Available: standard, conda, docker, singularity, awsbatch, test + Available: conda, docker, singularity, awsbatch, test and more. Tool flags: --star_fusion Run STAR-Fusion @@ -65,80 +57,46 @@ def helpMessage() { Other Options: --outdir The output directory where the results will be saved --email Set this parameter to your e-mail address to get a summary e-mail with details of the run sent to you when the workflow exits + --maxMultiqcEmailFileSize Theshold size for MultiQC report to be attached in notification email. If file generated by pipeline exceeds the threshold, it will not be attached (Default: 25MB) -name Name for the pipeline run. If not specified, Nextflow will automatically generate a random mnemonic. 
AWSBatch options: --awsqueue The AWSBatch JobQueue that needs to be set when running on AWSBatch --awsregion The AWS Region for your AWS Batch job to run on """.stripIndent() - log.info "${nfcore_logo}${nfcore_help}" } /* * SET UP CONFIGURATION VARIABLES */ +params.running_tools = [] + // Show help emssage if (params.help){ helpMessage() exit 0 } -// Configurable variables -params.name = false -params.fasta = params.genome ? params.genomes[ params.genome ].fasta ?: false : false -params.gtf = params.genome ? params.genomes[ params.genome ].gtf ?: false : false -params.running_tools = [] -params.multiqc_config = "$baseDir/conf/multiqc_config.yaml" -params.email = false -params.plaintext_email = false - -multiqc_config = file(params.multiqc_config) -output_docs = file("$baseDir/docs/output.md") - -// Reference variables required by tools -// These are needed in order to run the pipeline -fasta = false -gtf = false -pizzly_fasta = false -pizzly_gtf = false -star_fusion_ref = false -fusioncatcher_ref = false -fusion_inspector_ref = false -ericscript_ref = false - -// AWSBatch sanity checking -if(workflow.profile == 'awsbatch'){ - if (!params.awsqueue || !params.awsregion) exit 1, "Specify correct --awsqueue and --awsregion parameters on AWSBatch!" - if (!workflow.workDir.startsWith('s3') || !params.outdir.startsWith('s3')) exit 1, "Specify S3 URLs for workDir and outdir parameters on AWSBatch!" +// Check if genome exists in the config file +if (params.genomes && params.genome && !params.genomes.containsKey(params.genome)) { + exit 1, "The provided genome '${params.genome}' is not available in the iGenomes file. Currently the available genomes are ${params.genomes.keySet().join(", ")}" } -// Has the run name been specified by the user? -// this has the bonus effect of catching both -name and --name -custom_runName = params.name -if( !(workflow.runName ==~ /[a-z]+_[a-z]+/) ){ - custom_runName = workflow.runName -} - -// Check workDir/outdir paths to be S3 buckets if running on AWSBatch -// related: https://github.com/nextflow-io/nextflow/issues/813 -if( workflow.profile == 'awsbatch') { - if(!workflow.workDir.startsWith('s3:') || !params.outdir.startsWith('s3:')) exit 1, "Workdir or Outdir not on S3 - specify S3 Buckets for each to run on AWSBatch!" -} +// Configurable reference genomes +params.fasta = params.genome ? params.genomes[ params.genome ].fasta ?: false : false +params.gtf = params.genome ? params.genomes[ params.genome ].gtf ?: false : false -// Validate pipeline variables -// These variable have to be defined in the profile configuration which is referenced in nextflow.config -if (params.fasta) { - fasta = file(params.fasta) - if(!fasta.exists()) exit 1, "Fasta file not found: ${params.fasta}" -} +fasta = Channel + .fromPath(params.fasta) + .ifEmpty { exit 1, "Fasta file not found: ${params.fasta}" } -if (params.gtf) { - gtf = file(params.gtf) - if(!gtf.exists()) exit 1, "GTF file not found: ${params.fasta}" -} +Channel + .fromPath(params.gtf) + .ifEmpty { exit 1, "GTF annotation file not found: ${params.gtf}" } + .into { gtf; gtf_squid } -if (!params.star_index && (!params.fasta && !params.gtf)) { +if (!params.star_index && (!params.fasta && !params..gtf)) { exit 1, "Either specify STAR-INDEX or fasta and gtf file!" } @@ -146,6 +104,7 @@ if (!params.databases) { exit 1, "Database path for fusion-report has to be specified!" 
 }
+star_fusion_ref = false
 if (params.star_fusion) {
 params.running_tools.add("STAR-Fusion")
 if (!params.star_fusion_ref) {
@@ -157,6 +116,7 @@ if (params.star_fusion) {
 }
 }
+fusioncatcher_ref = false
 if (params.fusioncatcher) {
 params.running_tools.add("Fusioncatcher")
 if (!params.fusioncatcher_ref) {
@@ -168,6 +128,7 @@ if (params.fusioncatcher) {
 }
 }
+ericscript_ref = false
 if (params.ericscript) {
 params.running_tools.add("Ericscript")
 if (!params.ericscript_ref) {
@@ -179,6 +140,8 @@ if (params.ericscript) {
 }
 }
+pizzly_fasta = false
+pizzly_gtf = false
 if (params.pizzly) {
 params.running_tools.add("Pizzly")
 if (params.pizzly_fasta) {
@@ -196,11 +159,12 @@ if (params.pizzly) {
 if (params.squid) {
 params.running_tools.add("Squid")
- if (!gtf) {
+ if (!gtf_squid) {
 exit 1, "Missing GTF annotation file for squid!"
 }
 }
+fusion_inspector_ref = false
 if (params.fusion_inspector) {
 params.running_tools.add("FusionInspector")
 if (!params.star_fusion_ref) {
@@ -212,21 +176,58 @@ if (params.fusion_inspector) {
 }
 }
+// Has the run name been specified by the user?
+// this has the bonus effect of catching both -name and --name
+custom_runName = params.name
+if( !(workflow.runName ==~ /[a-z]+_[a-z]+/) ){
+ custom_runName = workflow.runName
+}
+
+if( workflow.profile == 'awsbatch') {
+ // AWSBatch sanity checking
+ if (!params.awsqueue || !params.awsregion) exit 1, "Specify correct --awsqueue and --awsregion parameters on AWSBatch!"
+ // Check outdir paths to be S3 buckets if running on AWSBatch
+ // related: https://github.com/nextflow-io/nextflow/issues/813
+ if (!params.outdir.startsWith('s3:')) exit 1, "Outdir not on S3 - specify S3 Bucket to run on AWSBatch!"
+ // Prevent trace files to be stored on S3 since S3 does not support rolling files.
+ if (workflow.tracedir.startsWith('s3:')) exit 1, "Specify a local tracedir or run without trace! S3 cannot be used for tracefiles."
+}
+
+// Stage config files
+ch_multiqc_config = Channel.fromPath(params.multiqc_config)
+ch_output_docs = Channel.fromPath("$baseDir/docs/output.md")
+
 /*
 * Create a channel for input read files
 */
-Channel
- .fromFilePairs( params.reads, size: params.singleEnd ? 1 : 2 )
- .ifEmpty { exit 1, "Cannot find any reads matching: ${params.reads}\nNB: Path needs to be enclosed in quotes!\nIf this is single-end data, please specify --singleEnd on the command line." }
- .into { read_files_fastqc; read_files_summary; read_files_multiqc; read_files_star_fusion; read_files_fusioncatcher;
+if(params.readPaths){
+ if(params.singleEnd){
+ Channel
+ .from(params.readPaths)
+ .map { row -> [ row[0], [file(row[1][0])]] }
+ .ifEmpty { exit 1, "params.readPaths was empty - no input files supplied" }
+ .into { read_files_fastqc; read_files_summary; read_files_multiqc; read_files_star_fusion; read_files_fusioncatcher;
+ read_files_gfusion; read_files_fusion_inspector; read_files_ericscript; read_files_pizzly; read_files_squid }
+ } else {
+ Channel
+ .from(params.readPaths)
+ .map { row -> [ row[0], [file(row[1][0]), file(row[1][1])]] }
+ .ifEmpty { exit 1, "params.readPaths was empty - no input files supplied" }
+ .into { read_files_fastqc; read_files_summary; read_files_multiqc; read_files_star_fusion; read_files_fusioncatcher;
+ read_files_gfusion; read_files_fusion_inspector; read_files_ericscript; read_files_pizzly; read_files_squid }
+ }
+} else {
+ Channel
+ .fromFilePairs( params.reads, size: params.singleEnd ? 1 : 2 )
+ .ifEmpty { exit 1, "Cannot find any reads matching: ${params.reads}\nNB: Path needs to be enclosed in quotes!\nIf this is single-end data, please specify --singleEnd on the command line." }
+ .into { read_files_fastqc; read_files_summary; read_files_multiqc; read_files_star_fusion; read_files_fusioncatcher;
 read_files_gfusion; read_files_fusion_inspector; read_files_ericscript; read_files_pizzly; read_files_squid }
-
+}
 // Header log info
-log.info nfcore_logo
+log.info nfcoreHeader()
 def summary = [:]
-summary['Pipeline Name'] = 'nf-core/rnafusion'
-summary['Pipeline Version'] = workflow.manifest.version
+if(workflow.revision) summary['Pipeline Release'] = workflow.revision
 summary['Run Name'] = custom_runName ?: workflow.runName
 summary['Reads'] = params.reads
 summary['Fasta Ref'] = params.fasta
@@ -234,31 +235,32 @@ summary['GTF Ref'] = params.gtf
 summary['STAR Index'] = params.star_index ? params.star_index : 'Not specified, building'
 summary['Tools'] = params.running_tools.size() == 0 ? 'None' : params.running_tools.join(", ")
 summary['Data Type'] = params.singleEnd ? 'Single-End' : 'Paired-End'
-summary['Max Memory'] = params.max_memory
-summary['Max CPUs'] = params.max_cpus
-summary['Max Time'] = params.max_time
+summary['Max Resources'] = "$params.max_memory memory, $params.max_cpus cpus, $params.max_time time per job"
+if(workflow.containerEngine) summary['Container'] = "$workflow.containerEngine - $workflow.container"
 summary['Output dir'] = params.outdir
+summary['Launch dir'] = workflow.launchDir
 summary['Working dir'] = workflow.workDir
-summary['Container Engine'] = workflow.containerEngine
-if(workflow.containerEngine) summary['Container'] = workflow.container
-summary['Current home'] = "$HOME"
-summary['Current user'] = "$USER"
-summary['Current path'] = "$PWD"
-summary['Working dir'] = workflow.workDir
-summary['Output dir'] = params.outdir
-summary['Script dir'] = workflow.projectDir
-summary['Config Profile'] = workflow.profile
+summary['Script dir'] = workflow.projectDir
+summary['User'] = workflow.userName
 if(workflow.profile == 'awsbatch'){
- summary['AWS Region'] = params.awsregion
- summary['AWS Queue'] = params.awsqueue
+ summary['AWS Region'] = params.awsregion
+ summary['AWS Queue'] = params.awsqueue
+}
+summary['Config Profile'] = workflow.profile
+if(params.config_profile_description) summary['Config Description'] = params.config_profile_description
+if(params.config_profile_contact) summary['Config Contact'] = params.config_profile_contact
+if(params.config_profile_url) summary['Config URL'] = params.config_profile_url
+if(params.email) {
+ summary['E-mail Address'] = params.email
+ summary['MultiQC maxsize'] = params.maxMultiqcEmailFileSize
 }
-if(params.email) summary['E-mail Address'] = params.email
-log.info summary.collect { k,v -> "${k.padRight(15)}: $v" }.join("\n")
-log.info "========================================="
+log.info summary.collect { k,v -> "${k.padRight(18)}: $v" }.join("\n")
+log.info "\033[2m----------------------------------------------------\033[0m"
+// Check the hostnames against configured profiles
+checkHostname()
 def create_workflow_summary(summary) {
- def yaml_file = workDir.resolve('workflow_summary_mqc.yaml')
 yaml_file.text = """
 id: 'nf-core-rnafusion-summary'
@@ -335,7 +337,7 @@ process star_fusion {
 file reference from star_fusion_ref
 output:
- file '*fusion_predictions.tsv' into star_fusion_fusions
+ file '*fusion_predictions.tsv' optional true into star_fusion_fusions
 file '*.{tsv,txt}' into star_fusion_output
 script:
@@ -385,7 +387,7 @@ process fusioncatcher {
 file data_dir from fusioncatcher_ref
 output:
- file 'final-list_candidate-fusion-genes.txt' into fusioncatcher_fusions
+ file 'final-list_candidate-fusion-genes.txt' optional true into fusioncatcher_fusions
 file '*.{txt,zip,log}' into fusioncatcher_output
 script:
@@ -416,8 +418,8 @@ process ericscript {
 file reference from ericscript_ref
 output:
- file './tmp/fusions.results.filtered.tsv' into ericscript_fusions
- file './tmp/fusions.results.total.tsv' into ericscript_output
+ file './tmp/fusions.results.filtered.tsv' optional true into ericscript_fusions
+ file './tmp/fusions.results.total.tsv' optional true into ericscript_output
 script:
 """
@@ -447,7 +449,7 @@ process pizzly {
 file gtf from pizzly_gtf
 output:
- file 'pizzly_fusions.txt' into pizzly_fusions
+ file 'pizzly_fusions.txt' optional true into pizzly_fusions
 file '*.{json,txt}' into pizzly_output
 script:
@@ -478,10 +480,10 @@ process squid {
 input:
 set val(name), file(reads) from read_files_squid
 file star_index_squid
- file gtf
+ file gtf from gtf_squid
 output:
- file '*_annotated.txt' into squid_fusions
+ file '*_annotated.txt' optional true into squid_fusions
 file '*.txt' into squid_output
 script:
@@ -528,11 +530,11 @@ process summary {
 script:
 def extra_params = params.fusion_report_opt ? "${params.fusion_report_opt}" : ''
- def tools = params.fusioncatcher ? "--fusioncatcher ${fusioncatcher} " : ''
- tools += params.star_fusion ? "--starfusion ${starfusion} " : ''
- tools += params.ericscript ? "--ericscript ${ericscript} " : ''
- tools += params.pizzly ? "--pizzly ${pizzly} " : ''
- tools += params.squid ? "--squid ${squid} " : ''
+ def tools = !fusioncatcher.empty() ? "--fusioncatcher ${fusioncatcher} " : ''
+ tools += !starfusion.empty() ? "--starfusion ${starfusion} " : ''
+ tools += !ericscript.empty() ? "--ericscript ${ericscript} " : ''
+ tools += !pizzly.empty() ? "--pizzly ${pizzly} " : ''
+ tools += !squid.empty() ? "--squid ${squid} " : ''
 """
 fusion_report run ${name} . ${params.databases} \\
 ${tools} ${extra_params}
@@ -576,19 +578,25 @@ process fusion_inspector {
 }
 /*************************************************************
- * Building report
+ * Quality check & software versions
 ************************************************************/
 /*
 * Parse software version numbers
 */
 process get_software_versions {
+ publishDir "${params.outdir}/pipeline_info", mode: 'copy',
+ saveAs: {filename ->
+ if (filename.indexOf(".csv") > 0) filename
+ else null
+ }
 when:
 !params.debug
 output:
 file 'software_versions_mqc.yaml' into software_versions_yaml
+ file "software_versions.csv"
 script:
 """
@@ -603,7 +611,7 @@ process get_software_versions {
 cat $baseDir/tools/pizzly/environment.yml > v_pizzly.txt
 cat $baseDir/tools/squid/environment.yml > v_squid.txt
 cat $baseDir/environment.yml > v_fusion_report.txt
- scrape_software_versions.py > software_versions_mqc.yaml
+ scrape_software_versions.py &> software_versions_mqc.yaml
 """
 }
@@ -641,16 +649,16 @@ process multiqc {
 !params.debug
 input:
- set val(name), file(reads) from read_files_multiqc
- file multiqc_config
- file ('fastqc/*') from fastqc_results.collect()
- file ('software_versions/*') from software_versions_yaml
+ file multiqc_config from ch_multiqc_config
+ file ('fastqc/*') from fastqc_results.collect().ifEmpty([])
+ file ('software_versions/*') from software_versions_yaml.collect()
 file workflow_summary from create_workflow_summary(summary)
 file fusions_mq from summary_fusions_mq.ifEmpty('')
 output:
 file "*multiqc_report.html" into multiqc_report
 file "*_data"
+ file "multiqc_plots"
 script:
 rtitle = custom_runName ? "--title \"$custom_runName\"" : ''
@@ -664,13 +672,13 @@ process multiqc {
 * Output Description HTML
 */
 process output_documentation {
- publishDir "${params.outdir}/Documentation", mode: 'copy'
+ publishDir "${params.outdir}/pipeline_info", mode: 'copy'
 when:
 !params.debug
 input:
- file output_docs
+ file output_docs from ch_output_docs
 output:
 file "results_description.html"
@@ -710,10 +718,25 @@ workflow.onComplete {
 if(workflow.repository) email_fields['summary']['Pipeline repository Git URL'] = workflow.repository
 if(workflow.commitId) email_fields['summary']['Pipeline repository Git Commit'] = workflow.commitId
 if(workflow.revision) email_fields['summary']['Pipeline Git branch/tag'] = workflow.revision
+ if(workflow.container) email_fields['summary']['Docker image'] = workflow.container
 email_fields['summary']['Nextflow Version'] = workflow.nextflow.version
 email_fields['summary']['Nextflow Build'] = workflow.nextflow.build
 email_fields['summary']['Nextflow Compile Timestamp'] = workflow.nextflow.timestamp
+
+ // On success try attach the multiqc report
+ def mqc_report = null
+ try {
+ if (workflow.success) {
+ mqc_report = multiqc_report.getVal()
+ if (mqc_report.getClass() == ArrayList){
+ log.warn "[nf-core/rnafusion] Found multiple reports from process 'multiqc', will use only one"
+ mqc_report = mqc_report[0]
+ }
+ }
+ } catch (all) {
+ log.warn "[nf-core/rnafusion] Could not attach MultiQC report to summary email"
+ }
+
 // Render the TXT template
 def engine = new groovy.text.GStringTemplateEngine()
 def tf = new File("$baseDir/assets/email_template.txt")
@@ -726,7 +749,7 @@ workflow.onComplete {
 def email_html = html_template.toString()
 // Render the sendmail template
- def smail_fields = [ email: params.email, subject: subject, email_txt: email_txt, email_html: email_html, baseDir: "$baseDir" ]
+ def smail_fields = [ email: params.email, subject: subject, email_txt: email_txt, email_html: email_html, baseDir: "$baseDir", mqcFile: mqc_report, mqcMaxSize: params.maxMultiqcEmailFileSize.toBytes() ]
 def sf = new File("$baseDir/assets/sendmail_template.txt")
 def sendmail_template = engine.createTemplate(sf).make(smail_fields)
 def sendmail_html = sendmail_template.toString()
@@ -746,7 +769,7 @@ workflow.onComplete {
 }
 // Write summary e-mail HTML to a file
- def output_d = new File( "${params.outdir}/Documentation/" )
+ def output_d = new File( "${params.outdir}/pipeline_info/" )
 if( !output_d.exists() ) {
 output_d.mkdirs()
 }
@@ -755,6 +778,66 @@ workflow.onComplete {
 def output_tf = new File( output_d, "pipeline_report.txt" )
 output_tf.withWriter { w -> w << email_txt }
- log.info "[nf-core/rnafusion] Pipeline Complete"
+ c_reset = params.monochrome_logs ? '' : "\033[0m";
+ c_purple = params.monochrome_logs ? '' : "\033[0;35m";
+ c_green = params.monochrome_logs ? '' : "\033[0;32m";
+ c_red = params.monochrome_logs ? '' : "\033[0;31m";
+
+ if (workflow.stats.ignoredCountFmt > 0 && workflow.success) {
+ log.info "${c_purple}Warning, pipeline completed, but with errored process(es) ${c_reset}"
+ log.info "${c_red}Number of ignored errored process(es) : ${workflow.stats.ignoredCountFmt} ${c_reset}"
+ log.info "${c_green}Number of successfully ran process(es) : ${workflow.stats.succeedCountFmt} ${c_reset}"
+ }
-}
\ No newline at end of file
+ if(workflow.success){
+ log.info "${c_purple}[nf-core/rnafusion]${c_green} Pipeline completed successfully${c_reset}"
+ } else {
+ checkHostname()
+ log.info "${c_purple}[nf-core/rnafusion]${c_red} Pipeline completed with errors${c_reset}"
+ }
+
+}
+
+def nfcoreHeader(){
+ // Log colors ANSI codes
+ c_reset = params.monochrome_logs ? '' : "\033[0m";
+ c_dim = params.monochrome_logs ? '' : "\033[2m";
+ c_black = params.monochrome_logs ? '' : "\033[0;30m";
+ c_green = params.monochrome_logs ? '' : "\033[0;32m";
+ c_yellow = params.monochrome_logs ? '' : "\033[0;33m";
+ c_blue = params.monochrome_logs ? '' : "\033[0;34m";
+ c_purple = params.monochrome_logs ? '' : "\033[0;35m";
+ c_cyan = params.monochrome_logs ? '' : "\033[0;36m";
+ c_white = params.monochrome_logs ? '' : "\033[0;37m";
+
+ return """ ${c_dim}----------------------------------------------------${c_reset}
+ ${c_green},--.${c_black}/${c_green},-.${c_reset}
+ ${c_blue} ___ __ __ __ ___ ${c_green}/,-._.--~\'${c_reset}
+ ${c_blue} |\\ | |__ __ / ` / \\ |__) |__ ${c_yellow}} {${c_reset}
+ ${c_blue} | \\| | \\__, \\__/ | \\ |___ ${c_green}\\`-._,-`-,${c_reset}
+ ${c_green}`._,._,\'${c_reset}
+ ${c_purple} nf-core/rnafusion v${workflow.manifest.version}${c_reset}
+ ${c_dim}----------------------------------------------------${c_reset}
+ """.stripIndent()
+}
+
+def checkHostname(){
+ def c_reset = params.monochrome_logs ? '' : "\033[0m"
+ def c_white = params.monochrome_logs ? '' : "\033[0;37m"
+ def c_red = params.monochrome_logs ? '' : "\033[1;91m"
+ def c_yellow_bold = params.monochrome_logs ? '' : "\033[1;93m"
+ if(params.hostnames){
+ def hostname = "hostname".execute().text.trim()
+ params.hostnames.each { prof, hnames ->
+ hnames.each { hname ->
+ if(hostname.contains(hname) && !workflow.profile.contains(prof)){
+ log.error "====================================================\n" +
+ " ${c_red}WARNING!${c_reset} You are running with `-profile $workflow.profile`\n" +
+ " but your machine hostname is ${c_white}'$hostname'${c_reset}\n" +
+ " ${c_yellow_bold}It's highly recommended that you use `-profile $prof${c_reset}`\n" +
+ "============================================================"
+ }
+ }
+ }
+ }
+}
diff --git a/nextflow.config b/nextflow.config
index b0fc73b0..a5a23c62 100644
--- a/nextflow.config
+++ b/nextflow.config
@@ -3,9 +3,6 @@
 * nf-core/rnafusion Nextflow config file
 * -------------------------------------------------
 * Default config options for all environments.
- * Cluster-specific config options should be saved
- * in the conf folder and imported under a profile
- * name here.
 */
 // Global default params, used in configs
@@ -65,14 +62,26 @@ params {
 outdir = './results'
 tracedir = "${params.outdir}/pipeline_info"
- // Options: Default
+ // Boilerplate options
+ name = false
+ multiqc_config = "$baseDir/assets/multiqc_config.yaml"
+ email = false
+ maxMultiqcEmailFileSize = 25.MB
+ plaintext_email = false
+ monochrome_logs = false
 help = false
 genome = false
 custom_config_version = 'master'
 custom_config_base = "https://raw.githubusercontent.com/nf-core/configs/${params.custom_config_version}"
+ hostnames = false
+ config_profile_description = false
+ config_profile_contact = false
+ config_profile_url = false
 }
-process.container = 'nfcore/rnafusion:1.0.1' // Container slug. Stable releases should specify release tag!
+// Container slug. Stable releases should specify release tag!
+// Developmental code should specify :dev
+process.container = 'nfcore/rnafusion:1.0.2'
 // Load base.config by default for all pipelines
 includeConfig 'conf/base.config'
@@ -87,6 +96,7 @@ try {
 profiles {
 awsbatch { includeConfig 'conf/awsbatch.config' }
 conda { process.conda = "$baseDir/environment.yml" }
+ debug { process.beforeScript = 'echo $HOSTNAME' }
 docker { docker.enabled = true }
 singularity { singularity.enabled = true }
 test { includeConfig 'conf/test.config' }
@@ -102,28 +112,29 @@ process.shell = ['/bin/bash', '-euo', 'pipefail']
 timeline {
 enabled = true
- file = "${params.tracedir}/pipeline_info/nf-core/rnafusion_timeline.html"
+ file = "${params.tracedir}/execution_timeline.html"
 }
 report {
 enabled = true
- file = "${params.tracedir}/pipeline_info/nf-core/rnafusion_report.html"
+ file = "${params.tracedir}/execution_report.html"
 }
 trace {
 enabled = true
- file = "${params.tracedir}/pipeline_info/nf-core/rnafusion_trace.txt"
+ file = "${params.tracedir}/execution_trace.txt"
 }
 dag {
 enabled = true
- file = "${params.tracedir}/pipeline_info/nf-core/rnafusion_dag.svg"
+ file = "${params.tracedir}/pipeline_dag.svg"
 }
 manifest {
 name = 'nf-core/rnafusion'
- description = 'Nextflow rnafusion analysis pipeline, part of the nf-core community.'
+ author = 'Martin Proks'
 homePage = 'https://github.com/nf-core/rnafusion'
- version = '1.0.1'
+ description = 'Nextflow rnafusion analysis pipeline, part of the nf-core community.'
 mainScript = 'main.nf'
 nextflowVersion = '>=0.32.0'
+ version = '1.0.2'
 }
 // Function to ensure that resource requirements don't go beyond
@@ -157,4 +168,4 @@ def check_max(obj, type) {
 return obj
 }
 }
-}
+}
\ No newline at end of file
diff --git a/tools/star-fusion/Dockerfile b/tools/star-fusion/Dockerfile
index 3dc4821f..ddf875fd 100644
--- a/tools/star-fusion/Dockerfile
+++ b/tools/star-fusion/Dockerfile
@@ -6,6 +6,7 @@ LABEL authors="rickard.hammaren@scilifelab.se, phil.ewels@scilifelab.se, martin.
 COPY environment.yml /
 RUN conda env create -f /environment.yml && conda clean -a
 ENV PATH /opt/conda/envs/star-fusion/bin:$PATH
+ENV TRINITY_HOME /opt/conda/opt/trinity-2.6.6
 RUN apt-get install make && perl -MCPAN -e 'install Carp::Assert'
 RUN ln -s /lib/x86_64-linux-gnu/libcrypt.so.1 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
\ No newline at end of file
diff --git a/tools/star-fusion/environment.yml b/tools/star-fusion/environment.yml
index 2aa1a3d6..8e28080b 100644
--- a/tools/star-fusion/environment.yml
+++ b/tools/star-fusion/environment.yml
@@ -4,3 +4,4 @@
 dependencies:
 - bioconda::star=2.6.1b
 - star-fusion=1.5.0
+ - trinity=2.6.6