Parabricks bulk update: Upgrading modules to Parabricks version 4.6 #9190
Conversation
.github/workflows/nf-test-gpu.yml (outdated)

  nf-test-gpu:
-    runs-on: "runs-on=${{ github.run_id }}/family=g4dn.xlarge/image=ubuntu24-gpu-x64"
+    runs-on: "runs-on=${{ github.run_id }}/family=g4dn.8xlarge/image=ubuntu24-gpu-x64"
Except for this, where we might need @mashehu for approval, I think it looks good!
That's quite a lot larger. Is there any way to make the test data smaller, or what is causing this bloat?
The current solution is to change it back to g4dn.xlarge before pushing to main (see the note in the README.md). Only starfusion and rnafq2bam have this higher requirement, and it comes down to system memory: even with the smallest test data possible, the memory on the g4dn.xlarge is not large enough.
The minimum system requirement for Parabricks is 100 GB of memory for 1 GPU, so this instance matches that, although for most of the tests it looks like we can get away with less.
Is this satisfactory for now?
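Concretely, the temporary revert before merging would just restore the original instance family on the same runs-on line shown in the diff above:

```yaml
# Reverted before merge so the default GPU runner stays small:
runs-on: "runs-on=${{ github.run_id }}/family=g4dn.xlarge/image=ubuntu24-gpu-x64"
```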
> The current solution is to change it back to g4dn.xlarge before pushing to main

Do you mean you will switch this value back in this PR before merging, or what do you mean by "pushing to main"?
Yes, for now I will change it back before merging to unblock this PR until we can think of a more permanent solution. I will do some testing on smaller instances to see if I can size it down to a 4xlarge or 2xlarge, but I know the xlarge won't work.
Sounds good
I finished my testing and everything was able to run on a g4dn.2xlarge, meaning that Parabricks uses roughly 15-30 GB of system memory for the rnafq2bam module. I'd like to continue the discussion (maybe as a GitHub issue?) about whether there's a better solution for testing this than manually changing the instance family in nf-test-gpu.yml.
@famosab is there anything else you need to approve this PR or the starfusion one, since they are both blocked by this issue?
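One hypothetical direction for that follow-up discussion (not something in this PR): expose the instance family as a manual workflow input with a small default, so high-memory runs don't require hand-editing the file. A rough sketch, with the input name and default assumed:

```yaml
# Hypothetical sketch only: none of this exists in nf-test-gpu.yml today.
on:
  workflow_dispatch:
    inputs:
      gpu_instance_family:
        description: "EC2 instance family for the GPU runner"
        default: "g4dn.2xlarge"   # enough for rnafq2bam per the testing above

jobs:
  nf-test-gpu:
    # Fall back to the small default when the workflow is triggered without inputs.
    runs-on: "runs-on=${{ github.run_id }}/family=${{ inputs.gpu_instance_family || 'g4dn.2xlarge' }}/image=ubuntu24-gpu-x64"
```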
Co-authored-by: Matthias Hörtenhuber <[email protected]>
PR checklist

- Closes #XXX
- Emit the versions.yml file
- Add a resource label
- nf-core modules test <MODULE> --profile docker
- nf-core modules test <MODULE> --profile singularity
- nf-core modules test <MODULE> --profile conda
- nf-core subworkflows test <SUBWORKFLOW> --profile docker
- nf-core subworkflows test <SUBWORKFLOW> --profile singularity
- nf-core subworkflows test <SUBWORKFLOW> --profile conda