Running multiple instances of a Singularity AlphaFold fails with /etc/ld.so.cache error #258
Comments
I can confirm that I can run the …
How do you run the singularity file?
I run it on an SGE-based cluster; the current version of the alphafold.wrapper script I use with qsub is here: https://pastebin.com/QGNFCnTN. About a week ago AlphaFold v2.1.1 was compiled from the Docker image at my university (before that I couldn't figure out how to get it onto Singularity; I had never even heard of Singularity before this). qsub_cmd: I am now experimenting with increasing the GPU allocation from smp1 to smp2. I am doing something wrong, because many jobs fail: either memory issues, something with the HT step, or the newest:
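For readers who cannot reach the pastebin, here is a minimal sketch of what an SGE wrapper along these lines might look like. The queue directives, resource names, database location, and the path to run_alphafold.py inside the image are my assumptions, not the script behind the link:

```bash
#!/bin/bash
# Hypothetical SGE wrapper for AlphaFold inside a Singularity image.
# Resource names (gpu, smp) and all paths are site-specific guesses.
#$ -N alphafold
#$ -cwd
#$ -pe smp 2
#$ -l gpu=1

FASTA=$1                              # target sequence file
OUTDIR=$2                             # where predictions land
DATA=/path/to/alphafold_databases     # assumed database location

singularity exec --nv \
  --bind "$DATA":/data \
  alphafold_v2.1.1.sif \
  python /app/alphafold/run_alphafold.py \
    --fasta_paths="$FASTA" \
    --output_dir="$OUTDIR" \
    --data_dir=/data \
    --model_preset=multimer \
    --max_template_date=2022-01-01
    # the individual --*_database_path flags are omitted for brevity
```

Submission would then be something like `qsub alphafold.wrapper target.fasta out/`.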
You might want to take a look at these two links: https://sbgrid.org/wiki/examples/alphafold2
Our IT guy helped set up AlphaFold in a Python virtual env on a Slurm-based cluster. He told me he referred to these links.
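For comparison, a hedged sketch of that virtual-env route on Slurm; everything here (venv path, checkout location, resource numbers) is illustrative rather than taken from the sbgrid page:

```bash
#!/bin/bash
#SBATCH --job-name=alphafold
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=8
#SBATCH --mem=64G

# Hypothetical Slurm job running AlphaFold from a Python virtual env
# instead of a container; both paths below are placeholders.
source /path/to/alphafold-venv/bin/activate

python /path/to/alphafold/run_alphafold.py \
  --fasta_paths="$1" \
  --output_dir="$2" \
  --model_preset=monomer
  # --data_dir and the --*_database_path flags are omitted for brevity
```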
You can try my docker image from …
I am trying to run multiple predictions with the multimer model simultaneously, using the same Singularity .sif build, and I am wondering whether that is why I am running into the /etc/ld.so.cache error from the title:
Perhaps these warnings are related:
I have checked the input and it doesn't seem like anything is wrong with the fasta file.
Singularity built from:

```
singularity build alphafold_v2.1.1.sif docker://uvarc/alphafold:2.1.1
```
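To make the scenario concrete, here is roughly how several predictions might be launched at once from that one .sif. The per-run /tmp bind and --containall are my own isolation guesses, not a confirmed workaround for the /etc/ld.so.cache failure, and the run_alphafold.py path inside the uvarc image is assumed:

```bash
#!/bin/bash
# Hypothetical launcher: one container per target, all from the same .sif.
# Each run gets its own /tmp so the containers share no writable host state.
SIF=alphafold_v2.1.1.sif

for fasta in targets/*.fasta; do
  name=$(basename "$fasta" .fasta)
  mkdir -p "out/$name" "tmp/$name"
  singularity exec --nv --containall \
    --bind "$PWD":/work \
    --bind "$PWD/tmp/$name":/tmp \
    "$SIF" \
    python /app/alphafold/run_alphafold.py \
      --fasta_paths="/work/$fasta" \
      --output_dir="/work/out/$name" \
      --model_preset=multimer &        # run in the background
done
wait   # block until every prediction has finished
# (database-path flags omitted here, as in the wrapper sketch above)
```

If the error only appears when several of these containers start at the same time, that would support concurrency as the trigger.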
Thanks,
Adrian