Hi,
I have access to a cluster, Leonardo at CINECA, and I have compiled AutoDock-GPU from source using
OCLGPU as the DEVICE, following issue #239.
At login I receive the following information:
Atos Bull Sequana XH21355 "Da Vinci" Blade -
Red Hat Enterprise Linux 8.6 (Ootpa)
3456 compute nodes with:
- 32 Ice Lake cores at 2.60 GHz
- 4 x NVIDIA Ampere A100 GPUs, 64 GB
- 512 GB RAM
For my needs I'll use a few nodes.
Can you suggest some general guidelines about which command-line options can be used
to get the most out of this kind of GPU or, in other words, to avoid underusing
the computing power of these cards?
Many thanks.
Saverio
One more thing: if you have access to NVMe scratch space, it's a good idea to run the dockings node-local on it. File I/O with four A100s will be a bottleneck, particularly on a networked file system.
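A minimal sketch of what that could look like in a Slurm job script, assuming `$TMPDIR` points at node-local NVMe scratch (check Leonardo's documentation for the actual path) and using the `--filelist` and `--devnum` options from the AutoDock-GPU README (verify flag names with `--help` on your build); the binary name, directory layout, and `batch_gpuN.txt` file lists are placeholders:

```bash
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --gres=gpu:4
#SBATCH --cpus-per-task=32

# Hypothetical path to the compiled binary; use the one your build produced.
ADGPU="$HOME/AutoDock-GPU/bin/autodock_gpu_128wi"

# Assumption: $TMPDIR is node-local NVMe scratch on this system.
SCRATCH="$TMPDIR/docking_$SLURM_JOB_ID"
mkdir -p "$SCRATCH"

# Stage inputs onto local scratch so the four GPUs are not starved by
# network file-system I/O. batch_gpu{1..4}.txt are pre-split file lists
# (grid .fld plus ligand .pdbqt paths; see the README for the exact format).
cp -r "$SLURM_SUBMIT_DIR"/maps "$SLURM_SUBMIT_DIR"/ligands \
      "$SLURM_SUBMIT_DIR"/batch_gpu*.txt "$SCRATCH"/
cd "$SCRATCH"

# One AutoDock-GPU instance per A100; --devnum counts from 1.
for gpu in 1 2 3 4; do
    "$ADGPU" --filelist "batch_gpu${gpu}.txt" --devnum "$gpu" &
done
wait

# Copy result files back to the shared file system once, at the end
# (.dlg and/or .xml, depending on your version's output settings).
mkdir -p "$SLURM_SUBMIT_DIR"/results
cp "$SCRATCH"/*.dlg "$SCRATCH"/*.xml "$SLURM_SUBMIT_DIR"/results/ 2>/dev/null || true
```

Batch mode (`--filelist`) should also help utilization by itself: it avoids per-ligand process startup and, as far as I know, lets consecutive dockings reuse the loaded grid maps, which matters when a single docking is too small to saturate an A100.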