Guideline for users
To download the CUDA toolkit (which also includes OpenCL) and get installation instructions, please visit https://developer.nvidia.com/cuda-toolkit.
The path to the CUDA libraries varies widely across systems. The default CUDA toolkit installer package places the most recent toolkit into `/usr/local/cuda`. In this case, the following environment variables need to be set to compile AutoDock-GPU:
```shell
export GPU_INCLUDE_PATH=/usr/local/cuda/include
export GPU_LIBRARY_PATH=/usr/local/cuda/lib64
```
This is not always (or, arguably, not even usually) the case. The installation folder can be chosen at installation time, and different vendors prefer different default locations. On compute clusters, a good way to find where CUDA is installed is to query the module system (i.e. `module show cuda`) or to ask the system administrator. For example, as of November 2021, installing CUDA with aptitude on Ubuntu 20.04 places `cuda.h` in `/usr/include` and `libcuda.so` in `/usr/lib/x86_64-linux-gnu`. If these are the paths where CUDA is installed on your system, setting the following environment variables will tell the compiler where to find them:
```shell
export GPU_INCLUDE_PATH=/usr/include
export GPU_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu
```
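If you are unsure where CUDA lives on a given machine, a short probe over common install prefixes can set the variables for you. This is only a sketch: the candidate directories below are assumptions and may need extending for your distribution.

```shell
# Probe a few common CUDA install prefixes; fall back to /usr/local/cuda.
# The candidate list below is an assumption -- extend it for your system.
CUDA_HOME=""
for d in /usr/local/cuda /opt/cuda /usr/lib/cuda; do
    if [ -f "$d/include/cuda.h" ]; then
        CUDA_HOME="$d"
        break
    fi
done
export GPU_INCLUDE_PATH="${CUDA_HOME:-/usr/local/cuda}/include"
export GPU_LIBRARY_PATH="${CUDA_HOME:-/usr/local/cuda}/lib64"
echo "GPU_INCLUDE_PATH=$GPU_INCLUDE_PATH"
```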
Once CUDA is installed on a given system, both `make DEVICE=GPU` and `make DEVICE=CUDA` will compile the CUDA version of AutoDock-GPU. To compile the OpenCL version on the same system, use `make DEVICE=OCLGPU`.
To install Intel's OpenCL SDK and get installation instructions, please go to: https://www.intel.com/content/www/us/en/developer/tools/opencl-sdk/choose-download.html
AMD includes its GPU OpenCL library in its drivers, so only the OpenCL headers (e.g. from the Intel SDK above) are needed in addition. On Debian-based systems (such as Ubuntu), `sudo apt-get install opencl-headers` can also be used.
As in the CUDA case above, the environment variables `GPU_INCLUDE_PATH` and `GPU_LIBRARY_PATH` have to point to the locations where `CL/cl.h` and `libOpenCL.so`, respectively, can be found.
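A similar probe works for OpenCL. The header and library locations below are typical Debian/Ubuntu paths and are assumptions to adapt for your distribution.

```shell
# Look for the OpenCL header and library in typical Debian/Ubuntu locations.
# These candidate paths are assumptions -- adjust them for your system.
for inc in /usr/include /usr/local/include; do
    if [ -f "$inc/CL/cl.h" ]; then
        export GPU_INCLUDE_PATH="$inc"
        break
    fi
done
for lib in /usr/lib/x86_64-linux-gnu /usr/lib64 /usr/lib; do
    if [ -f "$lib/libOpenCL.so" ]; then
        export GPU_LIBRARY_PATH="$lib"
        break
    fi
done
echo "headers: ${GPU_INCLUDE_PATH:-not found}"
echo "library: ${GPU_LIBRARY_PATH:-not found}"
```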
Finally, although OpenCL support is officially deprecated on macOS, it still exists as of November 2021 and should work out of the box for AutoDock-GPU.
The current version of AutoDock-GPU compiles two binaries: a CPU-only tool that can be used for contact analysis (`adgpu_analysis`) and the accelerated docking tool. For accelerated docking, make sure that CUDA and/or OpenCL drivers have been installed for the target accelerator platform on your system.
The following configurations have worked smoothly. Other environments or configurations likely work as well, but are untested.
| Operating system | CPU | GPU |
|---|---|---|
| CentOS 6.7 & 6.8 / Ubuntu 14.04 & 16.04 | Intel SDK for OpenCL 2017 | AMD APP SDK v3.0 / CUDA 9, 10, and 11 |
| macOS Catalina 10.15.1 | Apple / Intel | Apple / Intel Iris, Radeon Vega 64, Radeon VII |
The corresponding environment variables must be defined:

- CPU accelerator: `$(CPU_INCLUDE_PATH)` and `$(CPU_LIBRARY_PATH)`
- GPU accelerator: `$(GPU_INCLUDE_PATH)` and `$(GPU_LIBRARY_PATH)`
- Both platforms: `$(LD_LIBRARY_PATH)`
Explanation

- `$(CPU_INCLUDE_PATH)` / `$(GPU_INCLUDE_PATH)`: paths containing the CUDA or OpenCL header files, i.e., `cuda.h`, `CL/cl.h`, `CL/cl.hpp`, and `opencl.h`.
- `$(CPU_LIBRARY_PATH)` / `$(GPU_LIBRARY_PATH)`: paths containing the CUDA or OpenCL shared libraries, i.e., `libcudart.so` and `libOpenCL.so`.
The following environment variables are usually set by the corresponding driver installer or module system: `$INTELOCLSDKROOT`, `$AMDAPPSDKROOT`, and `$CUDAROOT` (or `$CUDAPATH`).
If they are defined, they can be used to set the required include and library paths, e.g.:
```shell
% echo $INTELOCLSDKROOT
/opt/intel/opencl-1.2-sdk-6.0.0.1049
% export CPU_INCLUDE_PATH=$INTELOCLSDKROOT/include
% export CPU_LIBRARY_PATH=$INTELOCLSDKROOT/lib/x64

% echo $AMDAPPSDKROOT
/opt/AMDAPPSDK-3.0
% export GPU_INCLUDE_PATH=$AMDAPPSDKROOT/include
% export GPU_LIBRARY_PATH=$AMDAPPSDKROOT/lib/x64_86

% echo $CUDAROOT
/usr/local/cuda-10.0
% export GPU_INCLUDE_PATH=$CUDAROOT/include
% export GPU_LIBRARY_PATH=$CUDAROOT/lib64
```
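If none of these variables are set but `nvcc` is on your `PATH`, the toolkit root can usually be derived from the compiler's location. This is a sketch that assumes the standard `<root>/bin/nvcc` layout.

```shell
# Derive the CUDA root from nvcc's location (assumes the usual <root>/bin/nvcc layout).
if command -v nvcc >/dev/null 2>&1; then
    CUDAROOT="$(dirname "$(dirname "$(command -v nvcc)")")"
    export GPU_INCLUDE_PATH="$CUDAROOT/include"
    export GPU_LIBRARY_PATH="$CUDAROOT/lib64"
    echo "CUDA root: $CUDAROOT"
else
    echo "nvcc not on PATH"
fi
```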
The basic compilation requires specifying the target accelerator with `DEVICE`, while `NUMWI` is optional (default: 64):
```shell
make DEVICE=<TYPE> NUMWI=<NWI>
```
| Parameters | Description | Values |
|---|---|---|
| `<TYPE>` | Accelerator chosen | `CPU`, `GPU`, `CUDA`, `OCLGPU` |
| `<NWI>` | Work-group/thread block size | `1`, `2`, `4`, `8`, `16`, `32`, `64`, `128`, `256` |
When `DEVICE=GPU` is chosen, the Makefile automatically tests whether it can compile CUDA successfully and, if so, uses it. To override this, use `DEVICE=CUDA` or `DEVICE=OCLGPU`. The CPU target is only supported using OpenCL. Furthermore, an OpenMP-enabled overlapped pipeline (for setup and processing) can be compiled with `OVERLAP=ON`.
Hints: The best work-group size depends on the GPU and the workload. Try `NUMWI=128` or `NUMWI=64` for modern cards with the example workloads. On macOS, use `NUMWI=1` for CPUs.
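Because the optimal value is hardware-dependent, one practical approach is to build several variants and benchmark each on your own workload. The loop below only prints the build commands (a sketch; drop the leading `echo` to actually invoke make):

```shell
# Print one build command per candidate work-group size.
# Sketch only: remove the leading `echo` to actually run make.
for nwi in 64 128; do
    echo make DEVICE=CUDA NUMWI="$nwi"
done
```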
After successful compilation, the host binary `autodock_<type>_<N>wi` is placed under `bin`.

| Binary-name portion | Description | Values |
|---|---|---|
| `<type>` | Accelerator chosen | `cpu`, `gpu` |
```shell
./bin/autodock_<type>_<N>wi \
--ffile <protein>.maps.fld \
--lfile <ligand>.pdbqt \
--nrun <nruns>
```
| Mandatory options | | Description | Value |
|---|---|---|---|
| --ffile | -M | Protein file | `<protein>.maps.fld` |
| --lfile | -L | Ligand file | `<ligand>.pdbqt` |
Both options can alternatively be provided in the contents of the files specified with `--filelist (-B)` (see below for the format) and `--import_dpf (-I)` (AD4 dpf file format).
```shell
./bin/autodock_gpu_64wi \
--ffile ./input/1stp/derived/1stp_protein.maps.fld \
--lfile ./input/1stp/derived/1stp_ligand.pdbqt
```
By default, the output log file is written to the current working folder. Examples of output logs can be found under `examples/output`.
| Argument | | Description | Default value |
|---|---|---|---|
| INPUT | | | |
| --lfile | -L | Ligand pdbqt file | no default |
| --ffile | -M | Grid map files descriptor fld file | no default |
| --flexres | -F | Flexible residue pdbqt file | no default |
| --filelist | -B | Batch file | no default |
| --import_dpf | -I | Import AD4-type dpf input file (only partial support) | no default |
| --xraylfile | -R | Reference ligand file for RMSD analysis | ligand file |
| CONVERSION | | | |
| --xml2dlg | -X | One (or many) AD-GPU xml file(s) to convert to dlg(s) | no default |
| OUTPUT | | | |
| --resnam | -N | Name for docking output log | ligand basename |
| --contact_analysis | -C | Perform distance-based analysis (description below) | 0 (no) |
| --xmloutput | -x | Specify if xml output format is wanted | 1 (yes) |
| --dlgoutput | -d | Control if dlg output is created | 1 (yes) |
| --dlg2stdout | -2 | Write dlg file output to stdout (if not OVERLAP=ON) | 0 (no) |
| --rlige | | Print reference ligand energies | 0 (no) |
| --gfpop | | Output all poses from all populations of each LGA run | 0 (no) |
| --npdb | | # pose pdbqt files from populations of each LGA run | 0 |
| --gbest | | Output single best pose as pdbqt file | 0 (no) |
| --clustering | | Output clustering analysis in dlg and/or xml file | 1 (yes) |
| --hsym | | Handle symmetry in RMSD calc. | 1 (yes) |
| --rmstol | | RMSD clustering tolerance | 2 (Å) |
| SETUP | | | |
| --devnum | -D | OpenCL/Cuda device number (counting starts at 1) | 1 |
| --loadxml | -c | Load initial population from xml results file | no default |
| --seed | -s | Random number seeds (up to three comma-sep. integers) | time, process id |
| SEARCH | | | |
| --heuristics | -H | Ligand-based automatic search method and # evals | 1 (yes) |
| --heurmax | -E | Asymptotic heuristics # evals limit (smooth limit) | 12000000 |
| --autostop | -A | Automatic stopping criterion based on convergence | 1 (yes) |
| --asfreq | -a | AutoStop testing frequency (in # of generations) | 5 |
| --nrun | -n | # LGA runs | 20 |
| --nev | -e | # Score evaluations (max.) per LGA run | 2500000 |
| --ngen | -g | # Generations (max.) per LGA run | 42000 |
| --lsmet | -l | Local-search method | ad (ADADELTA) |
| --lsit | -i | # Local-search iterations (max.) | 300 |
| --psize | -p | Population size | 150 |
| --mrat | | Mutation rate | 2 (%) |
| --crat | | Crossover rate | 80 (%) |
| --lsrat | | Local-search rate | 100 (%) |
| --trat | | Tournament (selection) rate | 60 (%) |
| --dmov | | Maximum LGA movement delta | 6 (Å) |
| --dang | | Maximum LGA angle delta | 90 (°) |
| --rholb | | Solis-Wets lower bound of rho parameter | 0.01 |
| --lsmov | | Solis-Wets movement delta | 2 (Å) |
| --lsang | | Solis-Wets angle delta | 75 (°) |
| --cslim | | Solis-Wets cons. success/failure limit to adjust rho | 4 |
| --stopstd | | AutoStop energy standard deviation tolerance | 0.15 (kcal/mol) |
| --initswgens | | Initial # generations of Solis-Wets instead of -lsmet | 0 (no) |
| SCORING | | | |
| --derivtype | -T | Derivative atom types (e.g. C1,C2,C3=C/S4=S/H5=HD) | no default |
| --modpair | -P | Modify vdW pair params (e.g. C1:S4,1.60,1.200,13,7) | no default |
| --ubmod | -u | Unbound model: 0 (bound), 1 (extended), 2 (compact) | 0 (same as bound) |
| --smooth | | Smoothing parameter for vdW interactions | 0.5 (Å) |
| --elecmindist | | Min. electrostatic potential distance (w/ dpf: 0.5 Å) | 0.01 (Å) |
| --modqp | | Use modified QASP from VirtualDrug or AD4 original | 0 (no, use AD4) |
AutoStop is ON by default since v1.4. The collective distribution of scores among all LGA populations is tested for convergence every `<asfreq>` generations, and docking is stopped if the top-scored poses exhibit a small variance. This avoids wasting computation after the best docking solutions have been found.
The heuristics set the number of evaluations to a generously large number that is a function of the number of rotatable bonds. This prevents unreasonably long dockings in cases where AutoStop fails to detect convergence.
In our experience, `--heuristics 1` and `--autostop 1` allow sufficient score evaluations to search the energy landscape accurately. For molecules with many rotatable bonds (e.g. about 15 or more), it may be advisable to increase `--heurmax`.
When the heuristics is used and `--nev <max evals>` is provided on the command line, it acts as a hard upper limit on the number of evaluations the heuristics suggests. Conversely, `--heurmax` is the rolling-off-type asymptotic limit of the heuristics' # of evals formula and should only be changed with caution.
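Put differently, the number of evaluations actually used is the smaller of the heuristic's suggestion (itself shaped by `--heurmax`) and the `--nev` cap. A tiny illustration with hypothetical numbers:

```shell
# Hypothetical numbers, for illustration only: the run uses
# min(heuristic suggestion, --nev hard cap).
heur=2800000   # hypothetical value suggested by --heuristics 1
nev=2500000    # the default --nev hard cap
evals=$(( heur < nev ? heur : nev ))
echo "evals used: $evals"
```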
The batch file is a text file containing the parameters to `--ffile`, `--lfile`, and `--resnam`, each on an individual line. It is possible to specify the protein grid map file on only the first line, in which case it will be used for all ligands. Here is an example:
```
./receptor1.maps.fld
./ligand1.pdbqt
Ligand 1
./receptor2.maps.fld
./ligand2.pdbqt
Ligand 2
./receptor3.maps.fld
./ligand3.pdbqt
Ligand 3
```
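Such a batch file can be generated and passed to `--filelist` like this. The receptor/ligand paths and the binary name are placeholders taken from the examples above, not files shipped with the repository.

```shell
# Write a three-entry batch file (fld / pdbqt / log-name triplets).
# The paths are placeholders from the example above.
cat > batch.txt <<'EOF'
./receptor1.maps.fld
./ligand1.pdbqt
Ligand 1
./receptor2.maps.fld
./ligand2.pdbqt
Ligand 2
./receptor3.maps.fld
./ligand3.pdbqt
Ligand 3
EOF
# Then run (placeholder binary name):
# ./bin/autodock_gpu_64wi --filelist batch.txt
wc -l < batch.txt
```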
When the distance-based analysis is used (`--contact_analysis 1` or `--contact_analysis <R_cutoff>,<H_cutoff>,<V_cutoff>`), the ligand poses of a given run (either after a docking run or even when `--xml2dlg <xml file(s)>` is used) are analyzed in terms of their individual atom distances to the target protein, with individual cutoffs for:

- **R**eactive (default: 2.1 Å): interactions between modified atom types numbered 1, 4, or 7 (i.e. between C1 and S4)
- **H**ydrogen bonds (default: 3.7 Å): interactions between a hydrogen-bond donor (the closest N, O, or S to an HD, or the HD otherwise) and acceptor atom types (NA, NS, OA, OS, SA)
- **V**an der Waals (default: 4.0 Å): all other interactions not fulfilling the above criteria
The contact analysis results for each pose are output in dlg lines starting with `ANALYSIS:` and/or in `<contact_analysis>` blocks in the xml file output.
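These `ANALYSIS:` lines can be pulled out of a dlg with standard tools. The dlg fragment below is fabricated purely to demonstrate the filtering; only the "lines start with `ANALYSIS:`" convention is taken from the documentation.

```shell
# Fabricated dlg fragment -- the line contents are made up for illustration;
# only the ANALYSIS: prefix convention comes from the docs.
cat > example.dlg <<'EOF'
DOCKED: MODEL 1
ANALYSIS: (made-up contact line 1)
ANALYSIS: (made-up contact line 2)
DOCKED: ENDMDL
EOF
grep '^ANALYSIS:' example.dlg
```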