RuntimeError: Resource exhausted: Out of memory while trying to allocate 12582912 bytes. #197
Update: I have tried to run TensorFlow commands in Python 3 following this tutorial.
Update: I have installed AlphaFold without Docker following the non-Docker setup of @sanjaysrikakulam. Later on I got another error message. Overall, I have serious memory usage issues with WSL. It is weird that I had to increase the swap memory before it would work.
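For anyone else hitting WSL memory limits: a minimal sketch of raising the WSL2 RAM and swap caps via a `.wslconfig` file on the Windows side (the sizes here are illustrative assumptions, not the values used above):

```
# %UserProfile%\.wslconfig (illustrative values)
[wsl2]
memory=48GB   # RAM cap for the WSL2 VM
swap=64GB     # swap file size; raising this is the change described above
```

WSL has to be restarted (e.g. `wsl --shutdown` from Windows) before the new limits apply.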
Here is the output of my last try, without any serious error message:
I met the same issue, and my solution was to comment out the following lines in run_docker.py (see the sketch below):
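The quoted lines did not survive in this thread; assuming the stock `docker/run_docker.py` from the AlphaFold repository, the two entries usually meant here are the unified-memory environment variables (reconstructed, so treat this as an assumption about which lines the comment refers to):

```python
# Sketch of the relevant container environment variables set in AlphaFold's
# docker/run_docker.py. The workaround is to comment these two entries out,
# so that unified memory is not forced on under WSL.
env_vars = {
    'TF_FORCE_UNIFIED_MEMORY': '1',           # force JAX/XLA GPU unified memory
    'XLA_PYTHON_CLIENT_MEM_FRACTION': '4.0',  # allow XLA up to 4x GPU memory
}
```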
@Ozcifci The memory issues that you are seeing with hhblits are expected - generally we recommend only running AlphaFold on machines with at least 64 GB of RAM. You could try running with the reduced databases, which need less memory (see the sketch below).
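As a concrete sketch of that suggestion (assuming the `--db_preset=reduced_dbs` flag of `run_docker.py`; check the flags of your version), the reduced databases can be selected on the same command line used in this issue:

```
python3 docker/run_docker.py \
  --fasta_paths=T1050.fasta \
  --max_template_date=2020-05-14 \
  --db_preset=reduced_dbs
```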
@andycowie I have upgraded to 64 GB RAM and yet I got the same issue with WSL. @johnnytam100 thanks for the info. However, I no longer have WSL with AlphaFold to test it. Hope it helps others.
@Ozcifci The problem of using unified memory on WSL CUDA is documented at https://docs.nvidia.com/cuda/wsl-user-guide/index.html
@johnnytam100 Hi, I had the same issue as described above. Following your comment, I commented out the two lines in run_docker.py. It worked for small proteins, which require less memory than the GPU memory size, but it returned the same out-of-memory error for larger ones.
@Augustin-Zidek after you mentioned that this issue was fixed in the newest versions, I gave AlphaFold on WSL another chance.
@Ozcifci On v2.3.0, after I commented out the same lines, it was working perfectly fine. Even with a big complex I can see it used not just my 3090's 24 GB of GPU memory but also my 64 GB of RAM with WSL. I never tried an older version, but I think that now, after you comment those lines out, WSL is able to use both the GPU memory and RAM for the run.
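One way to confirm that behaviour during a run (my own sketch, using standard tools rather than anything AlphaFold-specific) is to watch both memory pools in separate terminals:

```
watch -n 1 nvidia-smi   # GPU memory usage on the card
watch -n 1 free -h      # host RAM and swap inside the WSL2 VM
```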
Hi all,
I am running into a RuntimeError
Resource exhausted: Out of memory while trying to allocate 12582912 bytes.
while executing the command line
python3 docker/run_docker.py --fasta_paths=T1050.fasta --max_template_date=2020-05-14
as introduced in the installation steps. Some people got the same RuntimeError with long amino acid sequences, but in my case I get this error message every time, even with short amino acid sequences.
I have traced the VRAM usage with
watch nvidia-smi
and it increased by only ~100 MB while running docker/run_docker.py. The total VRAM usage is ~800 MB out of 8192 MB while running. I had mentioned my problem in this issue, but since it is no longer related to that topic, I have created a new one.
Installed on WSL2. I tested CUDA with many packages, including the NVIDIA sample
./BlackScholes
in /usr/local/cuda/samples/4_Finance/BlackScholes
and I get a 'Test passed' message within a second.
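For anyone reproducing that sanity check, a minimal sketch of building and running the sample, assuming the CUDA samples shipped with the toolkit under /usr/local/cuda/samples:

```
cd /usr/local/cuda/samples/4_Finance/BlackScholes
sudo make        # build with the toolkit's nvcc (sudo only if the dir is root-owned)
./BlackScholes   # should print "Test passed" if the GPU is reachable from WSL2
```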