ModuleNotFoundError: No module named 'jax.extend' related to #209, #210 #224
Comments
I suspect that the issue lies in the dm-haiku version being 0.0.11 or later. In my environment:

```
$ localcolabfold/colabfold-conda/bin/python3.10 -m pip list
jax        0.4.23
jaxlib     0.4.23+cuda11.cudnn86
chex       0.1.85
dm-haiku   0.0.10
```

If CUDA 12.1 is installed, these versions should be fine. Please pin dm-haiku to version 0.0.10; otherwise, you may encounter the error.
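The version advice above can be checked programmatically. A minimal sketch, assuming a standard pip-managed environment (the `version_tuple` and `dm_haiku_needs_pin` helpers are mine, not part of colabfold):

```python
from importlib.metadata import PackageNotFoundError, version

def version_tuple(v: str) -> tuple:
    """Parse a simple version string like '0.0.11' into (0, 0, 11) for comparison."""
    return tuple(int(part) for part in v.split(".")[:3])

def dm_haiku_needs_pin() -> bool:
    """True if the installed dm-haiku is newer than 0.0.10 (the version the comment recommends)."""
    try:
        return version_tuple(version("dm-haiku")) > (0, 0, 10)
    except PackageNotFoundError:
        # dm-haiku not installed in this interpreter, nothing to pin
        return False
```

If it returns True, `pip install dm-haiku==0.0.10` (run with the same colabfold-conda Python) would pin it as the comment suggests.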
Thank you for your suggestion (I just noticed your response). I ran localcolabfold 1.5.5. The "ModuleNotFoundError: No module named 'jax.extend'" is gone, but now a new message showed up and the program stopped:

"Could not predict ProteinA. Not Enough GPU memory? FAILED_PRECONDITION: DNN library initialization failed. Look at the errors above for more details."

Do you have any more suggestions to solve this?
If you are using WSL2, did you turn on the settings shown in https://github.com/YoshitakaMo/localcolabfold?tab=readme-ov-file#for-wsl2-in-windows ?
Yes, I did. Thank you though.
I wonder...
Finally, I might have found the solution. I downgraded nvidia-cudnn-cu11 from 9.0.0.312 to 8.5.0.96 with this command:

```
pip install --upgrade nvidia-cudnn-cu11==8.5.0.96
```

I ran localcolabfold and it processed very smoothly on the GPU. I was astonished. Thank you.
Requirement already satisfied: torch==1.13.1 in /usr/local/lib/python3.10/dist-packages (1.13.1)
I updated the installer and updater scripts for Linux two days ago, as JAX 0.4.23 no longer seems suitable for CUDA 12 and cuDNN 9. Please update your CUDA to 12.4 and cuDNN to 9, and use the latest updater script.
Hello,
My question is related to #209 and #210
My environment is...
WSL2
OS: Ubuntu 22.04.4
GCC: 11.4.0
CUDA: 12.1
GPU: RTX 4090
LocalColabFold Ver. 1.5.5
As instructed in #209, I checked whether the GPU was recognized, and it was not.
So, I downgraded jax and jaxlib to
jax 0.4.7
jaxlib 0.4.7+cuda11.cudnn82
as instructed in #209.
And then I checked again using
$ /path/to/your/localcolabfold/colabfold-conda/bin/python3.10
and "gpu" was returned.
Then I ran localcolabfold, but this error message popped up and the run stopped, like below:
```
2024-04-01 15:14:35,452 Running colabfold 1.5.5 (61df3b853140ca79dbdf64349824beb14364ebfd)
2024-04-01 15:14:36,006 Running on GPU
Traceback (most recent call last):
  File "/mnt/d/Alphafold/localcolabfold/colabfold-conda/bin/colabfold_batch", line 8, in <module>
    sys.exit(main())
  File "/mnt/d/AlphaFold/localcolabfold/colabfold-conda/lib/python3.10/site-packages/colabfold/batch.py", line 2037, in main
    run(
  File "/mnt/d/AlphaFold/localcolabfold/colabfold-conda/lib/python3.10/site-packages/colabfold/batch.py", line 1292, in run
    from colabfold.alphafold.models import load_models_and_params
  File "/mnt/d/AlphaFold/localcolabfold/colabfold-conda/lib/python3.10/site-packages/colabfold/alphafold/models.py", line 4, in <module>
    import haiku
  File "/mnt/d/AlphaFold/localcolabfold/colabfold-conda/lib/python3.10/site-packages/haiku/__init__.py", line 20, in <module>
    from haiku import experimental
  File "/mnt/d/AlphaFold/localcolabfold/colabfold-conda/lib/python3.10/site-packages/haiku/experimental/__init__.py", line 34, in <module>
    from haiku._src.dot import abstract_to_dot
  File "/mnt/d/AlphaFold/localcolabfold/colabfold-conda/lib/python3.10/site-packages/haiku/_src/dot.py", line 29, in <module>
    from jax.extend import linear_util as lu
ModuleNotFoundError: No module named 'jax.extend'
```
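The failing import is haiku reaching for `jax.extend`, a submodule that older JAX releases (such as the downgraded 0.4.7) do not ship. A minimal pre-flight sketch for checking this before launching a run (the `has_module` helper is my own, not part of colabfold or jax):

```python
import importlib.util

def has_module(name: str) -> bool:
    """Return True if `name` resolves to an importable module in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # the parent package (e.g. jax itself) is missing
        return False

# e.g. check has_module("jax.extend") with the colabfold-conda interpreter;
# False suggests the installed jax/dm-haiku pairing will fail as in the traceback
```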
It would be helpful if there were any instructions for solving this issue.