[Usage] ImportError: cannot import name 'LlavaLlamaForCausalLM' from 'llava.model' #1101
Comments
Try importing the packages without the "try, except" block for a more informative error. Probably related to flash attn installation. In my case, the following worked:
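To make the suggestion above concrete: here is a minimal, self-contained sketch (module name is invented for illustration, not the actual llava layout) of how a bare try/except in a package `__init__.py` swallows the root-cause error, leaving only the confusing `cannot import name 'LlavaLlamaForCausalLM'` downstream:

```python
# Hypothetical illustration: a stand-in for a missing dependency such as
# flash-attn. The bare except hides the informative ImportError.
try:
    import flash_attn_stub_module  # invented name; raises ModuleNotFoundError
except ImportError as e:
    root_cause = str(e)  # this message is what the try/except normally hides

# Removing the try/except (or printing the caught error) surfaces the
# real failure instead of the later "cannot import name ..." symptom.
print(root_cause)
```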
For me it was a problem with packages like deepspeed etc. that led to the above, but any such issue will trigger it. This is the combo that worked for me:
I am also having the same error.
Tried and now i get this error:
@TobiasJu I got the same, then did the following:
We find that this is due to flash-attn compiled previously with a different version of pytorch. Please reinstall that with:
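The exact command isn't preserved above; a plausible form of that reinstall, assuming the standard flash-attn build flow (these flags are upstream pip/flash-attn conventions, not taken from this thread), is:

```shell
# Rebuild flash-attn from source against the torch currently installed.
# --no-cache-dir avoids reusing a wheel compiled against a different
# torch/CUDA combination; --no-build-isolation builds against your
# environment's torch instead of a fresh isolated one.
pip uninstall -y flash-attn
pip install flash-attn --no-build-isolation --no-cache-dir
```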
Correct, but even that isn't enough; deepspeed etc. have to be in the correct range of versions, or you hit the other issues shown above.
Thanks,
In my case and the one above, the deepspeed error was hit during inference too. I didn't check exactly which deepspeed version was required; I just ran pip install accelerate deepspeed --upgrade and it started working, so I wrote down the versions that worked in order to remember them.
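One way to write down a working combination, as suggested above, is to print the installed versions; the package list here is an assumption based on the culprits named in this thread:

```python
from importlib.metadata import version, PackageNotFoundError

# Packages this thread identifies as the usual culprits; adjust for
# your environment.
packages = ("torch", "accelerate", "deepspeed", "flash-attn")
report = {}
for pkg in packages:
    try:
        report[pkg] = version(pkg)
    except PackageNotFoundError:
        report[pkg] = "not installed"

# Emit pin-style lines you can paste into a requirements file later.
for pkg, ver in report.items():
    print(f"{pkg}=={ver}")
```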
Great, thank you for the information!
After following the instructions in the repo, I encountered the same error:
Currently, after attempting #1101 (comment), I got this one:
All attempts to resolve this issue have only resulted in more errors. (Debian GNU/Linux 11 (bullseye))
I got that too. A last thing I may have done is to literally relink the CUDA directory:
But make sure all your CUDA stuff is consistent.
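For illustration, relinking usually means pointing the `/usr/local/cuda` symlink at one concrete toolkit; the version below is an example, not from this thread:

```shell
# Example only: make nvcc, torch, and flash-attn all see the same CUDA
# toolkit. Replace 12.1 with the version your torch build expects.
sudo ln -sfn /usr/local/cuda-12.1 /usr/local/cuda

# Verify the reported toolkit now matches the CUDA version baked into
# your torch wheel (python -c "import torch; print(torch.version.cuda)").
nvcc --version
```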
#1101 (comment)
Seems pretty random, but I got rid of this error by just commenting out the only line in
Now my installation is running correctly. Can anybody explain?
Same problem here.
The above order worked for me. The order is important. |
I tried, but:
This works for me, thanks!
I'm also encountering this issue. Have you resolved it?
For me it doesn't work!
(llava) C:\Users\aymen\LLaVA>pip install -e ".[train]"
× python setup.py egg_info did not run successfully.
note: This error originates from a subprocess, and is likely not a problem with pip.
× Encountered error while generating package metadata.
note: This is an issue with the package mentioned above, not pip.
And:
× python setup.py bdist_wheel did not run successfully.
I believe that in certain situations it is beneficial to disable the try-except mechanism while debugging, particularly after modifying the original codebase. In this scenario, for instance, the error arises from how the local package is bound inside the Conda environment, so inspecting that binding helps.
It works for me, thanks! |
A quick update: I followed the install routine and the discussion on this page, but I still ran into the problem.
During the training process, I always encountered this problem. Later, based on my CUDA=11.7, I downgraded the torch version and also downgraded the flash attn version, and the problem was solved. |
Fixed due to haotian-liu#1101 (comment)
I solved the issue by changing the Python version: replacing
@pseudotensor Does CUDA have to be 12.1? |
Same question.
OK. I found a more general solution to this problem: Updating the package requirement to:
So basically, the error is caused by a lot of package incompatibilities. The try-except in the init has hidden the real issues, which were actually solved by updating deepspeed and accelerate. Hope this is useful for people like me who have to switch to a newer torch/CUDA/NCCL due to hardware constraints. BTW, this codebase probably also requires flash-attn<=2.6.3
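The flash-attn constraint mentioned above can be expressed programmatically, e.g. when auditing an environment. A minimal sketch assuming the `packaging` library (the one pip itself uses for version specifiers) is available:

```python
# Encode the constraint from the comment above: flash-attn<=2.6.3.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

flash_attn_spec = SpecifierSet("<=2.6.3")

# Check candidate versions against the specifier.
print(Version("2.6.3") in flash_attn_spec)  # True  (allowed)
print(Version("2.7.0") in flash_attn_spec)  # False (too new)
```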
Thank you! You actually solved this problem, and it's very useful.
Haha, you're welcome. I had to solve the problem due to some constraints of our infra at the time.
Describe the issue
Issue:
So i updated the repo and now i can not start the server anymore. I deleted the repo and cloned it again, but get the same error.
System:
Win10 WSL2 Ubuntu
pip 24.0 from /home/tobias/.local/lib/python3.10/site-packages/pip (python 3.10)
Command:
Log:
Packages:
Screenshots: