"ms_deform_attn_forward_cuda" not implemented for 'BFloat16' #38
Comments
Hi @ChinChyi, this is caused by
Hi @ChinChyi, have you changed any code in your local env? We have fixed this bug in our original implementation here, by removing the following block so that bfloat16 autocast is only entered after running Grounding DINO:

```python
# FIXME: figure out how this influences the G-DINO model
torch.autocast(device_type="cuda", dtype=torch.bfloat16).__enter__()

if torch.cuda.get_device_properties(0).major >= 8:
    # turn on tfloat32 for Ampere GPUs (https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices)
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True
```
So, what is the solution? I have also encountered this problem. Thank you very much!
This error happened when I called
Would you like to share your code with us? That would make it more convenient for us to debug this issue.
Thanks!
@ChinChyi changing
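Independent of the comment above, a common workaround when a custom CUDA op lacks a BFloat16 kernel is to cast its inputs up to float32 around the call and cast the result back. This is a generic sketch, not code from this repo; `run_in_float32` and the use of `relu` as the stand-in op are illustrative assumptions.

```python
import torch


def run_in_float32(op, *tensors):
    # Remember the incoming dtype so the output matches the rest of the model.
    in_dtype = tensors[0].dtype
    # Run the op in float32, which custom CUDA kernels typically do support.
    out = op(*(t.float() for t in tensors))
    # Cast back so downstream bfloat16 layers see the expected dtype.
    return out.to(in_dtype)
```

For example, `run_in_float32(torch.nn.functional.relu, x)` on a bfloat16 tensor `x` returns a bfloat16 result while the op itself ran in float32.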
Hello!
This is the problem I hit when using `grounded_sam2_local_demo.py` for image inference.