Deformable attention implementation #9260
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9260
Note: Links to docs will display an error until the docs builds have been completed. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@NicolasHug Can you please review this PR?
Hi @Isalia20, thanks for the PR. This is a massive one :) Can you share more about why this is needed? I shared my thoughts and questions on that before in https://github.com/pytorch/pytorch/issues/112827#issuecomment-1804905345. It seems like there are really solid implementations of deformable attention already, so we'd like to understand the added value of having this natively in torchvision.
There are two main reasons why this would be a nice addition:
Deformable attention implementation. Fixes: pytorch/pytorch#112827
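For readers unfamiliar with the op being discussed: deformable attention (introduced in Deformable DETR, the subject of the linked issue) does not attend over all spatial positions. Instead, each query predicts a small set of sampling offsets around a reference point, bilinearly samples the value feature map at those fractional locations, and combines the samples with learned attention weights. The sketch below illustrates that single-head, single-level core in NumPy; the function names, argument shapes, and clamping-at-the-border behavior are illustrative assumptions, not the API proposed in this PR.

```python
import numpy as np

def bilinear_sample(value, x, y):
    """Bilinearly sample value[H, W, C] at continuous coords (x, y).
    Coordinates are clamped to the feature-map border here; a real
    implementation may zero-pad out-of-range samples instead."""
    H, W, _ = value.shape
    x = float(np.clip(x, 0, W - 1))
    y = float(np.clip(y, 0, H - 1))
    x0 = int(np.floor(x)); x1 = min(x0 + 1, W - 1)
    y0 = int(np.floor(y)); y1 = min(y0 + 1, H - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * value[y0, x0] + wx * value[y0, x1]
    bot = (1 - wx) * value[y1, x0] + wx * value[y1, x1]
    return (1 - wy) * top + wy * bot

def deformable_attention(value, ref_xy, offsets, attn_weights):
    """Deformable attention for one query, one head, one feature level.

    value:        [H, W, C] value feature map
    ref_xy:       (x, y) reference point in pixel coordinates
    offsets:      [K, 2] learned sampling offsets around the reference
    attn_weights: [K] attention weights (softmax-normalized, sum to 1)
    Returns the aggregated [C] output feature: only K sampled locations
    are visited, instead of all H * W positions as in dense attention.
    """
    out = np.zeros(value.shape[-1])
    for k in range(offsets.shape[0]):
        x = ref_xy[0] + offsets[k, 0]
        y = ref_xy[1] + offsets[k, 1]
        out += attn_weights[k] * bilinear_sample(value, x, y)
    return out
```

On a constant feature map, the output reduces to that constant times the sum of the attention weights, which is a convenient sanity check for any implementation of the op.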