OWL-ViT box_predictor inefficiency issue #29712
Conversation
amyeroberts left a comment
Thanks for working on this!
Could you add the equivalent changes to owlv2, as in #29713, and add a contribution credit for @nvbinh15 in the commit?
Once that's done, all I need to see are the outputs for the slow model tests to confirm everything still runs as before:
```bash
RUN_SLOW=1 pytest tests/models/owlvit tests/models/owlv2 -vv
```
Unfortunately, because this is a public method and part of a model which is importable from the top level of transformers, we need to make this change so that it's backwards compatible to some degree. Since I doubt this is being called much, we can do something like the following:
```python
def compute_box_bias(self, num_patches: int, feature_map: Optional[torch.FloatTensor] = None) -> torch.Tensor:
    if feature_map is not None:
        raise ValueError("feature_map has been deprecated as an input. Please pass in num_patches instead")
```
Fixed in commit 83200135c7c78138a32c4aab84bda20c7fc24b4f.
A good improvement here would be to add an `lru_cache` decorator; otherwise this is going to be recalculated every time it's called.
Fixed in commit ae0d8bc1ceaf2a01deb1774f25d48bd3dab90a8f.
I didn't find any other usage of `compute_box_bias`, so I just set `maxsize=2`.
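For reference, a minimal sketch of what the cached, backwards-compatible method could look like. This is hypothetical: the class name is a stand-in for the OWL-ViT detection head, and the grid/logit math is a simplified reconstruction, not copied from the PR.

```python
from functools import lru_cache
from typing import Optional

import torch


class BoxBiasSketch:
    # Hypothetical stand-in for the detection head; the caching pattern
    # discussed above is the point here, not the exact math.

    @lru_cache(maxsize=2)  # maxsize=2: one cache entry per model variant
    def compute_box_bias(
        self, num_patches: int, feature_map: Optional[torch.FloatTensor] = None
    ) -> torch.Tensor:
        # Backwards-compatibility guard for the old public signature.
        if feature_map is not None:
            raise ValueError(
                "feature_map has been deprecated as an input. Please pass in num_patches instead"
            )
        # Patch-center coordinates, normalized to (0, 1].
        coords = torch.arange(1, num_patches + 1, dtype=torch.float32) / num_patches
        xx, yy = torch.meshgrid(coords, coords, indexing="xy")
        centers = torch.stack((xx, yy), dim=-1).view(-1, 2).clamp(0.0, 1.0)
        # Bias box centers to the grid position and sizes to the patch size,
        # both mapped to logit space (inverse sigmoid).
        center_bias = torch.log(centers + 1e-4) - torch.log1p(-centers + 1e-4)
        size = torch.full_like(centers, 1.0 / num_patches)
        size_bias = torch.log(size + 1e-4) - torch.log1p(-size + 1e-4)
        return torch.cat([center_bias, size_bias], dim=-1)  # (num_patches**2, 4)
```

With `maxsize=2`, repeated forward passes at the same resolution hit the cache instead of recomputing the grid.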
I wouldn't reset `self.box_bias` here: this effectively creates a side effect in the function.
```diff
-        self.box_bias = self.box_bias.to(feature_map.device)
-        pred_boxes += self.box_bias
+        box_bias = self.box_bias.to(feature_map.device)
+        pred_boxes += box_bias
```
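A hypothetical toy example of why the local variable matters: the attribute version silently migrates the stored tensor to whatever device the last input was on, while the pure version leaves the module's state untouched.

```python
import torch


class Head:
    def __init__(self) -> None:
        self.box_bias = torch.zeros(4)  # stored once, on CPU

    def predict_with_side_effect(self, feats: torch.Tensor) -> torch.Tensor:
        # Reassigning the attribute mutates module state: after one GPU call,
        # self.box_bias stays on the GPU for every later caller.
        self.box_bias = self.box_bias.to(feats.device)
        return feats + self.box_bias

    def predict_pure(self, feats: torch.Tensor) -> torch.Tensor:
        # A local copy keeps the stored tensor where it was.
        box_bias = self.box_bias.to(feats.device)
        return feats + box_bias
```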
Fixed in commit edd18742fe6cac818a49cb7d87ca281e87f7d934.
Here are the results for the slow tests:

Also, I am not sure whether I need to create a new PR, or how to trigger this PR again with the updated code.
@RVV-karma Thanks for sharing the test outputs! If you don't have anything else to commit, you can do an empty commit to add the contribution, e.g.:

You'll need to run
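Both commands are truncated in the thread; the empty commit presumably looks something along these lines, where the `Co-authored-by` trailer and the noreply address follow GitHub's usual convention (both are assumptions here):

```bash
git commit --allow-empty -m "Add co-author

Co-authored-by: nvbinh15 <nvbinh15@users.noreply.github.com>"
git push
```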
Thanks a lot @amyeroberts! Also, here is the output from
amyeroberts left a comment
Amazing work - thanks again for adding this and improving our models! 🔥
I am calculating box_bias once during initialization and storing it in a variable, then reusing it at inference; the stored tensor is moved to the correct device at inference time.
I also converted the numpy implementation to a torch implementation, and verified that both give the same results.
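As a rough illustration of the pattern described above (all names here are hypothetical; the real code lives in the OWL-ViT modeling file):

```python
import torch
from torch import nn


def make_box_bias(num_patches: int) -> torch.Tensor:
    # Compact stand-in for compute_box_bias (see the sketch earlier in the thread).
    coords = torch.arange(1, num_patches + 1, dtype=torch.float32) / num_patches
    xx, yy = torch.meshgrid(coords, coords, indexing="xy")
    boxes = torch.cat(
        [
            torch.stack((xx, yy), dim=-1).view(-1, 2),
            torch.full((num_patches**2, 2), 1.0 / num_patches),
        ],
        dim=-1,
    ).clamp(1e-4, 1 - 1e-4)
    return torch.log(boxes) - torch.log1p(-boxes)  # inverse sigmoid (logit)


class BoxPredictorSketch(nn.Module):
    def __init__(self, num_patches: int, hidden_dim: int = 768):
        super().__init__()
        self.box_head = nn.Linear(hidden_dim, 4)  # stand-in for the real box head
        # Computed once at initialization instead of on every forward pass.
        self.box_bias = make_box_bias(num_patches)

    def box_predictor(self, image_feats: torch.Tensor) -> torch.Tensor:
        pred_boxes = self.box_head(image_feats)
        # Shift the cached bias to the input's device without mutating state.
        pred_boxes = pred_boxes + self.box_bias.to(image_feats.device)
        return torch.sigmoid(pred_boxes)
```

For example, `BoxPredictorSketch(num_patches=24).box_predictor(torch.randn(1, 24 * 24, 768))` reuses the same precomputed bias on every call.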
Fixes #26099
@5hadytru @amyeroberts
Can you please review?
I updated the code to the best of my knowledge. Please let me know if you need any changes. Thanks.