Add Image Processor Fast Deformable DETR #34353
ArthurZucker left a comment:
Thanks, same comment as for the other PR mostly! 🤗
Will make the modifications once PR #34354 is merged, as most of them will be copied from there :)
ArthurZucker left a comment:
One thing I don't understand: literally everything is copied from. Why not directly map to use the DETR class?
All pre-processing functions are copied from the DETR fast image processor.
ArthurZucker left a comment:
Got it, thanks! Let's work to make it simpler to add these, with maybe a bit of abstraction on the FastImageProcessor class!
* add deformable detr image processor fast
* add fast processor to doc
* fix copies
* nit docstring
* Add tests gpu/cpu and fix docstrings
* fix docstring
* import changes from detr
* fix imports
* rebase and fix
* fix input data format change in detr and rtdetr fast
What does this PR do?
Adds a fast image processor for Deformable DETR. Follows issue #33810.
This image processor is the result of work on comparing different image processing methods.
The diffs look bad, but this PR is almost exclusively made up of "# Copied from" statements based on the fast image processor for DETR!

Implementation
See #34063
Usage
Except for the fact that it only returns torch tensors, this fast processor is fully compatible with the current one.
It can be instantiated through AutoImageProcessor with use_fast=True, or through the class directly:
Usage is the same as the current processor, except for the device kwarg:

If device is not specified:

Performance gains
(Benchmark plots.) batch_size=8, forcing padding to 1333x1333 (= "longest_edge"), as otherwise torch.compile needs to recompile if different batches have different max sizes.

(Benchmark plots.) batch_size=1, forcing padding to 1333x1333.

Tests
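The fixed-padding trick can be sketched in plain torch (the helper name and sizes below are hypothetical, only to illustrate why static output shapes let torch.compile reuse one compiled graph instead of recompiling per batch):

```python
import torch

def pad_to_fixed(images, size=(1333, 1333)):
    # Pad each (C, H, W) image with zeros to a fixed (H, W) so every
    # batch has the same static shape; variable max sizes per batch
    # would otherwise trigger torch.compile recompilations.
    out = []
    for img in images:
        c, h, w = img.shape
        padded = img.new_zeros((c, *size))
        padded[:, :h, :w] = img
        out.append(padded)
    return torch.stack(out)

batch = [torch.rand(3, 480, 640), torch.rand(3, 800, 1066)]
pixel_values = pad_to_fixed(batch)
print(pixel_values.shape)  # torch.Size([2, 3, 1333, 1333])
```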
Who can review?
@ArthurZucker Pinging you directly as there is almost no "new" code here.