
Inference time #105

Open
vietpho opened this issue Nov 23, 2023 · 0 comments

Comments

vietpho commented Nov 23, 2023

Hello,

First off, I want to thank you for sharing your amazing code with us. However, I noticed in your documentation that you mentioned, "[Note: Inference on CPU may take up to 2 minutes. On a single RTX A6000 GPU, OneFormer can perform inference at more than 15 FPS.]" Additionally, I saw in the issues section that you responded to a question about real-time segmentation, stating that a model with Swin-L as the backbone could achieve this.

However, I'm using an RTX 3090 and have tried running demo.py with several of the checkpoints you provided. Unfortunately, inference takes at least 2 seconds per image. I also tried R50, and my images are 1280x1280 in size.

Here are the models I tested:

150_16_dinat_l_oneformer_coco_100ep.pth
150_16_swin_l_oneformer_coco_100ep.pth
250_16_swin_l_oneformer_cityscapes_90k.pth
1280x1280_250_16_swin_l_oneformer_ade20k_160k.pth
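
For context, here is roughly how I'm measuring the per-image time. This is only a minimal sketch, not the actual demo.py pipeline: the ResNet-50 from torchvision is a hypothetical stand-in for the OneFormer checkpoints above, and the warm-up/iteration counts are arbitrary.

```python
import time
import torch
import torchvision

# Stand-in model: torchvision's ResNet-50, used here only as a placeholder
# for the real model that demo.py builds from a checkpoint.
model = torchvision.models.resnet50().cuda().eval()
x = torch.randn(1, 3, 1280, 1280, device="cuda")  # matches my 1280x1280 images

with torch.no_grad():
    # Warm-up: the first few forward passes include CUDA context setup and
    # one-time overheads, so they are excluded from the timing.
    for _ in range(5):
        model(x)
    torch.cuda.synchronize()

    n = 20
    start = time.perf_counter()
    for _ in range(n):
        model(x)
    # CUDA kernels launch asynchronously; synchronize before reading the clock
    # so the measured time covers the actual GPU work.
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{elapsed / n * 1000:.1f} ms per image ({n / elapsed:.1f} FPS)")
```
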

Could you help me understand why there is such a significant difference in inference time?
