
How to run inference on a single video? #6

Open
tarunmcom opened this issue Apr 12, 2024 · 4 comments
Labels: in planning (This will be dealt in the future)

Comments

@tarunmcom

How to run inference on a single video?

@HELLORPG (Collaborator) commented Apr 13, 2024

Currently we do not support running inference on a single video file (e.g., .mp4). This is because the MOT benchmarks store videos as sequences of individual frames. If you want to run our model on a video file, a straightforward approach is to sample it into a sequence of images (we suggest about 25 FPS).
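
Below is a minimal frame-extraction sketch (not part of this repository) showing one way to sample a video file into an image sequence at roughly 25 FPS. It assumes OpenCV (`cv2`) is available; the `video_to_frames` helper name, output directory, and zero-padded filename pattern are illustrative choices, not the project's conventions.

```python
# Illustrative sketch: sample an .mp4 into an image sequence with OpenCV.
# The output layout (frames/seq01/000001.jpg, ...) is an example, not the
# format expected by any particular benchmark loader.
import os
import cv2


def video_to_frames(video_path: str, out_dir: str, target_fps: float = 25.0) -> int:
    """Save roughly `target_fps` frames per second of the video into `out_dir`."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    if not cap.isOpened():
        raise IOError(f"Cannot open video: {video_path}")
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps  # fall back if FPS is unknown
    step = max(src_fps / target_fps, 1.0)              # keep about every `step`-th frame
    saved, idx, next_keep = 0, 0, 0.0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx >= next_keep:
            cv2.imwrite(os.path.join(out_dir, f"{saved + 1:06d}.jpg"), frame)
            saved += 1
            next_keep += step
        idx += 1
    cap.release()
    return saved


# Example: video_to_frames("input.mp4", "frames/seq01")
```

From the command line, `ffmpeg -i input.mp4 -vf fps=25 frames/seq01/%06d.jpg` performs the same sampling.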

In the future, I plan to implement a demo file like demo.ipynb in MeMOTR, which can process a video file directly.
However, due to other work arrangements, this plan may take more time. In the meantime, you may be able to adapt the demo code of MeMOTR, since the code structures of the two projects are quite similar.

HELLORPG added the "in planning" label (This will be dealt in the future) on Apr 13, 2024
@urbaneman

@tarunmcom I added this feature in my own fork of the project; you can refer to it for video inference support.

@yarkable commented May 7, 2024

great! @urbaneman

@BianJinLong123

Is there any inference code available?
