
[Feature-Request] Multi-cam image stitching #782

Open · Erol444 opened this issue Sep 1, 2022 · 3 comments

Labels: enhancement (New feature or request), sdk (Related to DepthAI SDK)

Comments

Erol444 (Member) commented Sep 1, 2022

Start with the why:

Because it's simpler/cheaper to use multiple camera sensors mounted at an angle (e.g. using our Modular Camera design) than a single high-res, high-FOV camera sensor. Also, for a full 360° view you need multiple camera sensors and then have to stitch the images together.

Move to the what:

Create a demo for multi-camera stitching. We would first need a multi-cam extrinsics calibration script in order to then develop the image stitching.
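
For the calibration part, something along these lines could work as a starting point. This is only a sketch: the board size, the image file patterns, and the intrinsics files are placeholders, and it assumes the two cameras have overlapping views of a shared chessboard.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)        # inner chessboard corners (placeholder)
SQUARE_SIZE = 0.025   # square size in meters (placeholder)

# Synchronized grayscale frame pairs from the two cameras (placeholder file patterns)
frames_a = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in sorted(glob.glob("camA_*.png"))]
frames_b = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in sorted(glob.glob("camB_*.png"))]

# Per-camera intrinsics/distortion, e.g. exported from the device calibration (placeholder files)
K_a, dist_a = np.load("camA_K.npy"), np.load("camA_dist.npy")
K_b, dist_b = np.load("camB_K.npy"), np.load("camB_dist.npy")

# 3D chessboard points in the board's own coordinate frame
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, pts_a, pts_b = [], [], []
for frame_a, frame_b in zip(frames_a, frames_b):
    ok_a, corners_a = cv2.findChessboardCorners(frame_a, BOARD)
    ok_b, corners_b = cv2.findChessboardCorners(frame_b, BOARD)
    if ok_a and ok_b:  # keep only pairs where both cameras see the full board
        obj_points.append(objp)
        pts_a.append(corners_a)
        pts_b.append(corners_b)

# Solve only for the rotation R and translation T between the cameras; intrinsics stay fixed
ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
    obj_points, pts_a, pts_b, K_a, dist_a, K_b, dist_b,
    frames_a[0].shape[::-1], flags=cv2.CALIB_FIX_INTRINSIC)
print("Rotation between cameras:\n", R, "\nTranslation:\n", T)
```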

Move to the how:

Computer vision. Maybe look into the solution by kornia?
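
For reference, a rough sketch of what the kornia route might look like, based on kornia's image-stitching example. Untested here; the LoFTR weights, image paths, and the RGB32 load type are assumptions.

```python
import torch
import kornia as K
from kornia.contrib import ImageStitcher
from kornia.feature import LoFTR

# Load two overlapping frames as float tensors in [0, 1], shape (1, 3, H, W) (placeholder paths)
img_left = K.io.load_image("cam_left.jpg", K.io.ImageLoadType.RGB32)[None]
img_right = K.io.load_image("cam_right.jpg", K.io.ImageLoadType.RGB32)[None]

# LoFTR does the matching; ImageStitcher estimates the homography with RANSAC and blends
stitcher = ImageStitcher(LoFTR(pretrained="outdoor"), estimator="ransac")
with torch.no_grad():
    panorama = stitcher(img_left, img_right)  # (1, 3, H, W_pano)
```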

Erol444 added the enhancement label on Sep 1, 2022
Erol444 added the sdk label on Mar 27, 2023

tinito commented Mar 30, 2023

Any progress on the stitching part?

Erol444 (Member, Author) commented Mar 30, 2023

Hi @tinito, we haven't created a demo of this yet, as there wasn't much interest.

john-maidbot commented Apr 21, 2023

I was also interested in this. However, in my first attempt I hit a dead end trying to convert the kornia image stitching to an ONNX model (I even tried different matchers and feature descriptors; all failed to convert). There were so many errors and warnings from the ONNX tracer, and I didn't want to go through patching all of that or rewriting the kornia models.
So I am going to see if I can at least offload the feature extraction to the device, and then do matching and homography estimation on the host.

Update: I ended up running SuperPoint on the device and then doing kNN feature matching and homography estimation on the host with OpenCV. SuperPoint runs decently fast on device. (I'm currently using a decimated 320x240 input image, since running it on a 640x480 input used up all the VPU memory on the OAK-D Lite; this is because I run the model on two different images, so I hold two copies of the SuperPoint model in memory. Maybe there's a better way to do this?)
I guess with fixed cameras you could calibrate once to get the extrinsics, but I think there are instances where you might want to estimate the best transform on the fly.
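
The host side of that pipeline looks roughly like the sketch below. It is only a sketch: it assumes the keypoints and descriptors have already been decoded from the on-device SuperPoint output into numpy arrays, and the output canvas size and ratio-test threshold are arbitrary.

```python
import cv2
import numpy as np

def stitch(img_a, img_b, kp_a, desc_a, kp_b, desc_b):
    """kp_*: (N, 2) pixel coords, desc_*: (N, 256) float32 SuperPoint descriptors."""
    # kNN matching with Lowe's ratio test (SuperPoint descriptors are float, so L2 norm)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:
        raise RuntimeError("not enough matches to estimate a homography")

    # Robust homography from image A to image B
    src = np.float32([kp_a[m.queryIdx] for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx] for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp A into B's frame and paste B on top (naive blending)
    h, w = img_b.shape[:2]
    pano = cv2.warpPerspective(img_a, H, (w * 2, h))
    pano[0:h, 0:w] = img_b
    return pano
```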
