Document system, modularization, and interaction life-cycle #2
@jywarren I've added proper documentation adhering to the standards above. Let me know what you think.
Hi, the docs are really nice! Fantastic work! I wonder if an even simpler non-technical intro sentence or two is worthwhile, something like your one-liner at the top of the page.
Performance - also thinking: could there be a brief section on performance? Like, roughly, what parts take the longest, and some sense of how long one might expect small or large images to take at each major stage - "a few seconds", "almost instantly", or "up to 60 seconds". Also, do you resize the images before running this, for optimization? What if the images are really huge? We should note if you're resizing, or if this is an option. Oops, it looks like the demo is offline? https://rexagod.github.io/matcher-core/demo/ Great work on this!
Ah, and one more thing, sorry! Let's add a section almost at the top discussing uses - like, we can link to LDI and say "here's one use case, and an example of how integration can work". Make sense?
@jywarren Regarding "roughly, what parts take the longest, and some sense of how long one might expect small or large images to take at each major stage" - I think e7fbf6f satisfies this? What do you think? Should I go deeper and add more detailed and specific info for this, or is it okay?
@jywarren As I stated in my proposal, all images are pyrDowned (https://docs.opencv.org/2.4/doc/tutorials/imgproc/pyramids/pyramids.html) to 640 by 480 and then evaluated. I thought about including this info in the docs but was afraid that it would be too "in-depth" for the common user who just wants the points. What do you think? Should I add this to the readme?
Definitely! And the image-sequencer one too!
I think the docs are looking good. Is there an easy way to set the image downscale size as a parameter?
(See line 142 in 92d54fd.)
@jywarren The algorithm is optimized best for 640 by 480 images, and increasing this will only increase the computational overhead, while decreasing these dimensions could cause loss of accuracy. I'd suggest not exposing these params to the user since it could very well produce errors (incorrect ratios, etc.) or simply wrong results (Gaussian pyramids of smaller sizes are susceptible to overlaps), unless there is some advantage to this that I'm not seeing, or maybe some measure by which this could be prevented. Also, all this "downsizing" is done in a virtual canvas that's not appended to the DOM - just created, evaluated, and destroyed - while the original image is used for overlay manipulations. What do you think?
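As a rough illustration of the virtual-canvas downsizing described above, here is a minimal sketch assuming a browser context; the helper name is hypothetical, and the plain drawImage resize is only a stand-in for the pyrDown-style Gaussian downsampling the thread actually refers to.

```javascript
// Illustrative sketch only: the real pipeline uses pyrDown-style Gaussian
// downsampling; a plain canvas resize stands in for it here.
function downscaleForDetection(img, width = 640, height = 480) {
  // "Virtual" canvas: created in memory, never appended to the DOM.
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;

  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0, width, height);

  // Only this reduced-resolution pixel data is evaluated for keypoints;
  // the full-size original is kept for overlay manipulations.
  const pixels = ctx.getImageData(0, 0, width, height);

  // The canvas simply goes out of scope here: created, evaluated, destroyed.
  return pixels;
}
```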
Was thinking about this, and maybe we can pass in ratios smaller than 640:480? Although I'm not sure that would be a solid implementation as it is right now, so let me run some test cases for this and get back to you!
Okay, so I just tried this, and the overhead was negligible (even for 1080p images, or higher!), which may be due to the performance upgrade that was incorporated - I never knew we had optimized the algorithm to such an extent! I've also tested this on small canvas sizes (down to 352x240) and was not able to find any outliers, but this may become a problem when the two images are very similar, almost identical in nature, and the user expects a larger set of (accurate) points, where these smaller pyramids may generate fewer and less accurate results. Perhaps we can add a warning to alert users that smaller dimensions imply fewer points (in rare cases) just to be on the safe side, or maybe restrict them to resolutions equal to, or above, 480p?
OK, so why don't we default to some strong default, and add a few other suggested dimensions and some of the text from this issue as guidance (or just link to it?) so that people can make good decisions with your extra context here, yeah! Docs can specify the ideal constraints. And we can even say "if not sure, just stick to the defaults!" :-)
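A hedged sketch of that "strong default plus guidance" idea follows; the option names and the warning text are hypothetical, not matcher-core's actual API, and the 640x480 recommendation is taken from the discussion above.

```javascript
// Hypothetical option handling based on the discussion above; names and
// thresholds are illustrative, not matcher-core's actual API.
const DEFAULT_SIZE = { width: 640, height: 480 }; // recommended default

function resolveCanvasSize(options = {}) {
  const width = options.width || DEFAULT_SIZE.width;
  const height = options.height || DEFAULT_SIZE.height;

  if (width < DEFAULT_SIZE.width || height < DEFAULT_SIZE.height) {
    // Smaller Gaussian pyramids can occasionally return fewer, less
    // accurate points, especially when the two images are nearly identical.
    console.warn(
      'matcher-core: sizes below 640x480 may yield fewer or less accurate ' +
      'points; if not sure, just stick to the defaults.'
    );
  }
  return { width, height };
}
```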
Done! Please have a look!
Hi @rexagod - Linking to discussion here: https://publiclab.org/notes/rexagod/03-11-2019/gsoc-proposal-mapknitter-orb-descriptor-w-auto-stitching-pattern-training-and-live-video-support-and-ldi-revamp-major-ui-enhancements#c22147 and https://publiclab.org/notes/rexagod/03-11-2019/gsoc-proposal-mapknitter-orb-descriptor-w-auto-stitching-pattern-training-and-live-video-support-and-ldi-revamp-major-ui-enhancements#c23688
So perhaps the docs might outline something like:
- findPoints() and findPointMatches()
- findPoints(img) returns points in the format [{...},{...}]
- findPointMatches(img1points, img2points) returns matches in the format [{...},{...}]
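To make the outline concrete, here is a usage sketch that assumes only the two function names and the [{...},{...}] return shapes listed above; everything else (synchronous calls, the wrapper name) is an assumption for illustration.

```javascript
// Usage sketch based only on the outline above; the function names and the
// [{...},{...}] return shapes come from that outline, the rest is assumed.
function matchPair(img1, img2) {
  const img1points = findPoints(img1); // points for the first image
  const img2points = findPoints(img2); // points for the second image

  // Pairwise matches between the two point sets.
  return findPointMatches(img1points, img2points);
}
```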
Creating good documentation (see https://www.npmjs.com/package/inline-markdown-editor for some really simple but specific documentation) will help us make good decisions too!