
Fix issue where precomputed knn aren't being reused #2171

Merged: 1 commit into tensorflow:master on May 22, 2019

Conversation

@cannoneyed (Contributor) commented Apr 30, 2019:

Fixes #2082.

@wchargin (Contributor) commented:

@manivaradarajan: Let’s make sure to get this in for the 1.14 release.

@wchargin requested review from wchargin and removed the review request for manivaradarajan on May 3, 2019 at 22:18.
@wchargin (Contributor) left a review comment:

Thanks. This looks correct, and appears to fix the divergence for me.
Before this commit, running t-SNE with perplexity=8 then re-running with
perplexity=100 and waiting for a minute or so caused clusters to spiral
farther and farther apart. After this commit, it behaves correctly.

Almost a shame—the wrong behavior was actually really pretty. :-)

(Review thread on tensorboard/plugins/projector/vz_projector/data.ts: outdated, resolved)
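For readers following along, here is a minimal sketch of the kind of guard this fix needs; it is not the actual data.ts code, and `KnnCache`, `NearestEntry`, and `computeKnn` are illustrative names. The unsound version reused a neighbor list computed for a smaller k (say, perplexity=8) when a larger k was needed (perplexity=100), which matches the divergence described above; a sound cache only reuses precomputed neighbors when it holds at least as many as the current run requests.

```typescript
// Illustrative sketch, not the actual TensorBoard projector code.
interface NearestEntry {
  index: number;
  dist: number;
}

class KnnCache {
  private cachedK = 0;
  private cachedNeighbors: NearestEntry[][] | null = null;

  getNeighbors(
    points: Float32Array[],
    k: number,
    computeKnn: (pts: Float32Array[], k: number) => NearestEntry[][]
  ): NearestEntry[][] {
    // Reuse is only sound when the cache was built with at least k
    // neighbors per point; reusing a smaller cache silently truncates
    // the neighborhoods and corrupts the embedding.
    if (this.cachedNeighbors !== null && this.cachedK >= k) {
      // Each cached list is sorted by distance, so its first k entries
      // are exactly the k nearest neighbors.
      return this.cachedNeighbors.map((list) => list.slice(0, k));
    }
    this.cachedNeighbors = computeKnn(points, k);
    this.cachedK = k;
    return this.cachedNeighbors;
  }
}
```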
@cannoneyed (Contributor, Author) commented:

@wchargin I wound up fixing another knn-related bug in this PR (one that @nsthorat found while debugging the standalone release), so you'll probably want to give it another look.

tl;dr: there was some slight drift between the umap.d.ts file and the umap implementation's function signature, which caused an issue with small dataset sizes.
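To make that failure mode concrete, here is a hypothetical sketch of this class of bug; `nearestNeighbors`, the clamping behavior, and the k = 15 default are invented for illustration and are not the real umap-js API. When a .d.ts keeps promising behavior the implementation no longer has, large datasets keep working by accident and only small ones break.

```typescript
// Hypothetical example of declaration drift; NOT the real umap-js API.
//
// The stale declaration promised that an oversized k was clamped internally:
//
//   /** Returns up to k neighbors per point; k is clamped to data.length - 1. */
//   export function nearestNeighbors(data: number[][], k: number): number[][];
//
// Suppose the implementation later dropped the clamping. A fixed default
// such as k = 15 then works on large datasets but throws on small ones.
export function nearestNeighbors(data: number[][], k: number): number[][] {
  if (k >= data.length) {
    // Only small datasets (data.length <= k) reach this path; a caller
    // coded against the stale .d.ts never learns it must clamp k itself.
    throw new RangeError(
      `k=${k} must be smaller than the number of points (${data.length})`
    );
  }
  // Placeholder body: return k zeroed neighbor indices per point.
  return data.map(() => new Array<number>(k).fill(0));
}
```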

@wchargin (Contributor) commented May 6, 2019:

Ah, didn’t see your comment before starting the review. Thanks for the
context. Please separate the other fix into another PR.

@cannoneyed (Contributor, Author) commented:

OK cool, I'll move the other fix into a separate PR.

@cannoneyed (Contributor, Author) commented:

New PR opened here.

@wchargin (Contributor) left a review comment:

Great; thank you! Ideally the whitespace changes on otherwise-unchanged
lines in this PR should be omitted, too—what I meant by my comment was
“please avoid introducing new whitespace errors”. If we could keep
future PRs localized, that’d be great.

@wchargin merged commit 00495e1 into tensorflow:master on May 22, 2019.

Successfully merging this pull request may close the following issue:

k-nearest-neighbors caching is broken (unsound) (#2082)