Todo list #8
Comments
I don't need it. I didn't try it. But for the sake of completeness: https://lemire.me/blog/2019/12/19/xor-filters-faster-and-smaller-than-bloom-filters/ (Hello btw! :3)
Some update on this topic:
Some update: The MinHash has been released in
Thanks for your work.
Update: XOR filters have been released in the
Update: the Scalable Bloom Filter, with optimizations and bug fixes, has been released in version 3.0.0.
Hi @Callidon and @folkvir - thanks for a really useful library! I had a suggestion for an additional data structure, as well as a question/suggestion for improvement on the CountingBloomFilter.

I have a need for a Time-To-Live, or "expiring", CountingBloomFilter. That is, when entries are added to the CountingBloomFilter, they will automatically be removed after some number of milliseconds. In addition, upon creation of the CountingBloomFilter, the creator can optionally specify a "dispose" callback function that will get called when the entry expires. The idea is generally to merge the concepts of a TTL cache (e.g. https://github.com/isaacs/ttlcache) with a CountingBloomFilter. Obviously this could just be built as a separate data structure on top of CountingBloomFilter that manages the timeouts, but I think there is a lot of utility in having it in one single data structure.

I am going to implement this for a project where I need it, but please let me know if you would like me to contribute it back to your repository.

My other question relates to the implementation of the
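As a rough illustration of the "separate data structure on top of CountingBloomFilter" variant described above, here is a minimal sketch in TypeScript. It assumes the library exposes a `CountingBloomFilter` with a `create(capacity, errorRate)` factory and `add`/`remove`/`has` methods; the class, parameter names, and timer strategy are illustrative assumptions, not the library's confirmed API or the author's actual implementation.

```ts
import { CountingBloomFilter } from 'bloom-filters'

// Sketch of a TTL-enabled counting Bloom filter: every insertion is
// automatically undone after `ttl` milliseconds, and an optional
// `dispose` callback fires when the entry expires.
class TTLCountingBloomFilter {
  private readonly filter: CountingBloomFilter

  constructor(
    capacity: number,
    errorRate: number,
    private readonly ttl: number,
    private readonly dispose?: (item: string) => void
  ) {
    // Assumed factory: CountingBloomFilter.create(capacity, errorRate)
    this.filter = CountingBloomFilter.create(capacity, errorRate)
  }

  add(item: string): void {
    this.filter.add(item)
    // Each add() schedules its own matching remove(), so the counters
    // stay balanced even if the same item is inserted several times.
    setTimeout(() => {
      this.filter.remove(item)
      if (this.dispose) this.dispose(item)
    }, this.ttl)
  }

  has(item: string): boolean {
    return this.filter.has(item)
  }
}

// Usage: entries expire two seconds after insertion
const filter = new TTLCountingBloomFilter(1000, 0.01, 2000, item => {
  console.log(`expired: ${item}`)
})
filter.add('alice')
```

Scheduling one timeout per insertion keeps the counters consistent at the cost of a pending timer per live entry; folding the expiry bookkeeping into the filter itself, as suggested, would avoid that overhead and let the TTLs be exported alongside the counters.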
Feel free to modify this list or add suggestions in comments.