Add zremrangebyrank #47

Merged
merged 2 commits into tj:master on Mar 2, 2019

Conversation

peng-huang-ch
Contributor

Add zremrangebyrank to limit useful information count
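
For context, a minimal sketch of the idea, assuming an ioredis-style client where Redis commands are exposed as lowercase methods; the key layout and member format here are illustrative, not the library's actual implementation:

```ts
import Redis from "ioredis";

const redis = new Redis();

// Record a hit and keep only the newest `max` members of the sorted set.
// Scores are timestamps; members just need to be unique per hit.
async function recordHit(id: string, max: number): Promise<void> {
  const key = `limit:${id}`;
  const now = Date.now();
  await redis.zadd(key, now, `${now}:${Math.random()}`);
  // Ranks are 0-based and ordered by score (oldest first), so removing
  // ranks 0 .. -(max + 1) leaves at most the newest `max` entries.
  await redis.zremrangebyrank(key, 0, -(max + 1));
}
```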

@noamshemesh
Collaborator

Thanks @penghap.
On second thought, given that keys already have an expiry, don't you think the time spent finding the indexes to delete isn't worth it?

@peng-huang-ch
Contributor Author

peng-huang-ch commented Mar 2, 2019 via email

@noamshemesh
Collaborator

noamshemesh commented Mar 2, 2019 via email

@peng-huang-ch
Contributor Author

peng-huang-ch commented Mar 2, 2019 via email

@peng-huang-ch
Contributor Author

@noamshemesh The tests failed; it seems to be because of the Node version.

@noamshemesh
Collaborator

That's fine, I'll drop support for Node 4 and add the newer versions. Thanks

@noamshemesh noamshemesh merged commit 2103dcc into tj:master Mar 2, 2019
@noamshemesh
Collaborator

Merged. Thank you!

@noamshemesh noamshemesh mentioned this pull request Mar 2, 2019
@Kikobeats
Contributor

Hello, I looked at this PR to work out the benefit of setting tidy=true, but I couldn't find the point.

I understand that, since you are counting things in Redis, you don't want to keep more than max entries, so you start deleting the old counts for the same thing.

My question is: why isn't this the default behavior? Is it something related to performance? Or does it not matter?

What I really want to figure out is when I should use it 🙂

(after you reply I can open another PR to extend the explanation in the docs)

@peng-huang-ch
Contributor Author

Yes, if we clean up the redundant data on every call, it will cost some performance. But when there are a lot of records for a key in Redis, it will improve things. I'm just providing a choice; when and where to use it depends on the actual situation 😄
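
To make the trade-off concrete, a hedged usage sketch. The `tidy` flag is the option discussed above; the rest of the constructor shape (id/db/max/duration and the `get` callback) is assumed from the typical ratelimiter-style API and may differ from this project's exact signature:

```ts
import Limiter from "ratelimiter";
import Redis from "ioredis";

const db = new Redis();

// With tidy enabled, each call also trims the key's sorted set down to `max`
// entries via ZREMRANGEBYRANK: a small extra cost on every write, a smaller
// set (and faster reads) once a key has accumulated many records.
const limit = new Limiter({ id: "user:42", db, max: 100, duration: 60000, tidy: true });

limit.get((err, res) => {
  if (err) throw err;
  console.log(res.remaining, res.total, res.reset);
});
```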

@Kikobeats
Contributor

Kikobeats commented Mar 27, 2019

Thanks for the comment!

Are you using it in a real production environment?

How many records do you think would be enough to make enabling it worthwhile?

Also, I'm thinking it could be interesting to have a strategy that runs tidy only after every N calls 🤔
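
A rough sketch of that idea, assuming the same ioredis-style client as in the earlier sketch; the counter here is in-process and purely illustrative, not something the library currently does:

```ts
import Redis from "ioredis";

const redis = new Redis();
let hits = 0;

// Amortize the trim: add every hit, but only run ZREMRANGEBYRANK on every
// Nth call. The counter is per-process, so each worker tidies on its own
// schedule; a shared counter (e.g. Redis INCR) would make it global.
async function recordHitAmortized(id: string, max: number, everyN = 100): Promise<void> {
  const key = `limit:${id}`;
  const now = Date.now();
  await redis.zadd(key, now, `${now}:${Math.random()}`);
  if (++hits % everyN === 0) {
    await redis.zremrangebyrank(key, 0, -(max + 1));
  }
}
```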

@peng-huang-ch
Contributor Author

Yes, I've already used it, in a service where I need to limit the count.
I think the max records setting is enough.
It sounds interesting, but it would need another way of keeping track of the N.
