Implement RetryingClient #324
Conversation
There are a couple existing "retry" libraries on PyPI, and it is probably worth either implementing their full set of functionality, or just using them directly. Here are the two I found:
https://pypi.org/project/retry/
https://pypi.org/project/retrying/
Were there any specific features from those libraries you were wanting? I didn't look for existing libraries because adding dependencies for something like this would be a shame in my eyes.
Tenacity is an interesting implementation and is already well used on OpenStack. However, I agree with @martinnj: if possible, it would be better to keep pymemcache a pure Python library (I voluntarily ignore …).
Also, note that on Wednesday I started to implement similar features as well; however, I hadn't yet proposed them through a pull request before today. #326 By introspecting the passed clients, I dynamically decorate their methods to make them "retryable". It's not a problem for me if you decide to continue with this version (@martinnj's). Maybe we could converge our implementations and co-author something; it's up to you to decide. I propose, you dispose.
Wild timing. I'll hold off on spending more time until a collaborator decides either way. :)
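The introspection approach described above can be sketched roughly as follows. This is an illustrative sketch, not the code from #326: `make_retryable`, its `attempts` parameter, and the method list are all hypothetical names.

```python
import functools

def make_retryable(client, attempts=3, methods=("get", "set", "delete")):
    """Hypothetical sketch: wrap the named methods of an existing client
    instance so each call is retried up to `attempts` times."""
    for name in methods:
        original = getattr(client, name, None)
        if not callable(original):
            continue

        @functools.wraps(original)
        def wrapper(*args, _original=original, **kwargs):
            # `_original` is bound as a default argument so each wrapper
            # keeps its own method instead of sharing the loop variable
            for attempt in range(attempts):
                try:
                    return _original(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise

        setattr(client, name, wrapper)
    return client
```

The default-argument trick in `wrapper` avoids the classic late-binding closure bug when decorating several methods in one loop.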
I had some time so I thought I'd give it a shot anyway. Changes:
I haven't added a unit-test for the "delay" function. If forced, I would time how long the call took and check that it was strictly longer than the delay argument. But I also feel like inserting second-long delays in the unit tests is a bit weird. Let me know if I should start squashing my commits so there is less bloat in the history. EDIT: Might have sent out an email notification or two too many last night. Sorry about that. GitHub and I are not always best friends.
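For reference, the timing-based test described above could look something like this. It's a sketch: `call_with_retries` is a hypothetical stand-in for the PR's retry-with-delay behavior, and the test keeps the delay short to avoid second-long pauses in the suite.

```python
import time
from unittest import mock

def call_with_retries(fn, attempts=3, delay=0.05):
    # hypothetical helper mirroring a retry-with-delay mechanism
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

def test_delay_is_respected():
    fn = mock.Mock(side_effect=[OSError("boom"), "ok"])
    start = time.monotonic()
    assert call_with_retries(fn, attempts=2, delay=0.05) == "ok"
    # one failure -> one sleep; elapsed time should be at least the delay
    assert time.monotonic() - start >= 0.05
```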
@martinnj I'm out of town right now and mostly away from my computer. I will have time to look through this next week; apologies for the delay.
No stress, it's all good.
Hello @martinnj: Could you please also add related documentation with some usage examples?
Hello, see my inline comment about `abc`.
Nice changes, thank you for applying my previous comments.
Here are a couple of new suggestions; they aren't mandatory comments. I think they can help improve this feature.
Also, I think it could be worth rewording and squashing your commits into a single all-in-one commit, to avoid spreading these changes across the history. Example:
All these changes are related to a single feature.
These changes allow users to activate and configure their socket keepalive. For now users can configure: - the enabling of their socket keepalive; - the time the connection needs to remain idle before TCP starts sending keepalive probes; - the time (in seconds) between individual keepalive probes; - the maximum number of keepalive probes TCP should send before dropping the connection. This could be used by users who want to handle fine-grained failures and scenarios. It could also be another way to handle problems more or less similar to pinterest#307, and could work in addition to the retry mechanisms of pinterest#324. For now this feature only supports Linux; support for other systems could be added through follow-up patches.
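The keepalive settings listed in that commit message map onto standard socket options. A minimal sketch, assuming Linux (the `TCP_KEEP*` option names below are Linux-specific, and `enable_keepalive` is a hypothetical helper, not pymemcache API):

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=3):
    """Linux-only sketch: turn on TCP keepalive and tune its probes."""
    # enable keepalive probes on the socket
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # idle seconds before the first probe is sent
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    # seconds between individual probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    # probes sent before the connection is dropped
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
```

On macOS and Windows the per-probe options have different names (or are unavailable), which is why the feature is gated to Linux for now.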
(force-pushed from 80f6b3a to d300a2b)
Plan was to squash just before merging, since the commits give context to some of the comments in the PR.
(force-pushed from d300a2b to 52e0238)
Thanks to you too. :) No problem, it makes sense to squash it just before merging.
@cgordon Hate to be pushy, but any news on this one? :)
LGTM! I left one comment that you can do, or not, at your discretion.
Hmm, I hit "Comment" instead of "Approve" by accident; anyone know how to navigate this UI to approve this PR?
(force-pushed from 52e0238 to 75fe5c8)
@cgordon Good question, no clue. I've hit "Re-request review" for the blocking changes; that might give you a prompt to clear it. :)
The goal here is basically to help with #307, and it takes inspiration from this comment.
The idea is a thin "wrapper client" that does nothing but forward calls and retry as appropriate.
The features are as follows:
I haven't written tests yet since I wanted some opinions on the interface/constructor first.
Currently the class uses `__getattr__` to pass through any attributes/methods that aren't present on itself to the inner client. It also passes through `dir()` calls to the inner client, to allow checking available methods, attributes, etc. Is this an acceptable implementation, or is it too opaque?
I also did a more naive implementation, it is commented out in the code.
Feedback and suggestions are welcome.
I'll write the unit tests once the feel of the client is more solid.
@cgordon are you the person to go to for this? 😄