Implement & use RateLimitTransport #145
Conversation
Force-pushed from 6ae82b4 to 3a19bbf
I think there might be one bug I found, plus a few gentle suggestions; however, you might inform me that every line is intentional, at which point I will approve!
github/transport.go (outdated)

```go
// drainBody reads all of b to memory and then returns two equivalent
// ReadClosers yielding the same bytes.
func (rlt *rateLimitTransport) drainBody(b io.ReadCloser) (r1, r2 io.ReadCloser, err error) {
```
do you foresee this method needing rlt in scope? Not a huge preference, but I would just define this as a plain drainBody helper as opposed to a method on *rateLimitTransport
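For illustration, a minimal sketch of what that standalone helper could look like. The body here is an assumption (it mirrors the drainBody pattern used in net/http/httputil and needs the bytes, io and net/http imports); the diff above only shows the signature:

```go
// drainBody reads all of b into memory and then returns two equivalent
// ReadClosers yielding the same bytes.
func drainBody(b io.ReadCloser) (r1, r2 io.ReadCloser, err error) {
	if b == nil || b == http.NoBody {
		// Nothing to copy; preserve the sentinel values.
		return http.NoBody, http.NoBody, nil
	}
	var buf bytes.Buffer
	if _, err = buf.ReadFrom(b); err != nil {
		return nil, b, err
	}
	if err = b.Close(); err != nil {
		return nil, b, err
	}
	return io.NopCloser(&buf), io.NopCloser(bytes.NewReader(buf.Bytes())), nil
}
```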
github/transport.go (outdated)

```go
	rlt.m.Unlock()
}

func (rlt *rateLimitTransport) isWriteMethod(method string) bool {
```
this one I feel more strongly about: I'd have it just as a little helper and not a method
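Sketched as a plain helper it could look like the following; the method set (POST/PATCH/PUT/DELETE) is a guess on my part, since the diff above only shows the signature:

```go
// isWriteMethod reports whether the request method mutates state on the
// GitHub side, i.e. whether the write delay should apply afterwards.
func isWriteMethod(method string) bool {
	switch method {
	case http.MethodPost, http.MethodPatch, http.MethodPut, http.MethodDelete:
		return true
	}
	return false
}
```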
github/transport.go (outdated)

```go
		log.Printf("[DEBUG] Sleeping %s between write operations", writeDelay)
		time.Sleep(writeDelay)
	}
	if rlt.isWriteMethod(req.Method) {
```
nitpick suggestion:

```go
rlt.delayNextRequest = rlt.isWriteMethod(req.Method)
```
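In context, the suggestion collapses the conditional from the diff above into a single assignment (a sketch, assuming the branch body currently just sets the flag to true and that the flag is consumed where the sleep happens):

```go
// Before (as in the diff above):
if rlt.isWriteMethod(req.Method) {
	rlt.delayNextRequest = true
}

// After (the suggestion):
rlt.delayNextRequest = rlt.isWriteMethod(req.Method)
```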
github/transport.go (outdated)

```go
	}
	resp.Body = r1
	ghErr := github.CheckResponse(resp)
	if err != nil {
```
need to check ghErr here, or reuse the err defined above
looking further down, maybe this error check is not wanted at all? It seems like ghErr is handled below
Good spot! 🙈
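For reference, a sketch of the corrected spot, assuming the stray `if err != nil` block is simply dropped, `ghErr` keeps being handled further down, and the second body copy is restored for the caller:

```go
resp.Body = r1
// err from the round trip was already handled above; only the
// GitHub-specific error matters here.
ghErr := github.CheckResponse(resp)
// ... ghErr is inspected below, e.g. for *github.RateLimitError or
// *github.AbuseRateLimitError, to decide whether to back off and retry ...
resp.Body = r2
```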
github/transport.go (outdated)

```go
// drainBody reads all of b to memory and then returns two equivalent
// ReadClosers yielding the same bytes.
func (rlt *rateLimitTransport) drainBody(b io.ReadCloser) (r1, r2 io.ReadCloser, err error) {
```
I usually need a refresher when it comes to Go's io package (my mental model always thinks things are backwards). I have a feeling what you've written is exactly what you need, but would io.TeeReader be of any use? Passing the first reader to CheckResponse, which would fill the second reader with the content? My apologies if there is some gotcha you already explored.
I don't see how TeeReader would be handy without more re-buffering/hacking. 🤔
It seems to do the same thing, but it needs an io.Writer for the copy, and the io.ReadCloser coming from the response body doesn't satisfy that interface.
Maybe I'm missing something obvious?
```go
		return nil, err
	}
	resp.Body = r1
	ghErr := github.CheckResponse(resp)
```
I'm seeing now that github.CheckResponse accepts the response object, making the use of TeeReader impossible. The first time I read it I thought it accepted just a Reader, in which case TeeReader would have worked nicely:

```go
var r2 bytes.Buffer
r1 := io.TeeReader(resp.Body, &r2)
ghErr := github.CheckResponse(r1)
resp.Body = &r2
```

Also, the interfaces might not have aligned: since resp.Body is a ReadCloser you would have to wrap it just like you did in drainBody anyway. Never mind me... always looking for an excuse to use io builtins 😛
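As a side note (purely illustrative, not part of the PR): io.TeeReader's signature is `func TeeReader(r io.Reader, w io.Writer) io.Reader`, so both ends would need wrapping before they could stand in for resp.Body, and it only copies whatever CheckResponse actually reads:

```go
// Hypothetical TeeReader variant, only to show the extra wrapping it would need:
var buf bytes.Buffer
resp.Body = io.NopCloser(io.TeeReader(resp.Body, &buf)) // resp.Body must remain an io.ReadCloser
ghErr := github.CheckResponse(resp)                     // copies into buf only what CheckResponse reads
resp.Body = io.NopCloser(&buf)                          // restore whatever was copied
```

Since CheckResponse typically reads the body only for error responses, the buffer could stay empty on a successful call, which is one more argument for buffering everything up front as drainBody does.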
…imits Implement & use RateLimitTransport

* Leveraging existing patterns/code used for the normal terraform-provider-github package, which was first implemented in [this PR](integrations/terraform-provider-github#145)
* Upgraded GH client to v54 for compatibility
As described in the attached code comments, this PR implements the best practices GitHub describes in their official documentation.
This is the second of two PRs addressing #5.
Closes #5
The upside is that with this patch the user should never see rate-limit related errors anymore, as we'll always retry accordingly and, more importantly, avoid hitting the limit in the first place.
Downsides include a slowdown of CUD operations and generally slower UX when managing a large number of resources. Admittedly, if the previous/current experience was "rate limit errors" and the new one is a slow but successful apply/destroy, I'd still call that a win. 🤷♂️ See some figures below.
We could in theory expose a flag to turn off the sleep logic and/or mutex, but I'm not too keen on doing that pre-emptively without further context, especially because we're just implementing GitHub's own best practices.
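To make the mechanism concrete, here is a minimal, self-contained sketch of the idea. It is simplified (the real github/transport.go also drains the response body and retries on rate-limit/abuse errors), and the names used here are taken from the review comments above rather than from the final code:

```go
package github

import (
	"log"
	"net/http"
	"sync"
	"time"
)

// writeDelay follows GitHub's guidance to wait at least one second
// between requests that create, update or delete content.
const writeDelay = 1 * time.Second

// rateLimitTransport wraps another RoundTripper, serializes all requests
// through a mutex and inserts a delay after every write operation.
type rateLimitTransport struct {
	transport        http.RoundTripper
	m                sync.Mutex
	delayNextRequest bool
}

func (rlt *rateLimitTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Only one request to the GitHub API is in flight at a time,
	// per GitHub's best-practices documentation.
	rlt.m.Lock()
	defer rlt.m.Unlock()

	// If the previous request was a write, wait before sending the next one.
	if rlt.delayNextRequest {
		log.Printf("[DEBUG] Sleeping %s between write operations", writeDelay)
		time.Sleep(writeDelay)
	}
	rlt.delayNextRequest = isWriteMethod(req.Method)

	return rlt.transport.RoundTrip(req)
}

// isWriteMethod reports whether the request mutates state on the GitHub side.
func isWriteMethod(method string) bool {
	switch method {
	case http.MethodPost, http.MethodPatch, http.MethodPut, http.MethodDelete:
		return true
	}
	return false
}
```

The transport would then be wired in roughly as `httpClient.Transport = &rateLimitTransport{transport: httpClient.Transport}` before the http.Client is handed to the go-github client.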
100 repositories test

- Current implementation: baseline
- Serializing requests via mutex: roughly 9x, 6x and 5x slower than the current implementation
- 1 sec delay between write operations: roughly 22x, 6x and 16x slower than the current implementation (i.e. 2.4x slower, equal, and 3x slower than the mutex-only variant, respectively)