assets: add creater based on dynamic client #220
Conversation
Force-pushed from adcbf62 to 9c29776
TODO: need unit test

/hold
Force-pushed from 962da9a to 7ef565a
Force-pushed from 7ef565a to e520006
Force-pushed from e520006 to 2b6640a
Force-pushed from ccdb5f0 to 973c0c2
/cc @wking
pkg/assets/create/creater.go (Outdated)

```go
// Retry creation until no errors are returned or the timeout is hit.
var lastCreateError error
err = wait.PollImmediateUntil(500*time.Millisecond, func() (bool, error) {
```
Half a second is not a big deal, but if we are worried about flooding the API server it seems like we'd want to sleep between API requests, and not just between passes over the manifests. That would mean replacing PollImmediateUntil with a raw loop:

```go
for len(manifests) > 0 {
	err, refresh := create(ctx, manifests, client, mapper, options)
	// ...
}
```

possibly with a sleep inside create's loop. If you have any manifests at all, it's hard to see the iteration over those manifests completing in less than half a second.
I wouldn't be too concerned with flooding the API server; it should handle creating a few resources just fine (right @sttts?).

I would like to keep wait.PollImmediateUntil because it supports contexts and timeouts. If we wait after each create call, calculating the timeout becomes more complex than we need.
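For reference, a minimal sketch of the retry shape being discussed, assuming a createFn callback that stands in for the PR's create function and reports how many manifests are still missing; the names and return shape here are illustrative, not the PR's exact code:

```go
package create

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryUntilCreated re-runs createFn every 500ms until nothing is left to
// create, or until ctx is cancelled or times out (which is what makes
// PollImmediateUntil convenient for timeout handling).
func retryUntilCreated(ctx context.Context, createFn func(context.Context) (remaining int, err error)) error {
	var lastCreateError error
	err := wait.PollImmediateUntil(500*time.Millisecond, func() (bool, error) {
		remaining, err := createFn(ctx)
		if err != nil {
			lastCreateError = err // remember the last failure, but keep polling
			return false, nil
		}
		return remaining == 0, nil
	}, ctx.Done())
	if err != nil && lastCreateError != nil {
		// surface the more useful creation error instead of the bare timeout
		return lastCreateError
	}
	return err
}
```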
> I would like to keep wait.PollImmediateUntil because it supports contexts and timeouts. If we wait after each create call, calculating the timeout becomes more complex than we need.
I've pushed git://github.com/wking/openshift-library-go.git mid-create-cancel 8e509de with a stab at dropping PollImmediateUntil and moving the wait to per-manifest instead of per-create call. If you're comfortable without a per-manifest rate-limit, I'd recommend just dropping the ticker and looping over manifests as fast as we get responses from the server. Thoughts?
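A rough sketch of that per-manifest pacing idea, assuming a simplified []string manifest list and an applyFn callback standing in for the dynamic-client create of a single manifest; this is only the shape being described, not the code on the mid-create-cancel branch:

```go
package create

import (
	"context"
	"time"
)

// createAllWithTicker paces individual create calls with a ticker instead of
// sleeping between whole passes over the manifest list. Failed manifests are
// retried on the next pass; the context controls the overall timeout.
func createAllWithTicker(ctx context.Context, manifests []string, applyFn func(context.Context, string) error) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()

	for len(manifests) > 0 {
		remaining := make([]string, 0, len(manifests))
		for _, m := range manifests {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C: // one API request per tick
			}
			if err := applyFn(ctx, m); err != nil {
				remaining = append(remaining, m) // failed; retry next pass
			}
		}
		manifests = remaining
	}
	return nil
}
```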
@wking the client itself can be configured to do QPS and rate limiting:

```go
// QPS indicates the maximum QPS to the master from this client.
```

If we are concerned about DoSing the API server, we can set this up in the *rest.Config the caller passes to this helper. I can drop the ticker to 10ms or so just to avoid a hot loop, but each individual create call can be limited by using these parameters IMHO.
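For illustration, a sketch of how a caller could bound the helper's request rate through the rest.Config it passes in; the kubeconfig path and the function name here are placeholders:

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// loadRateLimitedConfig builds a client config whose QPS/Burst cap how fast
// any client created from it may talk to the API server.
func loadRateLimitedConfig(kubeconfigPath string) (*rest.Config, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // steady-state requests per second allowed to the API server
	cfg.Burst = 10 // short bursts permitted above QPS
	return cfg, nil
}
```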
Does the kubeconfig have a QPS setting as well? I don't think this belongs hardcoded here, but should be settable from the outside. If the kubeconfig knows qps, there is nothing to do.
I don't think kubeconfig has that option; we always default QPS to 5 in https://github.com/kubernetes/kubernetes/blob/4992402fa5bee9f7ac6fe931d101d362a4295cea/staging/src/k8s.io/client-go/rest/config.go#L276-L278
Force-pushed from de682c4 to 0967e06
Force-pushed from 670f0a6 to bf5432c
Force-pushed from bf5432c to 2dc857f
Force-pushed from 2dc857f to 9ad542a
Force-pushed from 9ad542a to 8b71890
Force-pushed from 8b71890 to ba120f5
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mfojtik, sttts

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
```go
// StdErr allows to override the standard error output for printing verbose messages.
// If not set, os.StdErr is used.
StdErr io.Writer
```
Seems like Verbose and StdErr should be replaced with a slot for a logrus logger or a generic logging interface. I can file a follow-up for that.
> I can file a follow-up for that.

Filed as #225.
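As a rough illustration of the suggested direction, a hypothetical one-method logging slot that could replace the Verbose/StdErr pair; the interface name and shape are invented here and are not necessarily what #225 settled on:

```go
package create

import (
	"fmt"
	"io"
	"os"
)

// Logger is a hypothetical minimal logging slot: callers could plug in
// logrus, klog, or anything else satisfying a one-method interface.
type Logger interface {
	Printf(format string, args ...interface{})
}

// writerLogger adapts an io.Writer (e.g. os.Stderr) to the Logger interface,
// preserving the current default behaviour of writing verbose messages there.
type writerLogger struct{ w io.Writer }

func (l writerLogger) Printf(format string, args ...interface{}) {
	fmt.Fprintf(l.w, format+"\n", args...)
}

// DefaultLogger mirrors the current "if not set, os.Stderr is used" default.
func DefaultLogger() Logger { return writerLogger{w: os.Stderr} }
```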
```go
// Default QPS in client (when not specified) is 5 requests/per second
// This specifies the interval between "create-all-resources", no need to make this configurable.
interval := 200 * time.Millisecond
```
Still not clear to me why we need more sleeping on top of restConfig's limits.
> Still not clear to me why we need more sleeping on top of restConfig's limits.

I've filed #226 dropping this additional sleep.
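To make the "restConfig's limits already pace requests" point concrete: client-go throttles requests with a token-bucket limiter derived from the config's QPS/Burst (roughly 5 QPS with a burst of 10 when unset), so individual create calls are already spaced without any extra sleep. A toy sketch of that pacing, assuming the stated defaults:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go builds a limiter like this from rest.Config's QPS/Burst,
	// so each API request waits here before going out on the wire.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	start := time.Now()
	for i := 0; i < 20; i++ {
		limiter.Accept() // blocks until the bucket allows another request
	}
	fmt.Printf("20 simulated requests took %v\n", time.Since(start))
}
```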
Typo snuck in with 0967e06 (assets: add creater based on dynamic client, 2019-02-04, openshift#220). CC @mfojtik.
This PR adds a pkg/assets/create package that contains helpers to create multiple resources from disk and wait for them to be created. The rationale is to move this create-and-wait logic out of cluster-bootstrap (formerly bootkube) and make it available to multiple components as needed.
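As a usage sketch of the described helper, assuming a call shape roughly like EnsureManifestsCreated(ctx, manifestDir, restConfig, options); the function and option names here are illustrative and may not match the final API exactly:

```go
package main

import (
	"context"
	"time"

	"github.com/openshift/library-go/pkg/assets/create"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	restConfig, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}

	// Give up if the resources are not all created within five minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Read every manifest under the directory and keep retrying creation
	// until everything exists or the context expires (call shape assumed).
	if err := create.EnsureManifestsCreated(ctx, "/path/to/manifests", restConfig, create.CreateOptions{Verbose: true}); err != nil {
		panic(err)
	}
}
```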