Conversation

@mfojtik mfojtik commented Feb 4, 2019

This PR adds a pkg/assets/create package that contains helpers to create multiple resources from disk and wait for them to be created.

The rationale here is to move this create-and-wait logic out of cluster-bootstrap (formerly bootkube) and make it available to multiple components as needed.
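For orientation, a usage sketch of what calling the new helper could look like; the EnsureManifestsCreated name, the CreateOptions fields, and the paths are assumptions for illustration, not necessarily the final API in this PR.

// Hypothetical usage sketch; the function name, options fields, and paths are assumed.
package main

import (
	"context"
	"log"
	"time"

	"github.com/openshift/library-go/pkg/assets/create"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config; QPS/rate limiting can be tuned on this config (see discussion below).
	restConfig, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Create every manifest found on disk and wait until all of them exist,
	// retrying until the context times out.
	if err := create.EnsureManifestsCreated(ctx, "/assets/manifests", restConfig, create.CreateOptions{Verbose: true}); err != nil {
		log.Fatal(err)
	}
}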

@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 4, 2019
@openshift-ci-robot openshift-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Feb 4, 2019
mfojtik commented Feb 4, 2019

/cc @sttts
/cc @deads2k

mfojtik commented Feb 4, 2019

TODO: need unit test

/hold

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Feb 4, 2019
@mfojtik mfojtik force-pushed the assets-create-01 branch 2 times, most recently from 962da9a to 7ef565a Compare February 4, 2019 14:02
@mfojtik mfojtik force-pushed the assets-create-01 branch 3 times, most recently from ccdb5f0 to 973c0c2 Compare February 4, 2019 15:00
mfojtik commented Feb 4, 2019

/cc @wking


// Retry creation until no errors are returned or the timeout is hit.
var lastCreateError error
err = wait.PollImmediateUntil(500*time.Millisecond, func() (bool, error) {
Member

Half a second is not a big deal, but if we are worried about flooding the API server, it seems like we'd want to sleep between API requests, and not just between manifest iterations. That would mean dropping PollImmediateUntil in favor of a raw loop:

for len(manifests) > 0 {
  err, refresh := create(ctx, manifests, client, mapper, options)
  ...
}

possibly with a sleep inside create's loop. If you have any manifests at all, it's hard to see the iteration over those manifests completing in less than half a second.
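To make the suggestion concrete, here is a rough sketch of such a raw loop with a per-pass sleep; the create helper, its signature, and the manifests map are assumed from this thread rather than copied from the code under review or the branch mentioned below.

// Sketch only: create, manifests, client, mapper, options, and lastCreateError
// are assumed from the surrounding discussion, not the actual PR code.
for len(manifests) > 0 {
	select {
	case <-ctx.Done():
		return ctx.Err() // the caller-provided context controls the overall timeout
	default:
	}

	// One pass over the remaining manifests; successfully created ones are
	// assumed to be removed from the map by create.
	if err := create(ctx, manifests, client, mapper, options); err != nil {
		lastCreateError = err
	}

	// Sleep between passes (or inside create, between manifests) to avoid
	// hammering the API server.
	time.Sleep(100 * time.Millisecond)
}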

Contributor Author

I wouldn't be concerned about flooding the API server; it should handle creating a few resources just fine (right @sttts?)

I would like to keep wait.PollImmediateUntil because it supports contexts and timeouts. If we wait after each create call, calculating the timeout will be more complex (than we need).
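For reference, a minimal sketch of how the helper can honor the caller's context and timeout while keeping wait.PollImmediateUntil; createRemaining is a hypothetical placeholder for the per-pass create logic.

// Sketch: ctx.Done() serves as the stop channel, so a caller-side timeout or
// cancellation ends the polling; createRemaining is a placeholder, not PR code.
ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
defer cancel()

var lastCreateError error
err := wait.PollImmediateUntil(500*time.Millisecond, func() (bool, error) {
	remaining, err := createRemaining(ctx)
	if err != nil {
		lastCreateError = err
		return false, nil // keep retrying; the stop channel bounds the retries
	}
	return remaining == 0, nil
}, ctx.Done())
// On timeout, err is wait.ErrWaitTimeout and lastCreateError holds the last create failure.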

Member

I would like to keep wait.PollImmediateUntil because it supports contexts and timeouts. If we wait after each create call, calculating the timeout will be more complex (than we need).

I've pushed git://github.com/wking/openshift-library-go.git mid-create-cancel 8e509de with a stab at dropping PollImmediateUntil and moving the wait to per-manifest instead of per-create call. If you're comfortable without a per-manifest rate-limit, I'd recommend just dropping the ticker and looping over manifests as fast as we get responses from the server. Thoughts?

Member

I've pushed git://github.com/wking/openshift-library-go.git mid-create-cancel 8e509de...

Rebased onto your 0069198 with 8e509de -> fd93f4f.

Contributor Author

@wking the client itself can be configured to do QPS and rate limiting:

// QPS indicates the maximum QPS to the master from this client.

If we are concerned about DoSing the API server, we can set this up in the *rest.Config the caller passes to this helper? I can drop the ticker to 10ms or so just to avoid a hot loop, but each individual create call can be limited by using these parameters IMHO.

@sttts @deads2k agree?
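A small sketch of that client-side throttling, using only the standard rest.Config fields; the numbers are arbitrary examples, not recommendations from this PR.

// restConfig is a *rest.Config (k8s.io/client-go/rest); QPS/Burst throttle
// every request made through clients built from it. Values are examples only.
restConfig.QPS = 2   // sustained rate: ~2 requests per second
restConfig.Burst = 5 // short bursts above the sustained rate are allowed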

Contributor

Does the kubeconfig have a QPS setting as well? I don't think this belongs hardcoded here; it should be settable from the outside. If the kubeconfig knows QPS, there is nothing to do.

Contributor Author

@soltysh @sttts we pass a rest.Config that has that option, and the caller can decide how they want to throttle the QPS (or even replace QPS with a rate limiter if they want).
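And the rate-limiter variant mentioned above, as a sketch; it assumes a client-go version that exposes rest.Config.RateLimiter.

// flowcontrol is k8s.io/client-go/util/flowcontrol; when RateLimiter is set,
// it takes the place of the QPS/Burst throttle for clients built from this config.
restConfig.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(5.0, 10) // ~5 req/s, burst of 10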

sttts commented Feb 5, 2019

/lgtm
/approve

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Feb 5, 2019
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mfojtik, sttts

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 5, 2019
@openshift-merge-robot openshift-merge-robot merged commit 4dab831 into openshift:master Feb 5, 2019

// StdErr allows overriding the standard error output for printing verbose messages.
// If not set, os.Stderr is used.
StdErr io.Writer
Member

Seems like Verbose and StdErr should be replaced with a slot for a logrus logger or generic logging interface. I can file a follow-up for that.
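As a rough sketch of what that follow-up slot could look like; the interface name and the options field are invented here for illustration, not taken from this PR or from #225.

// Hypothetical replacement for Verbose/StdErr: a pluggable logging hook.
// The interface name and the options field are invented for illustration.
type Logger interface {
	Printf(format string, args ...interface{})
}

type Options struct {
	// Logger, when set, receives verbose progress messages; when nil,
	// verbose output is discarded (or written to os.Stderr, as today).
	Logger Logger
}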

Member

I can file a follow-up for that.

Filed as #225.


// The default QPS in the client (when not specified) is 5 requests per second.
// This specifies the interval between "create-all-resources" passes; no need to make this configurable.
interval := 200 * time.Millisecond
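For reference, the 200ms figure is simply the inverse of that default client-side QPS; a sketch of the arithmetic, not code from the PR.

// 5 requests per second  =>  1s / 5 = 200ms between requests, so a 200ms pass
// interval roughly matches the default client-side throttle.
defaultQPS := 5
interval := time.Second / time.Duration(defaultQPS) // 200 * time.Millisecond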
Member

Still not clear to me why we need more sleeping on top of restConfig's limits.

Member

Still not clear to me why we need more sleeping on top of restConfig's limits.

I've filed #226 dropping this additional sleep.

wking added a commit to wking/openshift-library-go that referenced this pull request Sep 10, 2019
Typo snuck in with 0967e06 (assets: add creater based on dynamic client, 2019-02-04, openshift#220).

CC @mfojtik.
bertinatto pushed a commit to bertinatto/library-go that referenced this pull request Jul 2, 2020
assets: add creater based on dynamic client
bertinatto pushed a commit to bertinatto/library-go that referenced this pull request Jul 2, 2020
Typo snuck in with 0967e06 (assets: add creater based on dynamic client, 2019-02-04, openshift#220).

CC @mfojtik.