Upgrade to controller-runtime 0.7.0 #1318
Conversation
Tests are passing and verify is clean. I am doing manual testing of a manageDNS cluster at the moment, and it has made it past DNS and launched the install, so it's looking good so far.

/test e2e-azure
apis/go.mod (Outdated)

```diff
  k8s.io/api v0.20.0
  k8s.io/apimachinery v0.20.0
- sigs.k8s.io/controller-runtime v0.6.2
+ sigs.k8s.io/controller-runtime v0.7.0
```
should we bump this to 0.7.3 as the PR title suggests?
hmm.. or maybe rename the PR title as we are bumping to 0.7.0
Yup my mistake, wanted to stick with 0.7.0 to match agent controllers, but this is going to be unsustainable if we have to stay in lockstep. Maybe we can get their controllers into Hive.
```go
handler.EnqueueRequestsFromMapFunc(
	func(a client.Object) []reconcile.Request {
		cpKey := clusterPoolKey(a.(*hivev1.ClusterDeployment))
		if cpKey == nil {
			return nil
		}
		return []reconcile.Request{{NamespacedName: *cpKey}}
	},
),
r,
```
struct is being initialized without field names: https://github.com/uber-go/guide/blob/master/style.md#use-field-names-to-initialize-structs
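For reference, a minimal sketch of the linked guideline; the struct and values below are hypothetical and not taken from this PR:

```go
package example

// request is a stand-in struct used only to illustrate the guideline.
type request struct {
	Namespace string
	Name      string
}

// Positional initialization: breaks silently if fields are added or reordered.
var positional = request{"hive", "my-cluster"}

// Initialization with field names is self-documenting and stays correct when
// the struct definition changes.
var named = request{Namespace: "hive", Name: "my-cluster"}
```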
Updated per review. Thanks!

/test e2e-azure
joelddiaz left a comment:

/lgtm
/hold
Looks like this is OK: we specify --secure-port 9443 in our Deployment and the Service forwards 443 to 9443. I can see the admission hook getting called if I try to make a bad change.
Reading the discussion there, I don't think there's any action for us to take. We don't want to log at V(5); we should be logging our own errors, and it sounds like that is the direction controller-runtime is going. It feels like we'll have to handle any situations where we missed logging as they arise. EDIT: this may actually have been restored: kubernetes-sigs/controller-runtime#1245
> and it sounds like that is the direction controller-runtime is going. Feels like we'll have to handle any situations where we missed logging as they arise.

If controller-runtime will not log errors returned from Reconcile, we should log any error returned by Reconcile ourselves... that does not happen today. And the alternative you have proposed, logging before returning an error whenever we spot a bug/requirement, is not sustainable. The error returned by Reconcile is important information; logging it should not be left to each return site. If controller-runtime cannot be our backstop, we should enforce that all Reconcile functions look like:

```go
func (r *Reconciler) Reconcile(_, _) (reconcile.Result, error) {
	// set up logger
	result, err := r.reconcile(_, _, logger)
	if err != nil {
		logger.WithError(err).Error("failed to reconcile")
	}
	return result, err
}
```
I have confirmed that we still get reconciler errors logged by controller-runtime by modifying clusterdeployment_controller to just immediately return an error. It looks like Alberto's commit restored what we were counting on. Per Slack, I'm going to remove the hold and let this roll to stage/int so we can test it on one of the v3 clusters and make sure admission webhooks are working properly.
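A minimal sketch of that kind of temporary change; the exact edit is not shown in the PR, and the type stub and error text here are illustrative:

```go
package example

import (
	"context"
	"errors"

	"sigs.k8s.io/controller-runtime/pkg/reconcile"
)

// ReconcileClusterDeployment is a stub standing in for the reconciler in
// clusterdeployment_controller; only the Reconcile body matters here.
type ReconcileClusterDeployment struct{}

// Fail immediately so we can confirm that controller-runtime 0.7.0 still
// logs errors returned from Reconcile.
func (r *ReconcileClusterDeployment) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
	return reconcile.Result{}, errors.New("intentional test error")
}
```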
/hold cancel

/retest
Please review the full test history for this PR and help us cut down flakes.

2 similar comments

/retest
Please review the full test history for this PR and help us cut down flakes.

/retest
Please review the full test history for this PR and help us cut down flakes.

/hold for rebase

/hold cancel
Updated with a merge.
joelddiaz left a comment:

/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dgoodwin, joelddiaz

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/retest
Please review the full test history for this PR and help us cut down flakes.

3 similar comments

/retest
Please review the full test history for this PR and help us cut down flakes.

/retest
Please review the full test history for this PR and help us cut down flakes.

/retest
Please review the full test history for this PR and help us cut down flakes.
A number of invasive changes are required here, including but not limited to:

The highest risk areas here are:
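As a hedged illustration of the kind of churn involved (not taken from this PR's diff; the reconciler type below is hypothetical), controller-runtime 0.7.0 changes the Reconciler interface so that Reconcile receives a context directly:

```go
package example

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
)

// MyReconciler is a hypothetical reconciler used only to show the new signature.
type MyReconciler struct{}

// controller-runtime 0.6.x:
//   Reconcile(req reconcile.Request) (reconcile.Result, error)
//
// controller-runtime 0.7.0: the context is passed in, so every reconciler's
// Reconcile method changes to this shape.
func (r *MyReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ... reconcile logic, using ctx for API calls ...
	return ctrl.Result{}, nil
}
```

The functional handler.EnqueueRequestsFromMapFunc form shown earlier in the diff is part of the same upgrade.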
x-ref: https://issues.redhat.com/browse/HIVE-1452
/assign @abhinavdahiya @joelddiaz