NO-ISSUE: Update to use rukpakv1alpha2 apis #108
Conversation
MetricsBindAddress: metricsAddr,
Port:               9443,
Did you mean to remove these?
Yeah.
kubernetes-sigs/controller-runtime#2407
Controller-runtime reimplemented the metrics server configuration, and given the state of this project I didn't think it was worth reconfiguring, so I removed the metrics configuration rather than reimplementing it. It wouldn't be a ton of effort to do, but I think we want to try to move toward getting rid of this thing entirely once we start swapping out rukpak.
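For context, the controller-runtime change linked above replaced the flat manager options with nested option structs. A minimal sketch of what reimplementing the removed configuration would look like, assuming `metricsAddr` is the existing flag value (note that `Port` moved into the webhook server options in v0.16+):

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	metricsserver "sigs.k8s.io/controller-runtime/pkg/metrics/server"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
)

func main() {
	metricsAddr := ":8080"

	// Old style (controller-runtime < v0.16), removed in this PR:
	//   ctrl.Options{MetricsBindAddress: metricsAddr, Port: 9443}

	// New style (v0.16+): metrics and webhook settings live in
	// their own option structs.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Metrics:       metricsserver.Options{BindAddress: metricsAddr},
		WebhookServer: webhook.NewServer(webhook.Options{Port: 9443}),
	})
	if err != nil {
		panic(err)
	}
	_ = mgr
}
```

This is a sketch, not the project's actual code; it only illustrates the shape of the option migration described in kubernetes-sigs/controller-runtime#2407.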
Ref: image,
},
},
/*
I would delete this
-func buildBundleDeployment(image string) rukpakv1alpha1.BundleDeploymentSpec {
-	return rukpakv1alpha1.BundleDeploymentSpec{
+func buildBundleDeployment(image string) rukpakv1alpha2.BundleDeploymentSpec {
+	return rukpakv1alpha2.BundleDeploymentSpec{
 		ProvisionerClassName: plainProvisionerID,
Looks like ProvisionerClassName should be registryProvisionerID. The bundle provisioner id would go in here.
I actually think platform operators was specifically designed not to work with registryv1 bundles. Did we lose the plain provisioner in the latest revision?
The plain provisioner does exist, but I noticed BundleSpec.provisionerClassName was set to registry, which is what made me think this could be registryV1.
I tested this a little bit and you were correct. We actually don't have support for configuring the provisioner in PO and the only test examples we have support for are registryv1, so the right thing to migrate here is to use the registryv1 provisioner. I've updated accordingly.
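In sketch form, the change settled on here is just the diff above with the provisioner swapped. `registryProvisionerID` is assumed to be a constant defined alongside `plainProvisionerID` elsewhere in the package; the source stanza is elided:

```go
// Hedged sketch of the agreed migration, not the exact implementation:
// bump the API group version and select the registryv1 provisioner,
// since PO's test examples are all registry+v1 bundles.
func buildBundleDeployment(image string) rukpakv1alpha2.BundleDeploymentSpec {
	return rukpakv1alpha2.BundleDeploymentSpec{
		// registryProvisionerID is assumed to name rukpak's
		// registry provisioner class, replacing plainProvisionerID.
		ProvisionerClassName: registryProvisionerID,
		// ... image source referencing `image` unchanged ...
	}
}
```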
Force-pushed from 232052b to cf737e2
Includes controller runtime bump changes. Signed-off-by: kevinrizza <[email protected]>
Force-pushed from e1858c5 to bb1111d
/lgtm
/lgtm
/approve
/override openshift-e2e-aws
/approve
/override ci/prow/openshift-e2e-aws
@kevinrizza: This pull request explicitly references no jira issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@kevinrizza: kevinrizza unauthorized: /override is restricted to Repo administrators, approvers in top level OWNERS file, and the following github teams: openshift: openshift-release-oversight openshift-staff-engineers. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED Approval requirements bypassed by manually added approval. This pull-request has been approved by: kevinrizza, varshaprasad96 The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
AWS flake
/retest
/retest
/retest
@kevinrizza: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
[ART PR BUILD NOTIFIER] This PR has been included in build ose-cluster-platform-operators-manager-container-v4.16.0-202402011710.p0.geebb073.assembly.stream for distgit ose-cluster-platform-operators-manager.
Includes controller runtime bump changes