Cfg cluster op group #46
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: gabemontero. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.
another test of the new e2e job @bparees :-)

oops forgot the unit test

@gabemontero since we have that awesome new e2e job, please add a test that confirms the clusteroperator object is getting created w/ appropriate status information.
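A minimal sketch of the kind of e2e check being asked for, assuming the openshift/client-go config clientset, a KUBECONFIG environment variable, and an illustrative ClusterOperator name of "openshift-samples" (the real name and test wiring may differ):

```go
package e2e

import (
	"context"
	"os"
	"testing"
	"time"

	configv1 "github.com/openshift/api/config/v1"
	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/tools/clientcmd"
)

func TestClusterOperatorStatus(t *testing.T) {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		t.Fatalf("could not load kubeconfig: %v", err)
	}
	client, err := configclient.NewForConfig(cfg)
	if err != nil {
		t.Fatalf("could not build config client: %v", err)
	}

	// Poll until the operator has created its ClusterOperator and marked it Available.
	// Note: newer clientset versions take a context argument on Get; older ones did not.
	err = wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		co, err := client.ConfigV1().ClusterOperators().Get(context.TODO(), "openshift-samples", metav1.GetOptions{})
		if err != nil {
			return false, nil // not created yet; keep polling
		}
		for _, cond := range co.Status.Conditions {
			if cond.Type == configv1.OperatorAvailable && cond.Status == configv1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		t.Fatalf("clusteroperator never reported Available=True: %v", err)
	}
}
```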
Force-pushed: dbaaad9 to e5d985a (Compare)
assuming I get a clean run as-is @bparees, sure

aws nat pain on e2e-aws:

same nat pain with our new e2e job ... posted on forum-testplatform

/retest
the new e2e test failed on a timeout ... it never found the ruby imagestream I have in the new test ... hopefully I can debug what is up with the existing test while I vet out the new one

ah ha: more changes needed ....
A bit stuck ... my email to aos-devel: Ok, after ... I can't seem to get past this error when I attempt to get/update/create my ... The comment for the DesiredUpdate field notes it is optional. Any insights on how to make the operator-sdk work with ...? I do get the following ClusterVersion (with the status missing as a result of the error I previously noted).
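For context on the ClusterVersion question, a minimal sketch (not the code from this PR) of reading the ClusterVersion through the typed config clientset rather than the operator-sdk client; the in-cluster config and the conventional object name "version" are assumptions here, and DesiredUpdate can simply be left nil when only reading:

```go
package main

import (
	"context"
	"fmt"

	configclient "github.com/openshift/client-go/config/clientset/versioned"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config is assumed; a kubeconfig-based config works the same way.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := configclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The cluster-version-operator maintains a single ClusterVersion named "version".
	cv, err := client.ConfigV1().ClusterVersions().Get(context.TODO(), "version", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// DesiredUpdate is an optional pointer in the spec; it stays nil unless an
	// update has been requested, so callers should not assume it is set.
	fmt.Printf("clusterID=%s desiredUpdate set=%v\n", cv.Spec.ClusterID, cv.Spec.DesiredUpdate != nil)
}
```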
Force-pushed: e5d985a to 489e0fc (Compare)
@gabemontero have you bumped openshift/api and openshift/client-go in this repo?

(actually that may not matter)
@gabemontero: The following test failed, say /retest to rerun it:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
  kind: CustomResourceDefinition
  metadata:
-   name: clusteroperators.operatorstatus.openshift.io
+   name: clusterversions.operatorstatus.openshift.io
clusterversions.config.openshift.io ?
(honestly i dunno that we need to keep this file around anymore, it was a useful development tool when we stood up clusters manually but now that the installer is the only way, we can rely on the installer to have created this CRD in all our clusters)
yeah I'd just as soon delete it ... will do that when I circle back to this PR (pending response to my outstanding client question; I've circled back to implementing the progressing condition for image streams)
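On the progressing condition, a minimal sketch of the shape such a helper could take, assuming the config.openshift.io/v1 types; the function name and arguments are hypothetical, not code from this PR:

```go
package operator

import (
	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// setProgressing records whether the operator is still working (for example,
// still creating or importing imagestreams) on the ClusterOperator status.
// For brevity this sketch always refreshes LastTransitionTime; a real
// implementation would only do so when the condition's status changes.
func setProgressing(co *configv1.ClusterOperator, inProgress bool, message string) {
	status := configv1.ConditionFalse
	if inProgress {
		status = configv1.ConditionTrue
	}
	cond := configv1.ClusterOperatorStatusCondition{
		Type:               configv1.OperatorProgressing,
		Status:             status,
		Message:            message,
		LastTransitionTime: metav1.Now(),
	}
	// Replace an existing Progressing condition or append a new one.
	for i := range co.Status.Conditions {
		if co.Status.Conditions[i].Type == configv1.OperatorProgressing {
			co.Status.Conditions[i] = cond
			return
		}
	}
	co.Status.Conditions = append(co.Status.Conditions, cond)
}
```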
| - "config.openshift.io" | ||
| resources: | ||
| - clusteroperators | ||
| - clusterversions |
i don't think this was supposed to change. the resource type is still clusteroperators.
total mix-up on my part ... @bparees got me back on track ... waiting on @dmage's openshift/cluster-version-operator#48
per recent aos-devel announcement from clayton ... moving to config.openshift.io (see the sketch after this description)
/assign @bparees
@openshift/sig-developer-experience fyi / ptal
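A minimal sketch of what the group move means in code, assuming the types now come from github.com/openshift/api/config/v1; the object name below is illustrative, not taken from this PR:

```go
package operator

import (
	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newClusterOperator builds the skeleton of the operator's status object
// under the config.openshift.io group (previously operatorstatus.openshift.io).
func newClusterOperator() *configv1.ClusterOperator {
	return &configv1.ClusterOperator{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "config.openshift.io/v1",
			Kind:       "ClusterOperator",
		},
		ObjectMeta: metav1.ObjectMeta{
			// Illustrative name; the operator registers its own name here.
			Name: "openshift-samples",
		},
	}
}
```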