add empty CRs #15
Conversation
/lgtm

if this works we'll want to follow it up w/ one to create the default build resource

/retest

/hold thinking about the impact of this when combined with the render step.

aws /retest

/hold cancel

/lgtm

/retest

the dreaded sha sha

/test e2e-aws
@deads2k including these in ... it's not merging/applying? the ...

/hold
We'll sort it out tomorrow. For items in config.openshift.io, I'm sure we can work out a spec/status update split. |
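
A minimal sketch of that spec/status split, assuming a current client-go dynamic client; the images.config.openshift.io resource is used purely as an illustration, this is not the PR's code:

```go
// Sketch of the spec/status writer split: the config owner only writes the
// main resource, the reporting operator only writes the /status subresource,
// so neither stomps the other's fields.
package split

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// Illustrative cluster-scoped resource; any config.openshift.io type works the same way.
var imageGVR = schema.GroupVersionResource{
	Group: "config.openshift.io", Version: "v1", Resource: "images",
}

// writeSpec updates the main resource; with the status subresource enabled,
// the API server ignores any .status changes in this request.
func writeSpec(ctx context.Context, c dynamic.Interface, obj *unstructured.Unstructured) error {
	_, err := c.Resource(imageGVR).Update(ctx, obj, metav1.UpdateOptions{})
	return err
}

// writeStatus goes through the /status subresource; .spec changes are ignored.
func writeStatus(ctx context.Context, c dynamic.Interface, obj *unstructured.Unstructured) error {
	_, err := c.Resource(imageGVR).UpdateStatus(ctx, obj, metav1.UpdateOptions{})
	return err
}
```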
I'm confused too, but I have this pull and openshift/installer#1546 both consistently failing in this mode. |

It's most probably the empty Image.config.openshift.io that is triggering https://github.com/openshift/machine-config-operator/tree/master/pkg/controller/container-runtime-config ...
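
For reference, a minimal sketch of what an "empty" Image CR amounts to, assuming the github.com/openshift/api config/v1 types; the singleton name "cluster" is the usual convention and is not taken from this PR:

```go
// An "empty" CR sets only the well-known object name and leaves the spec
// zero-valued. Any controller watching images.config.openshift.io (such as
// the MCO's container-runtime-config controller) still sees a create event
// for it, which is why creating it can wake that sync loop.
package empty

import (
	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyImageConfig() *configv1.Image {
	return &configv1.Image{
		ObjectMeta: metav1.ObjectMeta{Name: "cluster"},
		Spec:       configv1.ImageSpec{},
	}
}
```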

New changes are detected. LGTM label has been removed.

This now only reconciles via the CVO. It relies on openshift/cluster-version-operator#159. @sttts see if you're happy with validation rules.

/test e2e-aws

kicking since the prereq merged.

/retest

@abhinavdahiya looks like the static pod CVO still has the lock, but I have no logs from the bootstrap CVO. How's that coming?

/retest

Gaaaaaaah. The MCD mismatch is so frustrating. Surely they should be rendering their initial cluster state or something. This is wrapping us around the axle. This also implies that something is treating missing and empty differently.

/retest

Just in case.
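
A minimal sketch of that missing-vs-empty distinction, assuming client-go style NotFound handling; the helper and its lookup function are hypothetical:

```go
// A missing CR surfaces as a NotFound error, while an empty CR is a real
// object whose spec is just the zero value. A controller that applies
// defaults only on NotFound behaves differently once the empty object exists.
package missing

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"

	configv1 "github.com/openshift/api/config/v1"
)

// resolveImageConfig is a hypothetical helper; get stands in for whatever
// lookup the controller uses (a lister or a live client).
func resolveImageConfig(get func(name string) (*configv1.Image, error)) (*configv1.Image, error) {
	img, err := get("cluster")
	if apierrors.IsNotFound(err) {
		// Missing: fall back to built-in defaults.
		return &configv1.Image{}, nil
	}
	if err != nil {
		return nil, err
	}
	// Empty: the object exists and its zero-valued spec is taken at face
	// value, which may not match the "missing" defaults above.
	return img, nil
}
```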

/retest

/retest

appears to be openshift/cluster-kube-scheduler-operator#94

/retest

And the console is missing logs.

/retest

the master pool worked at least once.

/retest

/retest

Please review the full test history for this PR and help us cut down flakes.

Looks like it's still having considerable MCD difficulties. I guess I'll try to split out pieces, merge them, and find the one the MCD doesn't like.

/retest

Please review the full test history for this PR and help us cut down flakes.

1 similar comment

/retest

Please review the full test history for this PR and help us cut down flakes.

@deads2k: The following test failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@bparees since you asked
I think this will avoid stomping. The test-aws should fail if this clears values.