
Conversation

@wking
Member

@wking wking commented Apr 11, 2019

Similar to #3305, but this template is AWS-specific. I'll look into unifying later. I also still need to wire up an installer job for this.

CC @abhinavdahiya, @cuppett, @staebler, @vrutkovs

@openshift-ci-robot openshift-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Apr 11, 2019
@openshift-ci-robot openshift-ci-robot added the size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. label Apr 11, 2019
@wking wking force-pushed the aws-upi branch 3 times, most recently from fec962f to dac4d74 Compare April 11, 2019 09:16
@openshift-ci-robot openshift-ci-robot removed the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 11, 2019
Member

@petr-muller petr-muller left a comment

/approve

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 11, 2019
@wking wking force-pushed the aws-upi branch 2 times, most recently from 007745d to 799fe4a Compare April 11, 2019 10:37
@wking
Member Author

wking commented Apr 11, 2019

e2e-aws-upi:

2019/04/11 10:42:37 Running pod e2e-aws-upi
Installing from release registry.svc.ci.openshift.org/ci-op-xqpc1pw7/release@sha256:56f0603347fb1e71596953ae1b4a637ab89dce8be8e268ca8cef8646631a3f22
level=info msg="Consuming \"Install Config\" from target directory"
level=warning msg="Found override for ReleaseImage. Please be warned, this is not advised"
/bin/sh: line 80: aws: command not found

No aws? We ask for it. Checking the image myself:

$ podman pull registry.svc.ci.openshift.org/ci-op-xqpc1pw7/release@sha256:56f0603347fb1e71596953ae1b4a637ab89dce8be8e268ca8cef8646631a3f22
$ podman run --rm -it --entrypoint rpm registry.svc.ci.openshift.org/ci-op-xqpc1pw7/release@sha256:56f0603347fb1e71596953ae1b4a637ab89dce8be8e268ca8cef8646631a3f22 -q awscli
package awscli is not installed

Can we get the build log from somewhere? Maybe this aspect of things doesn't play nicely with rehearsals? Wait...

$ podman inspect --format '{{.Config.Entrypoint}}' registry.svc.ci.openshift.org/ci-op-xqpc1pw7/release@sha256:56f0603347fb1e71596953ae1b4a637ab89dce8be8e268ca8cef8646631a3f22
[/usr/bin/cluster-version-operator]

how did a CVO image get over here?

@vrutkovs
Contributor

vrutkovs commented Apr 11, 2019

No aws? We ask for it. Checking the image myself:

https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/2/artifacts/build-logs/upi-installer.log.gz says "No package epel-release available.", so awscli never gets installed.
You'd need to contact ART to include it in the base repos (and drop epel).
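
(For illustration, a sketch of the failure mode that log implies; this is an assumption about the shape of the build step, not the actual upi-installer image build.)

# Hypothetical sketch: with epel-release unavailable in the configured repos,
# the first install fails, so awscli never makes it into the image.
yum install -y epel-release
yum install -y awscli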

how did a CVO image get over here?

Each PR triggers a new CVO image build, even if that component is not part of the PR.

@openshift-ci-robot openshift-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 15, 2019
@openshift-ci-robot openshift-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Apr 25, 2019
@wking
Member Author

wking commented Apr 25, 2019

Rebased and updated with 799fe4a -> 71cdc8977, although in hindsight I implemented this under the assumption that openshift/installer#1649 had landed :p.

@wking wking force-pushed the aws-upi branch 2 times, most recently from ee071e0 to 422d571 Compare April 26, 2019 07:24
@wking
Member Author

wking commented Apr 26, 2019

Hooray, vSphere is now up to:

Apply complete! Resources: 31 added, 0 changed, 0 destroyed.
Waiting for bootstrap to complete
level=info msg="Waiting up to 30m0s for the Kubernetes API at https://api.ci-op-gr07n873-dbc3b.origin-ci-int-aws.dev.rhcloud.com:6443..."
level=info msg="Use the following commands to gather logs from the cluster"
level=info msg="openshift-install gather bootstrap --help"
level=fatal msg="waiting for Kubernetes API: context deadline exceeded"

I'm trying to figure out if that's vSphere's current standard...

Contributor

you can use something similar to #3612

Member Author

you can use something similar to #3612

Are we committed to that approach? Seems like a hack to me. And if so, this isn't much more of a hack ;).

Contributor

@abhinavdahiya abhinavdahiya Apr 26, 2019

Are we committed to that approach? Seems like a hack to me. And if so, this isn't much more of a hack ;).

That's only for our testing of UPI. And using the copied rhcos.json at least makes sure we don't need to bump bootimages here when we bump them in the installer.
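
(For context, a minimal sketch of what consuming a copied rhcos.json could look like; the jq path follows the installer's rhcos.json layout, but the file path and region here are illustrative assumptions, not the template's actual code.)

# Hypothetical sketch: read the boot AMI from a copied rhcos.json instead of
# hard-coding it, so the value tracks installer bumps automatically.
RHCOS_AMI="$(jq -r '.amis["us-east-1"].hvm' rhcos.json)"
echo "using boot image ${RHCOS_AMI}"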

Member Author

That's only for our testing of UPI...

Right, but holding out for a real fix will help motivate us to give customers something so that they don't have to bump bootimages when we bump them in the installer ;). Dog food, for the win :p

Contributor

That's only for our testing of UPI...

Right, but holding out for a real fix will help motivate us to give customers something so that they don't have to bump bootimages when we bump them in the installer ;). Dog food, for the win :p

but we cannot test the bump in the installer if you hard-code it here.

Member Author

but we cannot test the bump in the installer if you hard-code it here.

ahh, true. But we don't have tight doc coupling now, so there's some benefit in testing both the old AMI (which the docs will still be recommending) and the new AMI (which IPI will use).

@droslean
Member

/test pj-rehearse

@wking
Member Author

wking commented May 4, 2019

With 136ab5408 -> 7b2c175fb, I've rebased onto master, dropped the openshift/installer#1706 workarounds now that that's landed, and pushed a compute node into the second private subnet to avoid failing:

[sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Suite:openshift/conformance/parallel] [Suite:k8s]

@wking
Member Author

wking commented May 4, 2019

e2e-aws:

level=fatal msg="failed to initialize the cluster: Cluster operator image-registry is still updating: timed out waiting for the condition"

Looking at the ClusterOperator:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/23/artifacts/e2e-aws-upi/clusteroperators.json | jq -r '.items[] | select(.metadata.name == "image-registry") | ([.status.conditions[] | {key: .type, value: .}] | from_entries).Progressing'
{
  "lastTransitionTime": "2019-05-04T03:45:29Z",
  "message": "All resources are successfully applied, but the deployment does not exist",
  "reason": "WaitingForDeployment",
  "status": "True",
  "type": "Progressing"
}

Checking that Deployment:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/23/artifacts/e2e-aws-upi/must-gather/namespaces/openshift-image-registry/apps/deployments/image-registry.yaml | yaml2json | jq .status.conditions
[
  {
    "reason": "MinimumReplicasAvailable",
    "message": "Deployment has minimum availability.",
    "type": "Available",
    "status": "True",
    "lastTransitionTime": "2019-05-04T03:45:48Z",
    "lastUpdateTime": "2019-05-04T03:45:48Z"
  },
  {
    "reason": "NewReplicaSetAvailable",
    "message": "ReplicaSet \"image-registry-7749f787d4\" has successfully progressed.",
    "type": "Progressing",
    "status": "True",
    "lastTransitionTime": "2019-05-04T03:45:29Z",
    "lastUpdateTime": "2019-05-04T03:45:48Z"
  }
]

I dunno why it took so long that we timed out (by 19 seconds!?). But whatever, I'll just kick it again out of curiosity:

/retest

@wking
Member Author

wking commented May 6, 2019

Hrm, same e2e-aws error as last time. This time:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/24/artifacts/e2e-aws-upi/installer/.openshift_install.log | grep fatal
time="2019-05-04T05:42:27Z" level=fatal msg="failed to initialize the cluster: Cluster operator image-registry is still updating: timed out waiting for the condition"
$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/24/artifacts/e2e-aws-upi/must-gather/cluster-scoped-resources/config.openshift.io/clusterversions.yaml | yaml2json | jq '.items[0].status.conditions[] | select(.type == "Failing" or .type == "Progressing")'
{
  "lastTransitionTime": "2019-05-04T05:37:08Z",
  "message": "Cluster operator image-registry is still updating",
  "status": "True",
  "type": "Failing",
  "reason": "ClusterOperatorNotAvailable"
}
{
  "lastTransitionTime": "2019-05-04T05:07:58Z",
  "message": "Unable to apply 0.0.1-2019-05-04-044538: the cluster operator image-registry has not yet successfully rolled out",
  "status": "True",
  "type": "Progressing",
  "reason": "ClusterOperatorNotAvailable"
}
$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/24/artifacts/e2e-aws-upi/must-gather/cluster-scoped-resources/config.openshift.io/clusteroperators/image-registry.yaml | yaml2json | jq .status.conditions
[
  {
    "status": "False",
    "message": "The deployment does not have available replicas",
    "lastTransitionTime": "2019-05-04T05:13:32Z",
    "reason": "NoReplicasAvailable",
    "type": "Available"
  },
  {
    "status": "True",
    "message": "The deployment has not completed",
    "lastTransitionTime": "2019-05-04T05:13:32Z",
    "reason": "DeploymentNotCompleted",
    "type": "Progressing"
  },
  {
    "status": "False",
    "lastTransitionTime": "2019-05-04T05:13:32Z",
    "type": "Degraded"
  }
]
$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/24/artifacts/e2e-aws-upi/pods/openshift-image-registry_cluster-image-registry-operator-fd4799767-9gfvh_cluster-image-registry-operator.log | gunzip | grep Unable
I0504 05:13:39.999971       1 controller.go:199] object changed: *v1.Config, Name=cluster (status=true): changed:status.conditions.5.message={"The deployment has not completed" -> "Unable to apply resources: unable to sync storage configuration: exactly one storage type should be configured at the same time, got 2: [EmptyDir S3]"}, changed:status.conditions.5.reason={"DeploymentNotCompleted" -> "Error"}, changed:status.observedGeneration={"2.000000" -> "3.000000"}

But I see no configs.imageregistry.operator.openshift.io under here. Buggy defaulting?

@staebler
Copy link
Contributor

staebler commented May 6, 2019

@wking Is it correct for the image-registry storage to be set to empty dir for UPI aws?

If I am reading the code correctly, the image-registry operator sets the storage to S3 here, while the CI tests set the image-registry storage to an empty directory here.
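
(For reference, the CI side amounts to a patch along these lines; treat it as a sketch of the mechanism rather than the template's literal command.)

# Hypothetical sketch: force emptyDir storage on the registry config.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"storage":{"emptyDir":{}}}}'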

@wking
Member Author

wking commented May 6, 2019

Is it correct for the image-registry storage to be set to empty dir for UPI aws?

Nope, good catch. Fixed with 7b2c175fb -> 7d4e4349b, which will leave the registry provisioning its own S3 bucket. Folks taking the UPI path may want that, or they may want to configure the registry to use an existing S3 bucket that they create themselves. For now, a registry-provisioned bucket seems easiest to exercise in CI.
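
(A sketch of that alternative for anyone on the UPI path; the bucket name and region are placeholders, and the exact spec fields should be checked against the image-registry operator's documentation.)

# Hypothetical sketch: point the registry at a pre-created S3 bucket instead
# of letting the operator provision one.
oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
  --patch '{"spec":{"storage":{"s3":{"bucket":"my-upi-registry-bucket","region":"us-east-1"}}}}'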

@wking
Member Author

wking commented May 6, 2019

Ok, this round, AWS:

Flaky tests:

[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] Volume expand Verify if editing PVC allows resize [Suite:openshift/conformance/parallel] [Suite:k8s]

Failing tests:

[sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should be mountable [Suite:openshift/conformance/parallel] [Suite:k8s]

vSphere:

level=fatal msg="failed to initialize the cluster: Cluster operator machine-config is reporting a failure: Failed to resync 0.0.1-2019-05-06-182506 because: timed out waiting for the condition during waitForDaemonsetRollout: Daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 5): timed out waiting for the condition"

Still two Multi-AZ Clusters should spread the pods of a ... across zones issues; not sure what's going on there now that I'm using compute in two zones just like the IPI tests.

@wking
Member Author

wking commented May 6, 2019

Double checking the node locations, we get the expected distribution:

$ curl -s https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/25/artifacts/e2e-aws-upi/nodes.json | jq '.items[] | {name: .metadata.name, zone: .metadata.labels["failure-domain.beta.kubernetes.io/zone"], conditions: ([.status.conditions[] | {key: .type, value: .status}] | from_entries)}'
{
  "name": "ip-10-0-50-219.ec2.internal",
  "zone": "us-east-1a",
  "conditions": {
    "MemoryPressure": "False",
    "DiskPressure": "False",
    "PIDPressure": "False",
    "Ready": "True"
  }
}
{
  "name": "ip-10-0-62-49.ec2.internal",
  "zone": "us-east-1a",
  "conditions": {
    "MemoryPressure": "False",
    "DiskPressure": "False",
    "PIDPressure": "False",
    "Ready": "True"
  }
}
{
  "name": "ip-10-0-69-73.ec2.internal",
  "zone": "us-east-1b",
  "conditions": {
    "MemoryPressure": "False",
    "DiskPressure": "False",
    "PIDPressure": "False",
    "Ready": "True"
  }
}
{
  "name": "ip-10-0-75-186.ec2.internal",
  "zone": "us-east-1b",
  "conditions": {
    "MemoryPressure": "False",
    "DiskPressure": "False",
    "PIDPressure": "False",
    "Ready": "True"
  }
}
{
  "name": "ip-10-0-91-97.ec2.internal",
  "zone": "us-east-1c",
  "conditions": {
    "MemoryPressure": "False",
    "DiskPressure": "False",
    "PIDPressure": "False",
    "Ready": "True"
  }
}

@wking
Member Author

wking commented May 6, 2019

🤷‍♂️

/retest

@wking
Member Author

wking commented May 6, 2019

[AWS][1]:

level=fatal msg="failed to initialize the cluster: Some cluster operators are still updating: authentication, console: timed out waiting for the condition"

/retest

[1]: https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/26

@wking
Member Author

wking commented May 7, 2019

AWS:

Flaky tests:

[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with defaults [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should create sc, pod, pv, and pvc, read/write to the pv, and delete all created resources [Suite:openshift/conformance/parallel] [Suite:k8s]

Failing tests:

[sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Suite:openshift/conformance/parallel] [Suite:k8s]

Those happened last time too, so they're probably real. Still digging...

@sdodson
Member

sdodson commented May 9, 2019

/retest

@wking
Member Author

wking commented May 9, 2019

e2e-aws has the same multi-AZ errors as before:

Failing tests:

[sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4] [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [Suite:openshift/conformance/parallel] [Suite:k8s]
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource [Suite:openshift/conformance/parallel] [Suite:k8s]

Still dunno what's going on there.

@sdodson
Member

sdodson commented May 10, 2019

/lgtm
Optional job; we'll figure out the flakes/failing tests soon, but going ahead with the merge.

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 10, 2019
…i-e2e: Add AWS support

We can't indent the here-docs more deeply unless we use tabs and
<<-EOF [1].

The more-specific bootstrap-exporter selector avoids the vSphere job's
service attaching to the AWS job's exporter pod, etc.

The "${!SUBNET}" indirect parameter expansion spreads us over two
zones to avoid failing [2]:

  [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Suite:openshift/conformance/parallel] [Suite:k8s]

I'm somewhat surprised that we need to set AWS_DEFAULT_REGION, but see
[3]:

  You must specify a region. You can also configure your region by running "aws configure".

[1]: https://pubs.opengroup.org/onlinepubs/9699919799.2018edition/utilities/V3_chap02.html#tag_18_07_04
[2]: https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/22
[3]: https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/openshift_release/3440/rehearse-3440-pull-ci-openshift-installer-master-e2e-aws-upi/10
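
(A minimal sketch of the "${!SUBNET}" indirect expansion described above; the variable names and node count are illustrative, not the template's actual code.)

# Hypothetical sketch: alternate compute nodes between two private subnets so
# they spread across two zones, using bash indirect parameter expansion.
SUBNET_0=subnet-aaaa1111
SUBNET_1=subnet-bbbb2222
for INDEX in 0 1 2; do
  SUBNET="SUBNET_$((INDEX % 2))"   # name of the variable to dereference
  echo "compute node ${INDEX} -> ${!SUBNET}"
done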
@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label May 10, 2019
…bmits: Remove some timeout clobbers

Like 4372624 (remove timeout,grace_period from jobs, 2019-05-07, openshift#3713)
and 62898a3 (Fixup few remaining fields, 2019-05-08, openshift#3713).  This
gives us the usual grace period and timeout for OpenShift tests,
instead of clobbering the OpenShift values and falling back to the
generic Prow defaults.
@wking
Member Author

wking commented May 10, 2019

Pushed 7d4e4349b -> 9ab4e372e, fixing the generated-config error and adding an additional commit to fix that issue for the other presubmit jobs too (following earlier partial work in #3713). @sdodson, re-/lgtm?

@sdodson
Member

sdodson commented May 10, 2019

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 10, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: petr-muller, sdodson, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit ccad0ca into openshift:master May 10, 2019
@openshift-ci-robot
Contributor

@wking: Updated the following 5 configmaps:

  • prow-job-cluster-launch-installer-upi-e2e configmap in namespace ci-stg using the following files:
    • key cluster-launch-installer-upi-e2e.yaml using file ci-operator/templates/openshift/installer/cluster-launch-installer-upi-e2e.yaml
  • ci-operator-master-configs configmap in namespace ci using the following files:
    • key openshift-installer-master.yaml using file ci-operator/config/openshift/installer/openshift-installer-master.yaml
  • ci-operator-master-configs configmap in namespace ci-stg using the following files:
    • key openshift-installer-master.yaml using file ci-operator/config/openshift/installer/openshift-installer-master.yaml
  • job-config-master configmap in namespace ci using the following files:
    • key openshift-installer-master-presubmits.yaml using file ci-operator/jobs/openshift/installer/openshift-installer-master-presubmits.yaml
  • prow-job-cluster-launch-installer-upi-e2e configmap in namespace ci using the following files:
    • key cluster-launch-installer-upi-e2e.yaml using file ci-operator/templates/openshift/installer/cluster-launch-installer-upi-e2e.yaml

In response to this:

Similar to #3305, but this template is AWS-specific. I'll look into unifying later. I also still need to wire up an installer job for this.

CC @abhinavdahiya, @cuppett, @staebler, @vrutkovs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot
Contributor

@wking: The following tests failed, say /retest to rerun them all:

Test name Commit Details Rerun command
ci/rehearse/openshift/installer/master/e2e-vsphere 844f447 link /test pj-rehearse
ci/rehearse/openshift/installer/master/e2e-aws-upi 844f447 link /test pj-rehearse
ci/prow/pj-rehearse 844f447 link /test pj-rehearse

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@wking wking deleted the aws-upi branch May 10, 2019 19:44
@wking
Member Author

wking commented May 14, 2019

I think #3775 will fix the multi-zone errors.
