Conversation

@pliurh
Contributor

@pliurh pliurh commented Sep 24, 2020

By default, the timeout of 'kubectl get' is infinite. If the apiserver doesn't respond,
the command can take ~15 minutes to fail (socket timeout), which is too long. This patch
sets a 30s timeout on 'kubectl get'.
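The change, in essence, wraps the call in coreutils `timeout`. A minimal sketch of the mechanism (not the actual ovnkube-node script; `sleep 60` stands in for a `kubectl get` against an unresponsive apiserver):

```shell
# Bound a potentially hanging command with coreutils `timeout`.
# `timeout` kills the child when the limit expires and exits with status 124.
rc=0
timeout 1 sleep 60 || rc=$?
echo "exit=$rc"
```

Here the bounded command fails fast with a distinguishable exit code instead of blocking until the kernel's socket timeout.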

@pliurh
Contributor Author

pliurh commented Sep 24, 2020

@trozet @danwinship PTAL

@pliurh pliurh changed the title Set a 30s timeout for kubectl command in ovnkube-node Bug 1854306: Set a 30s timeout for kubectl command in ovnkube-node Sep 24, 2020
@openshift-ci-robot openshift-ci-robot added the bugzilla/severity-high Referenced Bugzilla bug's severity is high for the branch this PR is targeting. label Sep 24, 2020
@openshift-ci-robot
Contributor

@pliurh: This pull request references Bugzilla bug 1854306, which is valid. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.6.0) matches configured target release for branch (4.6.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

Bug 1854306: Set a 30s timeout for kubectl command in ovnkube-node

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. label Sep 24, 2020
@pliurh
Contributor Author

pliurh commented Sep 24, 2020

/retest

Contributor

@danwinship danwinship left a comment


So under what circumstances is this a problem, and what exactly is the effect? If the apiserver eventually starts responding then won't the kubectl complete then? And if it doesn't eventually start responding then does it really matter if we bail out or keep waiting?

while true; do
db_ip=$(kubectl get ep -n ${ovn_config_namespace} ovnkube-db -o jsonpath='{.subsets[0].addresses[0].ip}')
# wait 30s for kubectl get to return
# TODO: change to use '--request-timeout=30s', if https://github.com/kubernetes/kubernetes/issues/51952 is fixed.
Contributor


oops, meant to comment here: the first part ("wait 30s for kubectl get to return") doesn't seem necessary (it's pretty obvious from either "timeout 30" or "--request-timeout=30s"). For the second part, that issue was closed as a duplicate of 49343, so link there instead?
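For reference, the two options under discussion differ in where the bound is enforced; a sketch (the commented kubectl lines are illustrative, not taken from this PR):

```shell
# External bound: coreutils `timeout` kills the whole kubectl process,
# whatever it is blocked on (the approach this patch takes):
#   timeout 30 kubectl get ep -n openshift-ovn-kubernetes ovnkube-db ...
# Client-side bound: kubectl's own flag, which per the linked issues
# reportedly did not cover all hang cases at the time:
#   kubectl get --request-timeout=30s ep -n openshift-ovn-kubernetes ovnkube-db ...
# Demonstration that the external bound works even on a busy-looping child:
timeout 1 sh -c 'while true; do :; done' || echo "killed: $?"
```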

Contributor Author


fixed.

@pliurh
Contributor Author

pliurh commented Sep 24, 2020

So under what circumstances is this a problem, and what exactly is the effect? If the apiserver eventually starts responding then won't the kubectl complete then? And if it doesn't eventually start responding then does it really matter if we bail out or keep waiting?

It happens during the SDN migration. After MCO triggers the reboot and the node comes back with br-ex created, the ovnkube-node container hangs for a long time with the following logs. After the socket timeout in ~15 minutes, the pod is restarted and then works as expected. I suspect it was caused by the keepalived VIP floating.

++ kubectl get ep -n openshift-ovn-kubernetes ovnkube-db -o 'jsonpath={.subsets[0].addresses[0].ip}'
Unable to connect to the server: read tcp 192.168.111.5:40626->192.168.111.5:6443: read: connection timed out
+ db_ip=
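With the fix, each attempt in the wait loop is bounded. A runnable sketch of the resulting loop, with `sh -c 'echo ...'` as a hypothetical stand-in for the real kubectl call:

```shell
# In the real script the bounded command is the kubectl binary:
#   db_ip=$(timeout 30 kubectl get ep -n ${ovn_config_namespace} ovnkube-db \
#             -o jsonpath='{.subsets[0].addresses[0].ip}')
# Runnable stand-in: `sh -c 'echo ...'` plays the part of kubectl here.
db_ip=""
while true; do
  # Bound each attempt to 30s so a dead connection cannot stall the pod.
  db_ip=$(timeout 30 sh -c 'echo 192.168.111.5') || true
  if [ -n "$db_ip" ]; then
    break
  fi
  sleep 1
done
echo "db_ip=$db_ip"
```

A failed or timed-out attempt leaves `db_ip` empty and the loop simply retries, instead of the pod hanging until the kernel gives up on the socket.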

By default, the timeout of 'kubectl get' is infinite. If the apiserver doesn't respond,
the command can take ~15 minutes to fail (socket timeout), which is too long. This patch
sets a 30s timeout on 'kubectl get'.
@pliurh
Contributor Author

pliurh commented Sep 24, 2020

/retest

1 similar comment
@pliurh
Contributor Author

pliurh commented Sep 24, 2020

/retest

@pliurh
Contributor Author

pliurh commented Sep 25, 2020

/retest

@trozet
Contributor

trozet commented Sep 25, 2020

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Sep 25, 2020
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pliurh, trozet

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 25, 2020
@trozet
Contributor

trozet commented Sep 25, 2020

/retest

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-merge-robot openshift-merge-robot merged commit 380fe02 into openshift:master Sep 25, 2020
@openshift-ci-robot
Contributor

@pliurh: All pull requests linked via external trackers have merged:

Bugzilla bug 1854306 has been moved to the MODIFIED state.


In response to this:

Bug 1854306: Set a 30s timeout for kubectl command in ovnkube-node

