Update kubectl deprecated flag delete-local-data
Signed-off-by: XinYang <[email protected]>

revert changes

Signed-off-by: XinYang <[email protected]>

update more

Signed-off-by: XinYang <[email protected]>
xinydev committed Jul 23, 2021
1 parent 11ca5f0 commit d095a14
Showing 3 changed files with 6 additions and 6 deletions.
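All three changes swap the deprecated `--delete-local-data` flag for its replacement, `--delete-emptydir-data`. A quick way to check which spelling a given kubectl build advertises (illustrative only, not part of this commit):

```bash
# Lists whichever of the two flags this kubectl version documents for drain.
kubectl drain --help | grep -E 'delete-(local|emptydir)-data'
```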
@@ -415,7 +415,7 @@ and make sure that the node is empty, then deconfigure the node.
Talking to the control-plane node with the appropriate credentials, run:

```bash
-kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
+kubectl drain <node name> --delete-emptydir-data --force --ignore-daemonsets
```

Before removing the node, reset the state installed by `kubeadm`:
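The reset command itself is collapsed in this diff view; a minimal sketch of the usual sequence, assuming a kubeadm-provisioned node, is:

```bash
# Run on the node being removed to undo what `kubeadm init`/`join` set up.
sudo kubeadm reset
# Then remove the Node object from the cluster.
kubectl delete node <node name>
```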
@@ -379,7 +379,7 @@ This might impact other applications on the Node, so it's best to
**only do this in a test cluster**.

```shell
-kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
+kubectl drain <node-name> --force --delete-emptydir-data --ignore-daemonsets
```

Now you can watch as the Pod reschedules on a different Node:
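The follow-up command is collapsed in this diff; a generic way to watch where the Pod lands (a sketch, not taken from the changed file):

```shell
# -o wide adds a NODE column; --watch streams updates as the Pod is rescheduled.
kubectl get pods -o wide --watch
```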
8 changes: 4 additions & 4 deletions content/en/docs/tutorials/stateful-application/zookeeper.md
@@ -937,7 +937,7 @@ Use [`kubectl drain`](/docs/reference/generated/kubectl/kubectl-commands/#drain)
drain the node on which the `zk-0` Pod is scheduled.

```shell
-kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-0 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
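While the drain proceeds, the StatefulSet's Pods can be watched from a separate terminal; a sketch assuming the tutorial's `app=zk` label on the ZooKeeper Pods:

```shell
# Streams Pod status changes for the ZooKeeper ensemble during the drain.
kubectl get pods -w -l app=zk
```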

@@ -972,7 +972,7 @@ Keep watching the `StatefulSet`'s Pods in the first terminal and drain the node
`zk-1` is scheduled.

```shell
-kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data "kubernetes-node-ixsl" cordoned
+kubectl drain $(kubectl get pod zk-1 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data "kubernetes-node-ixsl" cordoned
```
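Draining also cordons the node. A quick way to confirm which nodes are currently unschedulable (illustrative, not part of this diff):

```shell
# A drained node stays cordoned; its STATUS shows Ready,SchedulingDisabled.
kubectl get nodes
```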

@@ -1015,7 +1015,7 @@ Continue to watch the Pods of the stateful set, and drain the node on which
`zk-2` is scheduled.

```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```
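If this drain is refused because evicting `zk-2` would violate the ensemble's disruption budget, the budget can be inspected directly; the resource name `zk-pdb` is assumed from the tutorial:

```shell
# ALLOWED DISRUPTIONS dropping to 0 explains why the eviction is refused.
kubectl get poddisruptionbudget zk-pdb
```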

@@ -1101,7 +1101,7 @@ zk-1 1/1 Running 0 13m
Attempt to drain the node on which `zk-2` is scheduled.

```shell
-kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-local-data
+kubectl drain $(kubectl get pod zk-2 --template {{.spec.nodeName}}) --ignore-daemonsets --force --delete-emptydir-data
```

The output:
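The drain output itself is collapsed above. If the eviction is again blocked by the PodDisruptionBudget, one way forward is to uncordon a previously drained node so the pending Pod can be scheduled; a sketch with a placeholder node name:

```shell
# Marks the node schedulable again; <node-name> is a placeholder.
kubectl uncordon <node-name>
```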
