
[Require more tests] Bump Docker and Kubelet to eliminate CVE-2019-5736 #249

Merged
merged 2 commits into xetys:master from md2k:docker_bump
Feb 23, 2019

Conversation

md2k
Contributor

@md2k md2k commented Feb 14, 2019

  • Kubernetes 1.13.3
  • Docker 18.09.2
  • Containerd.io 1.2.2-3
  • Kubelet container runtime switched to vanilla Docker
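
The version bumps above could be held in place with an apt pin so unattended upgrades don't move past the tested combination. This is only a sketch: the file path is conventional, and the exact epoch/version strings (`5:18.09.2*`) vary per distro release and are assumptions here.

```
# /etc/apt/preferences.d/docker-pin (illustrative)
Package: docker-ce
Pin: version 5:18.09.2*
Pin-Priority: 1001

Package: containerd.io
Pin: version 1.2.2-3
Pin-Priority: 1001
```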

TODO:

  • Fix deprecated config parameters
  • Follow logs for possible errors

P.S. As a bonus, ctop is now also able to get stats :)

Successfully deployed:

```
kubectl get nodes
NAME                STATUS   ROLES    AGE     VERSION
k8s-dev-master-01   Ready    master   9m36s   v1.13.3
k8s-dev-master-02   Ready    master   8m19s   v1.13.3
k8s-dev-master-03   Ready    master   8m12s   v1.13.3
k8s-dev-worker-01   Ready    <none>   6m26s   v1.13.3

kubectl get pods -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
canal-26rth                                 3/3     Running   4          8m56s
canal-h4s7f                                 3/3     Running   3          8m49s
canal-xhg6c                                 3/3     Running   2          7m3s
canal-ztntb                                 3/3     Running   6          9m53s
coredns-86c58d9df4-h9v4n                    1/1     Running   1          9m53s
coredns-86c58d9df4-rsxz9                    1/1     Running   2          9m53s
kube-apiserver-k8s-dev-master-01            1/1     Running   1          8m51s
kube-apiserver-k8s-dev-master-02            1/1     Running   1          6m25s
kube-apiserver-k8s-dev-master-03            1/1     Running   1          6m38s
kube-controller-manager-k8s-dev-master-01   1/1     Running   2          8m50s
kube-controller-manager-k8s-dev-master-02   1/1     Running   2          6m26s
kube-controller-manager-k8s-dev-master-03   1/1     Running   2          7m21s
kube-proxy-7brgr                            1/1     Running   2          7m3s
kube-proxy-mdsp6                            1/1     Running   1          8m14s
kube-proxy-tbldc                            1/1     Running   2          8m10s
kube-proxy-zhwmt                            1/1     Running   1          8m17s
kube-scheduler-k8s-dev-master-01            1/1     Running   1          9m13s
kube-scheduler-k8s-dev-master-02            1/1     Running   2          6m16s
kube-scheduler-k8s-dev-master-03            1/1     Running   1          6m27s
```

@md2k
Contributor Author

md2k commented Feb 14, 2019

@xetys this is the initial bump. I got a cluster up without issues, but saw some deprecation warnings (I want to fix them before merge) and some errors I'm not sure are harmful (didn't check them too deeply).

@md2k
Contributor Author

md2k commented Feb 14, 2019

Do we need support for the swap memory limit?

```
- CONFIG_MEMCG_SWAP_ENABLED: missing
    (cgroup swap accounting is currently not enabled, you can enable it by setting boot option "swapaccount=1")
```
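
If we decide we want it, enabling that boot option means adding `swapaccount=1` to `GRUB_CMDLINE_LINUX`. A minimal sketch of the edit, run here against a temporary copy of the file (on a real node you would target `/etc/default/grub`, then run `update-grub` and reboot; the existing `quiet` value is an assumed placeholder):

```shell
# Append swapaccount=1 to the kernel command line in a copy of the grub defaults
GRUB_FILE=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="quiet"' > "$GRUB_FILE"
# Insert the option inside the quoted value, preserving whatever is already there
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 swapaccount=1"/' "$GRUB_FILE"
cat "$GRUB_FILE"
# prints: GRUB_CMDLINE_LINUX="quiet swapaccount=1"
```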

@md2k
Contributor Author

md2k commented Feb 14, 2019

ok, I spent a bit of time on the Kubernetes CRI API to containerd.
In short, the CRI plugin officially supports only Kubernetes 1.11 (fully tested; newer versions can have issues), and starting from containerd 1.2 the CRI plugin is (for some reason) disabled by default, which is why it failed to bootstrap the cluster. I'll try to check later next week whether I can run Kube 1.13 with containerd 1.2.
But frankly, judging by the Kubernetes development flow, 1.11 will soon be end of life, and 1.14 is already not far off.
My personal opinion: while Kube -> CRI plugin -> containerd eliminates one hop from the chain, the CRI plugin can't be maintained fast enough to keep up with K8s releases, whereas Docker itself keeps its API support at a more desirable level.

Regardless of my thoughts above, I'll add a config option to use Docker 18.06 with k8s-cri-containerd, or a newer Docker but as k8s-cri-dockershim-containerd.
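
The "disabled by default" behavior comes from the default config the containerd.io package ships. A sketch of what it looks like and how it would be re-enabled (the exact shipped file may differ per package version, so treat this as illustrative):

```toml
# /etc/containerd/config.toml as shipped by the containerd.io package:
disabled_plugins = ["cri"]

# To let kubelet bootstrap against containerd's CRI endpoint, remove "cri"
# from that list (i.e. disabled_plugins = []) and restart containerd:
#   systemctl restart containerd
```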

@xetys
Owner

xetys commented Feb 15, 2019

I actually really like that it now works with Docker again. If it's supported, it looks like the most straightforward way. Or am I wrong?

I tested the current version. It seems to work like a charm!

@md2k
Contributor Author

md2k commented Feb 15, 2019

For the most part it doesn't make a big difference: nowadays Docker runs containers via containerd anyway (under Linux at least), and containerd is just the part of Docker that was outsourced to the community. The only difference is the chain. With Docker it looks like: Kubernetes -> (CRI API) -> dockershim -> containerd. If we don't use Docker, we simply remove Docker from that chain. In practice it "should speed up" something, but I'm not sure a few milliseconds make a difference. In short, you can read about this here: https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-integration-goes-ga/

So as for me, I don't see much difference, maybe later on.

P.S. I'll do another PR later this weekend with containerd directly, and then we'll see where to go.
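
For reference, removing Docker from that chain boils down to pointing kubelet at containerd's own CRI socket. A hypothetical systemd drop-in sketch (the flags are kubelet's remote-runtime options; the drop-in path and socket path are assumptions for a typical setup):

```
# /etc/systemd/system/kubelet.service.d/0-containerd.conf (hypothetical path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```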

@xetys
Owner

xetys commented Feb 16, 2019

Well, I like the simplicity of this setup here, as it is less fragile than what we do currently. Just to recap, I want to use Ubuntu 18 as it has a current Linux kernel, while the Ubuntu 16 images from Hetzner come with kernel 4.4, which works badly with stuff like Ceph, for example.

I tried installing any supported container engine on 18 half a year ago, which was close to impossible and highly complex. The current approach only works with Docker 18.06.0, but not with 18.06.1, and this is something I want to change ASAP.

So feel free to try your approach with containerd, but I would love to see this PR get finished to provide a simple and stable solution to this issue.

@md2k
Contributor Author

md2k commented Feb 18, 2019

@xetys I think let's stick with Docker. I also had a similar discussion with my colleagues and we decided to stick with Docker too, at least until containerd as a standalone runtime becomes more mature and is equally easy to debug and as well documented.
So I think if you haven't hit any issues with the current PR, let's merge it as it is. (I think I already removed the containerd-related parts.)

@md2k
Contributor Author

md2k commented Feb 18, 2019

And I share your feeling, because I want to start using Kubernetes on Hetzner as soon as possible :)

@xetys xetys requested a review from mavimo February 18, 2019 20:07
@xetys
Owner

xetys commented Feb 18, 2019

Well, then I'll let @mavimo take a look at this; if he doesn't complain, we will merge it. At least the e2e tests will point out if something is wrong.

@md2k
Contributor Author

md2k commented Feb 20, 2019

@mavimo, can you take a look please? :)

@mavimo
Collaborator

mavimo commented Feb 22, 2019

@md2k @xetys I can check it over the weekend, is that fine for you?

PS: thx for the awesome work you're doing here!!

Collaborator

@mavimo mavimo left a comment


After a few tests, LGTM!

@xetys xetys merged commit 2f1368c into xetys:master Feb 23, 2019
@md2k md2k deleted the docker_bump branch February 23, 2019 14:09