
kubeadm support #1631

Merged: 6 commits, merged into kubernetes-sigs:master on Sep 13, 2017

Conversation

mattymo (Contributor) commented Sep 7, 2017

Main changes:

  • move the k8s master setup to a subtask
  • disable k8s secrets when using kubeadm
  • fix the etcd cert serial var
  • move simple auth users to the master role (because kubeadm handles its own certs)
  • make a kubeadm-specific env file for the kubelet
  • add a non-HA CI job for kubeadm with Canal

The implementation has been tested only on Ubuntu with Canal. HA is not supported or tested yet, and upgrades from earlier installs are not implemented yet.

Fixes #553
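
For anyone trying this out, the new mode is switched on through inventory variables introduced in this PR. A minimal illustrative snippet (the token value is the placeholder format suggested later in this thread, not a secure default):

# group_vars/all.yml (illustrative)
kubeadm_enabled: true
kubeadm_token: abcdef.0123456789abcdef   # must match [a-z0-9]{6}.[a-z0-9]{16}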

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Sep 7, 2017
@mattymo mattymo force-pushed the kubeadm branch 3 times, most recently from 4e55aa4 to a6d9a64, on September 9, 2017 07:59
@k8s-ci-robot k8s-ci-robot added size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Sep 9, 2017
mattymo (author) commented Sep 9, 2017

ci check this

Starefossen (Contributor) commented

Just out of curiosity, what is the motivation behind this? And how does it impact existing kubespray installations?

mattymo (author) commented Sep 9, 2017

@Starefossen This ties into SIG Cluster Lifecycle: they established the kubeadm project, and all official projects really ought to consume kubeadm. This is a step in that direction.

With regards to existing kubespray installations, I will do my best to ensure that it is upgradeable from the previous release.

Right now it can deploy with Calico and RBAC, and it should support all the features that were available already, though that has not been fully tested. Upgrade from previous versions will require more work.

We should discuss this more in #553

Commits:

* move k8s master to a subtask
* disable k8s secrets when using kubeadm
* fix etcd cert serial var
* move simple auth users to master role
* make a kubeadm-specific env file for kubelet
* add non-ha CI job
mattymo (author) commented Sep 10, 2017

@bradbeam @ant31 @rsmitty @bogdando Can you folks take a look?

@luxas If you want to review this, feel free to.

@mattymo mattymo changed the title from "wip kubeadm support" to "kubeadm support" on Sep 10, 2017
luxas commented Sep 10, 2017

@Starefossen In order to unify the different installers and minimize the delta between Kubernetes deployment routes, we're going to use kubeadm as the general base. Its scope is deliberately limited to bootstrapping k8s; it does not handle anything infra-related.

See: http://blog.kubernetes.io/2017/01/stronger-foundation-for-creating-and-managing-kubernetes-clusters.html (already a bit dated; I'm currently writing an updated blog post)

What do you get?

  • Lots of functionality for free (esp. around security)
  • A single API for bootstrapping a k8s cluster
  • A "best-practice" cluster per Kubernetes version
  • etc. etc.

Also see https://groups.google.com/forum/#!topic/kubernetes-sig-cluster-lifecycle/HHr7WphU_xE

mattymo (author) commented Sep 10, 2017

For those of you interested in how kubeadm changes Kubespray, here are some of the changes:

1. We no longer have to generate k8s certs (or copy them everywhere).
2. We don't have to set up the master component manifests.
3. We can skip the slow wait triggers for master components to become ready (kubeadm init has its own).

The main win, of course, is speed. Deployment completes in about 9 mins, instead of 26-30 mins.
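
To make that concrete: on the first master, the old cert-generation, manifest-templating, and wait logic collapses into a single kubeadm call. A minimal sketch of such a task (the path, variable names, and flag usage are illustrative, not the PR's exact task):

- name: kubeadm | Initialize first master
  command: >-
    kubeadm init
    --config={{ kube_config_dir }}/kubeadm-config.yaml
    --skip-preflight-checks
  register: kubeadm_init
  when: inventory_hostname == groups['kube-master'][0]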

@luxas luxas left a comment

Nice @mattymo!!
I'm excited to see this

I left a couple of comments here... will be cool to see this move forward.
Q: Have you implemented the upgrade code yet, or is that still to come?

I think it would be reasonable to start supporting the kubeadm CLI from v1.8 if you'd like to minimize the support matrix. Note that you can still set up v1.7.x clusters with the v1.8 CLI.

Thanks 👏

.ubuntu_canal_kubeadm_variables: &ubuntu_canal_kubeadm_variables
# stage: deploy-gce-part1
KUBE_NETWORK_PLUGIN: canal
AUTHORIZATION_MODES: "{ 'authorization_modes': [ 'RBAC' ] }"
luxas:

Note: RBAC and the Node authorizer are always on.

mattymo (author):

There is internal logic to determine if RBAC is enabled. We will refactor this soon.

@@ -135,24 +135,27 @@ def _execute_nofail(self, cmd):
             return None
         return out.splitlines()

-    def create(self, check=True):
+    def create(self, check=True, force=True):
luxas:

Is this relevant to this PR?

mattymo (author):

It's needed to replace kubeadm's bundled kube-dns, which can't be skipped.

Contributor:

You mind posting a little more context for this? Curious

mattymo (author):

From the kubectl docs:

Delete and re-create the specified resource, when PATCH encounters conflict and has retried for 5 times.

It's the only option in some upgrade scenarios.
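
For context, force maps onto kubectl's delete-and-recreate fallback quoted above. A hedged sketch of a task using the project's kube module to swap out kubeadm's bundled kube-dns, assuming the new flag is exposed as a force argument (resource and file names are illustrative):

- name: Kubernetes Apps | Replace kubeadm's kube-dns
  kube:
    name: kube-dns
    namespace: "{{ system_namespace }}"
    resource: deploy
    filename: "{{ kube_config_dir }}/kubedns-deploy.yml"
    state: latest
    force: yes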

@@ -19,6 +19,7 @@ download_always_pull: False

 # Versions
 kube_version: v1.7.3
+kubeadm_version: "{{ kube_version }}"
luxas:

Why would kubeadm version differ from k8s version here?

mattymo (author):

It defaults to match kube version, but it can be overridden if desired.

luxas:

ok, SGTM
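
For example, pinning the CLI ahead of the cluster version in group vars (illustrative values, per the note above that a v1.8 CLI can still set up v1.7.x clusters):

kube_version: v1.7.3
kubeadm_version: v1.8.0   # overrides the default of "{{ kube_version }}"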

@@ -114,6 +114,10 @@ kubelet_deployment_type: docker
 cert_management: script
 vault_deployment_type: docker

+# Enable kubeadm deployment (experimental)
+kubeadm_enabled: false
+kubeadm_token: "change.thisisabadtoken1"
luxas:

?

mattymo (author):

I will make a better token generator in a follow up

luxas:

Cool, for now maybe abcdef.0123456789abcdef so it at least works?

mattymo (author):

I'll change the default. Ansible doesn't support generating passwords with a mandatory character at a specific position. This is one of the more irritating parts of kubeadm and serves no practical purpose.
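
Until then, one possible workaround is to compose two password lookups so the mandatory dot lands in the right place (kubeadm tokens must match [a-z0-9]{6}.[a-z0-9]{16}). A hypothetical sketch, not the follow-up generator:

# The lookup files persist the generated halves so the token stays stable across runs.
kubeadm_token: "{{ lookup('password', 'credentials/kubeadm_token_1 chars=ascii_lowercase,digits length=6') + '.' + lookup('password', 'credentials/kubeadm_token_2 chars=ascii_lowercase,digits length=16') }}"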

@@ -19,21 +19,6 @@
mode: o-rwx
group: "{{ kube_cert_group }}"

- name: Make sure the users directory exits
luxas:

What has this to do with kubeadm?

mattymo (author):

For basic auth support

luxas:

ok

podSubnet: {{ kube_pods_subnet }}
kubernetesVersion: {{ kube_version }}
cloudProvider: {{ cloud_provider|default('') }}
#TODO: cloud provider conf file
luxas:

If /etc/kubernetes/cloud-config is present on the host, kubeadm will pick it up in the args automatically.

mattymo (author):

Good to know.
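
For reference, the template above renders into a kubeadm MasterConfiguration along these lines (a hedged sketch against the v1alpha1-era schema; the values shown are illustrative defaults):

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
networking:
  podSubnet: 10.233.64.0/18       # kube_pods_subnet
kubernetesVersion: v1.7.3         # kube_version
cloudProvider: ""                 # cloud_provider | default('')
token: abcdef.0123456789abcdef    # kubeadm_token
authorizationModes:
- RBAC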

runtime-config: {{ kube_api_runtime_config }}
{% endif %}
{% if enable_network_policy %}
runtime-config: extensions/v1beta1/networkpolicies=true
luxas:

This has graduated to GA now, right? kubernetes/enhancements#185
Why enable this group then?

mattymo (author):

Maybe it was just needed back in 1.6

luxas:

At least I think you can leave this out for kubeadm

admission-control: {{ kube_apiserver_admission_control | join(',') }}
service-node-port-range: {{ kube_apiserver_node_port_range }}
{% if kube_basic_auth|default(true) %}
basic-auth-file: {{ kube_users_dir }}/known_users.csv
luxas:

Ugh, are you relying on this heavily?
Is there any way to not use basic auth? I'm pretty opposed to it, given the security concerns.

mattymo (author):

I just updated it with a randomly generated password. Yes, it's insecure, and yes, it can be disabled. I'm working on a better option which downloads a kubeconfig with proper x509 cert auth to the ansible host. I think once that is done, we can get rid of the password auth method by default unless a user explicitly wants it.

luxas:

Great!
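
For reference, the file consumed by --basic-auth-file is a plain CSV with one user per line: password, username, uid, and an optional quoted group list. An illustrative placeholder entry (not the generated default):

mysecretpassword,kube,1,"system:masters"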

cloudProvider: {{ cloud_provider|default('') }}
#TODO: cloud provider conf file
authorizationModes:
{% for mode in authorization_modes %}
luxas:

What authz modes are you using normally?
Be aware that RBAC and Node authz are always on.

mattymo (author):

By default none

run_once: yes
tags: apps

- name: Create kube system namespace
luxas:

Hmm, the kube-system namespace has been automatically created since the v1.3 timeframe, why have this here?
kubernetes/kubernetes#25196

mattymo (author, Sep 10, 2017):

Our deployment permits using a namespace other than kube-system as the system namespace. Also, this is the "legacy deployment" set of tasks. This will not be executed when deploying kubeadm.

luxas:

ok, thanks for explaining

.gitlab-ci.yml (outdated)
-e weave_cpu_requests=${WEAVE_CPU_LIMIT}
-e weave_cpu_limit=${WEAVE_CPU_LIMIT}
-e "{kubeadm_enabled: ${KUBEADM_ENABLED}}"
Contributor:

weave_cpu_requests is being set from ${WEAVE_CPU_LIMIT} here.
mattymo (author):


oh yes, that needs to be fixed

any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults}
- { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }
Contributor:

are we going to need to skip any components w/ kubeadm enabled?

mattymo (author):

kubernetes/secrets gets skipped and the new task file roles/kubernetes/master/tasks/static-pod-setup.yml will be skipped in kubeadm mode
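
A hedged sketch of how that gating looks in the play (role names from this PR; the exact when expressions may differ):

roles:
  - { role: kubespray-defaults }
  - { role: kubernetes/secrets, tags: k8s-secrets, when: "not kubeadm_enabled" }
  - { role: kubernetes/kubeadm, tags: kubeadm, when: "kubeadm_enabled" }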

etcd_download_url: "https://storage.googleapis.com/kargo/{{etcd_version}}_etcd"
kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/amd64/kubeadm"
Contributor:

@luxas Is there a containerized kubeadm release? Is it part of hyperkube?

luxas (Sep 11, 2017):

Not sure what you mean by containerized kubeadm. It is a CLI binary and has no deps, just like kubectl. It just writes down a couple of files and expects the kubelet to be running. You can run the kubelet however you want: containerized, on-host, in an rkt pod, or whatever.

luxas:

And no, not part of hyperkube, by design.

Contributor:

We have the option to run nearly every component in a container (including helm). I'd like to carry this model forward with kubeadm if possible. Especially since we're just downloading an unpackaged binary here, it makes managing kubeadm versions and understanding what's actually installed trickier.

luxas:

Ok. Neither kubectl nor kubeadm is officially packaged in container images. You can easily create your own image, though.

Contributor:

Yeah, that would be nice to have indeed. See kubernetes/kubernetes#35041; it didn't make it in, though :)

Another commenter:

All, I just wanted to understand why kubeadm can't support both systemd and containerized components. Since we are talking about kubeadm as the primary bootstrap mechanism, supporting systemd-managed components would be a catalyst for a larger community. I would love to weigh in on any ideas, or on any work someone can direct me to.

when: not is_kube_master
register: kubeadm_client_conf

- name: Join to cluster if needed
Contributor:

is it worth running the preflight checks in an earlier play? like kubernetes/preinstall?

mattymo (author):

We re-run kubeadm join any time the templated conf gets updated. It won't harm anything to run again and again. It's idempotent.
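
A minimal sketch of that idempotent flow, mirroring the snippet above (the template source name and flag usage are illustrative):

- name: Create kubeadm client config
  template:
    src: kubeadm-client.conf.j2
    dest: "{{ kube_config_dir }}/kubeadm-client.conf"
  register: kubeadm_client_conf
  when: not is_kube_master

- name: Join to cluster if needed
  command: >-
    kubeadm join
    --config {{ kube_config_dir }}/kubeadm-client.conf
    --skip-preflight-checks
  when: not is_kube_master and kubeadm_client_conf.changed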

luxas:

Yeah, that's right.
However, I wouldn't recommend doing that for upgrades, although that works in most cases.

mattymo (author):

Actual upgrades still follow the usual process:

1. cordon/drain
2. upgrade the kubelet binary
3. restart the daemon
4. uncordon

kubeadm doesn't play a role unless the config params changed.

luxas:

Ok. (They might change, though; a minor issue for now.)
What about master/control plane upgrades? How's that done?

mattymo (author):

etcd members are all upgraded on the fly, in parallel (and we pray it doesn't go bad, but make a backup on each host just in case). Since all we're doing is changing how the k8s master static pods get templated, there is no effective change there.

The current process for upgrading the k8s master (one master at a time):

1. cordon/drain the master
2. upgrade the kubelet (replace the binary and restart the daemon)
3. upgrade the static pods all at once (just replace the manifests)
4. update any CNI changes (except for ones that require kubectl commands)
5. wait for them to be up
6. uncordon

At the end, if CNI components require an update, apply them with kubectl apply.
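
The loop described above, as a hedged Ansible sketch (module arguments, paths, and variable names are illustrative, not the repo's exact tasks):

- hosts: kube-master
  serial: 1                                  # one master at a time
  tasks:
    - name: Cordon and drain the master
      command: kubectl drain {{ inventory_hostname }} --ignore-daemonsets
      delegate_to: "{{ groups['kube-master'][0] }}"

    - name: Replace the kubelet binary
      copy:
        src: "{{ local_release_dir }}/kubelet"
        dest: /usr/local/bin/kubelet
        mode: "0755"
        remote_src: yes

    - name: Restart the kubelet
      service:
        name: kubelet
        state: restarted

    - name: Replace all master static pod manifests at once
      template:
        src: "manifests/{{ item }}.manifest.j2"
        dest: "{{ kube_manifest_dir }}/{{ item }}.manifest"
      with_items:
        - kube-apiserver
        - kube-controller-manager
        - kube-scheduler

    - name: Wait for the apiserver to come back
      uri:
        url: "https://{{ inventory_hostname }}:{{ kube_apiserver_port }}/healthz"
        validate_certs: no
      register: apiserver_health
      until: apiserver_health.status == 200
      retries: 30
      delay: 10

    - name: Uncordon the master
      command: kubectl uncordon {{ inventory_hostname }}
      delegate_to: "{{ groups['kube-master'][0] }}"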

luxas:

I'd reorder the kubelet upgrade and the master upgrade: in e2e tests, kubelets support newer masters, but not vice versa. I see the practical reason not to, but still.

mattymo (author, Sep 11, 2017):

So, upgrade all kubelets on all nodes first, then master components last. Or masters first, then kubelets?

luxas:

The general k8s policy is masters first, then kubelets.

path: "{{ kube_config_dir }}/kubelet.conf"
regexp: '(\s+){{ first_kube_master }}:{{ kube_apiserver_port }}(\s+.*)?$'
replace: '\1{{ kube_apiserver_endpoint }}\2'
backup: yes
Contributor:

This feels a bit fragile, but it's needed. Is this a feature request we can log with kubeadm, so the correct value can be specified?

mattymo (author):

If we could set a different host for discovery and for the kube-apiserver endpoint, that would work.

We can't bring the kubelet up in a way that is ready for kubeadm AND deploy the nginx localhost proxy as a static pod. We could move the proxy to a standard docker container, but I'd rather not take that option.

backup: yes
when: not is_kube_master and kubeadm_discovery_address != kube_apiserver_endpoint

# FIXME(mattymo): Reconcile kubelet kubeconfig filename for both deploy modes
Contributor:

will this be adding a conditional to kubelet to flex on kubeconfig filename?

mattymo (author, Sep 11, 2017):

Probably just generate a new kubeconfig file and purge the old one, after updating refs in all configs

tags: facts

- name: kubeadm | Copy etcd cert dir under k8s cert dir
command: "cp -TR {{ etcd_cert_dir }} {{ kube_config_dir }}/ssl/etcd"
Contributor:

Should we be changing our default etcd_cert_dir to be in line with this? Is it a standard enforced with kubeadm?

mattymo (author):

I want kubeadm to support mounting the dir where the etcd certs are located. It lets you specify a path, but doesn't mount them. @luxas what do you think?

mattymo (author):

Actually, in v1.8.0-beta.1 it bind-mounts the etcd cert dir, so we can get rid of this command.

{% endif %}
{# Base kubelet args #}
{% set kubelet_args_base -%}
{# start kubeadm specific settings #}
Contributor:

Is this the only bit that changed between the two kubelet.env files?

mattymo (author):

More or less. There are some items to clean up that we don't really need which are present in both the old kubelet env file and the new.

@mattymo mattymo closed this Sep 12, 2017
@mattymo mattymo reopened this Sep 12, 2017
@mattymo mattymo merged commit 6744726 into kubernetes-sigs:master Sep 13, 2017
luxas commented Sep 13, 2017

Yayy! Thanks for doing this @mattymo!

mattymo (author) commented Sep 13, 2017

I am preparing a follow-up with your suggestions, HA support, and a switch to v1.8.0.

@mattymo mattymo deleted the kubeadm branch September 4, 2018 10:22