
Make kubeadm upgrade HA ready #706

Closed
fabriziopandini opened this issue Feb 18, 2018 · 13 comments
Labels: area/HA, kind/feature, priority/important-soon

@fabriziopandini (Member)

The current implementation of kubeadm upgrade relies on the kubeadm-config configMap created at kubeadm init time.

This configMap - which is the serialization of the master configuration file - contains two kinds of information:

  • Cluster attributes
  • Master attributes: information specific to the master node where init is executed, e.g. the nodeName

To make kubeadm upgrade work in an HA scenario with more than one master, the management of the second group of information should be improved by adding the capability to track information specific to each master node (e.g. more than one nodeName).
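For illustration, a rough sketch of the two groups, using hypothetical field names rather than the real MasterConfiguration type:

```go
// Illustrative only: a hypothetical split of the serialized master
// configuration into cluster-wide and per-master attributes. The field
// names are examples, not the actual kubeadm MasterConfiguration type.
package config

// ClusterAttributes are shared by every master and could stay in the
// common kubeadm-config configMap.
type ClusterAttributes struct {
	KubernetesVersion string
	ServiceSubnet     string
	PodSubnet         string
}

// MasterAttributes are specific to the node where init was executed
// (e.g. the nodeName) and are what HA upgrades need to track per master.
type MasterAttributes struct {
	NodeName         string
	AdvertiseAddress string
}
```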

@stealthybox (Member)

Current issue described by this comment:
#546 (comment)

@mattkelly commented Feb 22, 2018

@fabriziopandini I'm interested in picking this up but I'm a bit confused about how it would work / what exactly is needed.

  1. NodeName is the only node-specific master attribute that I can identify - at least unless we wanted to support asymmetric configs across masters for some reason. What am I missing?

  2. My understanding is that for kubeadm upgrade, NodeName is only used to find the control plane static pods belonging to the current node. How will we identify which master/node in the MasterConfiguration is the current node that kubeadm is running on? We can't assume that NodeName == hostname (which is the default if NodeName isn't provided) as that breaks things - see the sketch below.
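For example, a minimal sketch of why the hostname fallback is unsafe; the defaulting rule here mirrors the kubelet's lowercased-hostname behavior, and nothing below is actual kubeadm code:

```go
// Sketch: a naive "which node am I?" lookup. If NodeName was overridden at
// init time, the hostname-based guess below picks the wrong node.
package main

import (
	"fmt"
	"os"
	"strings"
)

// guessNodeName falls back to the lowercased hostname when no NodeName is
// available, which is exactly the assumption that breaks during upgrade.
func guessNodeName(configuredNodeName string) (string, error) {
	if configuredNodeName != "" {
		return configuredNodeName, nil
	}
	hostname, err := os.Hostname()
	if err != nil {
		return "", err
	}
	return strings.ToLower(hostname), nil
}

func main() {
	// At upgrade time there is no per-node override to pass in.
	name, err := guessNodeName("")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("guessed node name:", name)
}
```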

@fabriziopandini (Member, Author)

@mattkelly thanks for helping on this issue!

I'm happy to share my personal opinion on how it would work / what exactly is needed, but please consider that it is necessary to get wider consensus before starting to write a PR for this issue.

The key elements of my idea are:

  • We will support only kubeadm join --master as the way to add new masters; this gives us the opportunity to keep strict control over node-specific parameters (and implicitly ensures consistency among all the other settings)
  • When kubeadm init or kubeadm join --master is executed, kubeadm should identify some kind of machine UID and then store node-specific data in a dedicated configMap named kubeadm-config-machineUID (the same information will be stripped from the shared kubeadm-config configMap)
  • When running kubeadm upgrade, as a first step, kubeadm should retrieve and merge the kubeadm-config configMap and the kubeadm-config-machineUID configMap for the current machine (a minimal sketch follows below)

Details are still TBD, but let's discuss them in the breakout session or in Slack if there is consensus on the approach 😉
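For illustration, a minimal client-go sketch of that upgrade-time merge, assuming the configMap naming scheme above and using the context-free Get signature of client-go from that era; none of this is implemented in kubeadm:

```go
// Sketch of the proposed upgrade-time lookup: fetch the shared ConfigMap
// plus the per-machine one, with node-specific keys taking precedence.
// The names "kubeadm-config" and "kubeadm-config-<machineUID>" follow the
// proposal in this comment and are not an existing kubeadm API.
package upgrade

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func loadMergedConfig(client kubernetes.Interface, machineUID string) (map[string]string, error) {
	cms := client.CoreV1().ConfigMaps("kube-system")

	shared, err := cms.Get("kubeadm-config", metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	perNode, err := cms.Get("kubeadm-config-"+machineUID, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}

	merged := map[string]string{}
	for k, v := range shared.Data {
		merged[k] = v
	}
	// Node-specific values override the shared cluster-wide ones.
	for k, v := range perNode.Data {
		merged[k] = v
	}
	return merged, nil
}
```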

@mattkelly commented Feb 23, 2018

I'm not yet qualified to really comment on whether that general approach would be acceptable, but it does seem reasonable to me. We already require a unique MAC address and product_uuid for each node, so we do have potential sources for UIDs on each master.
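For example, a minimal sketch of deriving such a UID from the DMI product_uuid; the path is the one kubeadm's preflight documentation tells users to check for uniqueness, while the normalization is an assumption:

```go
// Sketch: derive a candidate machine UID from the product_uuid that is
// already required to be unique per node. Not actual kubeadm code.
package machineid

import (
	"io/ioutil"
	"strings"
)

const productUUIDPath = "/sys/class/dmi/id/product_uuid"

// MachineUID returns the DMI product UUID, trimmed and lowercased, as a
// candidate suffix for a per-node ConfigMap name.
func MachineUID() (string, error) {
	raw, err := ioutil.ReadFile(productUUIDPath)
	if err != nil {
		return "", err
	}
	return strings.ToLower(strings.TrimSpace(string(raw))), nil
}
```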

I agree, let's discuss more at the next breakout session (and people can continue to comment here) before I go off and start implementing.

@stealthybox (Member)

@mattkelly feel free to add a new section for next week with an agenda item 🙂
https://docs.google.com/document/d/130_kiXjG7graFNSnIAgtMS1G8zPDwpkshgfRYS0nggo/edit#heading=h.48xxo9690nfd

@0xmichalis (Contributor)

When kubeadm init or kubeadm join --master is executed, kubeadm should identify some kind of machine UID and then store node-specific data in a dedicated configMap named kubeadm-config-machineUID (the same information will be stripped from the shared kubeadm-config configMap)

Is there any overlap between this and dynamic kubelet config?

@fabriziopandini (Member, Author)

Is there any overlap between this and dynamic kubelet config?

@Kargakis I don't think there is an overlap. Kubeadm applies the same dynamic kubelet config to all nodes, so IMO for the scope of this discussion the dynamic kubelet config is not a node-specific configuration.

@timothysc added the area/HA and priority/important-soon labels Mar 5, 2018
@timothysc added this to the v1.11 milestone Mar 5, 2018
@timothysc (Member)

We had a long discussion on this during last week's call. I think the path forward is a simple proposal, which can be linked here, as well as a prototype that could help determine whether a single or multiple configMaps make more sense.

@mattkelly

@timothysc yup, sounds good to me. I wasn't sure if you would have further comments after reviewing the ticket more in-depth. I'll have something out for review within a few days.

@timothysc added the kind/feature label Apr 6, 2018
@timothysc added the lifecycle/active label Apr 18, 2018
@timothysc removed their assignment Apr 18, 2018
@timothysc (Member)

/cc @liztio

This will be added as one of the requirements in the config KEP. It's also listed in the kubeadm office hours notes for 20180418.

@luxas modified the milestones: v1.11, v1.12 May 14, 2018
@timothysc removed the lifecycle/active label Aug 9, 2018
@timothysc assigned timothysc and unassigned chuckha Aug 21, 2018
@timothysc (Member)

/assign @detiber @chuckha @rdodev

We need to go through and update the docs to use the control-plane join flow.

@fabriziopandini (Member, Author)

@timothysc IMO this issue should be closed as soon as kubernetes/kubernetes#67944 merges

@neolit123 (Member)

IMO this issue should be closed as soon as kubernetes/kubernetes#67944 merges

pinging @fabriziopandini and @timothysc for status.
