This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

Migrate to Ignition #728

Closed
mumoshu opened this issue Jul 4, 2017 · 15 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
Comments

@mumoshu (Contributor) commented Jul 4, 2017

Once coreos/ignition#382 lands on Container Linux Stable.

See my comment in #252 (comment)

@mumoshu mumoshu added this to the v0.9.9-rc.1 milestone Jul 4, 2017
@redbaron (Contributor) commented Jul 4, 2017

Yeah, I've been following that PR for a while. I have a WIP userdata conversion and kube-aws code changes to support Ignition, and I'm waiting for a CoreOS alpha release with Ignition 0.17 included; then I can test and create a PR.

Then we wait for Ignition 0.17 to land on stable, which might take a couple of months.

@mumoshu (Contributor, Author) commented Jul 4, 2017

Great! Really looking forward to seeing your PR 👍

@mumoshu (Contributor, Author) commented Jul 6, 2017

#675 (comment)

@mumoshu (Contributor, Author) commented Jul 13, 2017

@redbaron Ignition 0.17 seems to have landed in the latest Container Linux alpha, 1465.0.0. Could we support enabling the Ignition integration only for clusters created with that or later versions of Container Linux?

@redbaron (Contributor):

In theory yes, there are 2 ways:

  1. Maintain 2 sets of userdata.
  2. Transform one form of userdata into the other on the fly, probably from cloud-init to Ignition, with separate “instance” templates for the old and new ways of doing it.

Realistically I wouldn’t bother, not with the current codebase at least. I’ll test my branch and submit a WIP PR, which can then be rebased until Ignition lands in master in ~6 months or so.
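
For illustration (a minimal sketch, using a hypothetical path and contents), the same file expressed in both formats:

```yaml
# Legacy form: cloud-config userdata, processed by coreos-cloudinit late in boot.
#cloud-config
write_files:
  - path: /etc/kubernetes/example.conf   # hypothetical path
    permissions: "0644"
    content: |
      key=value
```

```yaml
# New form: Container Linux Config, transpiled to Ignition with ct and
# applied from the initramfs, before most services start.
storage:
  files:
    - path: /etc/kubernetes/example.conf  # hypothetical path
      filesystem: root
      mode: 0644
      contents:
        inline: |
          key=value
```

Maintaining both by hand means every template change lands twice, which is why the on-the-fly transform in option 2 is tempting despite the extra machinery.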

@mumoshu (Contributor, Author) commented Aug 25, 2017

@redbaron FYI, the stable channel now points to 1465.6.0, which includes Ignition 0.17.2.

Regarding the 2 ways, I slightly prefer 1: start by calling out to coreos-cloudinit from Ignition, while gradually migrating the templating logic inside the cloud-init-based cloud-config-(worker|etcd|controller) to Ignition alternatives. WDYT? Something like the sketch below.
Anyway, I'm ok with either way as long as it works until we fully migrate to Ignition 😃
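
To make option 1 concrete, here's a rough, untested sketch of the shim in Container Linux Config form (the file path and unit name are hypothetical):

```yaml
# Sketch: ship the existing cloud-config verbatim, then replay it once at
# boot via coreos-cloudinit, so native Ignition sections can take over
# piece by piece.
storage:
  files:
    - path: /var/lib/kube-aws/userdata.yml    # hypothetical location
      filesystem: root
      mode: 0644
      contents:
        inline: |
          #cloud-config
          # ... existing cloud-config-(worker|etcd|controller) body, unchanged ...
systemd:
  units:
    - name: kube-aws-cloudinit.service        # hypothetical unit name
      enabled: true
      contents: |
        [Unit]
        Description=Replay the legacy kube-aws cloud-config via coreos-cloudinit

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/usr/bin/coreos-cloudinit --from-file=/var/lib/kube-aws/userdata.yml

        [Install]
        WantedBy=multi-user.target
```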

@mumoshu mumoshu modified the milestones: v0.9.9-rc.1, v0.9.10.rc-1 Oct 11, 2017
@mumoshu (Contributor, Author) commented Oct 11, 2017

@redbaron Do you think you can work on this towards v0.9.10-rc.1?

@redbaron (Contributor):

Our internal kube-aws fork has finally caught up with master, so I can resurrect the branch and see how it goes. I have no ETA, and very little time left to work on kube-aws at all, so I need to pick my battles wisely :) At worst I'll push an incomplete version as a branch for somebody else to pick up.

@pawelprazak commented Dec 4, 2017

Have you been able to work around the CoreOS Docker 1.12 issue?

we do not recommend creating or modifying /etc/coreos/docker-1.12 using a cloud-config

Without Ignition it looks like a problem.

@mumoshu (Contributor, Author) commented Dec 4, 2017

Hi!
Just to confirm: are you saying we can't keep relying on the old Docker 1.12 throughout our provisioning process without migrating to Ignition?
I ask because I read the announcement as saying it's still possible to downgrade Docker at the very end of cloud-init.

@pawelprazak commented Dec 5, 2017

If I read the announcement correctly, then likely yes; it caught us by surprise in our private fork. I just wanted to confirm this with the community.

Relevant fragments:

December 6, 2017: The stable channel defaults to Docker 17.09 unless /etc/coreos/docker-1.12 is set to yes.

Kubernetes 1.8 is officially validated for Docker versions through 17.03. If you want to run a Docker version validated by Kubernetes, we recommend staying on Docker 1.12 for now.

Container Linux executes cloud-configs late in the boot process, after /etc/coreos/docker-1.12 has already been processed, and potentially after the Docker daemon has started. Therefore, we do not recommend creating or modifying /etc/coreos/docker-1.12 using a cloud-config. Instead, modify /etc/coreos/docker-1.12 after the system has booted, using the instructions in the next section, or migrate to Container Linux Configs.

So for Kubernetes 1.8 it should work, but anyone running 1.6 or 1.7 might have problems.
Also, cloud-config runs too late in the boot process to be useful here.

Also, the manual downgrade is IMHO unacceptable at scale:

To downgrade Docker, write yes to /etc/coreos/docker-1.12, stop all containers and the Docker daemon, delete /var/lib/docker, and reboot.

That's why I was very curious how this might work in detail:

starting by calling out to coreos-cloudinit from ignition while gradually migrating

[Edit]:
we ended up creating custom AMIs with /etc/coreos/docker-1.12 set to yes, as the least invasive option
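
For comparison, with Ignition the marker can be written from the initramfs, before docker.service can ever start. A minimal sketch in Container Linux Config form:

```yaml
storage:
  files:
    - path: /etc/coreos/docker-1.12
      filesystem: root
      mode: 0644
      contents:
        inline: "yes"   # quoted so YAML doesn't read it as a boolean
```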

@fejta-bot:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2019
@fejta-bot:

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@fejta-bot:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor):

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
