
WIP Initial support for Bottlerocket OS #9855

Closed
wants to merge 8 commits

Conversation

@rifelpet (Member) commented Sep 1, 2020

https://github.com/bottlerocket-os/bottlerocket

ref: #8723

The userdata needs to be entirely different (in fact, Bottlerocket userdata is just settings defined in TOML), so Kops needs to be aware of the AMI's OS at the time it creates the userdata. I added a new ImageFamily API field for this (following the name from eksctl, but I'm happy to change it). I figured the new field would also be useful for Windows support, since that will need different userdata too.

This also requires that the Userdata model now depends on the CA keypair. I wasn't able to get that set up easily in unit testing, but the integration test covers it well.
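
For illustration only, a minimal sketch of the shape of this change; the field names, constants, and TOML keys below are placeholders rather than the final API:

```go
// Hypothetical sketch (not the PR's actual code): a new ImageFamily field on
// the instance group spec and a branch in userdata rendering, since
// Bottlerocket expects TOML settings rather than a script.
package main

import "fmt"

// ImageFamily identifies the OS family of the instance group's AMI.
type ImageFamily string

const ImageFamilyBottlerocket ImageFamily = "Bottlerocket"

// InstanceGroupSpec excerpt; field names are illustrative only.
type InstanceGroupSpec struct {
	Image       string
	ImageFamily ImageFamily
	ClusterName string
	APIServer   string
}

// renderUserdata picks the userdata format based on the AMI's OS family.
func renderUserdata(ig InstanceGroupSpec) string {
	if ig.ImageFamily == ImageFamilyBottlerocket {
		// Bottlerocket userdata is plain TOML settings (a small subset shown here).
		return fmt.Sprintf(
			"[settings.kubernetes]\ncluster-name = %q\napi-server = %q\n",
			ig.ClusterName, ig.APIServer)
	}
	// Other image families keep the existing script-based userdata.
	return "#!/bin/bash\n# existing nodeup bootstrap script\n"
}

func main() {
	ig := InstanceGroupSpec{
		ImageFamily: ImageFamilyBottlerocket,
		ClusterName: "example.k8s.local",
		APIServer:   "https://api.internal.example.k8s.local",
	}
	fmt.Print(renderUserdata(ig))
}
```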

for this PR:

  • make test results consistent - I'm assuming I'll use the same CA key strategy used in the new PublicJWKS integration test
  • warn or fail API validation if certain IG fields are set that are incompatible with Bottlerocket. Gossip clusters won't be supported yet either, since that would require access to the host's /etc/hosts. aws-iam-authenticator is also required to be enabled and set up to bind the nodes role with the RBAC permissions needed to join the cluster
  • images.md

for followup PRs:

  • add SSH support
  • add support for additionalUserdata so that users can provide additional Bottlerocket settings, or come up with a more integrated way of specifying these

Limitations:

  • kubelet serving certificate will not work for metrics server

@k8s-ci-robot (Contributor)

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Sep 1, 2020
@k8s-ci-robot k8s-ci-robot added area/api area/provider/aws Issues or PRs related to aws provider size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Sep 1, 2020
@rifelpet (Member, Author) commented Sep 1, 2020

/test pull-kops-e2e-kubernetes-aws

1 similar comment
@hakman (Member) commented Sep 1, 2020

/test pull-kops-e2e-kubernetes-aws

@rifelpet (Member, Author) commented Sep 1, 2020

/test pull-kops-e2e-kubernetes-aws

@rifelpet (Member, Author) commented Sep 1, 2020

W0901 21:20:52.097277 6980 executor.go:131] error running task "LaunchTemplate/nodes-us-west-1a.e2e-42766ca32c-ff1eb.test-cncf-aws.k8s.io" (1m50s remaining to succeed): could not find Image for "bottlerocket/bottlerocket-aws-k8s-1.17-x86_64-v0.5.0-e0ddf1b"

Apparently the Bottlerocket AMIs are not published in every region, which doesn't play nicely with Prow and Boskos' randomly chosen regions.

aws/containers-roadmap#827 (comment)

/test pull-kops-e2e-kubernetes-aws

EDIT: and the Bottlerocket AMIs have different owner IDs per region :(
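
Not part of this PR, but for context: one way around both the per-region availability and the per-region owner IDs would be resolving the AMI through the public SSM parameters that Bottlerocket publishes in each region. A rough sketch, with the parameter path and SDK usage being my assumptions rather than anything kops does here:

```go
// Hypothetical sketch: look up the Bottlerocket AMI for a given region via
// the public SSM parameter the project publishes, instead of hard-coding
// per-region AMI IDs or owner IDs.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func bottlerocketAMI(region, k8sVersion, arch string) (string, error) {
	sess, err := session.NewSession(&aws.Config{Region: aws.String(region)})
	if err != nil {
		return "", err
	}
	// Assumed parameter path, e.g. /aws/service/bottlerocket/aws-k8s-1.17/x86_64/latest/image_id
	name := fmt.Sprintf("/aws/service/bottlerocket/aws-k8s-%s/%s/latest/image_id", k8sVersion, arch)
	out, err := ssm.New(sess).GetParameter(&ssm.GetParameterInput{Name: aws.String(name)})
	if err != nil {
		return "", err
	}
	return aws.StringValue(out.Parameter.Value), nil
}

func main() {
	ami, err := bottlerocketAMI("us-west-1", "1.17", "x86_64")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ami)
}
```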

@rifelpet (Member, Author) commented Sep 1, 2020

/test pull-kops-e2e-kubernetes-aws

@rifelpet (Member, Author) commented Sep 1, 2020

/test pull-kops-e2e-kubernetes-aws

@rifelpet (Member, Author) commented Sep 1, 2020

/test pull-kops-e2e-kubernetes-aws

@k8s-ci-robot k8s-ci-robot added size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Sep 2, 2020
@rifelpet (Member, Author) commented Sep 2, 2020

/test pull-kops-e2e-kubernetes-aws

@rifelpet (Member, Author) commented Sep 2, 2020

/test pull-kops-e2e-kubernetes-aws

@rifelpet (Member, Author) commented Sep 2, 2020

/test pull-kops-e2e-kubernetes-aws

1 similar comment
@rifelpet (Member, Author) commented Sep 2, 2020

/test pull-kops-e2e-kubernetes-aws

@mikesplain (Contributor)

Awesome, glad to hear you started on this!

@k8s-ci-robot (Contributor)

@rifelpet: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 26, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 28, 2022
@rifelpet (Member, Author)

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 28, 2022
@oded-dd commented May 11, 2022

Great work! Any update on this?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 9, 2022
@k8s-ci-robot (Contributor)

@rifelpet: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-kops-verify-hashes | 9d0afd2 | link | | /test pull-kops-verify-hashes |
| pull-kops-e2e-k8s-containerd | fc9684d | link | | /test pull-kops-e2e-k8s-containerd |
| pull-kops-e2e-cni-canal | fc9684d | link | true | /test pull-kops-e2e-cni-canal |
| pull-kops-e2e-kubernetes-aws | c9ec417 | link | true | /test pull-kops-e2e-kubernetes-aws |
| pull-kops-e2e-cni-cilium-ipv6 | c9ec417 | link | true | /test pull-kops-e2e-cni-cilium-ipv6 |
| pull-kops-e2e-cni-weave | c9ec417 | link | true | /test pull-kops-e2e-cni-weave |
| pull-kops-e2e-cni-kuberouter | c9ec417 | link | true | /test pull-kops-e2e-cni-kuberouter |
| pull-kops-e2e-cni-calico-ipv6 | c9ec417 | link | true | /test pull-kops-e2e-cni-calico-ipv6 |
| pull-kops-e2e-cni-amazonvpc | c9ec417 | link | true | /test pull-kops-e2e-cni-amazonvpc |
| pull-kops-e2e-cni-calico | c9ec417 | link | true | /test pull-kops-e2e-cni-calico |
| pull-kops-e2e-cni-flannel | c9ec417 | link | true | /test pull-kops-e2e-cni-flannel |
| pull-kops-e2e-cni-cilium | c9ec417 | link | true | /test pull-kops-e2e-cni-cilium |
| pull-kops-e2e-k8s-gce-cilium | c9ec417 | link | true | /test pull-kops-e2e-k8s-gce-cilium |
| pull-kops-e2e-aws-karpenter | c9ec417 | link | true | /test pull-kops-e2e-aws-karpenter |
| pull-kops-e2e-k8s-aws-calico | c9ec417 | link | true | /test pull-kops-e2e-k8s-aws-calico |
| pull-kops-build | c9ec417 | link | true | /test pull-kops-build |
| pull-kops-test | c9ec417 | link | true | /test pull-kops-test |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 30, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

(the triage robot's /close comment quoted above)

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
  • area/addons
  • area/api
  • area/documentation
  • area/provider/aws (Issues or PRs related to aws provider)
  • cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
  • do-not-merge/work-in-progress (Indicates that a PR should not merge because it is a work in progress.)
  • lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)
  • needs-rebase (Indicates a PR cannot be merged because it has merge conflicts with HEAD.)
  • size/XXL (Denotes a PR that changes 1000+ lines, ignoring generated files.)

9 participants