Update kubeadm api version from v1beta1 to v1beta2 #6150
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @RubenBaez!
Hi @RubenBaez. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can one of the admins verify this patch?
Codecov Report
@@ Coverage Diff @@
## master #6150 +/- ##
==========================================
+ Coverage 37.46% 37.95% +0.48%
==========================================
Files 128 128
Lines 8650 8716 +66
==========================================
+ Hits 3241 3308 +67
+ Misses 4990 4982 -8
- Partials 419 426 +7
Thank you for this PR! I was just hoping someone would make this!
/ok-to-test
Is it possible to add unit tests like the other ones?
/retest this please
```go
  serviceSubnet: {{.ServiceCIDR}}
scheduler: {}
`))
```
kind: KubeletConfiguration doesn't exist here. Is that OK? Or is kind: KubeletConfiguration no longer needed in Kubernetes v1.17+?
@atoato88 I was reviewing in a little more detail and found that kind: KubeletConfiguration is necessary, so I wrote it.
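For reference, the KubeletConfiguration is its own YAML document after a `---` separator in the kubeadm config file, so the kind line is what tells kubeadm how to decode that document. A minimal sketch of the document header, matching the v1beta1 template quoted elsewhere in this thread:

```yaml
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
```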
@RubenBaez
Thank you for the response 👍
@RubenBaez
I just noticed this PR overrides the image repository to imageRepository: k8s.gcr.io; please fix that.
Also, make sure the order of the blocks is the same as in v1beta1 so we can diff and compare.
The current v1beta1 is:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: {{if .CRISocket}}{{.CRISocket}}{{else}}/var/run/dockershim.sock{{end}}
  name: {{.NodeName}}
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
{{ if .ImageRepository}}imageRepository: {{.ImageRepository}}
{{end}}{{range .ExtraArgs}}{{.Component}}:
  extraArgs:
{{- range $i, $val := printMapInOrder .Options ": " }}
    {{$val}}
{{- end}}
{{end -}}
{{if .FeatureArgs}}featureGates:
{{range $i, $val := .FeatureArgs}}  {{$i}}: {{$val}}
{{end -}}{{end -}}
certificatesDir: {{.CertDir}}
clusterName: kubernetes
controlPlaneEndpoint: localhost:{{.APIServerPort}}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: {{.EtcdDataDir}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{if .DNSDomain}}{{.DNSDomain}}{{else}}cluster.local{{end}}
  podSubnet: ""
  serviceSubnet: {{.ServiceCIDR}}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
```
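For comparison, the corresponding v1beta2 documents differ mainly in the apiVersion line; a minimal sketch of the new headers, assuming the field layout is otherwise kept identical so the two templates can be diffed side by side:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
# ...same fields as the v1beta1 InitConfiguration...
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
# ...same fields as the v1beta1 ClusterConfiguration...
```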
```diff
@@ -105,6 +105,10 @@ func GenerateKubeadmYAML(k8s config.KubernetesConfig, r cruntime.Manager) ([]byte, error) {
 	if version.GTE(semver.MustParse("1.14.0-alpha.0")) {
 		configTmpl = template.KubeAdmConfigTmplV1Beta1
 	}
+	// v1beta2 isn't required until v1.18.
+	if version.GTE(semver.MustParse("1.18.0-alpha.0")) {
```
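The cascading GTE checks select the newest template the cluster version qualifies for: each newer API version overwrites the choice when the version is high enough. A self-contained sketch of that selection logic, hand-rolling a major.minor comparison instead of the blang/semver library minikube actually uses (names and the pre-1.14 default are illustrative, and the threshold shown is the 1.17.0 value reviewers settled on):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseMajorMinor extracts the numeric major/minor pair from a version
// string such as "1.17.0" or "1.18.0-alpha.0" (prerelease tags are
// ignored, a simplification versus real semver ordering).
func parseMajorMinor(v string) (int, int) {
	parts := strings.SplitN(strings.SplitN(v, "-", 2)[0], ".", 3)
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	return major, minor
}

// gte reports whether version a is >= version b, comparing major.minor only.
func gte(a, b string) bool {
	am, an := parseMajorMinor(a)
	bm, bn := parseMajorMinor(b)
	return am > bm || (am == bm && an >= bn)
}

// selectTemplate mirrors the cascade in GenerateKubeadmYAML: start with
// the oldest template and let each qualifying check overwrite the choice.
func selectTemplate(version string) string {
	tmpl := "v1alpha3" // hypothetical pre-1.14 default in this sketch
	if gte(version, "1.14.0") {
		tmpl = "v1beta1"
	}
	if gte(version, "1.17.0") {
		tmpl = "v1beta2"
	}
	return tmpl
}

func main() {
	for _, v := range []string{"1.13.2", "1.14.0", "1.16.3", "1.17.0", "1.18.0-alpha.0"} {
		fmt.Printf("%s -> %s\n", v, selectTemplate(v))
	}
}
```

Ordering the checks oldest-to-newest keeps each branch a simple overwrite, so adding the next API version is a three-line change.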
The comment for this PR and the comments in the linked issue say that v1beta2 should be used for v1.17 and higher, so this should parse 1.17.0 instead of 1.18.0-alpha.0.
@priyawadhwa I made the change, thank you.
@medyagh I made a refactor for all your observations, could you check it please?
This seems to be erroring with:
Any chance we could get this merged before v1.7 (Jan 30)?
```diff
@@ -35,11 +36,3 @@ networking:
   dnsDomain: cluster.local
   podSubnet: "192.168.32.0/20"
   serviceSubnet: 10.96.0.0/12
----
```
Is this expected? Shouldn't we still be setting the eviction settings for this version?
@tstromberg
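For context, these are the eviction-related fields from the trailing KubeletConfiguration document of the v1beta1 template (values copied from the template quoted earlier in this thread):

```yaml
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
imageGCHighThresholdPercent: 100
```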
All Times minikube: [ 97.416506 94.083811 96.239326] Average minikube: 95.913215 Averages Time Per Log
All Times Minikube (PR 6150): [ 243.863336 243.266605 244.269333] Average minikube: 93.742225 Averages Time Per Log
…update-kubeadm-v1beta2
All Times minikube: [ 96.111827 94.730622 93.253393] Average minikube: 94.698614 Averages Time Per Log
@medyagh @tstromberg
@alonyb our jenkins is under maintenance, I will trigger the test once that is solved
@alonyb meanwhile please run the integration tests locally on your machine to save some time in case they fail
/ok-to-test
@alonyb the tests are back! but the PR needs a rebase
All Times minikube: [ 95.702665 92.912852 96.515810] Average minikube: 95.043776 Averages Time Per Log
All Times Minikube (PR 6150): [ 97.999055 94.501333 95.064245] Average minikube: 96.384524 Averages Time Per Log
@medyagh
…update-kubeadm-v1beta2
All Times minikube: [ 100.278850 96.590662 98.063022] Average minikube: 98.310845 Averages Time Per Log
Thank you!
This PR adds a new validation for Kubernetes v1.17.0 or higher:
Updates the v1beta1 template, since it is for previous versions of Kubernetes
Adds a new kubeadm template (v1beta2) for Kubernetes v1.17.0 or higher
This PR should fix #6106