minikube start --extra-config=controller-manager is silently ignored #7028

Closed
wallrj opened this issue Mar 13, 2020 · 2 comments · Fixed by #7030

@wallrj
Contributor

wallrj commented Mar 13, 2020

I ran minikube start --kubernetes-version=v1.18.0-beta.2 --extra-config 'controller-manager.experimental-cluster-signing-duration=120s' ...

minikube appeared to recognize the extra config and the cluster started, but the arguments were not added to the kube-controller-manager pod's command line.

minikube -v=10 start --kubernetes-version=v1.18.0-beta.2  --extra-config 'controller-manager.experimental-cluster-signing-duration=120s' --extra-config 'controller-manager.controllers=*'
😄  minikube v1.8.1 on Fedora 31
    ▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨  Using the kvm2 driver based on existing profile
💿  Downloading VM boot image ...
⌛  Reconfiguring existing host ...
🏃  Using the running kvm2 "minikube" VM ...
🐳  Preparing Kubernetes v1.18.0-beta.2 on Docker 19.03.6 ...
    ▪ controller-manager.experimental-cluster-signing-duration=120s
    ▪ controller-manager.controllers=*
🚀  Launching Kubernetes ... 
🌟  Enabling addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"
⚠️  /home/richard/.local/bin/kubectl is version 1.16.0, and is incompatible with Kubernetes 1.18.0-beta.2. You will need to update /home/richard/.local/bin/kubectl or use 'minikube kubectl' to connect with this cluster

$ kubectl -n kube-system describe pod kube-controller-manager-m01

Name:                 kube-controller-manager-m01
Namespace:            kube-system

Containers:
  kube-controller-manager:
    Image:         k8s.gcr.io/kube-controller-manager:v1.18.0-beta.2
    Command:
      kube-controller-manager
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --cluster-name=kubernetes
      --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt
      --cluster-signing-key-file=/var/lib/minikube/certs/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=true
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --root-ca-file=/var/lib/minikube/certs/ca.crt
      --service-account-private-key-file=/var/lib/minikube/certs/sa.key
      --use-service-account-credentials=true

The problem seems to be that minikube generates duplicate controllerManager sections in kubeadm.yaml. kubeadm warns about the duplicate key but still applies the file, and the later, empty controllerManager: {} entry wins, which is why the extra args are silently dropped:

sudo env PATH=/var/lib/minikube/binaries/v1.18.0-beta.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
W0313 16:36:03.017704   22348 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta2", Kind:"ClusterConfiguration"}: error converting YAML to JSON: yaml: unmarshal errors:
  line 14: key "controllerManager" already set in map

It looks like this was introduced in #6150.
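
To make the failure mode concrete, here is a minimal Go sketch. It assumes the converter in play is sigs.k8s.io/yaml, whose strict mode produces exactly the "error converting YAML to JSON ... key already set in map" text above; the point is that a permissive parse of a document with a duplicated key keeps only the last value:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

func main() {
	// A reduced version of the generated ClusterConfiguration with the
	// controllerManager key appearing twice, as in the full file below.
	doc := []byte(`controllerManager:
  extraArgs:
    experimental-cluster-signing-duration: "120s"
controllerManager: {}
`)

	// Strict conversion fails, matching the kubeadm warning above.
	if _, err := yaml.YAMLToJSONStrict(doc); err != nil {
		fmt.Println("strict:", err)
	}

	// Permissive conversion succeeds, but the last value wins, so the
	// extraArgs are silently dropped.
	out, err := yaml.YAMLToJSON(doc)
	if err != nil {
		fmt.Println("permissive:", err)
		return
	}
	fmt.Println("permissive:", string(out)) // {"controllerManager":{}}
}

The full generated file is in the next comment; note that the empty controllerManager: {} comes after the populated one, so a lenient parse ends up with no extraArgs at all.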

@wallrj
Copy link
Contributor Author

wallrj commented Mar 13, 2020

Here's the entire kubeadm.yaml file, where you can see the duplicate controllerManager sections:

$ sudo cat /var/tmp/minikube/kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.213
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "m01"
  kubeletExtraArgs:
    node-ip: 192.168.39.213
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.39.213"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    controllers: "*"
    experimental-cluster-signing-duration: "120s"
certificatesDir: /var/lib/minikube/certs
clusterName: kubernetes
controlPlaneEndpoint: localhost:8443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
kubernetesVersion: v1.18.0-beta.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
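
For illustration, here is a hypothetical reduction of how this can happen (this is not minikube's actual template; see #6150 for the real change): the config template renders one extraArgs block per component named in --extra-config, while also carrying a hard-coded controllerManager: {} default, so the key is emitted twice.

package main

import (
	"os"
	"text/template"
)

// Hypothetical sketch of a kubeadm ClusterConfiguration template that
// emits controllerManager twice: once generated from --extra-config,
// once as a hard-coded empty default.
const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
{{- range $component, $args := .ExtraArgs}}
{{$component}}:
  extraArgs:
{{- range $k, $v := $args}}
    {{$k}}: "{{$v}}"
{{- end}}
{{- end}}
controllerManager: {}
`

func main() {
	extra := map[string]map[string]string{
		"controllerManager": {
			"experimental-cluster-signing-duration": "120s",
		},
	}
	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
	// The rendered document contains controllerManager twice; a lenient
	// YAML parser keeps only the trailing empty map.
	if err := t.Execute(os.Stdout, struct {
		ExtraArgs map[string]map[string]string
	}{extra}); err != nil {
		panic(err)
	}
}

Presumably the fix in #7030 collapses these into a single controllerManager section carrying the extraArgs.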

@fans3210

fans3210 commented Jul 28, 2020

Hi, I'm facing the same issue.

The command I'm using:

minikube start --driver=virtualbox --extra-config 'controller-manager.feature-gates=TTLAfterFinished=true'

Results:

😄  minikube v1.12.1 on Darwin 10.15.6
✨  Using the virtualbox driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🏃  Updating the running virtualbox "minikube" VM ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
    ▪ controller-manager.feature-gates=TTLAfterFinished=true
🔎  Verifying Kubernetes components...
🔎  Verifying ingress addon...
🌟  Enabled addons: default-storageclass, ingress, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

However, after describing the pod:

kubectl describe pod kube-controller-manager-minikube -n kube-system

--feature-gates is still not added, as shown:

Name:                 kube-controller-manager-minikube
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 minikube/192.168.99.101
Start Time:           Tue, 28 Jul 2020 20:19:27 +0800
Labels:               component=kube-controller-manager
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: ba963bc1bff8609dc4fc4d359349c120
                      kubernetes.io/config.mirror: ba963bc1bff8609dc4fc4d359349c120
                      kubernetes.io/config.seen: 2020-07-28T12:19:24.088691054Z
                      kubernetes.io/config.source: file
Status:               Running
IP:                   192.168.99.101
IPs:
  IP:           192.168.99.101
Controlled By:  Node/minikube
Containers:
  kube-controller-manager:
    Container ID:  docker://1e306f29af4a257155755595269ec4a5e02c2a9c7064e7bd631be76d0d949972
    Image:         k8s.gcr.io/kube-controller-manager:v1.18.3
    Image ID:      docker-pullable://k8s.gcr.io/kube-controller-manager@sha256:d62a4f41625e1631a2683cbdf1c9c9bd27f0b9c5d8d8202990236fc0d5ef1703
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-controller-manager
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      --client-ca-file=/var/lib/minikube/certs/ca.crt
      --cluster-name=mk
      --cluster-signing-cert-file=/var/lib/minikube/certs/ca.crt
      --cluster-signing-key-file=/var/lib/minikube/certs/ca.key
      --controllers=*,bootstrapsigner,tokencleaner
      --kubeconfig=/etc/kubernetes/controller-manager.conf
      --leader-elect=false
      --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
      --root-ca-file=/var/lib/minikube/certs/ca.crt
      --service-account-private-key-file=/var/lib/minikube/certs/sa.key
      --use-service-account-credentials=true
    State:          Running
      Started:      Tue, 28 Jul 2020 20:19:28 +0800
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:        200m
    Liveness:     http-get https://127.0.0.1:10257/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
    Environment:  <none>
    Mounts:
      /etc/kubernetes/controller-manager.conf from kubeconfig (ro)
      /etc/ssl/certs from ca-certs (ro)
      /usr/libexec/kubernetes/kubelet-plugins/volume/exec from flexvolume-dir (rw)
      /usr/share/ca-certificates from usr-share-ca-certificates (ro)
      /var/lib/minikube/certs from k8s-certs (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  ca-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:  DirectoryOrCreate
  flexvolume-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/libexec/kubernetes/kubelet-plugins/volume/exec
    HostPathType:  DirectoryOrCreate
  k8s-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/certs
    HostPathType:  DirectoryOrCreate
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/controller-manager.conf
    HostPathType:  FileOrCreate
  usr-share-ca-certificates:
    Type:          HostPath (bare host directory volume)
    Path:          /usr/share/ca-certificates
    HostPathType:  DirectoryOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute
Events:
  Type    Reason   Age   From               Message
  ----    ------   ----  ----               -------
  Normal  Pulled   24m   kubelet, minikube  Container image "k8s.gcr.io/kube-controller-manager:v1.18.3" already present on machine
  Normal  Created  24m   kubelet, minikube  Created container kube-controller-manager
  Normal  Started  24m   kubelet, minikube  Started container kube-controller-manager
