enable certificates controller by default #1647

Closed · mberhault opened this issue Jun 23, 2017 · 12 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@mberhault

minikube (v0.20.0) seems to be missing the ca.pem needed to start the certificates controller.

When running a plain minikube start, I see the following in the log:

Jun 23 17:49:54 minikube localkube[3561]: E0623 17:49:54.718440    3561 certificates.go:38] Failed to start certificate controller: open /etc/kubernetes/ca/ca.pem: no such file or directory
Jun 23 17:49:54 minikube localkube[3561]: W0623 17:49:54.718470    3561 controllermanager.go:434] Skipping "certificatesigningrequests"

Given that the certificates API is now beta, it would be nice to have this enabled by default.

For more context, please see kubernetes/kubernetes#47911 I filed against api-machinery.

I also can't find an easy way to enable the certificate manager on minikube. @mrick mentioned using --extra-config, but further searching didn't really tell me how. Pointers would be appreciated.

@mberhault (Author)

With a ca.crt/ca.key pair in the local directory, I started minikube as follows to enable the certificate controller:

$ minikube start --extra-config=controller-manager.ClusterSigningCertFile="$(pwd)/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="$(pwd)/ca.key"

I had to pass absolute paths; relative paths did not seem to work.

@mberhault (Author)

Now the problem is that the generated certificate is signed using the CA passed through --extra-config=controller-manager.ClusterSigningCertFile, but the ca.crt mounted at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt is the one generated by minikube.

This is the one I passed on startup:

$ openssl x509 -text -in ca.crt
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            53:05:bd:3a:ce:22:bb:ac:3a:1e:53:d6:6f:4d:9e:93
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O=Cockroach, CN=Cockroach CA
        Validity
            Not Before: Jun 22 23:14:07 2017 GMT
            Not After : Jul  1 23:14:07 2027 GMT
        Subject: O=Cockroach, CN=Cockroach CA
etc...

This is the certificate mounted on each pod.

$  kubectl exec  cockroachdb-0 cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | openssl x509 -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 1 (0x1)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=minikubeCA
        Validity
            Not Before: Jun 22 13:43:17 2017 GMT
            Not After : Jun 20 13:43:17 2027 GMT
        Subject: CN=minikubeCA
etc...

And here's the certificate obtained through the certificate API. You'll notice it's signed by the custom CA.

$ kubectl exec  cockroachdb-0 cat cockroach-certs/node.crt | openssl x509 -text
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            7a:75:46:f9:4c:a3:d8:2d:b5:36:d8:65:1b:e2:65:ff:ea:08:be:83
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O=Cockroach, CN=Cockroach CA
        Validity
            Not Before: Jun 24 20:03:00 2017 GMT
            Not After : Jun 24 20:03:00 2018 GMT
        Subject: O=Cockroach, CN=node
etc...

My understanding from reading the TLS certificates documentation is that the certificate controller is supposed to use the specified CA cert and key.
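
One way to confirm the mismatch is to compare CA fingerprints directly (a hedged sketch reusing the commands above; cockroachdb-0 is the example pod from this thread):

# Fingerprint of the CA passed via --extra-config:
$ openssl x509 -noout -fingerprint -in ca.crt

# Fingerprint of the CA mounted into the pod's service account secret:
$ kubectl exec cockroachdb-0 cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | openssl x509 -noout -fingerprint

If the two fingerprints differ, anything in the pod that trusts the mounted ca.crt will reject certificates issued by the custom CA.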

@aaron-prindle aaron-prindle added the kind/bug Categorizes issue or PR as related to a bug. label Jun 25, 2017
@mberhault (Author)

Another wrinkle: while the above happened when running on macOS, I'm getting different behavior on Linux.
Specifically, it seems the controller manager can't open the CA files.
I attempted:

# In the local directory:
$ minikube start --extra-config=controller-manager.ClusterSigningCertFile="$(pwd)/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="$(pwd)/ca.key"

# In the tmp directory with broad permissions (just to make sure it wasn't a permissions issue):
$ minikube start --extra-config=controller-manager.ClusterSigningCertFile="/tmp/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/tmp/ca.key"

# In the mounted volume (as seen in minikube start --help):
$ minikube start --extra-config=controller-manager.ClusterSigningCertFile="/minikube-host/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/minikube-host/ca.key"

# And finally with the mount option:
$ minikube start --mount --extra-config=controller-manager.ClusterSigningCertFile="/minikube-host/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/minikube-host/ca.key"

In all but the last case, I ended up with something like the following in minikube logs:

Jun 26 14:09:16 minikube localkube[3545]: E0626 14:09:16.673415    3545 certificates.go:38] Failed to start certificate controller: open /tmp/ca.crt: no such file or directory
Jun 26 14:09:16 minikube localkube[3545]: W0626 14:09:16.673462    3545 controllermanager.go:434] Skipping "certificatesigningrequests"

I'm guessing the default mount behavior differs between macOS and Linux.
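
A quick way to tell whether the path is visible inside the minikube VM at all (a hedged sketch; substitute whichever path was passed to --extra-config):

# Run from the host; lists the file as the VM sees it.
$ minikube ssh "ls -l /tmp/ca.crt"

If the file isn't there, the controller manager inside the VM cannot open it no matter what the host-side permissions are.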

Once this did work, I was back to the problem mentioned above:

$ kubectl exec cockroachdb-0 cat cockroach-certs/ca.crt|openssl x509 -text|grep -w Issuer
        Issuer: CN=minikubeCA
$ kubectl exec cockroachdb-0 cat cockroach-certs/node.crt|openssl x509 -text|grep -w Issuer
        Issuer: O=Cockroach, CN=Cockroach CA

@mberhault (Author)

mberhault commented Jun 26, 2017

OK, I finally got it working by not attempting to use my own certs. Instead, I set the signing cert/key paths to the ones created by minikube:

$ minikube start --extra-config=controller-manager.ClusterSigningCertFile="/var/lib/localkube/certs/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/var/lib/localkube/certs/ca.key"

Which yields:

Jun 26 17:53:40 minikube localkube[3717]: I0626 17:53:40.044887    3717 controllermanager.go:437] Started "certificatesigningrequests"
...
Jun 26 17:53:40 minikube localkube[3717]: I0626 17:53:40.046578    3717 certificate_controller.go:120] Starting certificate controller manager

And the certificates are properly generated once the CSRs are approved.
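
For completeness, the approval step looks roughly like this (hedged; <csr-name> is a placeholder for whatever CSR your workload created):

# List pending CSRs, then approve one; the controller then signs it with the
# configured ClusterSigningCertFile/ClusterSigningKeyFile.
$ kubectl get csr
$ kubectl certificate approve <csr-name>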

So to summarize the issues found here (well, some of them):

  • minikube should enable the certificate signing controller by default
  • there seems to be no way to tell minikube to use an existing CA key pair. The custom CA is used by the certificate signer, but it is not the CA cert placed in /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  • something about mount points on macOS vs Linux. I can check the behavior on Linux later.

Some docs on all this would be nice as well now that the certificates API is beta. This has been a lot of trial and error, made worse by the fact that I'm not familiar enough with the way minikube runs k8s.

@r2d4 (Contributor)

r2d4 commented Jun 29, 2017

We should enable this by default. I'm not sure why it wouldn't work with your certs, but I can debug further.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 30, 2017
@mildebrandt

I just hit this today...would be great to get it fixed.
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 3, 2018
@gandelman-a

Ditto!

@jar349

jar349 commented Mar 1, 2018

Also just ran into this trying to issue certs for components in my minikube (0.25.0). Thank you so much for posting back here with your findings!

edit:
Although I can see in my logs:

minikube localkube[3081]: I0301 20:52:47.241441 3081 certificate_controller.go:113] Starting certificate controller

my certs remain only Approved, not Approved,Issued.
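
A hedged way to check what the signer actually did (CSR names depend on your setup):

# CONDITION should read "Approved,Issued" once a certificate has been produced;
# describe shows the events if signing is being skipped.
$ kubectl get csr
$ kubectl describe csr <csr-name>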

next edit:
even though the certificate controller started, the controller manager skipped csrsigning for some reason?

$ mk logs | grep csrsigning
Mar 01 20:52:45 minikube localkube[3081]: W0301 20:52:45.379035    3081 controllermanager.go:490] Skipping "csrsigning"

final edit:
in order to get the controller manager to start the "csrsigning" controller, you have to stop and restart your minikube. At least, that's what worked for me.
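
In other words, roughly (a hedged sketch, reusing the flags from earlier in this thread):

$ minikube stop
$ minikube start --extra-config=controller-manager.ClusterSigningCertFile="/var/lib/localkube/certs/ca.crt" --extra-config=controller-manager.ClusterSigningKeyFile="/var/lib/localkube/certs/ca.key"
$ minikube logs | grep csrsigning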

@StevenACoffman

For those who find this later: "minikube delete" leaves some bits behind, which, if they are the source of your problem (like a bad cert), means you need to do some more aggressive cleaning:

minikube stop
eval $(minikube docker-env)
minikube delete
rm -r -f ~/.minikube

For very difficult stains, apply cleanser vigorously:

minikube stop; minikube delete
docker stop $(docker ps -aq)
rm -r ~/.kube ~/.minikube
sudo rm /usr/local/bin/localkube /usr/local/bin/minikube
systemctl stop '*kubelet*.mount'
sudo rm -rf /etc/kubernetes/
docker system prune -af --volumes

@m1o1

m1o1 commented Jul 19, 2018

Is this still an issue? It seems resolved for me in 0.28.1; the controller-manager starts and runs just fine without the additional --extra-config parameters.
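
A quick sanity check on a fresh cluster (hedged sketch):

$ minikube start
$ minikube logs | grep -i csrsigning
# Expect a "Started" line rather than "Skipping" if the signer is enabled out of the box.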

rstarmer added a commit to rstarmer/istio.io that referenced this issue Sep 29, 2018
Minikube does the right thing (as of 0.28.1 at least) with creating the embedded CA.  The extra-config parameters appear to have been necessary previously and were resolved to use the "right" credentials built by Minikube directly. In fact, passing those parameters appears to break current minikube deployments, making it impossible to create new service accounts and resources that rely on them. (like a tiller service account for a helm deployment of Istio...)

I found this bug that referenced this issue: kubernetes/minikube#1647 which is now closed.
@tstromberg (Contributor)

Closing open localkube issues, as localkube was long deprecated and removed from the last two minikube releases. I hope you were able to find another solution that worked out for you - if not, please open a new PR.

bobcatfish added a commit to bobcatfish/serving that referenced this issue Oct 16, 2018
The DEVELOPMENT.md in knative/serving referred to a doc on setting up a
kubernetes cluster (either in GKE or with minikube) which had fallen out
of date with very similar installation docs in knative/docs.

I ran into this when trying to figure out the correct scopes to use for
creating a cluster which could pass the knative/build-pipeline kaniko
integration test (tektoncd/pipeline#150)
and it turned out that the `--scopes` in the doc referenced in this
repo are different from the `--scopes` in the knative/docs repo. (I
worked around my problem by using `storage-full`, which isn't used in
either set of docs but that's a different story!)

The minikube docs that were in this repo also contained args for
specifying the location of the cluster CA certs, but I'm assuming this
is no longer needed since knative/docs doesn't have this and
kubernetes/minikube#1647 is resolved.
bobcatfish added a commit to bobcatfish/serving that referenced this issue Oct 16, 2018
geeknoid pushed a commit to istio/istio.io that referenced this issue Oct 22, 2018
bobcatfish added a commit to bobcatfish/serving that referenced this issue Nov 6, 2018
bobcatfish added a commit to bobcatfish/serving that referenced this issue Nov 6, 2018
knative-prow-robot pushed a commit to knative/serving that referenced this issue Nov 6, 2018
* Refer to knative/docs cluster setup instead of duplicating


* Fix paren and K8S_USER_OVERRIDE docs

`K8S_USER_OVERRIDE` is only needed to run the command to setup your
current user as a cluster admin, so it isn't needed as part of the
general environment setup.

Updated the DEVELOPMENT.md to explain that the values for this are
different if you're using a GKE cluster vs a minikube cluster.