enable certificates controller by default #1647
With a ca.crt/ca.key pair in the local directory, I started minikube as follows to enable the certificate controller:
I had to pass an absolute path; relative paths didn't seem to work.
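The original command isn't preserved above; as a sketch, wiring a custom signing CA into the localkube-era controller-manager presumably looked something like this (the `--extra-config` key names and the paths are assumptions, not verified flags):

```shell
# Hypothetical reconstruction: point the certificate controller at a custom CA.
# Key names follow the localkube-era extra-config style and are an assumption;
# absolute paths are required, as noted above.
minikube start \
  --extra-config=controller-manager.ClusterSigningCertFile=/Users/me/certs/ca.crt \
  --extra-config=controller-manager.ClusterSigningKeyFile=/Users/me/certs/ca.key
```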
Now the problem is that the generated certificate is signed using the CA passed through `--extra-config`. This is the one I passed on startup:
This is the certificate mounted on each pod.
And here's the certificate obtained through the certificate API. You'll notice it's signed by the custom CA.
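A quick way to check which CA actually signed a given certificate, as done above, is to compare the issuer on the served certificate with the subject of your CA. A self-contained example (file names are illustrative):

```shell
# Create a throwaway CA so the commands below have something to inspect.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=my-custom-ca" -keyout ca.key -out ca.crt

# Print the CA's subject (e.g. CN = my-custom-ca).
openssl x509 -in ca.crt -noout -subject

# For a certificate returned by the certificates API, print its issuer:
#   openssl x509 -in server.crt -noout -issuer
# If the issuer doesn't match your CA's subject, the controller signed
# with a different CA.
```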
My understanding from reading the TLS certs doc is that the certificate controller is supposed to use the specified CA cert and key.
Another wrinkle: while the above happened when running on OSX, I'm getting different behavior on Linux.
In all but the last case, I ended up with something like the following in
I'm guessing there's a difference in the default mount options on OSX. Once this did work, I was back to the problem mentioned above:
Ok, I finally got it working by not attempting to use my own certs. Instead, I set the signing cert/key paths to be the ones created by minikube:
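The working invocation isn't shown above; it presumably pointed the signing flags at minikube's own generated CA, along these lines (the key names and the localkube certificate paths are assumptions):

```shell
# Hypothetical: reuse the CA that minikube itself generated, rather than a
# custom one, so the certificate controller and the cluster agree on the CA.
minikube start \
  --extra-config=controller-manager.ClusterSigningCertFile=/var/lib/localkube/certs/ca.crt \
  --extra-config=controller-manager.ClusterSigningKeyFile=/var/lib/localkube/certs/ca.key
```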
Which yields:
And the certificates are properly generated once the CSRs are approved. So to summarize the issues found here (well, some of them):
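Approving the pending CSRs, as mentioned, goes through the certificates API; with kubectl that looks like the following (the CSR name is a placeholder):

```shell
# List certificate signing requests and approve one by name.
kubectl get csr
kubectl certificate approve my-csr-name   # "my-csr-name" is a placeholder

# The issued certificate is then available, base64-encoded, in the CSR's
# .status.certificate field, e.g.:
#   kubectl get csr my-csr-name -o jsonpath='{.status.certificate}'
```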
Some docs on all this would be nice as well, now that the certificates API is beta. This has been a lot of trial and error, made worse by the fact that I'm not familiar enough with how minikube runs Kubernetes.
We should enable this by default. I'm not sure why it wouldn't work with your certs, but I can debug further.
I just hit this today... would be great to get it fixed.
Ditto!
Also just ran into this trying to issue certs for components in my minikube (0.25.0). Thank you so much for posting back here with your findings!
For those who find this later: `minikube delete` leaves some bits behind; if those are the source of your problem (like a bad cert), you'll need to do some more aggressive cleaning:
For very difficult stains, apply cleanser vigorously:
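The actual cleanup commands aren't preserved above; the usual aggressive version (an assumption, not an official procedure) is to delete the VM and then remove minikube's local state by hand:

```shell
# Tear down the VM, then remove leftover local state (including cached certs).
# Warning: this wipes all local minikube configuration and cluster data.
minikube delete
rm -rf ~/.minikube

# If your kubeconfig only contains the minikube context, you can also clear it:
#   rm -rf ~/.kube
```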
Is this still an issue? It seems resolved for me in 0.28.1. The controller-manager starts and runs just fine without the additional `--extra-config` parameters.
Minikube does the right thing (as of 0.28.1 at least) in creating the embedded CA. The extra-config parameters appear to have been necessary previously; the issue was resolved so that minikube's own credentials are used directly. In fact, passing those parameters appears to break current minikube deployments, making it impossible to create new service accounts and resources that rely on them (like a tiller service account for a helm deployment of Istio). I found this bug that referenced this issue: kubernetes/minikube#1647, which is now closed.
Closing open localkube issues, as localkube was long deprecated and removed from the last two minikube releases. I hope you were able to find another solution that worked out for you; if not, please open a new PR.
The DEVELOPMENT.md in knative/serving referred to a doc on setting up a kubernetes cluster (either in GKE or with minikube) which had fallen out of date with very similar installation docs in knative/docs. I ran into this when trying to figure out the correct scopes to use for creating a cluster which could pass the knative/build-pipeline kaniko integration test (tektoncd/pipeline#150), and it turned out that the `--scopes` in the doc referenced in this repo are different from the `--scopes` in the knative/docs repo. (I worked around my problem by using `storage-full`, which isn't used in either set of docs, but that's a different story!) The minikube docs that were in this repo also contained args for specifying the location of the cluster CA certs, but I'm assuming this is no longer needed since knative/docs doesn't have this and kubernetes/minikube#1647 is resolved.
* Refer to knative/docs cluster setup instead of duplicating

* Fix paren and K8S_USER_OVERRIDE docs

`K8S_USER_OVERRIDE` is only needed to run the command to set up your current user as a cluster admin, so it isn't needed as part of the general environment setup. Updated the DEVELOPMENT.md to explain that the values for this are different if you're using a GKE cluster vs a minikube cluster.
minikube (minikube version: v0.20.0) seems to be missing the `ca.pem` needed to start the certificates controller. When running a plain `minikube start`, I see the following in the log:

Given that the certificates API is now beta, it would be nice to have.
For more context, please see kubernetes/kubernetes#47911 I filed against api-machinery.
I also can't find an easy way to enable the certificate manager on minikube. @mrick mentioned using `--extra-config`, but further searching didn't really tell me how. Pointers would be appreciated.
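For anyone else searching: `--extra-config` takes `component.flag=value` pairs that minikube forwards to the named Kubernetes component (apiserver, controller-manager, scheduler, kubelet, ...). A minimal illustrative example:

```shell
# Forward a flag to a component via --extra-config. The specific flag here
# (apiserver log verbosity) is only an illustration of the syntax.
minikube start --extra-config=apiserver.v=4
```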