Tell us about your request
I am looking into using Kubernetes ServiceAccounts for inter-service authentication between services in a Kubernetes cluster, using service account token projection. To do this, services will need to fetch the openid-configuration to validate the signed JWTs.
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
We are deploying our applications both to AWS and on-prem. If possible, we would like our configurations to be as similar as possible, and using Kubernetes has helped us a lot there.
The problem I see is that while I can call the default Kubernetes endpoint https://kubernetes.default.svc/.well-known/openid-configuration, the jwks_uri parameter is set to an internal hostname, e.g. https://ip-172-16-45-221.eu-west-1.compute.internal:443/openid/v1/jwks. This host is not part of our VPC, so I assume it is the internal address of the instance running the api-server for EKS, and naturally I cannot connect to it.
The actual URI for the keys in EKS is https://oidc.eks.eu-west-1.amazonaws.com/id/<cluster-id>/keys, which works.
Are you currently working around this issue?
Calling https://oidc.eks.eu-west-1.amazonaws.com/id/<cluster-id>/.well-known/openid-configuration works as expected, but it would be convenient if we could always rely on https://kubernetes.default.svc/.well-known/openid-configuration working within a cluster.
EDIT: OK, it seems like EKS does not actually serve all of its JWKS keys under /openid/v1/jwks; some are only found under https://oidc.eks.AWS_REGION.amazonaws.com/id/CLUSTER_ID/keys. I am really at a loss... It seems like this is another case where EKS needs specific attention from every downstream project.
At least you can automatically get the issuer URL from the cluster's /.well-known/openid-configuration and construct the real discovery URL or JWKS endpoint (so you don't have to figure out the CLUSTER_ID).
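For example, here is a minimal sketch of that URL construction (the issuer value below is a hypothetical placeholder; in practice it would come from the cluster's /.well-known/openid-configuration response):

```python
# Sketch: derive the public discovery and JWKS URLs from the issuer the
# cluster reports, so the CLUSTER_ID never has to be hardcoded.
# The issuer string here is an example value, not a real cluster's.
issuer = "https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLECLUSTERID"

# OIDC discovery documents live at a fixed path under the issuer,
# and EKS serves its full key set under <issuer>/keys.
discovery_url = f"{issuer}/.well-known/openid-configuration"
keys_url = f"{issuer}/keys"

print(discovery_url)
print(keys_url)
```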
You can actually get the JWKS keys on EKS by querying https://kubernetes.default.svc/openid/v1/jwks, like on all Kubernetes clusters. (NOTE: this endpoint cannot be accessed anonymously; you must pass an Authorization header with the JWT of any Kubernetes ServiceAccount.)
Alternatively, you can retrieve them with kubectl:
kubectl get --raw /openid/v1/jwks
# OUTPUT: {"keys":[...]}
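From inside a pod, the same request could be built roughly like this; it is only a sketch (the request is constructed but not sent here, since it requires a live cluster, and "example-token" stands in for the real projected token read from the standard in-pod mount path):

```python
# Sketch: build the authenticated request an in-cluster service would send
# to the API server's JWKS endpoint. The token path below is the standard
# projected ServiceAccount mount; "example-token" is a placeholder value.
import urllib.request

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
JWKS_URL = "https://kubernetes.default.svc/openid/v1/jwks"

def build_jwks_request(token: str) -> urllib.request.Request:
    # The endpoint rejects anonymous calls, so attach the ServiceAccount
    # JWT as a Bearer token.
    req = urllib.request.Request(JWKS_URL)
    req.add_header("Authorization", f"Bearer {token}")
    return req

# In a real pod you would read the token:  open(TOKEN_PATH).read()
req = build_jwks_request("example-token")
print(req.get_header("Authorization"))
```

Sending the request would also need the cluster CA bundle (mounted next to the token as ca.crt) to verify the API server's TLS certificate.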
You can even set up a proxy based around kubectl to allow anonymous access, like we are considering doing in Kubeflow (kubeflow/manifests#2850).
Now, there is still one big problem: if you are correctly validating a JWT to spec, the iss claim of the JWT needs to match the issuer configured in your system. Luckily, EKS is truthful about that on its /.well-known/openid-configuration endpoint, under the issuer field of the JSON, so you can use the same tricks as above to retrieve it.
For example, retrieving the issuer with kubectl:
kubectl get --raw /.well-known/openid-configuration | jq -r '.issuer'
# OUTPUT: https://oidc.eks.<REGION_NAME>.amazonaws.com/id/<CLUSTER_ID>
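The issuer check itself can be sketched as below; the token is a hand-built, unsigned illustration (a real validator would take the token from the request and verify its signature against the JWKS before trusting any claims):

```python
# Sketch: the iss check a validator must perform, on a fabricated token.
# The issuer value is a placeholder, and the "signature" segment is fake;
# this only demonstrates decoding the payload and comparing iss.
import base64
import json

expected_issuer = "https://oidc.eks.eu-west-1.amazonaws.com/id/EXAMPLECLUSTERID"

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a fake header.payload.signature token carrying only the claims
# relevant to this check.
payload = b64url(json.dumps(
    {"iss": expected_issuer, "sub": "system:serviceaccount:ns:sa"}
).encode())
token = f"{b64url(b'{}')}.{payload}.fakesignature"

# Decode the payload segment, re-adding the stripped base64url padding,
# then compare iss against the configured issuer.
seg = token.split(".")[1]
claims = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
print("issuer matches:", claims["iss"] == expected_issuer)
```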