Reissue kube certs when assuming access request #50553
Conversation
Force-pushed from 4f69915 to 2bad500.
```go
// KubeClusterFromKubeLocalProxySNI returns Kubernetes cluster name from SNI.
func KubeClusterFromKubeLocalProxySNI(serverName string) (string, error) {
	kubeCluster, _, _ := strings.Cut(serverName, ".")
```
Should we care about the Teleport cluster here? It's possible to have the same kube cluster name in two different Teleport clusters.
I'm reading the Teleport cluster with TeleportClusterFromKubeLocalProxySNI() before reading the kube cluster. Then m.certReissuer() is called with that Teleport + kube cluster pair. Do you mean something else?
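For context, a minimal sketch of how both names could be recovered from the SNI, assuming the local proxy encodes them as `<kube-cluster>.<teleport-cluster>` (inferred from the `strings.Cut` in the diff above; the helper below is illustrative, not Teleport's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// splitKubeLocalProxySNI is a hypothetical stand-in for the real helpers:
// it cuts the server name on the first dot, treating the prefix as the kube
// cluster and the remainder as the Teleport cluster.
func splitKubeLocalProxySNI(serverName string) (kubeCluster, teleportCluster string, err error) {
	kubeCluster, teleportCluster, found := strings.Cut(serverName, ".")
	if !found || kubeCluster == "" || teleportCluster == "" {
		return "", "", fmt.Errorf("invalid kube local proxy SNI %q", serverName)
	}
	return kubeCluster, teleportCluster, nil
}

func main() {
	// The same kube cluster name under two different Teleport clusters
	// yields two distinct pairs, which is why certReissuer takes both.
	for _, sni := range []string{"payments.teleport-a.example.com", "payments.teleport-b.example.com"} {
		kube, teleport, err := splitKubeLocalProxySNI(sni)
		if err != nil {
			panic(err)
		}
		fmt.Printf("kube=%s teleport=%s\n", kube, teleport)
	}
}
```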
```go
	defer s.mu.RUnlock()
	for _, gw := range s.gateways {
		targetURI := gw.TargetURI()
		if !(targetURI.IsKube() && targetURI.GetRootClusterURI() == cluster.URI) {
```
Is this correct? A gateway from a leaf cluster can be modified when you assume a root cluster role.
I think so. As Grzegorz wrote in the PR description:

> There's one downside to this approach: we can't determine which local proxies will be affected by the access request, so we must invalidate all of them.

If you assume a root cluster role, AFAIK we cannot easily tell whether it affects leaf cluster resources.
Yeah, Rafał's explanation is correct.
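For anyone skimming the thread, here's why the root-cluster match in the diff above also catches leaf gateways: in teleterm, a leaf cluster resource URI nests under its root cluster's URI. A minimal sketch, assuming a `/clusters/<root>/leaves/<leaf>/kubes/<name>` shape (the helper below is illustrative, not the real uri package):

```go
package main

import (
	"fmt"
	"strings"
)

// rootClusterURI is a hypothetical stand-in for TargetURI().GetRootClusterURI():
// it keeps only the /clusters/<root> prefix of a resource URI.
func rootClusterURI(resourceURI string) string {
	parts := strings.SplitN(strings.TrimPrefix(resourceURI, "/"), "/", 3)
	if len(parts) < 2 || parts[0] != "clusters" {
		return ""
	}
	return "/clusters/" + parts[1]
}

func main() {
	clusterURI := "/clusters/root"
	gateways := []string{
		"/clusters/root/kubes/payments",             // kube in the root cluster
		"/clusters/root/leaves/leaf1/kubes/billing", // kube in a leaf cluster
		"/clusters/other/kubes/payments",            // unrelated root cluster
	}
	for _, gw := range gateways {
		// Both root and leaf gateways of "root" match, mirroring why
		// assuming a root cluster role invalidates leaf gateways as well.
		fmt.Printf("%-45s matches=%v\n", gw, rootClusterURI(gw) == clusterURI)
	}
}
```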
ravicious left a comment:
Looks good, but I'll wait until you address Tiago's feedback.
Commits:

* Reissue kube certs when assuming access request
* Fix and add tests
* Explain when the cert can be not found
* Replace `clearCertsFn` with `middleware` field
* Improve iterating over gateways
* Improve a comment

(cherry picked from commit e4e09a1; the backports additionally adjusted logs to logrus and fixed lint)
Closes https://github.com/gravitational/customer-sensitive-requests/issues/301
Currently, assuming or dropping an access request does not affect open kube local proxies. As a result, if an assumed request includes elevated permissions, the user must close the existing local proxy and reopen it for the changes to take effect. To address this, we can clear the certificates held by the local proxy, so they are reissued on the next request that goes through it.
There's one downside to this approach: we can't determine which local proxies will be affected by the access request, so we must invalidate all of them. As a result, the user must perform per-session MFA again for all open kube proxies after assuming or dropping a request. Additionally, if the user has an open kube proxy and then assumes a request that doesn't allow access to that cluster, that proxy will become unusable. I'm not sure this is really an issue, though, as it seems reasonable to expect that previous access might not be retained after assuming a request.
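A minimal sketch of that invalidation pass, with hypothetical `gateway` and `service` types standing in for Connect's actual ones (only the kube/root-cluster filter is taken from the diff above; everything else is an assumption):

```go
package main

import (
	"fmt"
	"sync"
)

// gateway is a hypothetical stand-in for a Connect local proxy gateway.
type gateway struct {
	targetURI     string // e.g. /clusters/root/kubes/payments
	rootCluster   string // e.g. /clusters/root
	isKube        bool
	hasValidCerts bool
}

// clearCerts drops the cached certs so they are reissued (with per-session
// MFA, if required) on the next request through the proxy.
func (g *gateway) clearCerts() { g.hasValidCerts = false }

type service struct {
	mu       sync.RWMutex // guards the gateways slice, mirroring the RLock in the diff
	gateways []*gateway
}

// clearKubeCerts invalidates every kube gateway rooted at clusterURI,
// since we can't tell which of them the access request affects.
func (s *service) clearKubeCerts(clusterURI string) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	for _, gw := range s.gateways {
		if !(gw.isKube && gw.rootCluster == clusterURI) {
			continue
		}
		gw.clearCerts()
	}
}

func main() {
	s := &service{gateways: []*gateway{
		{targetURI: "/clusters/root/kubes/payments", rootCluster: "/clusters/root", isKube: true, hasValidCerts: true},
		{targetURI: "/clusters/other/kubes/ci", rootCluster: "/clusters/other", isKube: true, hasValidCerts: true},
	}}
	s.clearKubeCerts("/clusters/root") // e.g. after assuming or dropping a request
	for _, gw := range s.gateways {
		fmt.Printf("%s valid=%v\n", gw.targetURI, gw.hasValidCerts)
	}
}
```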
changelog: Assuming an access request in Teleport Connect now propagates elevated permissions to already opened Kubernetes tabs
Demo: kube.assume.request.mov
For now I'm keeping this in draft (I need to add some tests), but any feedback is welcome!