Add support for Kubernetes streaming subprotocol v5 within Teleport #39337
Conversation
We can't use this websocket executor, because if the kubernetes_service doesn't support protocol v5, it will fall back to websocket v4, which doesn't support termination signals.
I see interest in dropping the SPDY executor between Teleport components so we can say goodbye to the legacy and unmaintained SPDY protocol. But in order to do that, we need to ensure the other Teleport component we are dialing supports protocol v5; otherwise kubectl will be mostly broken when stdin terminates.
You can either:
- check the version of the kubernetes_service and, if it is above some version, allow the websocket executor because we know it supports it;
- implement our own fallback if the server doesn't support v5;
- delay this until Teleport v16.
I think the Teleport v16 approach is the sanest for now.
Just to be clear: we can still forward the requests internally using SPDY while the users request websockets. When I wrote the WS implementation, I had that in mind, and Teleport accepts connections over both SPDY and websocket but always initiates requests as SPDY between components and even to the kube cluster.
The only part that needs to be delayed is making the internal/kube requests use websockets. We can still support websocket v5 on the user-facing side.
Nothing else needs to be changed.
Yep, I know. I'm looking into the fallback approach, and generally it works great, but there's a problem with whole-session context cancellation by the trackingConnection (s.connMonitorCancel); it was added to fix a flaky test, so maybe there's a way around it.
I've implemented the fallback approach and tested it in all configurations and permutations: proxying to a v5-capable kube cluster, to our v5-capable kube_service, and to a legacy service, with both tty and tty-less execs. Everything seems to be working fine, so we don't have to check whether the target kube/kube_service supports the v5 protocol, and we can merge this and backport it to at least v15.
As per discussion with Tiago, I've split this PR into two parts. The first (now PR #39755) adds support for receiving incoming connections (e.g. from kubectl) over websocket subprotocol v5, and it will be backported to all supported Teleport versions. It's a safe change, since we only receive subprotocol v5 but then proxy requests using the old, stable SPDY implementation. This PR adds making the proxied requests (to Teleport kube_services and target Kubernetes clusters) use the v5 subprotocol, and we'll leave it for v16.
The kube service should never forward a request to the kube API as websocket without properly ensuring it supports websocket v5.
```go
cfg := &rest.Config{
	// WrapTransport will replace default roundTripper created for the WebsocketExecutor
	// and on successfully established connection we will set upgrader's websocket connection.
	WrapTransport: func(baseRt http.RoundTripper) http.RoundTripper {
```
Are we supposed to do this?
What's the reasoning behind this wrap?
How many requests do we make using this wrapper? Doesn't the default round tripper built by remotecommand.NewWebSocketExecutor already make a request?
We need to use our custom dialer to be able to reach the kube service over the reverse tunnel; the default round tripper won't be able to dial. We do the same thing for the SPDY executor, where we provide our custom dialer:

The only difference is that for SPDY they provide NewSPDYExecutorForTransports, where we can set our custom transport directly, but for the websocket executor they didn't expose this API, so we replace the transport in the wrap to achieve the same effect.
```go
}

func (f *Forwarder) getExecutor(sess *clusterSession, req *http.Request) (remotecommand.Executor, error) {
	spdyExec, err := f.getSPDYExecutor(sess, req)
```
Can we use the default executor if the agent version is >= the supported version?
Check 07514ea#diff-d003554d2ebabbc29e7d8188865c07e1e968baf83d650eca34fdd57d6b4808ae to gather more details on how to check the kube service version.
It's just easier to do a single request, to reduce latency and resource usage. We can delete the new code in Teleport 17. We should do the same for kube clusters <=1.29.
I've added version checking instead of the fallback executor: 7436a53
```go
}

func (f *Forwarder) getExecutor(sess *clusterSession, req *http.Request) (remotecommand.Executor, error) {
	if details, ok := f.clusterDetails[sess.kubeClusterName]; ok &&
```
Kube details (f.clusterDetails) must be protected with the lock.
```go
}

func (f *Forwarder) getExecutor(sess *clusterSession, req *http.Request) (remotecommand.Executor, error) {
	if details, ok := f.clusterDetails[sess.kubeClusterName]; ok &&
		kubernetesSupportsExecSubprotocolV5(details.kubeClusterVersion) && f.allServersSupportExecSubprotocolV5(sess) {
```
We don't need to enforce both properties.
If the service is a kube_service or a legacy proxy serving the kube cluster directly, you should check details.KubeClusterVersion; if it's a proxy or a legacy proxy not serving the kube cluster, you check allServersSupportExecSubprotocolV5.
We can use the new protocol between Teleport components and rely on SPDY for the final forwarding to the kube cluster.
PTAL @tigrato
Now we don't use the fallback executor, so we will not make a possible second connection during execution.
This PR adds support for the Kubernetes streaming subprotocol v5 when making calls within Teleport (and to target Kubernetes clusters). The Kubernetes team is deprecating the SPDY protocol for streaming commands (exec, attach, port forward) and adding the websocket-based subprotocol v5. They released it as alpha in Kubernetes 1.29, and it will be the default in 1.30. Port forwarding's migration to websocket was delayed; currently its alpha is planned for 1.30.