pod unable to establish connections to k8s api-server etc on port 6443 with Cilium #3202
Did you run into this in a GKE-based cluster using Cilium via GCP's dataplane v2, or was this a cluster set up in another way?

Ok, so I do not think that it is a GKE-based cluster. Sorry, I am not very familiar with the cluster, but what I found is that the runtime engine is containerd://1.6.15-k3s1 and Cilium is configured.

Ah, it's a k3s-based cluster. Then I think the main issue is that network policies are enforced at all (Cilium, Calico), and that access to the k8s internals is restricted there but not in other clusters.

@Ph0tonic the existing core network policy takes care of kube-apiserver egress on GKE. I have been testing JupyterHub on GKE Autopilot for a few weeks now and do not see any other issues so far. You can check the details in my post, note the K8sAPIServer.

I have not installed and tested k3s, but I think changing the server port to 443 should resolve this issue without any additional policy. I am including the reference link below:
[1] https://kubernetes.io/docs/concepts/security/controlling-access/#transport-security
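If the cluster is k3s, the API server's listen port can apparently be set with the k3s server's `--https-listen-port` option or its config-file equivalent; a minimal sketch, assuming that flag name and the default config path apply to your k3s version:

```yaml
# /etc/rancher/k3s/config.yaml  (path and key name assumed; check your k3s docs)
# Listen on 443 instead of the default 6443 so that, per the suggestion above,
# no additional egress policy should be needed.
https-listen-port: 443
```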
Thanks @vizeit, I will have a look at these configurations and see if it fixes my problem.

So I looked at your link, and the difference between 443 and 6443 was not really clear to me. I see 2 possibilities:

@Ph0tonic Were you able to test with port 443 to confirm that it works with the existing core network policy?
I can reproduce this problem with Cilium on a bare-metal cluster. Disabling the […] Access to the API server from pods inside the cluster goes through […] The […]
Hi, the solution which works for me is the following config:

```yaml
hub:
  networkPolicy:
    egress:
      - ports:
          - port: 6443
```
Trying to clarify this:

The following egress rule mentioned by @Ph0tonic works, but it allows connections to any host on port 6443, not only the Kubernetes API:

```yaml
hub:
  networkPolicy:
    egress:
      - ports:
          - port: 6443
```

Alternatively, a CiliumNetworkPolicy can be used to allow traffic specifically from the hub pod to the kube-apiserver:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-access-to-api-server
  namespace: jupyterhub-test
spec:
  egress:
    - toEntities:
        - kube-apiserver
  endpointSelector:
    matchLabels:
      app: jupyterhub
      component: hub
```

Also note that the same policy should also be added for the […]
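As a possible middle ground for clusters where a Cilium-specific CRD is not desired, the extra egress rule could be scoped to the API server's endpoint address with a standard ipBlock instead of opening port 6443 to every destination. This is only a sketch: 10.0.0.1/32 is a placeholder for your cluster's API server endpoint, and whether the policy matches the pre- or post-DNAT address can depend on the CNI.

```yaml
hub:
  networkPolicy:
    egress:
      - to:
          - ipBlock:
              cidr: 10.0.0.1/32   # placeholder: your API server endpoint IP
        ports:
          - port: 6443
```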
Bug description
The default KubeSpawner is not able to spawn any user pod; it fails while attempting to create the PVC with a `TimeoutError`.
Expected behaviour
Should be able to spawn pods.
Analysis
After some research, I identified that my problem was linked with the `netpol` egress config of the hub. Here are a few Cilium logs of dropped packets: […]
The dropped destination addresses belonged to `kube-apiserver`, `kube-proxy` and `kube-controller-manager`.
To fix the issue, I identified that the problem lay in the egress and not in the ingress part, and managed to find a fix: the `hub.networkPolicy.egress` rule for port 6443 shown in the comments above.
The issue is that the hub tries to access the `kube-apiserver` to generate a PVC, but the request is blocked by the egress configuration. I am surprised that @vizeit did not have this issue in #3167.
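One quick way to confirm that the hub's NetworkPolicy is the culprit is to disable it temporarily and retry a spawn; a diagnostic-only sketch (not a recommended fix), using the chart's `hub.networkPolicy.enabled` flag:

```yaml
# Temporary diagnostic override: if spawning works with the hub NetworkPolicy
# disabled, the TimeoutError is caused by egress filtering. Re-enable afterwards.
hub:
  networkPolicy:
    enabled: false
```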
Your personal set up
I am using the latest v3.0.0 version of this Helm chart with Cilium.
Full environment
Configuration
# jupyterhub_config.py
Logs