Add proper PSPs to enforce security and safety for Kubeflow on Kubernetes #2014
Comments
Hi @juliusvonkohout, thank you.
There is not "one" PSP. Please read the whole Kubernetes documentation on PSPs first. You need to understand Kubernetes before altering Kubeflow. If your company is interested in a managed Kubeflow, contact me (T-Systems) or Arrikto for a managed offer.
Hi @juliusvonkohout,
I still have a few issues. I think I am having a problem installing the pods for the cache-server.
There are also issues with admitting pods in the user namespace due to the PSP.
Can you please give me some pointers on how to resolve this? Thank you.
@sunnythepatel If you had investigated the cache-server issue yourself, you would have found out that it is fixed upstream in 1.4 and that there are instructions on how to build a version for 1.3.1: kubeflow/pipelines#5742. I am using a patched 1.5.1 image myself with Kubeflow 1.3.1.
"Also, there are issues with pod admitting for User Namespace due to PSP Why did you deliberately omit "- PSP_SCC_clusterrole" from my instructions? If you do not add the PSP to all user namespaces using the clusterrole it will obviously not work. |
Sorry, I completely missed that; it works now. Thank you.
Please check everything and confirm whether it works. Then we might be able to persuade the manifests working group to get this upstream.
Hi @juliusvonkohout, thank you.
The instruction is the pull request itself. If you are incapable of building an OCI image, use mtr.external.otc.telekomcloud.com/ml-pipeline/cache-deployer:1.5.1.
Yes, I can confirm that it works now.
I am now trying to fix a few remaining pod issues.
The instruction is the pull request itself. If you are incapable of building an OCI image, use mtr.external.otc.telekomcloud.com/ml-pipeline/cache-deployer:1.5.1. For katib-mysql you have to set the fsGroup to the actual user; that is a bug in the mysql image.
Hi @juliusvonkohout,
I think the issue is related to kubeflow/pipelines#4505, but I am not able to understand the solution. I am using Kubernetes v1.20.8.
Thanks, @juliusvonkohout. For katib-mysql, setting the securityContext below works.
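The exact values depend on the user the mysql image runs as; a minimal sketch of such a securityContext, assuming UID/GID 999 (the number is an assumption, not taken from this thread):

```yaml
# Sketch only: fsGroup (and optionally runAsUser/runAsGroup) must match the
# user that the mysql image actually runs as; 999 is an assumed value.
spec:
  template:
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        fsGroup: 999
```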
Hi @juliusvonkohout, thanks to you I was able to fix almost all of the issues; I fixed the mpi-operator issue as well.
I just need to fix the cache-deployer-deployment and cache-server issues.
I think the issue is related to kubeflow/pipelines#4505, but I am not able to understand the solution. I am using Kubernetes v1.20.8.
Alright, caching v1 is broken by design in my opinion; just disable it. It works on my Kubernetes 1.20 cluster but has other limitations. Bobgy has already proposed caching v2.
Since another user was able to run everything without root rights, should I proceed by creating a pull request?
Then we could integrate it into the testing pipelines and evaluate it for some time while the old insecure example is still available. What do you think, @Bobgy @yanniszark @davidspek, and maybe @elikatsis @kimwnasptd? What do you think, @manifests-wg?
Thank you for your time on this effort, @juliusvonkohout! Some initial questions I have:
Then there's also the discussion around the deprecation of PodSecurityPolicies.
Run as non-root and block all capabilities, as described here: https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted. This is achievable with istio-cni, which does not need NET_ADMIN and NET_RAW: https://istio.io/latest/docs/ops/deployment/requirements/#pod-requirements. Istio-cni has an init-container limitation that you can work around with a simple pod annotation: https://discuss.istio.io/t/istio-cni-drops-initcontainers-outgoing-traffic/2311. I tested that with KFServing and Seldon (annotation: traffic.sidecar.istio.io/excludeOutboundIPRanges: "0.0.0.0/0"); we might be able to set this on a namespace level. In the long term I would even consider enforcing readOnlyRootFilesystem and using an emptyDir or PVC for things like https://github.com/kubeflow/pipelines/blob/ef6e01c90c2c88606a0ad56d848ecc98609410c3/backend/src/cache/deployer/deploy-cache-service.sh#L39, but this is not essential at the moment and, as far as I know, not even enforced by the restricted profile.
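For illustration, a minimal sketch of how that annotation could be attached to a workload's pod template (the Deployment name, labels, and image are placeholders, not taken from the Kubeflow manifests):

```yaml
# Sketch: the istio-cni workaround annotation goes on the pod template so
# that init containers keep outgoing network access.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-model-server            # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-model-server
  template:
    metadata:
      labels:
        app: example-model-server
      annotations:
        traffic.sidecar.istio.io/excludeOutboundIPRanges: "0.0.0.0/0"
    spec:
      containers:
        - name: main
          image: registry.example.com/model-server:latest   # placeholder image
```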
ALL namespaces, including profile namespaces, kubeflow, auth, istio-system, knative-serving, knative-eventing, etc.
Openshift needs SecurityContextConstraints. They have a slightly different syntax and are more annoying and ugly than PodSecurityPolicies or Pod Security Standards. We can support both at the same time.
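For comparison, a minimal sketch of what a roughly equivalent SecurityContextConstraints object might look like (the name and exact field values are assumptions, modelled on OpenShift's built-in restricted SCC):

```yaml
# Sketch only: a restricted-style SCC; tighten or relax fields as needed.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: kubeflow-restricted            # assumed name
allowPrivilegedContainer: false
allowPrivilegeEscalation: false
allowHostNetwork: false
allowHostPID: false
allowHostIPC: false
requiredDropCapabilities: ["ALL"]
runAsUser:
  type: MustRunAsNonRoot
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - persistentVolumeClaim
  - projected
  - secret
```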
This actually does not matter much. We use a PodSecurityPolicy that is equivalent to the Pod Security Standards restricted profile (https://kubernetes.io/docs/concepts/security/pod-security-standards/#restricted). If PodSecurityPolicies are deprecated, we just have to flip a switch. I would also like to get this into the official build and testing environments, so that security issues get detected in the CI/CD pipelines for merge requests.
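As an illustration of that switch, on Kubernetes 1.23+ the same restricted profile can be enforced per namespace with Pod Security Admission labels instead of a PSP; a minimal sketch (the namespace name is a placeholder):

```yaml
# Sketch: enforce the Pod Security Standards "restricted" profile on a
# namespace via Pod Security Admission, as a successor to the PSP.
apiVersion: v1
kind: Namespace
metadata:
  name: my-profile                      # placeholder namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```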
@kimwnasptd I will work on it with cloudflare in #2455
/reopen
@juliusvonkohout: Reopened this issue.
Closed in favor of #2528.
Related to #1756 @yanniszark @davidspek
and #1984 @sunnythepatel
Currently there are no PodSecurityPolicies or SecurityContextConstraints to enforce security within Kubeflow.
I would like to change that and put the necessary energy into pull requests.
I have been using the following on my cluster for months to run everything as non-root, including a rootless istio-cni.
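To give an idea of the direction, a minimal sketch of a restricted-style PodSecurityPolicy along these lines (the name and exact values are illustrative, not the attached manifests):

```yaml
# Sketch only: a restricted PodSecurityPolicy in the spirit of the
# Pod Security Standards "restricted" profile.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: kubeflow-restricted            # illustrative name
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  hostNetwork: false
  hostIPC: false
  hostPID: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  fsGroup:
    rule: MustRunAs
    ranges: [{min: 1, max: 65535}]
  volumes:
    - configMap
    - emptyDir
    - projected
    - secret
    - downwardAPI
    - persistentVolumeClaim
```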
It also works for pipelines with k8sapi or the new emissary executor kubeflow/pipelines#5718 @Bobgy
I need your feedback on the following solution. If you are satisfied, I will create a pull request.
In the main kustomization.yaml:
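The actual manifests are in the attached archives below; a minimal sketch of how the PSP/SCC add-on might be wired in (the path is an assumption):

```yaml
# Sketch only: reference the PSP/SCC manifests as an additional resource
# entry in the top-level kustomization.yaml; the path is assumed.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # ...existing Kubeflow components...
  - addons/psp_scc        # assumed directory with PSP/SCC and RBAC manifests
```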
kustomize_istio.zip
kustomize_addons_psp_scc.zip