Deploy csi-addons sidecar with pod network #253
black-dragon74 wants to merge 5 commits into ceph:main
Conversation
Force-pushed from 2da1778 to e4075eb
This patch introduces a new DaemonSet for csi-addons operations that uses pod network instead of the host networking. Signed-off-by: Niraj Yadav <niryadav@redhat.com>
This patch modifies the provisioner deployments to use hostpath for CSI sockets. This is done to (later) enable CSI Addons sidecar to run as a separate deployment. Signed-off-by: Niraj Yadav <niryadav@redhat.com>
This patch deploys csi addons sidecar container into its own deployment that shares the hostpath for UDS with the CSI provisioner. This model enables CSI Addons operation without its dependence on host networking. Signed-off-by: Niraj Yadav <niryadav@redhat.com>
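As a rough illustration of the socket layout these commits set up, here is a sketch of how the node-plugin and controller-plugin socket paths differ, based on the paths discussed later in this review (the helper function is hypothetical):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// socketPaths builds the host paths for the CSI sockets. The node-plugin
// DaemonSet keeps its socket at the plugin root, while the provisioner
// Deployment (and the csi-addons sidecar sharing its hostPath) uses a
// "provisioner" subdirectory, per the layout discussed in this PR.
func socketPaths(kubeletDir, driverName string) (nodeSock, ctrlSock string) {
	pluginDir := filepath.Join(kubeletDir, "plugins", driverName)
	nodeSock = filepath.Join(pluginDir, "csi.sock")
	ctrlSock = filepath.Join(pluginDir, "provisioner", "csi.sock")
	return nodeSock, ctrlSock
}

func main() {
	node, ctrl := socketPaths("/var/lib/kubelet", "rook-ceph.rbd.csi.ceph.com")
	fmt.Println(node)
	fmt.Println(ctrl)
}
```

Because the addons Deployment mounts the same hostPath directory as the provisioner, the two containers can talk over the UDS without sharing the host network.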
Force-pushed from e4075eb to 5206417
Deploying fails because these images cannot be pulled.
These versions come from the ceph-csi-operator main branch, which was merged with #252 yesterday. Not sure how it passed CI if those images are not available yet; it requires a PR to promote the versions before the image can be pulled.
// Reconcile daemonset and deployment for CSI Addons
if ptr.Deref(r.driver.Spec.DeployCsiAddons, false) {
	// CSI Addons deployment
Update the comment to mention that this applies only to CephFS and RBD. I would suggest using isRbdDriver and isCephFSDriver to avoid future problems when we add a new CSI driver.
	}
}
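The snippet above gates CSI Addons reconciliation on the DeployCsiAddons flag; a guard restricted to the RBD and CephFS drivers, as the review suggests, could look like this sketch (the helper names come from the comment; the suffix matching is an assumption about how the operator identifies driver types):

```go
package main

import (
	"fmt"
	"strings"
)

// driverReconcile is a stand-in for the reconciler; only the driver name
// matters for this sketch.
type driverReconcile struct {
	driverName string
}

// isRbdDriver and isCephFSDriver match on the conventional driver-name
// suffixes (an assumption; the operator may key off another field).
func (r *driverReconcile) isRbdDriver() bool {
	return strings.HasSuffix(r.driverName, "rbd.csi.ceph.com")
}

func (r *driverReconcile) isCephFSDriver() bool {
	return strings.HasSuffix(r.driverName, "cephfs.csi.ceph.com")
}

// shouldDeployCsiAddons deploys CSI Addons only for the RBD and CephFS
// drivers, so a future driver type is excluded by default.
func (r *driverReconcile) shouldDeployCsiAddons(deployCsiAddons bool) bool {
	return deployCsiAddons && (r.isRbdDriver() || r.isCephFSDriver())
}

func main() {
	r := &driverReconcile{driverName: "rook-ceph.rbd.csi.ceph.com"}
	fmt.Println(r.shouldDeployCsiAddons(true))
}
```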
func (r *driverReconcile) reconcileCsiAddonsDeployment() error {
If I set the flag to false, I don't see the code where we remove the Deployment and DaemonSet.
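A minimal sketch of the cleanup this comment asks for: when the flag is disabled, the reconciler deletes the previously created objects instead of leaving them behind (the client interface, fake client, and object names here are illustrative stand-ins, not the operator's actual API):

```go
package main

import "fmt"

// objectClient abstracts the API operations this sketch needs; a real
// reconciler would use the controller-runtime client instead.
type objectClient interface {
	Apply(name string) error
	Delete(name string) error
}

// reconcileCsiAddons creates the addons Deployment and DaemonSet when the
// flag is enabled and deletes them when it is disabled, so flipping the
// flag off does not leave stale objects behind.
func reconcileCsiAddons(c objectClient, deployCsiAddons bool) error {
	for _, name := range []string{"csi-addons-deployment", "csi-addons-daemonset"} {
		var err error
		if deployCsiAddons {
			err = c.Apply(name)
		} else {
			err = c.Delete(name)
		}
		if err != nil {
			return err
		}
	}
	return nil
}

// fakeClient records operations so the behavior can be inspected.
type fakeClient struct{ ops []string }

func (f *fakeClient) Apply(name string) error  { f.ops = append(f.ops, "apply "+name); return nil }
func (f *fakeClient) Delete(name string) error { f.ops = append(f.ops, "delete "+name); return nil }

func main() {
	c := &fakeClient{}
	_ = reconcileCsiAddons(c, false)
	fmt.Println(c.ops)
}
```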
Name: pluginDirHostVolumeName,
VolumeSource: corev1.VolumeSource{
	HostPath: &corev1.HostPathVolumeSource{
		Path: fmt.Sprintf("%s/plugins/%s/provisioner", kubeletDirPath, driverNamePrefix),
Will there be any problem when both the Deployment and the DaemonSet run on the same node? Are we using different socket names?
If not, let's use a different name: remove provisioner from the path and use csiaddons.
Yes, we are using different sockets based on the driver type, e.g.:
For RBD:
DS: /var/lib/kubelet/plugins/rook-ceph.rbd.csi.ceph.com/csi.sock
Deploy: /var/lib/kubelet/plugins/rook-ceph.rbd.csi.ceph.com/provisioner/csi.sock
For CephFS:
Deploy: /var/lib/kubelet/plugins/rook-ceph.cephfs.csi.ceph.com/provisioner/csi.sock
CSIAddons doesn't make much sense as a name here?
Let's rename it to ctrl-plugin; we can use this for other future sockets as well. For the DS, can you please make it node-plugin?
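Under the naming suggested here, the socket paths would look roughly like the following sketch (the exact directory names and the helper functions are assumptions drawn from this thread; in particular, a node-plugin subdirectory for the DaemonSet is inferred from the suggestion, not confirmed by the diff):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// ctrlPluginSocket is the proposed controller-side socket path: the
// Deployment uses a generic "ctrl-plugin" directory instead of the
// provisioner-specific "provisioner" one, so future sockets can reuse it.
func ctrlPluginSocket(kubeletDir, driverName string) string {
	return filepath.Join(kubeletDir, "plugins", driverName, "ctrl-plugin", "csi.sock")
}

// nodePluginSocket is the proposed DaemonSet socket path under a
// matching "node-plugin" directory.
func nodePluginSocket(kubeletDir, driverName string) string {
	return filepath.Join(kubeletDir, "plugins", driverName, "node-plugin", "csi.sock")
}

func main() {
	fmt.Println(ctrlPluginSocket("/var/lib/kubelet", "rook-ceph.cephfs.csi.ceph.com"))
	fmt.Println(nodePluginSocket("/var/lib/kubelet", "rook-ceph.cephfs.csi.ceph.com"))
}
```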
	},
}
// Liveness Sidecar Container
if r.driver.Spec.Liveness != nil {
We don't need this container; can we remove it?
Force-pushed from 5290204 to e549046
Madhu-1 left a comment:
Can you please share the results from multiple test runs to ensure we are good?
Spec: corev1.PodSpec{
	ServiceAccountName: serviceAccountName,
	PriorityClassName: ptr.Deref(pluginSpec.PrioritylClassName, ""),
	HostPID: r.isRdbDriver(),
This is just ported over from the existing code as-is. Pretty sure it was there for a reason in the case of RBD.
We should remove it. It was required for RBD since it does many operations like map; we don't require it for simple csi-addons.
Name: pluginDirHostVolumeName,
VolumeSource: corev1.VolumeSource{
	HostPath: &corev1.HostPathVolumeSource{
		Path: fmt.Sprintf("%s/plugins/%s/provisioner", kubeletDirPath, driverNamePrefix),
Let's rename it to ctrl-plugin; we can use this for other future sockets as well. For the DS, can you please make it node-plugin?
Image: r.images["addons"],
ImagePullPolicy: imagePullPolicy,
SecurityContext: &corev1.SecurityContext{
	Privileged: ptr.To(true),
Does this need to be privileged? Can you please verify for the other containers as well; we need to keep permissions to a minimum.
Yes, it is required in order to access the UDS created by the privileged CSI provisioner container. Not having this will cause issues on systems with SELinux enforcing.
Sounds good :+1: Can you please add a comment to it?
Force-pushed from e549046 to 64f5765
Signed-off-by: Niraj Yadav <niryadav@redhat.com>
Force-pushed from 64f5765 to 0f8b9e3
This patch adds functionality to delete CSI Addons pods when their matching CSI ctrlplugin pod is deleted by the user. This is done so that the kube-scheduler can re-evaluate pod placement, which is crucial when a hostPath is shared between the two pods. Signed-off-by: Niraj Yadav <niryadav@redhat.com>
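A sketch of the pairing logic that commit describes: when a ctrlplugin pod is deleted, the addons pods co-located on the same node are selected for deletion so the scheduler can place both afresh (the pod type and the "app" label key are illustrative stand-ins, not the operator's real types or labels):

```go
package main

import "fmt"

// pod is a minimal stand-in for corev1.Pod with just the fields this
// sketch needs.
type pod struct {
	name     string
	labels   map[string]string
	nodeName string
}

// addonsPodsToDelete returns the csi-addons pods scheduled on the same
// node as a deleted ctrlplugin pod. Because the two pods share a hostPath
// socket directory, deleting the addons pod lets the kube-scheduler
// re-evaluate placement alongside the replacement ctrlplugin pod.
func addonsPodsToDelete(deleted pod, all []pod) []string {
	var out []string
	for _, p := range all {
		if p.labels["app"] == "csi-addons" && p.nodeName == deleted.nodeName {
			out = append(out, p.name)
		}
	}
	return out
}

func main() {
	deleted := pod{name: "rbd-ctrlplugin-abc", nodeName: "node-1"}
	pods := []pod{
		{name: "csi-addons-xyz", labels: map[string]string{"app": "csi-addons"}, nodeName: "node-1"},
		{name: "csi-addons-def", labels: map[string]string{"app": "csi-addons"}, nodeName: "node-2"},
	}
	fmt.Println(addonsPodsToDelete(deleted, pods))
}
```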
Force-pushed from 78730d4 to 6c775e7
Closing this one as #269 is the updated implementation. This PR/branch is kept as-is to preserve the work done (if needed down the line).
This patch introduces a set of changes that allows the csi-addons sidecars to run without host networking. Here's a brief summary of the changes (separated into their own commits):
kubelet_dir/plugin/driverid/ctrl-plugin