Updating MKS x CDA documentation #7320


![Create user on Cloud Disk Array](images/create-ceph-csi-user.png){.thumbnail}

- Add permissions on fs-default for the Ceph CSI user:

> [!api]
>

![Add user permission on Cloud Disk Array](images/add-user-permissions.png){.thumbnail}

### Step 2 - Allow your Kubernetes nodes' IPs and/or the Public Cloud Gateway IP to access the Cloud Disk Array service

#### Your cluster uses the public network, or a private network whose default route does not go through an OVHcloud Internet Gateway or a custom gateway

Once the partition is created, we need to allow our Kubernetes nodes to access it.

Get your Kubernetes nodes' IPs:

```bash
kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
```

```console
$ kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }'
51.77.204.175 51.77.205.79
```
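
If you want to reuse these addresses when adding the ACLs below, you can store them in a shell variable first (this is only a convenience step):

```bash
# Store the nodes' IPs for later reuse (e.g. when adding the ACL entries)
NODE_IPS=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="InternalIP")].address }')
echo $NODE_IPS
```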

#### Your cluster uses a private network with its default route through that network (OVHcloud Internet Gateway/OpenStack router or a custom one)

Because your nodes are configured to be routed by the private network gateway, you need to add the gateway IP address to the ACLs.

When using a Public Cloud Gateway with our Managed Kubernetes Service, the public IPs on the nodes are only used for management purposes: [MKS Known Limits](/pages/public_cloud/containers_orchestration/managed_kubernetes/known-limits)

You can get your OVHcloud Internet Gateway's Public IP by navigating through the OVHcloud Control Panel:

`Public Cloud`{.action} > Select your tenant > `Network / Gateway`{.action} > `Public IP`{.action}

You can also get your OVHcloud Internet Gateway's Public IP by using our APIs:

> [!api]
>
> @api {v1} /cloud GET /cloud/project/{serviceName}/region/{regionName}/gateway/{id}
>

You can find more details about how to use OVHcloud APIs with this guide: [First Steps with the OVHcloud APIs](/pages/manage_and_operate/api/first-steps)

If you want to use your Kubernetes cluster to find your Gateway's public IP, you can run the following commands:

```bash
kubectl run get-gateway-ip --image=ubuntu:latest -i --tty --rm
```

This command will create a temporary pod and open a console.

You may have to wait a bit for the pod to be created. Once the shell appears, you can run this command:

```bash
apt update && apt upgrade -y && apt install -y curl && curl ifconfig.me
```

The Public IP of the Gateway you're using should appear.
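
If you prefer a one-shot command instead of an interactive shell, the same check can be done with a short-lived pod. This is a minimal sketch, assuming the public `curlimages/curl` image and outbound internet access from your nodes:

```bash
# Run a temporary pod that prints the public IP seen from inside the cluster, then clean it up
kubectl run get-gateway-ip --image=curlimages/curl:latest --rm -i --restart=Never --command -- curl -s ifconfig.me
```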

- Add the list of node IPs or the Gateway IP to allow access to the Cloud Disk Array cluster:

> [!api]
>

```bash
vim /root/ceph.client.ceph-csi.keyring

[client.ceph-csi]
key = <your_ceph_csi_user_key>
```

```yaml
csiConfig:
  - clusterID: "abcd123456789" # You can change this, but it needs to have at least one letter character
    monitors:
      - "<your_ceph_monitor_ip_1>:6789"
      - "<your_ceph_monitor_ip_2>:6789"
      - "<your_ceph_monitor_ip_3>:6789"
storageClass:
  create: true
  name: "cephfs"
```
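
The rest of this guide assumes a `ReadWriteMany` persistent volume claim named `cephfs-pvc` bound to the `cephfs` storage class defined above. A minimal sketch of such a claim, where the 10Gi request is only an example:

```bash
# Create a minimal ReadWriteMany claim on the "cephfs" storage class.
# The requested size (10Gi) is only an example, adjust it to your needs.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 10Gi
EOF
```
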
Let’s now create Nginx pods using the persistent volume claim as their webroot folder. We will use a DaemonSet so that one pod is scheduled on each Kubernetes node. Let’s create a `cephfs-nginx-pods.yaml` file:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cephfs-nginx
  namespace: default
spec:
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      volumes:
        - name: cephfs-volume
          persistentVolumeClaim:
            claimName: cephfs-pvc
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: cephfs-volume
```

And apply this to create the Nginx pods:

```bash
kubectl apply -f cephfs-nginx-pods.yaml
```
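
You can check that one Nginx pod is running on each node (the `name=nginx` label comes from the DaemonSet template above):

```bash
# List the DaemonSet pods and the nodes they are scheduled on
kubectl get pods -l name=nginx -o wide
```
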
Let’s enter the first Nginx container to create a file on the CephFS persistent volume:

```bash
FIRST_POD=$(kubectl get pod -l name=nginx --no-headers=true -o custom-columns=:metadata.name | head -1)
kubectl exec -it $FIRST_POD -n default -- bash
```

Create a new `index.html` file:
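
For example, from inside the container you can write a simple placeholder page into the Nginx webroot backed by the CephFS volume (the page content below is only an example):

```bash
# Inside the Nginx container: write a placeholder page into the shared webroot
echo "Hello from OVHcloud Cloud Disk Array" > /usr/share/nginx/html/index.html
```
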
And exit the Nginx container:

```bash
exit
```

Generate the URL to open in your browser:

```bash
URL="http://localhost:8001/api/v1/namespaces/default/pods/http:$FIRST_POD:/proxy/"
echo $URL
```

Once `kubectl proxy` is running (see below), you can open the displayed URL to access the Nginx service.

Now let’s try to see if the data is shared with the second pod (if you have more than one node deployed).

```bash
SECOND_POD=$(kubectl get pod -l name=nginx --no-headers=true -o custom-columns=:metadata.name | head -2 | tail -1)
URL2="http://localhost:8001/api/v1/namespaces/default/pods/http:$SECOND_POD:/proxy/"
echo $URL2
```

Let’s try to access our new web page:

```bash
kubectl proxy
```

And open both URLs given by the commands above to see if the data is shared with all the pods connected to the Ceph volume.
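
If you prefer checking from the command line rather than a browser, you can also query both pods with `curl`. This sketch assumes `curl` is installed locally, `$URL` and `$URL2` are still set in your shell, and no other `kubectl proxy` is currently bound to port 8001:

```bash
# Start the proxy in the background, query both pods, then stop the proxy
kubectl proxy &
PROXY_PID=$!
sleep 2  # give the proxy a moment to start
curl -s "$URL"
curl -s "$URL2"
kill $PROXY_PID
```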

As you can see, the data is correctly shared between the Nginx pods running on different Kubernetes nodes.
Congratulations, you have successfully set up a multi-attach persistent volume with OVHcloud Cloud Disk Array!