
Mounted Blobfuse2 volume / container owned by root #809

Closed · mortenjoenby opened this issue Jan 3, 2023 · 35 comments
@mortenjoenby

We are testing blobfuse2 as we want to compare its performance to blobfuse (v1.4.5).
We managed to get our pod running with a mounted volume, but only after updating the Ubuntu image from the August build to the December build, due to the error: exec: "blobfuse2": executable file not found in $PATH.

What happened:
We successfully mount a blobfuse2 volume on an existing blob container which we previously used for testing with blobfuse v1.4.5 (CSI driver v1.17.0).
Problem: the container mounts as root and not as the application owner.

What you expected to happen:
We want the container to mount as the application user that runs our application.
When using azure-storage-fuse standalone on a VM, I can mount the blobfuse2 volume as the OS user who should own the content.
I would like to know how I can do this within AKS.
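
For illustration, here is a minimal sketch of the standalone VM mount described above; the mount path and config file name are placeholders, not taken from this issue:

# Run as the OS user who should own the content (config.yaml is a hypothetical blobfuse2 config).
# For a non-root user, allow_other also requires user_allow_other in /etc/fuse.conf.
blobfuse2 mount /mnt/blob --config-file=config.yaml -o allow_other
ls -l /mnt/blob    # entries should show the mounting user as owner, not root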

How to reproduce it:
$ ls -l
drwxrwxrwx 2 root root 4096 Jan 3 15:11 background-processarea

Environment:

  • CSI Driver version: 1.18.0
  • Kubernetes version (use kubectl version): 1.24.3
  • OS (e.g. from /etc/os-release): Oracle Linux Server 8.4
  • Kernel (e.g. uname -a): 5.4.0-1098-azure #104~18.04.2-Ubuntu
@andyzhangx
Member

There is an -o allow_other option for v1; could you also try that option? @mortenjoenby
cc @cvvz

@andyzhangx
Member

With the -o allow_other option for v2, it looks like this:

# k exec -it statefulset-blob-0 -- sh
# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         124G   30G   95G  24% /
tmpfs            64M     0   64M   0% /dev
blobfuse2      1000M     0 1000M   0% /mnt/blob
/dev/root       124G   30G   95G  24% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           5.3G   12K  5.3G   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           3.9G     0  3.9G   0% /proc/acpi
tmpfs           3.9G     0  3.9G   0% /proc/scsi
tmpfs           3.9G     0  3.9G   0% /sys/firmware

# ls /mnt/blob/ -lt
total 0
-rwxrwxrwx 1 root root 28 Jan  4 06:11 outfile

@mortenjoenby
Author

mortenjoenby commented Jan 4, 2023

Hi @andyzhangx.
So with Blobfuse2 we see the files/blobs when checking the blob container through the Azure Portal, but if I open a terminal on the pod/container as the application user, we don't see the files.
With Blobfuse v1.4 we were able to use UID/GID and it worked. Why is this not working with Blobfuse2?

This is the PV yaml used with Blobfuse v1 using UID/GID:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ $releaseName }}-pv-bgp-area-blobfuse
  labels:
    {{- include "app.labels" . | nindent 4 }}
    {{- include "app.selectorLabels" . | nindent 4 }}
    chart: {{ .Chart.Name }}
    version: {{ .Chart.Version }}
    selectorLabel: {{ $bgpStorage.selectorLabel }}
spec:
  capacity:
    storage: {{ $bgpStorage.size }}
  accessModes:
    - {{ $bgpStorage.accessModes }}
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: blob.csi.azure.com
    readOnly: false
    volumeHandle: {{ $releaseName }}-blob
    volumeAttributes:
      containerName: {{ $bgpStorage.containerName }}
      protocol: "fuse"
    nodeStageSecretRef:
      name: {{ $releaseName }}-bgp-storage-secret
      namespace: {{ $releaseNamespace }}
  mountOptions:
    - -o default_permissions
    - -o allow_other
    - -o umask=007
    - -o gid=1000
    - -o uid=1000

@andyzhangx
Member

@mortenjoenby I think your blobfuse2 mount does not work. Could you ssh to your application pod and run the following commands to check the blobfuse2 mount?

# k exec -it statefulset-blob-0 -- sh
# mount | grep blobfuse
blobfuse2 on /mnt/blob type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
# df -h
Filesystem      Size  Used Avail Use% Mounted on
overlay         124G   30G   95G  24% /
tmpfs            64M     0   64M   0% /dev
blobfuse2      1000M     0 1000M   0% /mnt/blob
/dev/root       124G   30G   95G  24% /etc/hosts
shm              64M     0   64M   0% /dev/shm

@mortenjoenby
Author

Hi @andyzhangx.
Is this on the pod needing the blobfuse2 mount, or on the csi-blob-node?
This is what I see on my application pod:

[stibosw@testk8sdev1-perftest01-backend-sts-0 /]$ mount | grep blobfuse2
blobfuse2 on /shared/workarea/background-processarea2 type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)

And it actually shows UID/GID (user_id=0, group_id=0), so why can't I set them?
I think this is a setback from v1.

@andyzhangx
Member

@vibhansa-msft @souravgupta-msft do you know whether -o gid=1000, -o uid=1000 options are supported in v2?

@mortenjoenby
Author

From what I can tell there's at least a difference, though it seems that UID/GID are actually 0 in both cases.
We have set up both v1 and v2 on this pod for testing:

[perftest01-backend-sts-0 /]$ mount | grep blobfuse
blobfuse on /shared/workarea/background-processarea type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
blobfuse2 on /shared/workarea/background-processarea2 type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
[perftest01-backend-sts-0 /]$ ls -l /shared/workarea/
total 155855
drwxrwx--- 2 stibosw stibosw     4096 Jan  5 11:46 background-processarea
drwxrwxrwx 2 root    root        4096 Jan  5 12:28 background-processarea2

The only difference here is "default_permissions" being set on the v1 mount.

@mortenjoenby
Author

mortenjoenby commented Jan 5, 2023

This is the container log from the csi-blob-node:

protocol fuse

volumeId testk8sdev1-perftest01-blob
context map[containerName:background-processarea-blobv1 protocol:fuse]
mountflags [-o default_permissions -o allow_other -o umask=007 -o uid=1000 -o gid=1000]
mountOptions [-o default_permissions -o allow_other -o umask=007 -o uid=1000 -o gid=1000 --cancel-list-on-mount-seconds=10 --empty-dir-check=false --tmp-path=/mnt/testk8sdev1-perftest01-blob --container-name=background-processarea-blobv1 --pre-mount-validate=true --use-https=true]
args /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/9904a349b0b038ec764bcf23e384abd850e3054afd75bb98cf2caff177dbff33/globalmount -o default_permissions -o allow_other -o umask=007 -o uid=1000 -o gid=1000 --cancel-list-on-mount-seconds=10 --empty-dir-check=false --tmp-path=/mnt/testk8sdev1-perftest01-blob --container-name=background-processarea-blobv1 --pre-mount-validate=true --use-https=true
serverAddress testk8sdevweperftestblob.blob.core.windows.net
I0105 09:15:57.735065   10501 nodeserver.go:144] mouting using blobfuse proxy
I0105 09:15:57.735788   10501 nodeserver.go:158] calling BlobfuseProxy: MountAzureBlob function
I0105 09:15:57.815063   10501 nodeserver.go:397] volume(testk8sdev1-perftest01-blob) mount on "/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/9904a349b0b038ec764bcf23e384abd850e3054afd75bb98cf2caff177dbff33/globalmount" succeeded
I0105 09:15:57.815132   10501 utils.go:82] GRPC response: {}
I0105 09:15:57.817848   10501 utils.go:75] GRPC call: /csi.v1.Node/NodePublishVolume
I0105 09:15:57.817865   10501 utils.go:76] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/9904a349b0b038ec764bcf23e384abd850e3054afd75bb98cf2caff177dbff33/globalmount","target_path":"/var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse/mount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["-o default_permissions","-o allow_other","-o umask=007","-o uid=1000","-o gid=1000"]}},"access_mode":{"mode":5}},"volume_context":{"containerName":"background-processarea-blobv1","csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"testk8sdev1-perftest01-backend-sts-0","csi.storage.k8s.io/pod.namespace":"testk8sdev1-perftest01","csi.storage.k8s.io/pod.uid":"f678584a-997e-4389-9181-4049abf76c9a","csi.storage.k8s.io/serviceAccount.name":"default","protocol":"fuse"},"volume_id":"testk8sdev1-perftest01-blob"}
I0105 09:15:57.818407   10501 nodeserver.go:122] NodePublishVolume: volume testk8sdev1-perftest01-blob mounting /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/9904a349b0b038ec764bcf23e384abd850e3054afd75bb98cf2caff177dbff33/globalmount at /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse/mount with mountOptions: [bind]
I0105 09:15:57.818430   10501 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/9904a349b0b038ec764bcf23e384abd850e3054afd75bb98cf2caff177dbff33/globalmount /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse/mount)
I0105 09:15:57.820282   10501 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/9904a349b0b038ec764bcf23e384abd850e3054afd75bb98cf2caff177dbff33/globalmount /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse/mount)
I0105 09:15:57.821626   10501 nodeserver.go:138] NodePublishVolume: volume testk8sdev1-perftest01-blob mount /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/9904a349b0b038ec764bcf23e384abd850e3054afd75bb98cf2caff177dbff33/globalmount at /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse/mount successfully
I0105 09:15:57.821642   10501 utils.go:82] GRPC response: {}
I0105 09:15:58.734357   10501 utils.go:75] GRPC call: /csi.v1.Node/NodeStageVolume
I0105 09:15:58.734380   10501 utils.go:76] GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["-o default_permissions","-o allow_other","-o umask=007"]}},"access_mode":{"mode":5}},"volume_context":{"containerName":"background-processarea-blobv2","protocol":"fuse2"},"volume_id":"testk8sdev1-perftest01-blob2"}
I0105 09:15:58.734973   10501 blob.go:350] parsing volumeID(testk8sdev1-perftest01-blob2) return with error: error parsing volume id: "testk8sdev1-perftest01-blob2", should at least contain two #
I0105 09:15:58.734992   10501 blob.go:411] volumeID(testk8sdev1-perftest01-blob2) authEnv: []
I0105 09:15:58.735040   10501 nodeserver.go:349] target /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount

protocol fuse2

volumeId testk8sdev1-perftest01-blob2
context map[containerName:background-processarea-blobv2 protocol:fuse2]
mountflags [-o default_permissions -o allow_other -o umask=007]
mountOptions [-o default_permissions -o allow_other -o umask=007 --cancel-list-on-mount-seconds=10 --empty-dir-check=false --tmp-path=/mnt/testk8sdev1-perftest01-blob2 --container-name=background-processarea-blobv2 --pre-mount-validate=true --use-https=true]
args /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount -o default_permissions -o allow_other -o umask=007 --cancel-list-on-mount-seconds=10 --empty-dir-check=false --tmp-path=/mnt/testk8sdev1-perftest01-blob2 --container-name=background-processarea-blobv2 --pre-mount-validate=true --use-https=true
serverAddress testk8sdevweperftestblob.blob.core.windows.net
I0105 09:15:58.735065   10501 nodeserver.go:144] mouting using blobfuse proxy
I0105 09:15:58.735608   10501 nodeserver.go:158] calling BlobfuseProxy: MountAzureBlob function
I0105 09:15:59.171765   10501 nodeserver.go:397] volume(testk8sdev1-perftest01-blob2) mount on "/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount" succeeded
I0105 09:15:59.171821   10501 utils.go:82] GRPC response: {}
I0105 09:15:59.174537   10501 utils.go:75] GRPC call: /csi.v1.Node/NodePublishVolume
I0105 09:15:59.174553   10501 utils.go:76] GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount","target_path":"/var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse2/mount","volume_capability":{"AccessType":{"Mount":{"mount_flags":["-o default_permissions","-o allow_other","-o umask=007"]}},"access_mode":{"mode":5}},"volume_context":{"containerName":"background-processarea-blobv2","csi.storage.k8s.io/ephemeral":"false","csi.storage.k8s.io/pod.name":"testk8sdev1-perftest01-backend-sts-0","csi.storage.k8s.io/pod.namespace":"testk8sdev1-perftest01","csi.storage.k8s.io/pod.uid":"f678584a-997e-4389-9181-4049abf76c9a","csi.storage.k8s.io/serviceAccount.name":"default","protocol":"fuse2"},"volume_id":"testk8sdev1-perftest01-blob2"}
I0105 09:15:59.175136   10501 nodeserver.go:122] NodePublishVolume: volume testk8sdev1-perftest01-blob2 mounting /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount at /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse2/mount with mountOptions: [bind]
I0105 09:15:59.175161   10501 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse2/mount)
I0105 09:15:59.176731   10501 mount_linux.go:183] Mounting cmd (mount) with arguments ( -o bind,remount /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse2/mount)
I0105 09:15:59.178304   10501 nodeserver.go:138] NodePublishVolume: volume testk8sdev1-perftest01-blob2 mount /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount at /var/lib/kubelet/pods/f678584a-997e-4389-9181-4049abf76c9a/volumes/kubernetes.io~csi/testk8sdev1-perftest01-pv-bgp-area-blobfuse2/mount successfully
I0105 09:15:59.178335   10501 utils.go:82] GRPC response: {}

@souravgupta-msft

@vibhansa-msft @souravgupta-msft do you know whether -o gid=1000, -o uid=1000 options are supported in v2?

@andyzhangx we do accept uid and gid as input, but we have observed some issues in the libfuse layer with them, so they do not work as expected.

@mortenjoenby
Author

@souravgupta-msft / @andyzhangx, currently I don't think uid/gid are even accepted; they failed for us at least.
But if you have observed issues and it doesn't work as expected, why was v2 released as GA?
To me this is a setback, and we need this. But is that why it's preview.4 that's included and not the final GA version?

@vibhansa-msft

@mortenjoenby: Are you saying these options were accepted in preview.4 and not in the GA build? That's not the case; they are accepted in the GA version as well. However, we have observed that passing them down to libfuse does not work as intended, and hence we started ignoring those options at our end. The user expectation here was that the mount would be accessible only to the user whose uid/gid is given, but mounting through /etc/fstab went via the root user, and setting these values resulted in access issues.
Can you share the use case you are trying to solve here that is blocked by not supporting uid/gid?

@vibhansa-msft

@mortenjoenby: I looked at the code and it does seem to have an issue with parsing uid and gid. We can fix that parsing logic at our end, but that will still not serve the purpose.

I tried setting it up on the mount, and here are my observations (a command-level sketch follows the list):

  • If user1 mounts with uid/gid set to user2, with allow_other, then both users are able to access the mount path.
  • If user1 mounts with uid/gid set to user2, without allow_other, then only user1 is able to access the mount.
  • Here user1 and user2 are in different groups.
  • I was not able to find any combination where only user2 is allowed to access the path.
  • By default we do not take uid/gid as 0; rather it goes by who is mounting.
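
A command-level sketch of this experiment on a VM, assuming the uid/gid options are accepted as discussed above (usernames, uids, path, and config file are hypothetical):

# user1 (uid 1000) mounts with uid/gid pointing at user2 (uid 1002)
blobfuse2 mount /mnt/blob --config-file=config.yaml -o uid=1002 -o gid=1002 -o allow_other
sudo -u user1 ls /mnt/blob    # works
sudo -u user2 ls /mnt/blob    # also works, but only because of allow_other

# remount without allow_other
blobfuse2 unmount /mnt/blob
blobfuse2 mount /mnt/blob --config-file=config.yaml -o uid=1002 -o gid=1002
sudo -u user1 ls /mnt/blob    # works, since user1 performed the mount
sudo -u user2 ls /mnt/blob    # permission denied, despite uid=1002 being set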

@mortenjoenby
Author

Hi @vibhansa-msft (not sure I can tag you - don't know why).
I am saying that UID/GID were accepted with Blobfuse v1.4.x / CSI Driver 1.17.0.
With Blobfuse v2 / CSI Driver 1.18.0 they are not even accepted.
That's what I demonstrated in the previous comments.
preview.4 is the version included in CSI Driver 1.18.0, which I also find strange: why release a CSI driver version that includes a preview version of Blobfuse v2? It should be the GA release.

When UID/GID is not supported, we simply can't see the files, so in a troubleshooting scenario on the application side we won't be able to see that the files are there. If the files were at least visible that might be ok, but they are not; I can only view them through the Azure Portal by browsing the blob container.
But again, this worked with CSI Driver 1.17.0 using Blobfuse v1.4.5, so it's a setback.

@vibhansa-msft

When you say it was working with blobfuse-1.4.5, was that with "allow_other"?
Even with blobfuse-1.4.5, my observations are:

  • uid 1000 mounted blobfuse giving uid=1002, gid=1002
  • uid 1000 is still able to list and view the files
  • uid 1002 gets "permission denied" when it tries to list
  • if I add allow_other, then both are able to view the files

@vibhansa-msft

If things work with allow_other, then there is no point in giving uid/gid at all, because allow_other means the mount is accessible to all users.

@andyzhangx
Member

allow_other is already the default mount option for both v1 & v2 in the blob CSI driver.

@vibhansa-msft

If allow_other is there, then it does not make sense to pass down uid and gid. I agree that we need to fix our parsing logic, but I want to understand what role uid/gid play in this workflow. If allow_other is in the mount options, then any user is free to access the mount point; it's just that file ownership will be shown with the set uid, nothing else.

@mortenjoenby
Author

Hi @andyzhangx / @vibhansa-msft.
Just to make sure I understand: are you testing in AKS, or standalone on a VM?
I will do some more testing, also with allow_other = false.

In AKS, if allow_other=true (the default) is used, won't it then be possible to exploit that and e.g. see the files from another pod?

@andyzhangx
Member

Hi @andyzhangx / @vibhansa-msft. Just to make sure I understand: are you testing in AKS, or standalone on a VM? I will do some more testing, also with allow_other = false.

In AKS, if allow_other=true (the default) is used, won't it then be possible to exploit that and e.g. see the files from another pod?

@mortenjoenby with -o allow_other, files and folders get 0777 permissions. It seems your requirement is to not use allow_other and to set uid/gid instead; in that case you need to modify your storage class setting:

mountOptions:
- -o allow_other
- --file-cache-timeout-in-seconds=120

With -o allow_other in the storage class example, files and folders get 0777 permissions:

# k exec -it statefulset-blob-1 -- sh
# cd /mnt/blob
# ls -lt
total 0
-rwxrwxrwx 1 root root 2557240 Jan  6  2023 outfile
# mkdir test
# ls -lt
total 0
-rwxrwxrwx 1 root root 2558808 Jan  6 13:23 outfile
drwxrwxrwx 2 root root    4096 Jan  6 13:23 test
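
For context, a sketch of where such mountOptions live in a full StorageClass for the blob CSI driver; the name and parameter set are illustrative, not taken from this thread:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: blob-fuse2-example    # hypothetical name
provisioner: blob.csi.azure.com
parameters:
  protocol: fuse2
mountOptions:
  - -o allow_other                        # drop this line to mount without allow_other
  - --file-cache-timeout-in-seconds=120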

@mortenjoenby
Author

Hi @andyzhangx.
We do have allow_other set.
Not sure exactly what we did wrong, but we have run another test where our application generates files, and now I do see the files in the terminal:

[stibosw@testk8sdev1-perftest01-backend-sts-0 background-processarea2]$ ls -l
total 0
drwxrwxrwx 2 root root 4096 Jan  6 13:15 Inbound
drwxrwxrwx 2 root root 4096 Jan  6 13:15 InboundPoller
drwxrwxrwx 2 root root 4096 Jan  6 13:15 RefreshConfiguration

The only difference from Blobfuse v1 is that the ownership is root and not the application owner.
So what will happen if I set "allow_other = false"? Can I do that at all?

And one other question: where do I find all the default values used in the CSI Driver for Blobfuse v2?

@andyzhangx
Member

@mortenjoenby just remove the - -o allow_other line in the storage class or PV, and then there is no allow_other setting:

mountOptions:
- -o allow_other
- --file-cache-timeout-in-seconds=120
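
After removing that line, the same block would read:

mountOptions:
- --file-cache-timeout-in-seconds=120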

You can find all the mount options in the blob CSI driver logs, around the NodeStageVolume entries; in your logs it's:

mountOptions [-o default_permissions -o allow_other -o umask=007 --cancel-list-on-mount-seconds=10 --empty-dir-check=false --tmp-path=/mnt/testk8sdev1-perftest01-blob2 --container-name=background-processarea-blobv2 --pre-mount-validate=true --use-https=true]
args /var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/dfbed847be5da9f988a69f778d9397c84fc29260c8c258d435a62b11a390a463/globalmount -o default_permissions -o allow_other -o umask=007 --cancel-list-on-mount-seconds=10 --empty-dir-check=false --tmp-path=/mnt/testk8sdev1-perftest01-blob2 --container-name=background-processarea-blobv2 --pre-mount-validate=true --use-https=true
serverAddress testk8sdevweperftestblob.blob.core.windows.net

@vibhansa-msft

@mortenjoenby: the user and owner of files are derived from the uid/gid values supplied in the mount command. In v1 we accepted those values and passed them down to libfuse to show the ownership correctly, but in v2 we decided to ignore them because they were not serving the purpose correctly. As I mentioned in my example above, even if user1 mounts blobfuse with uid/gid set to 'user2' and no 'allow_other', user1 can still access the files while user2 cannot. Kindly validate this behavior with both v1 and v2 on AKS; that can help us understand what the correct defaults should be and whether we should still support uid/gid.

@vibhansa-msft

As for the original issue of the mount failing when uid/gid are supplied, I have corrected the code in this

@mortenjoenby
Author

@mortenjoenby: the user and owner of files are derived from the uid/gid values supplied in the mount command. In v1 we accepted those values and passed them down to libfuse to show the ownership correctly, but in v2 we decided to ignore them because they were not serving the purpose correctly. As I mentioned in my example above, even if user1 mounts blobfuse with uid/gid set to 'user2' and no 'allow_other', user1 can still access the files while user2 cannot. Kindly validate this behavior with both v1 and v2 on AKS; that can help us understand what the correct defaults should be and whether we should still support uid/gid.

@vibhansa-msft, we will try this, but I am not sure we can squeeze it in today.

@vibhansa-msft

Sure. I tried it in my setup, and even in v1 I see this issue. From my observations, giving uid/gid does not make any difference (other than showing a different owner on a file); allow_other is what controls who can access the files. If you confirm the same, we need to think about whether to set uid/gid at all in the case of AKS.

@andyzhangx
Member

As for the original issue of the mount failing when uid/gid are supplied, I have corrected the code in this

@vibhansa-msft so the uid=xxx, gid=xxx settings are already supported in the v2 driver?

@vibhansa-msft

For backward compatibility, Blobfuse2 accepts the uid and gid parameters as input. However, in the past we have observed that these parameters do not work as expected: even when uid is set and the mount is done by another user, the user with the given uid is not able to access the files and folders unless allow_other is provided in the mount options. As these flags were creating confusion for customers, we have stopped passing them down to the libfuse layer; they are accepted as valid inputs but later discarded at the blobfuse layer.

@andyzhangx
Member

Thanks @vibhansa-msft, will close this issue.

@andyzhangx
Member

andyzhangx commented Mar 24, 2023

Seems there is an issue here. Currently on AKS it's -o allow_other by default, but gid/uid are not passed to blobfuse v2, so the user with that gid/uid cannot access the files any more even when -o allow_other is set. @vibhansa-msft
This used to work in blobfuse v1 (with -o allow_other and the gid, uid settings).

@andyzhangx
Member

Can we reopen this issue?

@mortenjoenby
Author

Of course you can. Do I need to do something?

@1CuriousPenguin

Hi, we are currently facing the same issue. Is there any temporary workaround?

@andyzhangx
Member

Hi, we are currently facing the same issue. Is there any temporary workaround?

@1CuriousPenguin you could ssh to the agent node and then run apt update && apt install blobfuse2 -y to install blobfuse 2.0.3 as a workaround.
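
If direct ssh to the node is not set up, a node debug pod is a hedged alternative (the node name and image below are placeholders):

# open a shell on the agent node
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=ubuntu
chroot /host    # switch into the node's root filesystem
apt update && apt install blobfuse2 -y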

@1CuriousPenguin

Thanks @andyzhangx. Since we are using Helm, I'm trying to update to v1.21.0 and then apply a patch to install blobfuse v2. Will that fix the issue, or do we need to wait for a new release?

@andyzhangx
Member

Thanks @andyzhangx. Since we are using Helm, I'm trying to update to v1.21.0 and then apply a patch to install blobfuse v2. Will that fix the issue, or do we need to wait for a new release?

@1CuriousPenguin please wait for the v1.21.1 release; it should be ready this week.
