
Add --privileged-datamover-pods option to installer and node agent #8243

Closed
sseago wants to merge 1 commit

Conversation

sseago
Collaborator

@sseago sseago commented Sep 24, 2024

Thank you for contributing to Velero!

Please add a summary of your change

For some Kubernetes variants (for example, OpenShift), the 1.15 podified datamover pods fail with permission denied errors unless the pod is privileged, in the same way the node agent pod is.

This PR adds a new installer flag --privileged-datamover-pods and a corresponding velero node-agent server flag.
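
For illustration, here is a rough sketch of how such a flag could end up on the data mover pod spec. The flag name comes from this PR's title; the helper and its wiring below are assumptions for illustration, not the actual Velero code:

package install

import (
    corev1 "k8s.io/api/core/v1"
)

// applyPrivilegedDataMover is a hypothetical helper: when the operator passes
// --privileged-datamover-pods, mark the data mover pod's container as privileged,
// the same way the node-agent container is when privileged mode is enabled.
func applyPrivilegedDataMover(pod *corev1.Pod, privilegedDataMoverPods bool) {
    if !privilegedDataMoverPods || len(pod.Spec.Containers) == 0 {
        return
    }
    privileged := true
    c := &pod.Spec.Containers[0]
    if c.SecurityContext == nil {
        c.SecurityContext = &corev1.SecurityContext{}
    }
    c.SecurityContext.Privileged = &privileged
}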

Does your change fix a particular issue?

Fixes #(issue)


@kaovilai
Contributor

kaovilai commented Sep 24, 2024

We should probably find out exactly which permission was missing that caused the permission denied errors.


codecov bot commented Sep 24, 2024

Codecov Report

Attention: Patch coverage is 47.27273% with 29 lines in your changes missing coverage. Please review.

Please upload report for BASE (main@11f771f). Learn more about missing BASE report.
Report is 2 commits behind head on main.

Files with missing lines            Patch %   Lines
pkg/cmd/cli/nodeagent/server.go     0.00%     14 Missing ⚠️
pkg/exposer/csi_snapshot.go         25.00%    2 Missing and 1 partial ⚠️
pkg/exposer/generic_restore.go      50.00%    2 Missing and 1 partial ⚠️
pkg/install/deployment.go           0.00%     3 Missing ⚠️
pkg/cmd/cli/install/install.go      0.00%     2 Missing ⚠️
pkg/install/daemonset.go            0.00%     1 Missing and 1 partial ⚠️
pkg/install/resources.go            0.00%     1 Missing and 1 partial ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##             main    #8243   +/-   ##
=======================================
  Coverage        ?   59.16%           
=======================================
  Files           ?      367           
  Lines           ?    30870           
  Branches        ?        0           
=======================================
  Hits            ?    18263           
  Misses          ?    11143           
  Partials        ?     1464           


@sseago
Collaborator Author

sseago commented Sep 24, 2024

We should probably find out exactly which permission was missing that caused the permission denied errors.

It's probably worth eventually figuring out why we need the permissions, but we still need the configurability. This is an action that in Velero 1.14 and earlier would always have been done in a privileged pod, since it ran in the node agent (OpenShift has always required a privileged node agent pod), so requiring the same for datamover pods is not a regression. If in the future we can eliminate the need for OpenShift to run these pods privileged, that would be nice, but we can't guarantee that some other Kubernetes environment won't still need it, so I think figuring that out is outside the scope of this bugfix.

@Lyndon-Li
Contributor

As discussed, let's find the root cause before making these changes; in the data mover micro-service design, requiring privileged mode is not expected. Therefore, we should make sure that it is really required according to the root cause, and also document the reasons and scenarios.

@Lyndon-Li
Contributor

@sseago One more question: when you see the permission denied error, does it happen to the pod itself (so the data mover pod cannot start), or during execution of the backup (so the data mover backup starts but then fails)?

@sseago
Collaborator Author

sseago commented Sep 25, 2024

@Lyndon-Li Looks like the pod runs. Here's the full log:

time="2024-09-25T13:27:17Z" level=info msg="Setting log-level to INFO"
time="2024-09-25T13:27:17Z" level=info msg="Starting Velero data-mover backup 1.14.0 (-)" logSource="pkg/cmd/cli/datamover/backup.go:75"
time="2024-09-25T13:27:17Z" level=info msg="Starting micro service in node ip-10-0-69-144.us-east-2.compute.internal for du mysql-datamover8-l8b5n" logSource="pkg/cmd/cli/datamover/backup.go:214"
time="2024-09-25T13:27:17Z" level=info msg="Starting data path service mysql-datamover8-l8b5n" logSource="pkg/cmd/cli/datamover/backup.go:223"
time="2024-09-25T13:27:17Z" level=info msg="Running data path service mysql-datamover8-l8b5n" logSource="pkg/cmd/cli/datamover/backup.go:233"
time="2024-09-25T13:27:18Z" level=info msg="Run cancelable dataUpload" dataupload=mysql-datamover8-l8b5n logSource="pkg/datamover/backup_micro_service.go:164"
time="2024-09-25T13:27:18Z" level=info msg="Founding existing repo" backupLocation=velero-sample-1 logSource="pkg/repository/ensurer.go:86" repositoryType=kopia volumeNamespace=mysql-persistent
time="2024-09-25T13:27:19Z" level=warning msg="active indexes [xs0_44aa41fcd83d4ae671a727e6a2918864-s56440dfc1093d2be-c1 xs1_5b679af2306d29e9dde610c3992aeee6-s600ae5c0ee739585-c1 xn2_0232f612ab007beec274399a671fd5f3-s40ebf055b3c0b5f412c-c1 xn2_11f0b964e73e61c5828ac0ad8d219f65-sa6527884d45fb06612d-c1 xn2_2e7e35a9565cece9cb06238ff681d9a4-s23c14eeaca7b3c6612d-c1 xn2_55690358c099fde9cfaafa67592515d7-s5319e5078d66f66812d-c1 xn2_58d0f6ef8ed0b8f2e870ac49130721a7-s56388476dad4cd0012d-c1 xn2_63a64ae624b447c835ed3330be0a52bf-s6acfdd5b4e790fbd12c-c1 xn2_7137d146730e9986f0530dc88344a1f2-s3a6324ae74b3163312d-c1 xn2_7a408a1ef54253c2ecc4b7b34e491b8e-s3249d09146938bf612d-c1 xn2_7eb78e2f9d8335277ee07624e20a3dd5-s93df0504540f2aca12d-c1 xn2_85551345e95053a9a1f086f61d65b672-sd0e5c49c3acd0d0d12c-c1 xn2_99bf5b4a6848d2b1e358009f9bd1a057-s5a1dddeefdcfe9cd12d-c1 xn2_a01a37d9e3347dfe30e05d3f6a67e08f-s1588f1f742cc169912c-c1 xn2_a5971b1d9bc46a1f28e603377e4ea132-s1ab02f4eb11415c012d-c1 xn2_b117dc106a7830e1836cdc782608eaec-s25f3eb6287af0ba212c-c1 xn2_b550c2a09b761243719aa0404e5c5cf0-s53286d9da9e4c55d12d-c1 xn2_c54e75e1cd8507a4e82b1057d53f3b61-s28473d1b21dd250012d-c1 xn2_ccd6f5262c3d165cc428ca2b1ac3105a-sf94bfc957f99f83212d-c1 xn2_e57adc8bafc7c13c2a291b10b3f74493-s7eb9ac1dc60a0b7b12c-c1 xn2_f4ad203979fa5a3d33b5027054686295-s493d01f88cac8eeb12c-c1 xn2_fa1f3029ef72a839e71c4fbfad4119e8-sa2692899cbed427c12d-c1] deletion watermark 2024-09-24 05:41:24 +0000 UTC" dataupload=mysql-datamover8-l8b5n logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" logger name="[index-blob-manager]" sublevel=error
time="2024-09-25T13:27:19Z" level=info msg="Opening backup repo" dataupload=mysql-datamover8-l8b5n logSource="pkg/uploader/provider/kopia.go:78" repoUID=da72c4f8-7ff4-4214-8418-fbc2eb04f394
time="2024-09-25T13:27:19Z" level=warning msg="active indexes [xs0_44aa41fcd83d4ae671a727e6a2918864-s56440dfc1093d2be-c1 xs1_5b679af2306d29e9dde610c3992aeee6-s600ae5c0ee739585-c1 xn2_0232f612ab007beec274399a671fd5f3-s40ebf055b3c0b5f412c-c1 xn2_11f0b964e73e61c5828ac0ad8d219f65-sa6527884d45fb06612d-c1 xn2_2e7e35a9565cece9cb06238ff681d9a4-s23c14eeaca7b3c6612d-c1 xn2_55690358c099fde9cfaafa67592515d7-s5319e5078d66f66812d-c1 xn2_58d0f6ef8ed0b8f2e870ac49130721a7-s56388476dad4cd0012d-c1 xn2_63a64ae624b447c835ed3330be0a52bf-s6acfdd5b4e790fbd12c-c1 xn2_7137d146730e9986f0530dc88344a1f2-s3a6324ae74b3163312d-c1 xn2_7a408a1ef54253c2ecc4b7b34e491b8e-s3249d09146938bf612d-c1 xn2_7eb78e2f9d8335277ee07624e20a3dd5-s93df0504540f2aca12d-c1 xn2_85551345e95053a9a1f086f61d65b672-sd0e5c49c3acd0d0d12c-c1 xn2_99bf5b4a6848d2b1e358009f9bd1a057-s5a1dddeefdcfe9cd12d-c1 xn2_a01a37d9e3347dfe30e05d3f6a67e08f-s1588f1f742cc169912c-c1 xn2_a5971b1d9bc46a1f28e603377e4ea132-s1ab02f4eb11415c012d-c1 xn2_b117dc106a7830e1836cdc782608eaec-s25f3eb6287af0ba212c-c1 xn2_b550c2a09b761243719aa0404e5c5cf0-s53286d9da9e4c55d12d-c1 xn2_c54e75e1cd8507a4e82b1057d53f3b61-s28473d1b21dd250012d-c1 xn2_ccd6f5262c3d165cc428ca2b1ac3105a-sf94bfc957f99f83212d-c1 xn2_e57adc8bafc7c13c2a291b10b3f74493-s7eb9ac1dc60a0b7b12c-c1 xn2_f4ad203979fa5a3d33b5027054686295-s493d01f88cac8eeb12c-c1 xn2_fa1f3029ef72a839e71c4fbfad4119e8-sa2692899cbed427c12d-c1] deletion watermark 2024-09-24 05:41:24 +0000 UTC" dataupload=mysql-datamover8-l8b5n logModule=kopia/kopia/format logSource="pkg/kopia/kopia_log.go:101" logger name="[index-blob-manager]" sublevel=error
time="2024-09-25T13:27:19Z" level=info msg="FileSystemBR is initialized" bsl=velero-sample-1 dataupload=mysql-datamover8-l8b5n jobName=mysql-datamover8-l8b5n logSource="pkg/datapath/file_system.go:135" repository=kopia source namespace=mysql-persistent uploader=kopia
time="2024-09-25T13:27:19Z" level=info msg="Async fs br init" dataupload=mysql-datamover8-l8b5n logSource="pkg/datamover/backup_micro_service.go:192"
time="2024-09-25T13:27:19Z" level=info msg="Async fs backup data path started" dataupload=mysql-datamover8-l8b5n logSource="pkg/datamover/backup_micro_service.go:207"
time="2024-09-25T13:27:19Z" level=info msg="Start data path backup" dataupload=mysql-datamover8-l8b5n logSource="pkg/datapath/file_system.go:177"
time="2024-09-25T13:27:19Z" level=info msg="Starting backup" dataupload=mysql-datamover8-l8b5n logSource="pkg/uploader/provider/kopia.go:146" parentSnapshot= path=/ac30a356-c4c2-4ff4-9f57-59333a84d4ad realSource=mysql-persistent/mysql
time="2024-09-25T13:27:19Z" level=info msg="Start to snapshot..." dataupload=mysql-datamover8-l8b5n logSource="pkg/uploader/kopia/snapshot.go:238" parentSnapshot= path=/ac30a356-c4c2-4ff4-9f57-59333a84d4ad realSource=mysql-persistent/mysql
time="2024-09-25T13:27:19Z" level=info msg="Searching for parent snapshot" dataupload=mysql-datamover8-l8b5n logSource="pkg/uploader/kopia/snapshot.go:253" parentSnapshot= path=/ac30a356-c4c2-4ff4-9f57-59333a84d4ad realSource=mysql-persistent/mysql
time="2024-09-25T13:27:19Z" level=info msg="Using parent snapshot 7ba51896c8a23d220ab5d0fb96162194, start time 2024-09-24 17:26:14.383679751 +0000 UTC, end time 2024-09-24 17:26:14.408513147 +0000 UTC, description Kopia Uploader" dataupload=mysql-datamover8-l8b5n logSource="pkg/uploader/kopia/snapshot.go:267" parentSnapshot= path=/ac30a356-c4c2-4ff4-9f57-59333a84d4ad realSource=mysql-persistent/mysql
time="2024-09-25T13:27:19Z" level=error msg="Async fs backup data path failed" dataupload=mysql-datamover8-l8b5n error="Failed to run kopia backup: Failed to upload the kopia snapshot for si default@default:snapshot-data-upload-download/kopia/mysql-persistent/mysql: permission denied" logSource="pkg/datamover/backup_micro_service.go:264"
time="2024-09-25T13:27:19Z" level=info msg="Action finished" dataupload=mysql-datamover8-l8b5n logSource="pkg/uploader/provider/kopia.go:91"
time="2024-09-25T13:27:19Z" level=error msg="Async fs backup was not completed" dataupload=mysql-datamover8-l8b5n error="Data path for data upload mysql-datamover8-l8b5n failed: Failed to run kopia backup: Failed to upload the kopia snapshot for si default@default:snapshot-data-upload-download/kopia/mysql-persistent/mysql: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/datamover/backup_micro_service.go:268" error.function="github.com/vmware-tanzu/velero/pkg/datamover.(*BackupMicroService).OnDataUploadFailed" logSource="pkg/datamover/backup_micro_service.go:222"
time="2024-09-25T13:27:19Z" level=info msg="Waiting sentinel before shutdown" logSource="pkg/util/kube/event.go:138"
time="2024-09-25T13:27:19Z" level=info msg="Closing FileSystemBR" dataupload=mysql-datamover8-l8b5n logSource="pkg/datapath/file_system.go:145" user=mysql-datamover8-l8b5n
time="2024-09-25T13:27:19Z" level=info msg="FileSystemBR is closed" dataupload=mysql-datamover8-l8b5n logSource="pkg/datapath/file_system.go:151" user=mysql-datamover8-l8b5n

@sseago
Collaborator Author

sseago commented Sep 25, 2024

@Lyndon-Li so the permission denied error is happening during the actual kopia Upload call.

@sseago
Collaborator Author

sseago commented Sep 25, 2024

@Lyndon-Li This is a mysql PV. Here's the full dir listing, but I'm not seeing any unusual file permissions here:

sh-4.4$ ls -l
total 20
drwx--S---. 5 mysql mysql  4096 Aug 21 18:50 data
-rw-r--r--. 1 mysql mysql     0 Aug 21 19:36 foo
drwxrws---. 2 root  mysql 16384 Aug 21 18:50 lost+found
srwxrwxrwx. 1 mysql mysql     0 Aug 21 18:50 mysql.sock
sh-4.4$ ls -lR
.:
total 20
drwx--S---. 5 mysql mysql  4096 Aug 21 18:50 data
-rw-r--r--. 1 mysql mysql     0 Aug 21 19:36 foo
drwxrws---. 2 root  mysql 16384 Aug 21 18:50 lost+found
srwxrwxrwx. 1 mysql mysql     0 Aug 21 18:50 mysql.sock

./data:
total 32824
-rw-rw----. 1 mysql mysql    24576 Aug 21 18:50 aria_log.00000001
-rw-rw----. 1 mysql mysql       52 Aug 21 18:50 aria_log_control
-rw-rw----. 1 mysql mysql      976 Aug 21 18:50 ib_buffer_pool
-rw-rw----. 1 mysql mysql  8388608 Aug 21 18:50 ib_logfile0
-rw-rw----. 1 mysql mysql 12582912 Aug 21 18:50 ibdata1
-rw-rw----. 1 mysql mysql 12582912 Aug 21 18:50 ibtmp1
-rw-rw----. 1 mysql mysql        0 Aug 21 18:50 multi-master.info
drwx--S---. 2 mysql mysql     4096 Aug 21 18:50 mysql
-rw-rw----. 1 mysql mysql        2 Aug 21 18:50 mysql-6888f8f786-8qhn6.pid
-rw-r--r--. 1 mysql mysql       15 Aug 21 18:50 mysql_upgrade_info
drwx--S---. 2 mysql mysql     4096 Aug 21 18:50 performance_schema
drwx--S---. 2 mysql mysql     4096 Aug 21 18:50 todolist

./data/mysql:
total 3312
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 column_stats.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 column_stats.MAI
-rw-rw----. 1 mysql mysql    2600 Aug 21 18:50 column_stats.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 columns_priv.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 columns_priv.MAI
-rw-rw----. 1 mysql mysql    2108 Aug 21 18:50 columns_priv.frm
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 db.MAD
-rw-rw----. 1 mysql mysql   24576 Aug 21 18:50 db.MAI
-rw-rw----. 1 mysql mysql    2713 Aug 21 18:50 db.frm
-rw-rw----. 1 mysql mysql      65 Aug 21 18:50 db.opt
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 event.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 event.MAI
-rw-rw----. 1 mysql mysql    3752 Aug 21 18:50 event.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 func.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 func.MAI
-rw-rw----. 1 mysql mysql    1580 Aug 21 18:50 func.frm
-rw-rw----. 1 mysql mysql      35 Aug 21 18:50 general_log.CSM
-rw-rw----. 1 mysql mysql       0 Aug 21 18:50 general_log.CSV
-rw-rw----. 1 mysql mysql     804 Aug 21 18:50 general_log.frm
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 global_priv.MAD
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 global_priv.MAI
-rw-rw----. 1 mysql mysql    1451 Aug 21 18:50 global_priv.frm
-rw-rw----. 1 mysql mysql    1024 Aug 21 18:50 gtid_slave_pos.frm
-rw-rw----. 1 mysql mysql   65536 Aug 21 18:50 gtid_slave_pos.ibd
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 help_category.MAD
-rw-rw----. 1 mysql mysql   24576 Aug 21 18:50 help_category.MAI
-rw-rw----. 1 mysql mysql    1704 Aug 21 18:50 help_category.frm
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 help_keyword.MAD
-rw-rw----. 1 mysql mysql   24576 Aug 21 18:50 help_keyword.MAI
-rw-rw----. 1 mysql mysql    1636 Aug 21 18:50 help_keyword.frm
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 help_relation.MAD
-rw-rw----. 1 mysql mysql   24576 Aug 21 18:50 help_relation.MAI
-rw-rw----. 1 mysql mysql    1457 Aug 21 18:50 help_relation.frm
-rw-rw----. 1 mysql mysql 2318336 Aug 21 18:50 help_topic.MAD
-rw-rw----. 1 mysql mysql   40960 Aug 21 18:50 help_topic.MAI
-rw-rw----. 1 mysql mysql    1774 Aug 21 18:50 help_topic.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 index_stats.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 index_stats.MAI
-rw-rw----. 1 mysql mysql    1627 Aug 21 18:50 index_stats.frm
-rw-rw----. 1 mysql mysql    5404 Aug 21 18:50 innodb_index_stats.frm
-rw-rw----. 1 mysql mysql   65536 Aug 21 18:50 innodb_index_stats.ibd
-rw-rw----. 1 mysql mysql    1909 Aug 21 18:50 innodb_table_stats.frm
-rw-rw----. 1 mysql mysql   65536 Aug 21 18:50 innodb_table_stats.ibd
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 plugin.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 plugin.MAI
-rw-rw----. 1 mysql mysql    1516 Aug 21 18:50 plugin.frm
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 proc.MAD
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 proc.MAI
-rw-rw----. 1 mysql mysql    3549 Aug 21 18:50 proc.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 procs_priv.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 procs_priv.MAI
-rw-rw----. 1 mysql mysql    2893 Aug 21 18:50 procs_priv.frm
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 proxies_priv.MAD
-rw-rw----. 1 mysql mysql   24576 Aug 21 18:50 proxies_priv.MAI
-rw-rw----. 1 mysql mysql    2837 Aug 21 18:50 proxies_priv.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 roles_mapping.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 roles_mapping.MAI
-rw-rw----. 1 mysql mysql    1659 Aug 21 18:50 roles_mapping.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 servers.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 servers.MAI
-rw-rw----. 1 mysql mysql   10000 Aug 21 18:50 servers.frm
-rw-rw----. 1 mysql mysql      35 Aug 21 18:50 slow_log.CSM
-rw-rw----. 1 mysql mysql       0 Aug 21 18:50 slow_log.CSV
-rw-rw----. 1 mysql mysql    2374 Aug 21 18:50 slow_log.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 table_stats.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 table_stats.MAI
-rw-rw----. 1 mysql mysql    1372 Aug 21 18:50 table_stats.frm
-rw-rw----. 1 mysql mysql   16384 Aug 21 18:50 tables_priv.MAD
-rw-rw----. 1 mysql mysql   24576 Aug 21 18:50 tables_priv.MAI
-rw-rw----. 1 mysql mysql    2978 Aug 21 18:50 tables_priv.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone.MAI
-rw-rw----. 1 mysql mysql     971 Aug 21 18:50 time_zone.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_leap_second.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_leap_second.MAI
-rw-rw----. 1 mysql mysql     969 Aug 21 18:50 time_zone_leap_second.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_name.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_name.MAI
-rw-rw----. 1 mysql mysql    1144 Aug 21 18:50 time_zone_name.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_transition.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_transition.MAI
-rw-rw----. 1 mysql mysql    1011 Aug 21 18:50 time_zone_transition.frm
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_transition_type.MAD
-rw-rw----. 1 mysql mysql    8192 Aug 21 18:50 time_zone_transition_type.MAI
-rw-rw----. 1 mysql mysql    1077 Aug 21 18:50 time_zone_transition_type.frm
-rw-rw----. 1 mysql mysql    2618 Aug 21 18:50 transaction_registry.frm
-rw-rw----. 1 mysql mysql  114688 Aug 21 18:50 transaction_registry.ibd
-rw-rw----. 1 mysql mysql   13588 Aug 21 18:50 user.frm

./data/performance_schema:
total 4
-rw-rw----. 1 mysql mysql 61 Aug 21 18:50 db.opt

./data/todolist:
total 72
-rw-rw----. 1 mysql mysql    65 Aug 21 18:50 db.opt
-rw-rw----. 1 mysql mysql   995 Aug 21 18:50 todo_item_models.frm
-rw-rw----. 1 mysql mysql 65536 Aug 21 18:50 todo_item_models.ibd

./lost+found:
total 0

@sseago
Collaborator Author

sseago commented Sep 25, 2024

@sseago
Collaborator Author

sseago commented Sep 25, 2024

@Lyndon-Li With debug logging enabled, here's the actual permission denied call. It's a directory access:

time="2024-09-25T14:53:32Z" level=debug msg="snapshotted directory" dataupload=mysql-datamover9-psj8b dur="119.195µs" error="unable to read directory: open /85416a60-7d08-4451-9773-760f3c3a6eee: permission denied" logModule=kopia/uploader logSource="pkg/kopia/kopia_log.go:92" parentSnapshot= path=. realSource=mysql-persistent/mysql

@sseago
Collaborator Author

sseago commented Sep 25, 2024

Narrowing it down. It looks like the mounted cloned volume cannot be accessed by the (root user but non-privileged container) environment where kopia runs. Here's what I got by keeping the pod open and connecting directly via rsh:

sh-5.1# ls -l /
total 96656
drwxrwsr-x.   3 root 1000690000     4096 Sep 25 15:32 951cf9c0-11d6-4d56-8e8f-b0173022eccc

sh-5.1# ls /951cf9c0-11d6-4d56-8e8f-b0173022eccc
ls: cannot open directory '/951cf9c0-11d6-4d56-8e8f-b0173022eccc': Permission denied

sh-5.1# whoami
root

@sseago
Collaborator Author

sseago commented Sep 25, 2024

I suspect that in an OpenShift environment, we may need to add some permissions to the Velero SA, modify the OpenShift SecurityContextConstraints, or modify the pod or container security context for the datamover backup pod.

The basic issue here seems to be that the cloned volume is owned/created by a different user, and even as root, in OpenShift, the backup pod doesn't have read access. Restore works fine because we're working with a newly provisioned volume that the restore pod owns, and the ownership-change operation at the end is also fine, since the pod already runs as root.

@sseago
Collaborator Author

sseago commented Sep 25, 2024

@Lyndon-Li I think we isolated the problem. The way the pod is currently created, both the volumeMounts and the volumes entries have "readOnly: true" -- if we remove that from the volumes entry, but leave it for volumeMounts, backup works just fine on OpenShift. Here's my diff:

 sseago@p1gen3:~/ocp-migrations/velero$ git diff
diff --git a/pkg/exposer/csi_snapshot.go b/pkg/exposer/csi_snapshot.go
index 4dcc50d12..78d451127 100644
--- a/pkg/exposer/csi_snapshot.go
+++ b/pkg/exposer/csi_snapshot.go
@@ -461,7 +461,7 @@ func (e *csiSnapshotExposer) createBackupPod(
                VolumeSource: corev1.VolumeSource{
                        PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                                ClaimName: backupPVC.Name,
-                               ReadOnly:  true,
+                               //ReadOnly:  true,
                        },
                },
        }}

In my pod definition, this results in readOnly for volumeMounts but not volumes -- but effectively the volume is read-only in the pod:

    volumeMounts:
    - mountPath: /3c4070e7-6782-44cd-a26b-3c91d29c327b
      name: 3c4070e7-6782-44cd-a26b-3c91d29c327b
      readOnly: true
...
  volumes:
  - name: 3c4070e7-6782-44cd-a26b-3c91d29c327b
    persistentVolumeClaim:
      claimName: nginx-datamover6-pgfgk

Read-only is enforced:

/3c4070e7-6782-44cd-a26b-3c91d29c327b # touch hello.txt
touch: hello.txt: Read-only file system
/3c4070e7-6782-44cd-a26b-3c91d29c327b # 
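
For readers comparing the two settings: the readOnly knob discussed here lives in two different fields of the pod API. A minimal sketch, assuming the k8s.io/api/core/v1 types and reusing the names from the pod dump above (illustrative only, not the exposer code):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    volName := "3c4070e7-6782-44cd-a26b-3c91d29c327b" // from the pod dump above

    // pod.spec.volumes entry: ReadOnly on the PVC volume source is what the diff removes.
    vol := corev1.Volume{
        Name: volName,
        VolumeSource: corev1.VolumeSource{
            PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                ClaimName: "nginx-datamover6-pgfgk",
                // ReadOnly deliberately left unset (false), per the diff above
            },
        },
    }

    // container volumeMounts entry: ReadOnly stays true, so the mount is still read-only.
    mount := corev1.VolumeMount{
        Name:      volName,
        MountPath: "/" + volName,
        ReadOnly:  true,
    }

    fmt.Printf("volume source: %+v\nmount readOnly: %v\n", vol.VolumeSource.PersistentVolumeClaim, mount.ReadOnly)
}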

@shubham-pampattiwar
Collaborator

@Lyndon-Li PR for the proposed fix: #8248

@Lyndon-Li
Contributor

Lyndon-Li commented Sep 26, 2024

@sseago @shubham-pampattiwar

I have two concerns with the fix #8248:

  1. Will this change cause the ceph storage to still go into unflatten mode, so that issue "Allow setting the /spec/accessModes of a PVC created for the CSI snapshot data movement" (#7747) cannot be fixed?
  2. The ReadOnly field in VolumeSource tells the CSI driver to use the ro flag for the NodePublishVolume call. Strictly speaking it should be set to true, since the volume itself is read-only. Therefore, I am afraid that removing the flag while keeping readOnly in the volumeMount may cause problems in some environments with strict checks.

Therefore, let's troubleshoot it further; simply removing ReadOnly may not be a good option.
Please run the tests below in your env, so we can see what permission is required and what the current user has:

  • Inside the problematic pod run ls -n / and check the uid/gid for folder 951cf9c0-11d6-4d56-8e8f-b0173022eccc
  • Inside the problematic pod run cat /proc/self/uid_map and see the uid/gid inside/outside of the pod
  • Create a pod with the same configuration as the backupPod --- podSpec->securityContext->runAsUser: 0, podSpec->volumeSource->readOnly: true, podSpec->container[0]->volumeMount->readOnly: true, pvcSpec->accessMode: readOnlyMany --- and see if you can access the volume data or not (a sketch of such a test pod follows below)
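
A minimal sketch of the test pod described in the last bullet, assuming the k8s.io/api/core/v1 types. The names and image are placeholders, and the PVC it points at is assumed to have been created with accessModes: [ReadOnlyMany]:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// testBackupPod mirrors the backupPod configuration from the bullet above:
// runAsUser 0, readOnly volume source, readOnly volumeMount. The PVC named by
// pvcName is assumed to have been created with accessMode ReadOnlyMany.
func testBackupPod(pvcName string) *corev1.Pod {
    root := int64(0)
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "readonly-access-test"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &root},
            Containers: []corev1.Container{{
                Name:    "test",
                Image:   "busybox", // placeholder image
                Command: []string{"sleep", "3600"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "target",
                    MountPath: "/target",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "target",
                VolumeSource: corev1.VolumeSource{
                    PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                        ClaimName: pvcName,
                        ReadOnly:  true,
                    },
                },
            }},
        },
    }
}

func main() { _ = testBackupPod("my-readonly-pvc") }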

@sseago
Collaborator Author

sseago commented Sep 26, 2024

@Lyndon-Li
selinux relabeling (or lack thereof when the pod.spec.volumes[0] entry is readOnly) seems to be the culprit. From the node:

sh-5.1# ls -Z pvc-aa024669-d91a-4368-a65d-5dfadc3ccbce/
        system_u:object_r:unlabeled_t:s0 mount	system_u:object_r:container_var_lib_t:s0 vol_data.json
sh-5.1# ls -Z pvc-b76a13da-78e4-4f94-9440-0438cb3f80ce/
system_u:object_r:container_file_t:s0:c25,c26 mount       system_u:object_r:container_var_lib_t:s0 vol_data.json

We created a test pod with 2 volumes. Both have readOnly in the volumeMounts entry, but only the first one listed above has readOnly in the volumes entry in pod.spec.
So the volume mounted with unlabeled_t:s0 gives a permission denied error when trying to access it from the pod, while the one with container_file_t:s0:c25,c26 works fine.

It seems that the only thing OpenShift-specific here is that OpenShift enables selinux by default. I suspect non-OpenShift clusters would see the same problem with the 1.15 datamover if selinux were enabled.

I think there are several possible resolutions, although some of these may not be feasible:

  1. Figure out a way, via the pod security context, to provide additional selinux privileges that allow the pod user to access the unlabeled_t mount.
  2. Figure out a way for selinux relabeling to actually work on the mount when the pod.spec.volumes entry has readOnly set (this may not be possible).
  3. Provide a configurable way to disable readOnly on the volumes entry -- i.e. don't always do it as in "Remove ReadOnly flag from backupPod Volumes spec" (#8248), but make it a (non-default) option and document that selinux-enabled clusters such as OpenShift require the change.
  4. Provide a configurable way to enable privileged mode for backup (such as this PR, but maybe without the restore pod changes).

I share your concern about the ceph shallow copy use case -- removing readOnly may break that. If it does, we may need both 3) and 4) above (if we can't do 1) or 2)) -- and document that users who need selinux but not shallow copy should use 3), while users who need both selinux and shallow copy should use 4).

@shubham-pampattiwar @msfrucht @weshayutin @shawn-hurley @kaovilai

@sseago
Collaborator Author

sseago commented Sep 26, 2024

@Lyndon-Li I found something that works without setting "privileged=true" or changing the pod mounts. Since the problem is selinux-specific, we found a way to make this work via the securityContext:
Here's my change:

$ git diff
diff --git a/pkg/exposer/csi_snapshot.go b/pkg/exposer/csi_snapshot.go
index 4dcc50d12..3d2890cb7 100644
--- a/pkg/exposer/csi_snapshot.go
+++ b/pkg/exposer/csi_snapshot.go
@@ -546,6 +546,9 @@ func (e *csiSnapshotExposer) createBackupPod(
                        RestartPolicy:                 corev1.RestartPolicyNever,
                        SecurityContext: &corev1.PodSecurityContext{
                                RunAsUser: &userID,
+                               SELinuxOptions: &corev1.SELinuxOptions{
+                                       Type: "spc_t",
+                               },
                        },
                },
        }

As @msfrucht pointed out to me today, this also has a performance advantage. For volumes with a large number of files, the selinux relabeling on mount is a slow process. Setting type: spc_t bypasses that and allows the pod to access unlabeled_t volumes.

So this should be completely ignored in non-selinux environments, as long as it's not a windows pod -- but since it's the velero container image, it's never a windows pod. It might be good to test this out in your non-OpenShift env just to make sure it doesn't break or change anything, but it shouldn't -- it should only alter the selinux context it's running in if selinux is enabled.
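
If it helps with that testing, one way to confirm which SELinux domain the process actually got is to read /proc/self/attr/current from inside the data mover container. A sketch follows; on hosts without SELinux the file may be empty or absent:

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    // /proc/self/attr/current holds the calling process's SELinux context,
    // e.g. something ending in ":spc_t:s0" after the change above.
    data, err := os.ReadFile("/proc/self/attr/current")
    if err != nil {
        fmt.Println("no SELinux context available:", err)
        return
    }
    fmt.Println("SELinux context:", strings.Trim(string(data), "\x00\n"))
}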

@weshayutin @kaovilai @shubham-pampattiwar @shawn-hurley

@Lyndon-Li
Contributor

@sseago @msfrucht
Thanks for the investigation, the cause is now very clear.
I've created a separate issue #8249 to track this problem. I tested the proposed solution in my non-selinux env, and I didn't see any problems.
I also agree with the judgement that the backupPVC should not be relabeled, because it is not necessary considering how the data is being used and how time-consuming relabeling is.
So I created PR #8250 for it. Please test it in your selinux env, and if the fix resolves the problem, you can go ahead and merge the PR.

Besides, I have two more questions. Though I think the current fix is good enough for now, these might be something we need to follow up on in the future:

  1. Since containers with the spc_t option are treated as super-privileged containers, is it possible that some runtime env checks this option in the yaml and prevents the pod from running unless an exception is explicitly filed?
  2. Is this understanding of why the readOnly volume is not relabeled correct? --- since the volume is readOnly, nothing can add the .autorelabel file to the volume or otherwise write the relabel info onto it. If this understanding is correct, the current problem is more like a Kubernetes problem --- it mounts the volume readOnly, so it cannot relabel the volume, yet the container process is still checked against selinux access control. If so, do we need to document it anywhere?

@sseago
Collaborator Author

sseago commented Sep 27, 2024

@Lyndon-Li I'm not 100% sure on 1). Regarding 2), I think that's at least reasonably close to the answer. I don't know the mechanics of why the relabel fails, but from reading docs (and talking to some people who know this stuff better than I do), it's pretty clear that relabeling won't happen with a read-only access mode (and also when the volumes entry is marked readOnly). On the first concern, there may be users who don't want to open up permissions like this (although it's still an improvement over making the pod privileged, which disables selinux enforcement and more), so for those users we may in the future want to provide an option to use either spc_t or removal of the readOnly attribute. But for now let's just get this working. We'll be testing the PR today.
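
To make that future option a bit more concrete, one hypothetical shape it could take. Nothing like this exists in Velero today; the package placement, type, and names below are invented purely for illustration:

package nodeagent // hypothetical placement; illustration only

// BackupPodVolumeAccess is a hypothetical knob for how the backup pod gets
// access to the snapshot volume.
type BackupPodVolumeAccess string

const (
    // Run the backup pod with SELinuxOptions{Type: "spc_t"} (the fix in #8250).
    BackupPodAccessSPCT BackupPodVolumeAccess = "spc_t"
    // Drop ReadOnly from the pod.spec.volumes entry instead (the #8248 approach),
    // for users who don't want an spc_t-labeled container.
    BackupPodAccessWritableVolumeSource BackupPodVolumeAccess = "writable-volume-source"
)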

@sseago
Collaborator Author

sseago commented Sep 27, 2024

Closing in favor of #8250
