
Ceph refactor #1538

Draft · MadnessASAP wants to merge 7 commits into main from ceph-refactor
Conversation

MadnessASAP
Contributor

Continuing from #1473, this is now approaching completion. It passes the full test suite, and incusd can now use Ceph on my own cluster where it previously could not.

Also, initial tests seem to indicate it fixes #1023:

```
user@host ~> incus storage show ceph
config:
  cephfs.cluster_name: ceph
  cephfs.path: cephfs/volumes/test-group/test-subvol/e8459ffc-ac7e-41de-bb96-aaef2ee0e8b5
  cephfs.user.name: admin
  source: cephfs/volumes/test-group/test-subvol/e8459ffc-ac7e-41de-bb96-aaef2ee0e8b5
description: ""
name: ceph
driver: cephfs
used_by: []
status: Created
locations:
- none
```

What remains is to write a few tests capturing some of the additional Ceph superpowers and to squash the whole mess of commits.

With that in mind, comments and concerns are now very much welcome.

Previously, during Ceph storage mounting, Incus would try to parse the
`ceph.conf` files itself. While this approach works in the majority of
cases, it failed to cover more than a few common setups and was fragile.

This commit changes that to instead make calls to the Ceph admin tool
`ceph`, which provides easy-to-parse JSON output and is by definition
"correct".

I also introduce a `CephBuildMount` function that essentially takes the
output of `CephMonitors`, `CephFsid`, `CephKeyring`, and the CephFS path
to produce a source path and options list suitable for passing to
`mount`. This also uses the new mount syntax style, enabling support for
(sub)volumes.

With this, the `cephfs.getConfig` function is made redundant, and the
`cephfs` driver is tweaked to use the modified discovery functions as
well as `CephBuildMount`.

Signed-off-by: Michael 'ASAP' Weinrich <[email protected]>
QEMU interfaces with librados to provide Ceph RBD support and as such
already has quite robust cluster discovery abilities. Instead of telling
QEMU everything it could possibly want to know and risking telling it
something wrong, tell it only what it needs to know (user, pool, image).
QEMU can almost certainly do a better and faster job of collecting the
information necessary to initiate a cluster connection.

Signed-off-by: Michael 'ASAP' Weinrich <[email protected]>
MadnessASAP force-pushed the ceph-refactor branch 2 times, most recently from 0496169 to ee4955e on December 24, 2024 02:04
Add an `INCUS_CEPH_CLIENT` variable for testing with a restricted-caps
client.

Moved cephfs testing to a nonstandard fs name for testing mount syntax.

Signed-off-by: Michael 'ASAP' Weinrich <[email protected]>
For testing different capability combos it's easiest to delete and
recreate the client; obviously this doesn't work if the client is admin.

Signed-off-by: Michael 'ASAP' Weinrich <[email protected]>
Successfully merging this pull request may close these issues.

Add support for CephFS volumes / sub-volumes