RFE: Basic support for creating shared logical volumes #341
Comments
I think this should be relatively simple to implement, most of the changes will be in blivet (the storage library the role uses).
So would this be for the volume (in the storage role this is the LV) or for the pool (for us this is the VG)? If I understand it correctly, the VG itself is shared, so we'll simply create all volumes (LVs) in it with the `--activate sy` option. A few additional questions:
It is possible for an LV in a shared VG to be activated in exclusive mode, so that only the first node in the cluster to activate it can use it. It might make sense to have an option for that as well. The "LV activation" section in this doc describes the options: https://man7.org/linux/man-pages/man8/lvmlockd.8.html
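The two activation modes mentioned here could be sketched as Ansible tasks (the VG/LV names are placeholders; `ey` and `sy` are the activation modes described in lvmlockd(8)):

```yaml
# Exclusive activation: only one node in the cluster can activate the LV.
- name: Activate the LV exclusively
  ansible.builtin.command: lvchange --activate ey vg_gfs2/lv_data
  changed_when: false

# Shared activation: multiple nodes may activate the LV concurrently.
- name: Activate the LV shared
  ansible.builtin.command: lvchange --activate sy vg_gfs2/lv_data
  changed_when: false
```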
In that case there's an assumption in the playbook that doesn't match how the shared storage is being used in the cluster. I would treat it as an error to be safe.
As far as I know, removal has to be done in this order:
Perhaps we could loop in @teigland on this to check my assertions.
All the possible options:
In a local VG (not shared), the e|s characters are ignored, and all activation is -ay.
Right, pick one node to do lvremove and vgremove. That node would skip the lockstop, which is built into vgremove.
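The removal order described above could be sketched as tasks run on a single chosen node (names are placeholders; note that `vgremove` performs the lock stop itself):

```yaml
- name: Remove the LV first
  ansible.builtin.command: lvremove -y vg_gfs2/lv_data

- name: Remove the VG (vgremove includes the built-in lock stop)
  ansible.builtin.command: vgremove -y vg_gfs2
```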
So I guess we'll just assume that "someone else" did the lockstop calls and we are only going to do a standard removal.
For the gfs2 role, that sounds fine. The HA cluster resources will manage the locks in the normal case, and we don't support removing the volume groups in the role because that's a destructive operation that the user should consider carefully.
Fixed via #388
In the new gfs2 role we idempotently create LVs and set them up for shared storage using community.general.* modules to set up PVs as normal and then:

1. `--shared` option to `vgcreate`
2. `vgchange --lock-start <VG>`
3. `--activate sy` option to `lvcreate`
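The three steps above can be sketched as Ansible tasks using community.general modules, where the `vg_options` and `opts` parameters pass the extra flags through to `vgcreate` and `lvcreate`. Device, VG, and LV names here are placeholders:

```yaml
# Sketch of the current gfs2-role approach (placeholder names/devices).
- name: Create the shared VG (--shared is passed through to vgcreate)
  community.general.lvg:
    vg: vg_gfs2
    pvs: /dev/sdb
    vg_options: "--shared"

- name: Start the lockspace for the VG on this node
  ansible.builtin.command: vgchange --lock-start vg_gfs2
  changed_when: false

- name: Create the LV with shared activation (--activate sy goes to lvcreate)
  community.general.lvol:
    vg: vg_gfs2
    lv: lv_data
    size: 10g
    opts: "--activate sy"
```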
We would like to use the storage role for this purpose instead, to avoid bundling modules from community.general into linux-system-roles. The storage role currently does not provide a way to use these options.
The proposal is to add a new `shared: (true|false)` option for volumes to abstract this functionality in the storage role.

Step 2 is required for step 3 to work, but if step 2 cannot be implemented in the storage role, it should be sufficient for steps 1 and 3 to be supported separately so that the gfs2 role can run step 2 itself.
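If the proposal were adopted, a playbook might look roughly like this (a hypothetical sketch only; the `shared` key and all names are illustrative, not an existing interface):

```yaml
- hosts: cluster_nodes
  vars:
    storage_pools:
      - name: vg_gfs2
        disks: [sdb]
        shared: true        # proposed: vgcreate --shared + lock start
        volumes:
          - name: lv_data
            size: 10g       # would be activated with --activate sy
  roles:
    - linux-system-roles.storage
```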