fix: raid volume pre cleanup
Cause: Existing data were not removed from member disks before RAID
volume creation.

Fix: RAID volumes now remove existing data from member disks as needed before creation.

Signed-off-by: Jan Pokorny <[email protected]>
japokorn committed Jul 19, 2023
1 parent d95e590 commit 19af7d8
Showing 2 changed files with 106 additions and 2 deletions.
9 changes: 7 additions & 2 deletions library/blivet.py
@@ -1002,8 +1002,13 @@ def _create(self):
         if self._device:
             return
 
-        if safe_mode:
-            raise BlivetAnsibleError("cannot create new RAID in safe mode")
+        for spec in self._volume["disks"]:
+            disk = self._blivet.devicetree.resolve_device(spec)
+            if not disk.isleaf or disk.format.type is not None:
+                if safe_mode and (disk.format.type is not None or disk.format.name != get_format(None).name):
+                    raise BlivetAnsibleError("cannot remove existing formatting and/or devices on disk '%s' in safe mode" % disk.name)
+                else:
+                    self._blivet.devicetree.recursive_remove(disk)
 
         # begin creating the devices
         members = self._create_raid_members(self._volume["disks"])
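
For readers who want the per-disk decision in isolation, the new hunk boils down to the sketch below. It is a minimal illustration, not the module's code: it reuses only the blivet devicetree calls visible in the diff (resolve_device, recursive_remove, disk.isleaf, disk.format), assumes get_format is imported from blivet.formats, and substitutes a plain RuntimeError for the module's BlivetAnsibleError; the helper name prepare_member_disk is hypothetical.

from blivet.formats import get_format


def prepare_member_disk(blivet_handle, spec, safe_mode):
    """Wipe one prospective RAID member disk, or refuse in safe mode."""
    disk = blivet_handle.devicetree.resolve_device(spec)

    # A disk only needs cleanup when it still carries child devices
    # (partitions, LVM, old RAID metadata) or some known formatting.
    if disk.isleaf and disk.format.type is None:
        return

    # get_format(None) stands in for "no/unknown formatting", so the name
    # comparison also catches formats blivet knows only by name (an
    # assumption based on how the role uses the same check elsewhere).
    has_formatting = (disk.format.type is not None
                      or disk.format.name != get_format(None).name)

    if safe_mode and has_formatting:
        raise RuntimeError("cannot remove existing formatting and/or devices "
                           "on disk '%s' in safe mode" % disk.name)

    # Outside safe mode (or when there is nothing recognizable to lose),
    # schedule removal of everything on the disk before RAID creation.
    blivet_handle.devicetree.recursive_remove(disk)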
99 changes: 99 additions & 0 deletions tests/tests_raid_volume_cleanup.yml
@@ -0,0 +1,99 @@
---
- name: Test RAID cleanup
  hosts: all
  become: true
  vars:
    storage_safe_mode: false
    storage_use_partitions: true
    mount_location1: '/opt/test1'
    mount_location2: '/opt/test2'
    volume1_size: '5g'
    volume2_size: '4g'

  tasks:
    - name: Run the role
      include_role:
        name: linux-system-roles.storage

    - name: Mark tasks to be skipped
      set_fact:
        storage_skip_checks:
          - blivet_available
          - packages_installed
          - service_facts

    - name: Get unused disks
      include_tasks: get_unused_disk.yml
      vars:
        max_return: 3
        disks_needed: 3

    - name: Create two LVM logical volumes under volume group 'foo'
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_pools:
          - name: foo
            disks: "{{ unused_disks }}"
            volumes:
              - name: test1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"
              - name: test2
                size: "{{ volume2_size }}"
                mount_point: "{{ mount_location2 }}"

    - name: Enable safe mode
      set_fact:
        storage_safe_mode: true

    - name: >-
        Try to overwrite existing device with raid volume
        and safe mode on (expect failure)
      include_tasks: verify-role-failed.yml
      vars:
        __storage_failed_regex: cannot remove existing formatting.*in safe mode
        __storage_failed_msg: >-
          Unexpected behavior when overwriting existing device with RAID volume
        __storage_failed_params:
          storage_volumes:
            - name: test1
              type: raid
              raid_level: "raid1"
              raid_device_count: 2
              raid_spare_count: 1
              disks: "{{ unused_disks }}"
              mount_point: "{{ mount_location1 }}"
              state: present

    - name: Disable safe mode
      set_fact:
        storage_safe_mode: false

    - name: Create a RAID1 device mounted on "{{ mount_location1 }}"
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: raid
            raid_level: "raid1"
            raid_device_count: 2
            raid_spare_count: 1
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: present

    - name: Cleanup - remove the RAID volume created above
      include_role:
        name: linux-system-roles.storage
      vars:
        storage_volumes:
          - name: test1
            type: raid
            raid_level: "raid1"
            raid_device_count: 2
            raid_spare_count: 1
            disks: "{{ unused_disks }}"
            mount_point: "{{ mount_location1 }}"
            state: absent
