feat: Added support for creating shared LVM setups
- feature requested by GFS2
- adds support for creating shared VGs
- a shared LVM setup needs the lvmlockd service with the dlm lock manager to be running
- to test this change, the ha_cluster system role is used to set up a degenerate cluster on localhost
- requires a blivet version with shared LVM setup support (storaged-project/blivet#1123)
japokorn committed Oct 12, 2023
1 parent c4147d2 commit 9e201d3
Showing 6 changed files with 146 additions and 1 deletion.
11 changes: 11 additions & 0 deletions README.md
@@ -53,6 +53,17 @@ keys:
This specifies the type of pool to manage.
Valid values for `type`: `lvm`.

- `shared`

If set to `true`, the role creates or manages a shared volume group. Requires the
`lvmlockd` and `dlm` services to be configured and running (see the example below).

Default: `false`

__WARNING__: Modifying the `shared` value on an existing pool is a
destructive operation. The pool itself will be removed as part of the
process.

- `disks`

A list which specifies the set of disks to use as backing storage for the pool.
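For illustration, a minimal pool definition using the new option could look like this (a sketch only; the disk name `sdb` is hypothetical, and lvmlockd/dlm must already be running):

```yaml
storage_pools:
  - name: vg1
    type: lvm
    shared: true
    disks:
      - sdb
    volumes:
      - name: lv1
        size: 4g
        mount_point: /opt/test1
```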
2 changes: 2 additions & 0 deletions defaults/main.yml
@@ -27,6 +27,8 @@ storage_pool_defaults:
  raid_chunk_size: null
  raid_metadata_version: null

  shared: false

storage_volume_defaults:
  state: "present"
  type: lvm
3 changes: 2 additions & 1 deletion library/blivet.py
@@ -1527,7 +1527,7 @@ def _create(self):
        if not self._device:
            members = self._manage_encryption(self._create_members())
            try:
-               pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members)
+               pool_device = self._blivet.new_vg(name=self._pool['name'], parents=members, shared=self._pool['shared'])
            except Exception as e:
                raise BlivetAnsibleError("failed to set up pool '%s': %s" % (self._pool['name'], str(e)))

@@ -1823,6 +1823,7 @@ def run_module():
            raid_spare_count=dict(type='int'),
            raid_metadata_version=dict(type='str'),
            raid_chunk_size=dict(type='str'),
            shared=dict(type='bool'),
            state=dict(type='str', default='present', choices=['present', 'absent']),
            type=dict(type='str'),
            volumes=dict(type='list', elements='dict', default=list(),
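For context, the module simply forwards the new flag to blivet's `new_vg`. A minimal standalone sketch of the underlying blivet calls, assuming a blivet build that includes storaged-project/blivet#1123, a blank disk, and a hypothetical device name `sdb`:

```python
import blivet
from blivet.formats import get_format

b = blivet.Blivet()
b.reset()  # scan the current device tree

# Hypothetical blank disk used as the physical volume
disk = b.devicetree.get_device_by_name("sdb")
b.format_device(disk, get_format("lvmpv", device=disk.path))

# shared=True makes blivet create the VG with 'vgcreate --shared',
# which only succeeds while lvmlockd and the dlm lock manager are running
vg = b.new_vg(name="vg1", parents=[disk], shared=True)
b.create_device(vg)

b.do_it()  # commit the scheduled actions to disk
```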
3 changes: 3 additions & 0 deletions tests/collection-requirements.yml
@@ -0,0 +1,3 @@
---
collections:
- fedora.linux_system_roles
14 changes: 14 additions & 0 deletions tests/test-verify-pool.yml
@@ -15,6 +15,20 @@
# compression
# deduplication

- name: Get VG shared value status
  command: vgs --noheadings --binary -o shared {{ storage_test_pool.name }}
  register: vgs_dump
  when: storage_test_pool.type == 'lvm' and storage_test_pool.state == 'present'
  changed_when: false

- name: Verify that VG shared value checks out
  assert:
    that: (storage_test_pool.shared | bool) == ('1' in vgs_dump.stdout)
    msg: >-
      Shared VG presence ({{ storage_test_pool.shared }})
      does not match its expected state ({{ '1' in vgs_dump.stdout }})
  when: storage_test_pool.type == 'lvm' and storage_test_pool.state == 'present'

- name: Verify pool subset
  include_tasks: "test-verify-pool-{{ storage_test_pool_subset }}.yml"
  loop: "{{ _storage_pool_tests }}"
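For reference, `--binary` makes `vgs` print the `shared` attribute as `1` or `0`, so the assertion reduces to a substring check. Illustrative output for a shared VG named `vg1` (exact whitespace varies):

```
# vgs --noheadings --binary -o shared vg1
        1
```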
114 changes: 114 additions & 0 deletions tests/tests_lvm_pool_shared.yml
@@ -0,0 +1,114 @@
---
- name: Test LVM shared pools
  hosts: all
  become: true
  vars:
    storage_safe_mode: false
    storage_use_partitions: true
    mount_location1: '/opt/test1'
    volume1_size: '4g'

  tasks:
    - name: Create cluster
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: rhel9-1node
        # Users should vault-encrypt the password
        ha_cluster_hacluster_password: hapasswd
        ha_cluster_extra_packages:
          - dlm
          - lvm2-lockd
        ha_cluster_cluster_properties:
          - attrs:
              # Don't do this in production
              - name: stonith-enabled
                value: 'false'
        ha_cluster_resource_primitives:
          - id: dlm
            agent: 'ocf:pacemaker:controld'
            instance_attrs:
              - attrs:
                  # Don't do this in production
                  - name: allow_stonith_disabled
                    value: 'true'
          - id: lvmlockd
            agent: 'ocf:heartbeat:lvmlockd'
        ha_cluster_resource_groups:
          - id: locking
            resource_ids:
              - dlm
              - lvmlockd

    - name: Run the role
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage

    - name: Get unused disks
      include_tasks: get_unused_disk.yml
      vars:
        max_return: 1

    - name: Create a disk device mounted on {{ mount_location1 }}
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: "{{ unused_disks }}"
            type: lvm
            shared: true
            state: present
            volumes:
              - name: lv1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"

    - name: Verify role results
      include_tasks: verify-role-results.yml

    - name: Repeat the previous step to verify idempotence
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: "{{ unused_disks }}"
            type: lvm
            shared: true
            state: present
            volumes:
              - name: lv1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"

    - name: Verify role results
      include_tasks: verify-role-results.yml

    - name: Remove the device created above
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.storage
      vars:
        storage_pools:
          - name: vg1
            disks: "{{ unused_disks }}"
            type: lvm
            shared: true
            state: absent
            volumes:
              - name: lv1
                size: "{{ volume1_size }}"
                mount_point: "{{ mount_location1 }}"
    - name: Verify role results
      include_tasks: verify-role-results.yml

    - name: Remove cluster
      ansible.builtin.include_role:
        name: fedora.linux_system_roles.ha_cluster
      vars:
        ha_cluster_cluster_name: rhel9-1node
        ha_cluster_cluster_present: false
