[ClusterLabs] DRBD / LVM Global Filter Question
Strahil Nikolov
hunter86_bg at yahoo.com
Sat Apr 4 01:58:12 EDT 2020
On April 3, 2020 5:30:29 PM GMT+03:00, "Nickle, Richard" <rnickle at holycross.edu> wrote:
>On Fri, Apr 3, 2020 at 7:43 AM Eric Robinson <eric.robinson at psmnv.com>
>wrote:
>
>> Greetings,
>>
>>
>>
>> 2. If DRBD lives *below* or *between* LVM volumes, then:
>>
>> a. set global_filter to reject the DRBD backing devices
>> b. set write_cache_state = 0
>> c. set use_lvmetad = 0
>> d. set volume_list to include the block devices required to boot
>> e. remove /etc/lvm/cache/.cache
>> f. run lvscan
>> g. regenerate the initrd
>> h. reboot
>> i. use a cluster resource agent to activate/deactivate LVs as
>> required by cluster operation
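>>
>> A minimal lvm.conf sketch for steps a and d (the backing device
>> /dev/sdb1 and the boot VG vg_system are hypothetical placeholders):
>>
>> # devices { } section: hide the DRBD backing device from LVM scans
>> global_filter = [ "r|^/dev/sdb1$|", "a|.*|" ]
>> # activation { } section: only auto-activate the VG needed to boot
>> volume_list = [ "vg_system" ]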
>>
>>
>Eric,
>
>I have three DRBD 9 clusters (1 test, 2 in production) and I'm using
>SaltStack as a desired state configuration manager.
>
>My SaltStack formula uses 'filter' instead of 'global_filter' for 2a.
>It follows 2b., 2c., and 2e. It does not follow 2d., and for 2f. I use
>'vgscan', which I run after modifying 'filter' in lvm.conf.
>
>I do not regenerate my initrd as part of my build.
>
>I am using Pacemaker as my cluster resource manager.
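>
>For context, the LVM resource itself is created along these lines (the
>VG name vg_r0 is illustrative):
>
>$ sudo pcs resource create hanfs_lvm_r0 ocf:heartbeat:LVM \
>    volgrpname=vg_r0 exclusive=true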
>
>Additionally, I shut off lvm2-lvmetad:
>
>DRBD_Stop_lvmetad:
>  service.dead:
>    - name: lvm2-lvmetad
>    - enable: False
>    - watch:
>      - file: /etc/lvm/lvm.conf
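>
>A quick check that the state took effect (illustrative):
>
>$ systemctl status lvm2-lvmetad   # should report inactive (dead), disabled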
>
>2a. (filter, not global_filter):
>
># https://docs.linbit.com/docs/users-guide-9.0/#s-nested-lvm
># 1a. Set an appropriate filter option in your /etc/lvm/lvm.conf
>DRBD_LVM_Device_Filter:
>  file.replace:
>    - name: /etc/lvm/lvm.conf
>    # filter = [ "a|.*/|" ]
>    - pattern: ^(\s*#\s*This configuration option has an automatic default value.\s*\#\s*filter\s*=\s*\[\s*"a\|\.\*\/\|"\s*\]|filter\s+=\s+\[.*\])
>    - repl: filter = ["a|/dev/sd.*|", "a|/dev/drbd[0-9].*|", "r|.*|"]
>    - append_if_not_found: True
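>
>A quick sanity check after the filter is in place (illustrative):
>
>$ pvs -o pv_name,vg_name   # only /dev/sd* and /dev/drbd* PVs should appear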
>
>2f. (note the require: statement, which fires after successful 2a.):
>
># https://docs.linbit.com/docs/users-guide-9.0/#s-nested-lvm
># 1b. After modifying the lvm.conf file, you must run the vgscan command
># so LVM discards its configuration cache and re-scans devices for PV
># signatures.
>DRBD_LVM_Device_Rescan:
>  cmd.run:
>    - name: vgscan
>    - require:
>      - DRBD_LVM_Device_Filter
>
>2b.
>
># https://docs.linbit.com/docs/users-guide-9.0/#s-nested-lvm
># 2. Disable the LVM cache by setting write_cache_state = 0
>DRBD_LVM_Write_Cache:
>  file.replace:
>    - name: /etc/lvm/lvm.conf
>    - pattern: ^\s*write_cache_state\s*=\s*1\s*.*$
>    - repl: write_cache_state = 0
>    - append_if_not_found: True
>
>2c.
>
>#$ sudo pcs resource debug-start hanfs_lvm_r0
># Operation start for hanfs_lvm_r0 (ocf:heartbeat:LVM) returned:
># 'unknown error' (1)
># > stderr: WARNING: Disable lvmetad in lvm.conf. lvmetad should never
># be enabled in a clustered environment. Set use_lvmetad=0 and kill the
># lvmetad process
>
># use_lvmetad = 0
>DRBD_Disable_lvmetad:
>  file.replace:
>    - name: /etc/lvm/lvm.conf
>    - pattern: ^\s*\#*\s*use_lvmetad\s*=\s*[01]\s*
>    - repl: use_lvmetad = 0\n
>    - append_if_not_found: True
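>
>To confirm the setting afterwards (illustrative):
>
>$ lvmconfig global/use_lvmetad   # should print use_lvmetad=0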
>
>2e.
>
># https://docs.linbit.com/docs/users-guide-9.0/#s-nested-lvm
># 2a. Disable the LVM cache by setting write_cache_state = 0
># 2b. After disabling the LVM cache, make sure you remove any stale cache
># entries by deleting /etc/lvm/cache/.cache
>DRBD_LVM_Clear_Cache:
>  file.absent:
>    - name: /etc/lvm/cache/.cache
>
>Thanks,
>
>Rick
I can tell you that filtering is important.
You can use an LV on top of DRBD, and LVM on top of that again.
Yet in order to fail over, you need to be sure that the top layer is not holding the DRBD device open (for example, an active LV).
For an HA iSCSI target on top of DRBD (https://forums.centos.org/viewtopic.php?t=65539), you need to put an LVM filter in place, because you never know if the iSCSI initiators are using the LUN as a PV in their own setup.
I use global_filter, as regular filters might be ignored by LVM.
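A minimal sketch for the iSCSI-target case (device names illustrative): reject the exported DRBD device on the target host so initiator-side PV signatures are never activated locally.

# /etc/lvm/lvm.conf on the iSCSI target host, devices { } section
global_filter = [ "r|^/dev/drbd.*|", "a|.*|" ]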
The main question is whether you can provide the details of your setup and what worries you.
Usually the best test is a simple failover.
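A simple failover test might look like this (pcs 0.10 syntax; the node name node1 and resource r0 are placeholders):

# pcs node standby node1      # resources should migrate to the peer
# drbdadm status r0           # the peer should now be Primary
# pcs node unstandby node1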
Best Regards,
Strahil Nikolov