[ClusterLabs] ocf:heartbeat:LVM or /etc/lvm/lvm.conf settings question

Darren Kinley dkinley at mdacorporation.com
Wed Aug 10 19:38:21 UTC 2016


Hi,

I have an LVM logical volume and used DRBD to replicate it to another server.
The /dev/drbd0 device has a PV/VG/LVs on it, which are mostly working.
I have colocation and order constraints that bring up a VIP, promote DRBD, and start the LVM and file system resources.
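
Roughly, the configuration looks like this (apart from the group name and the "replicated" volume group, the resource names and parameter values below are simplified stand-ins, not my exact settings):

primitive p-vip ocf:heartbeat:IPaddr2 params ip=192.0.2.10 cidr_netmask=24
primitive p-drbd ocf:linbit:drbd params drbd_resource=r0
ms ms-drbd p-drbd meta master-max=1 clone-max=2 notify=true
primitive p-lvm ocf:heartbeat:LVM params volgrpname=replicated
primitive p-fs ocf:heartbeat:Filesystem params device=/dev/replicated/lv0 directory=/srv/data fstype=ext4
group grp-ars-lvm-fs p-lvm p-fs
colocation col-vip-with-drbd inf: p-vip ms-drbd:Master
colocation col-grp-with-drbd inf: grp-ars-lvm-fs ms-drbd:Master
order ord-promote-then-grp inf: ms-drbd:promote grp-ars-lvm-fs:start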

The problem arises when I take the active node offline.
At that point the VIP and DRBD master move, but the PV/VG is not scanned/activated, the file systems are not mounted,
and "crm status" reports an error for the ocf:heartbeat:LVM resource:

"Volume group [replicated] does not exist or contains an error!
Using volume group(s) on command line."

At this point the /dev/drbd0 physical volume is not known to the server, and the fix requires:

root# pvscan --cache /dev/drbd0
root# crm resource cleanup grp-ars-lvm-fs

Is there an ocf:heartbeat:LVM parameter or an /etc/lvm/lvm.conf setting that forces the PV/VGs to come online?
It is not clear to me whether the RA's "exclusive" or "tag" parameters are needed, or whether there is a corresponding lvm.conf setting.
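
In case it helps to be concrete, my reading is that tag-based exclusive activation would look roughly like the following, but I have not tested it and everything except the volume group name is guesswork on my part:

primitive p-lvm ocf:heartbeat:LVM params volgrpname=replicated exclusive=true tag=pacemaker

together with an lvm.conf volume_list that deliberately omits the replicated VG, so that only the cluster activates it:

activation {
    volume_list = [ "rootvg" ]
}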

Is l"vm.conf write_cache_state = 0" recommended by the DRBD User Guide correct?

Thanks,
Darren

