[ClusterLabs] ocf:heartbeat:LVM or /etc/lvm/lvm.conf settings question
emi2fast at gmail.com
Wed Aug 10 17:32:32 EDT 2016
Does your LVM filter in /etc/lvm/lvm.conf include the DRBD devices (/dev/drbdX)?
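If not, a filter that accepts the DRBD devices and rejects everything else (so LVM never scans the backing disks and sees duplicate PVs) might look roughly like this — a sketch only, adjust the patterns to your hosts:

```
# /etc/lvm/lvm.conf (illustrative — tailor the accept/reject patterns to your setup)
filter = [ "a|/dev/drbd.*|", "r|.*|" ]
```

Note that if lvmetad is in use (use_lvmetad = 1), a PV that appears after boot may still need a manual "pvscan --cache /dev/drbdX" unless udev triggers the scan for you.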
2016-08-10 21:38 GMT+02:00 Darren Kinley <dkinley at mdacorporation.com>:
> I have an LVM logical volume and used DRBD to replicate it to another node.
> The /dev/drbd0 has PV/VG/LVs which are mostly working.
> I have colocation and order constraints that bring up a VIP, promote DRBD
> and start LVM plus file systems.
> The problem arises when I take the active node offline.
> At that point the VIP and DRBD master move but the PV/VG are not
> scanned/activated, the file systems are not mounted
> and “crm status” reports an error for the ocf:heartbeat:LVM resource
> “Volume group [replicated] does not exist or contains an error!
> Using volume group(s) on command line.”
> At this point the /dev/drbd0 physical volume is not known to the server and
> the fix requires
> root# pvscan --cache /dev/drbd0
> root# crm resource cleanup grp-ars-lvm-fs
> Is there an ocf:heartbeat:LVM setting or /etc/lvm/lvm.conf settings to force
> the PV/VGs to come online?
> It is not clear whether the RA script's "exclusive" or "tag" settings are
> needed, or whether there is a corresponding lvm.conf setting.
> Is lvm.conf "write_cache_state = 0" recommended by the DRBD User Guide?
> Users mailing list: Users at clusterlabs.org
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org