[ClusterLabs] Resources stopped due to unmanage
Pavel Levshin
lpk at 581.spb.su
Mon Mar 12 15:36:28 EDT 2018
Hello.
I've just experienced a fault in my pacemaker-based cluster. Seriously,
I'm completely disoriented after this. Hopefully someone can give me a
hint...
A two-node cluster runs a few VirtualDomain resources along with their
common infrastructure (libvirtd, NFS and so on). It is currently
Pacemaker 1.1.16. The resources have ordering and colocation
constraints, which must ensure the proper start and stop order.
Unfortunately, these constraints have unwanted side effects. In
particular, due to mandatory ordering constraints, the cluster tends to
restart libvirtd when I need to stop a VM. This was on the list previously:
https://lists.clusterlabs.org/pipermail/users/2016-October/004288.html
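For illustration, a minimal sketch of the kind of constraints described above (the resource names `libvirtd-clone` and `vm3` are assumptions, not the poster's actual configuration):

```shell
# A mandatory ordering constraint like this ties the VM's lifecycle to
# libvirtd: any stop/start of one end of the chain can cascade to the
# other, which is the restart behavior described above.
pcs constraint order start libvirtd-clone then vm3
pcs constraint colocation add vm3 with libvirtd-clone

# An advisory ordering (kind=Optional) is only honored when both
# actions happen in the same transition, which avoids forcing a VM
# restart whenever libvirtd restarts (at the cost of weaker guarantees).
pcs constraint order start libvirtd-clone then vm3 kind=Optional
```

Whether advisory ordering is acceptable here depends on whether the VMs can tolerate starting while libvirtd is still coming up.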
And today I tried to perform a maintenance task on one of the VMs. I typed:
pcs resource unmanage vm3
and all the other VMs were suddenly stopped. Seriously?!!!
Logs show that the cluster performed the internal custom_action "vm3_stop
(unmanaged)" and "vm3_start (unmanaged)", and this then triggered
libvirtd_stop, which in turn stopped every VM due to the colocation
constraints.
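One way to catch this kind of surprise before it happens is to preview the transition the policy engine would compute, using crm_simulate against a saved copy of the CIB (paths and the `-f` edit shown here are illustrative):

```shell
# Read-only preview of what the cluster would do right now
# (-L = use the live cluster state, -s = show allocation scores):
crm_simulate -sL

# To test a change without touching the cluster: save the CIB to a
# file, apply the change to the file only, then simulate from it.
cibadmin --query > /tmp/cib.xml
pcs -f /tmp/cib.xml resource unmanage vm3
crm_simulate --xml-file /tmp/cib.xml -S
```

Had the simulated transition been inspected first, the libvirtd_stop action would have shown up in its output before any VM was touched.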
The question is: is there a sane way to run VMs under Pacemaker's
control? If yes, is it described somewhere?
--
Pavel