[ClusterLabs] Resources stopped due to unmanage

Ken Gaillot kgaillot at redhat.com
Mon Mar 12 16:00:25 EDT 2018

On Mon, 2018-03-12 at 22:36 +0300, Pavel Levshin wrote:
> Hello.
> I've just experienced a fault in my Pacemaker-based cluster.
> Seriously,
> I'm completely disoriented after this. Hopefully someone can give me
> a hint...
> Two-node cluster runs a few VirtualDomains along with their common
> infrastructure (libvirtd, NFS and so on). It is Pacemaker 1.1.16
> currently. Resources have ordering and colocation constraints, which
> must ensure proper start and stop order. Unfortunately, these
> constraints have unwanted side effects. In particular, due to
> mandatory ordering constraints, the cluster tends to restart libvirtd
> when I need to stop a VM. This was on the list previously:
> https://lists.clusterlabs.org/pipermail/users/2016-October/004288.html
> And today I tried to perform a maintenance task on one of the VMs. I
> typed:
> pcs resource unmanage vm3
> and all other VMs were suddenly stopped. Seriously?!!!
> Logs show that the cluster performed internal custom_action
> "vm3_stop (unmanaged)" and "vm3_start (unmanaged)", and then this
> triggered libvirtd_stop, which led to every VM stopping due to
> colocation.
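[Constraints like the ones described are typically created with pcs along these lines; the resource names (libvirtd, vm3) are assumptions reconstructed from the message, not the poster's actual configuration:]

```shell
# Mandatory ordering: libvirtd must start before vm3, so a libvirtd
# restart forces vm3 to restart as well.
pcs constraint order start libvirtd then vm3

# Colocation: vm3 must run where libvirtd runs, so stopping libvirtd
# stops vm3.
pcs constraint colocation add vm3 with libvirtd

# An advisory ordering avoids the forced-restart side effect:
pcs constraint order start libvirtd then vm3 kind=Optional
```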

That's really perplexing. Unmanaging by itself should never lead to any
actions being done.

Feel free to report it at bugs.clusterlabs.org, and attach the output
of a crm_report.
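[A crm_report invocation for an incident like this might look as follows; the time window is an assumption and should be adjusted to bracket the actual event:]

```shell
# Collect logs, CIB, and status from all nodes around the incident.
crm_report --from "2018-03-12 20:00:00" --to "2018-03-12 23:00:00" \
    /tmp/vm3-unmanage-report
```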

> The question is: is there a sane way to run VMs under Pacemaker's
> control? If yes, is it described somewhere?
> --
> Pavel
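[As a sketch of the usual alternatives for taking resources out of cluster control during maintenance; `is-managed` is the meta attribute that `pcs resource unmanage` sets, and `maintenance-mode` freezes all resources at once:]

```shell
# Take only vm3 out of cluster management (equivalent to "unmanage"):
pcs resource meta vm3 is-managed=false

# Or freeze the entire cluster for the duration of the maintenance:
pcs property set maintenance-mode=true
```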
Ken Gaillot <kgaillot at redhat.com>
