[ClusterLabs] Preventing pacemaker from attempting to start a VirtualDomain resource on a pacemaker-remote guest node

Ken Gaillot kgaillot at redhat.com
Tue Jul 12 15:32:16 EDT 2016


On 07/12/2016 12:46 PM, Scott Loveland wrote:
> What is the most efficient way to prevent pacemaker from attempting to
> start a VirtualDomain resource on pacemaker-remote guest nodes?
> 
> I’m running pacemaker 1.1.13 in a KVM host cluster with a large number
> of VirtualDomain (VD) resources (which come and go via automation as
> users add/delete them), a subset of which are also running the
> pacemaker-remote service and acting as guest nodes. The number of KVM
> host nodes in the cluster can vary over time, and the VD resources can
> run on any KVM host node in the cluster. Explicitly defining a set of
> location constraints for each VD specifying only the KVM host nodes
> would be unwieldy, and the constraints would need to change for every VD
> whenever the number of KVM host nodes in the cluster changes. So I would
> prefer to run this as a symmetric cluster in which all VDs can
> implicitly run on all KVM host nodes, but somehow tell the VDs they
> should not try to start on the pacemaker-remote guest nodes (where they
> will just fail). I’m just not sure of the most efficient way to
> accomplish this.
> 
> The approach I’ve hit on so far is to explicitly set a node attribute
> on each pacemaker-remote guest node that labels it as such, and then
> define a location constraint rule for all VDs that tells them to avoid
> all such guest nodes.
> 
> Specifically, I issue a command such as this for each pacemaker-remote
> guest node after its corresponding VD is defined (in this example, for a
> guest node named “GuestNode1”):
> 
> # crm_attribute --node GuestNode1 --name type --update remote
> 
> And then for each VD (in this example, for the VD named “VM2”):
> 
> # pcs constraint location VM2 rule score=-INFINITY type eq remote
> 
> These commands have nothing unique in them other than the guest node or
> VD name, so they are easy to add to our automation that provisions the
> actual virtual machines, and do not require revision when KVM host nodes
> are added to the cluster.
> 
> Is that the generally recommended approach, or is there a more efficient
> way of accomplishing the same thing?
> 
> PS: For an asymmetric cluster, a similar approach would work as well,
> such as:
> 
> # crm_attribute --node KVMHost1 --name type --update normal
> 
> # pcs constraint location VM2 rule score=100 type eq normal
> 
> 
> 
> - Scott

That is exactly the approach I'd recommend -- but there is a shortcut:
Pacemaker provides a special, undocumented node attribute called
"#kind", whose value is "cluster" (normal cluster node), "remote"
(ocf:pacemaker:remote node) or "container" (guest node, unfortunately
old terminology that has nothing to do with Docker-style containers).
You can use that in place of creating your own attribute, and it has
the added benefit of existing as soon as the node does.
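
For example, reusing the resource name from your commands above (an
untested sketch; note the quoting so the shell doesn't treat "#kind" as
the start of a comment), the symmetric-cluster rule could become:

# pcs constraint location VM2 rule score=-INFINITY '#kind' ne cluster

which keeps VM2 off anything that is not a full cluster node, and the
asymmetric variant would be something like:

# pcs constraint location VM2 rule score=100 '#kind' eq cluster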

There's no reason it's undocumented, other than that features get added
more often than documentation gets updated.

FYI, the cluster will automatically ensure that stonith resources and
guest node resources do not run on remote/guest nodes.



