[ClusterLabs] Preventing pacemaker from attempting to start a VirtualDomain resource on a pacemaker-remote guest node
d10swl1 at us.ibm.com
Tue Jul 12 13:46:45 EDT 2016
What is the most efficient way to prevent pacemaker from attempting to
start a VirtualDomain resource on pacemaker-remote guest nodes?
I’m running pacemaker 1.1.13 in a KVM host cluster with a large number of
VirtualDomain (VD) resources (which come and go via automation as users
add/delete them), a subset of which are also running the pacemaker-remote
service and acting as guest nodes. The number of KVM host nodes in the
cluster can vary over time, and the VD resources can run on any KVM host
node in the cluster. Explicitly defining a set of location constraints for
each VD specifying only the KVM host nodes would be unwieldy, and the
constraints would need to change for every VD whenever the number of KVM
host nodes in the cluster changes. So I would prefer to run this as a
symmetric cluster in which all VDs can implicitly run on all KVM host
nodes, but somehow tell the VDs they should not try to start on the
pacemaker-remote guest nodes (where they will just fail). I’m just not sure
of the most efficient way to accomplish this.
The approach I’ve hit on so far is to explicitly define a node attribute
on each pacemaker-remote guest node which labels it as such, and
then define a location constraint rule for all VDs that tells them to
avoid all such guest nodes.
Specifically, I issue a command such as this for each pacemaker-remote
guest node after its corresponding VD is defined (in this example, for a
guest node named “GuestNode1”):
# crm_attribute --node GuestNode1 --name type --update remote
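(As a sanity check, the same tool can read the attribute back in query
mode; the exact output format may vary by version, but it should report
value=remote:)

# crm_attribute --node GuestNode1 --name type --query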
And then for each VD (in this example, for the VD named “VM2”):
# pcs constraint location VM2 rule score=-INFINITY type eq remote
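For reference, that pcs command should end up in the CIB as something
along these lines (the constraint IDs here are illustrative; pcs generates
its own):

<rsc_location id="location-VM2" rsc="VM2">
  <rule id="location-VM2-rule" score="-INFINITY">
    <expression id="location-VM2-rule-expr" attribute="type"
        operation="eq" value="remote"/>
  </rule>
</rsc_location>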
These commands have nothing unique in them other than the guest node or VD
name, so they are easy to add to our automation that provisions the actual
virtual machines, and do not require revision when KVM host nodes are added
to the cluster.
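To illustrate, the relevant fragment of that automation boils down to
something like the following sketch (names are placeholders and error
handling is omitted):

#!/bin/sh
# Sketch of the per-guest provisioning hook:
#   $1 = guest node name (e.g. GuestNode1), $2 = VD name (e.g. VM2)
guest="$1"
vd="$2"
# Label the guest node so the location rule can match it.
crm_attribute --node "$guest" --name type --update remote
# Keep this VD off any node labeled as a pacemaker-remote guest.
pcs constraint location "$vd" rule score=-INFINITY type eq remote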
Is that the generally recommended approach, or is there a more efficient
way of accomplishing the same thing?
PS: For an asymmetric cluster, a similar approach would work as well, such
as:
# crm_attribute --node KVMHost1 --name type --update normal
# pcs constraint location VM2 rule score=100 type eq normal
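(That assumes the cluster has already been switched to asymmetric
“opt-in” mode, e.g.:)

# pcs property set symmetric-cluster=false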