[ClusterLabs] 3 node cluster to 2 with quorum device

Jason Pfingstmann jason at pfingstmann.com
Sun Jan 6 09:00:23 UTC 2019


> Am 05.01.2019 um 22:42 schrieb Andrei Borzenkov <arvidjaar at gmail.com>:
> 
> 06.01.2019 8:16, Jason Pfingstmann writes:
>> I am new to corosync and pacemaker, having only used heartbeat in the
>> past (which is barely even comparable, now that I’m in the middle of
>> this).  I’m working on a system for RDQM (IBM’s MQ software,
>> clustering solution) and it uses corosync with pacemaker.  I set it
>> up and had a 3 node cluster with resources available everywhere (any
>> node could be made active).  However, we need to set this up as 2
>> nodes as being available for the resources and 1 node to only
>> function as a quorum device.
>> 
>> To try to accomplish this, I first banned the services from one of
>> the nodes (pcs resource ban <resource> <node>) and uninstalled those
>> components from the node.
> 
> "pcs ban" is really intended for temporarily moving a resource off a
> node; it even has a timeout parameter to cancel its effect
> automatically. You should create a normal location constraint if you
> want to permanently exclude a node from running a specific resource.
> 

Yeah, I saw that it had a timeout option available; it’s set to infinity right now. But it sounds like constraints are more the way to go.
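If I’m reading the pcs docs right, the permanent version would be something like this (untested on my end; the resource and node names are placeholders for our actual ones):

```shell
# Permanently keep the resource off the quorum-only node
# ("avoids" creates a -INFINITY location constraint).
pcs constraint location <resource> avoids <node3>

# Verify the constraint, then clear the earlier temporary ban:
pcs constraint location --full
pcs resource clear <resource> <node3>
```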

>> Now when I do pcs status, I get Failed
>> Actions: * <resource_monitor_0> on node1 'not installed'
>> 
> 
> This is a resource probe, which is run once when a node starts;
> pacemaker uses it to find out which resources are currently active.
> 
>> Is there a "proper" way to remove a resource from a single node?
>> 
> 
> One possibility is to set the resource-discovery=never or
> resource-discovery=exclusive option on the location constraint. Which
> one is appropriate depends on whether you use an opt-out or opt-in
> cluster (i.e. whether, by default, every resource can run anywhere or
> nowhere).
> 
> It may also be possible to simply put the node in standby mode,
> although I am not sure whether probes are still run in that case.
> 

I’ll look into these options. I think the constraint option is where I needed to look further; I’ll dig into that on Tuesday (I’m not in the office until then).
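In case it helps anyone else finding this thread later, here is roughly what I think that constraint looks like with probing disabled (again untested; the constraint id and names are placeholders, and it assumes the default opt-out cluster where resources can run anywhere):

```shell
# Exclude node3 from running the resource AND skip the startup probe
# there, which should make the "not installed" failed action go away.
pcs constraint location add ban-rdqm-node3 <resource> <node3> -INFINITY \
    resource-discovery=never

# Alternative suggested above: put the node in standby so it still
# votes for quorum but never hosts resources.
pcs node standby <node3>
```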

> Finally, you could simply convert your cluster into a two-node
> cluster :) and avoid all these issues. What is the point of having a
> third node if it will never run any resources?
> 

The reason for 3 nodes is quorum: a third vote prevents split-brain with DRBD.
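For reference, my understanding of the two-node-plus-quorum-device layout from the subject line is roughly the following (untested; the host name is a placeholder, and it assumes corosync-qnetd is available on the third machine):

```shell
# On the third machine (quorum arbiter only, no pacemaker/resources):
yum install corosync-qnetd pcs
pcs qdevice setup model net --enable --start

# On one of the two cluster nodes:
yum install corosync-qdevice
pcs quorum device add model net host=<qnetd-host> algorithm=ffsplit

# Check the resulting vote count:
pcs quorum status
```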

>> I did look up removing a resource from a specific node, but the only
>> reference I could find was how to remove a resource from ALL nodes.
>> Perhaps having the IBM tools create the cluster to begin with left me
>> missing some fundamental knowledge I would have had if I had set it up
>> from scratch, but for IBM to support our configuration, they require the
>> use of their setup tools.  They don’t have any documentation on how
>> to do a 2 node cluster with additional qdevice, so we’re on our own
>> for this part.
>> 
> 
> It sounds like you have already modified the configuration in an
> incompatible way, thus possibly losing support.

By default, the IBM setup doesn’t even include pcs, but manages pacemaker directly.  So far none of the changes I’ve made affect the IBM tools’ ability to work, but the tools are built with certain assumptions about the pacemaker cluster (which are not spelled out anywhere).  So I used them to set it all up to begin with.


Thanks for the info; I think this is enough to get me looking at the right options.

-Jason Pfingstmann