[ClusterLabs] Colocation constraint for grouping all master-mode stateful resources with important stateless resources

Ken Gaillot kgaillot at redhat.com
Fri Mar 23 17:34:18 UTC 2018


On Tue, 2018-03-20 at 16:34 +0000, Sam Gardner wrote:
> Hi All -
> 
> I've implemented a simple two-node cluster with DRBD and a couple of
> network-based Master/Slave resources.
> 
> Using the ethmonitor RA, I set up failover whenever the
> Master/Primary node loses link on the specified ethernet physical
> device by constraining the Master role only on nodes where the ethmon
> variable is "1".
> 
> Something is going wrong with my colocation constraint, however - if
> I set up the DRBDFS resource to monitor link on eth1, unplugging eth1
> on the Primary node causes a failover as expected - all Master
> resources are demoted to "slave" and promoted on the opposite node,
> and the "normal" DRBDFS moves to the other node as expected.
> 
> However, if I put the same ethmonitor constraint on the network-based 
> Master/Slave resource, only that specific resource fails over -
> DRBDFS stays in the same location (though it stops) as do the other
> Master/Slave resources.
> 
> This *smells* like a constraints issue to me - does anyone know what
> I might be doing wrong?
>
> PCS before:
> Cluster name: node1.hostname.com_node2.hostname.com
> Stack: corosync
> Current DC: node2.hostname.com_0 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
> Last updated: Tue Mar 20 16:25:47 2018
> Last change: Tue Mar 20 16:00:33 2018 by hacluster via crmd on node2.hostname.com_0
> 
> 2 nodes configured
> 11 resources configured
> 
> Online: [ node1.hostname.com_0 node2.hostname.com_0 ]
> 
> Full list of resources:
> 
>  Master/Slave Set: drbd.master [drbd.slave]
>      Masters: [ node1.hostname.com_0 ]
>      Slaves: [ node2.hostname.com_0 ]
>  drbdfs (ocf::heartbeat:Filesystem):    Started node1.hostname.com_0
>  Master/Slave Set: inside-interface-sameip.master [inside-interface-sameip.slave]
>      Masters: [ node1.hostname.com_0 ]
>      Slaves: [ node2.hostname.com_0 ]
>  Master/Slave Set: outside-interface-sameip.master [outside-interface-sameip.slave]
>      Masters: [ node1.hostname.com_0 ]
>      Slaves: [ node2.hostname.com_0 ]
>  Clone Set: monitor-eth1-clone [monitor-eth1]
>      Started: [ node1.hostname.com_0 node2.hostname.com_0 ]
>  Clone Set: monitor-eth2-clone [monitor-eth2]
>      Started: [ node1.hostname.com_0 node2.hostname.com_0 ]

What agent are the two IP resources using? I'm not familiar with any IP
resource agents that are master/slave clones.

> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: inactive/disabled
> 
> PCS after:
> Cluster name: node1.hostname.com_node2.hostname.com
> Stack: corosync
> Current DC: node2.hostname.com_0 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
> Last updated: Tue Mar 20 16:29:40 2018
> Last change: Tue Mar 20 16:00:33 2018 by hacluster via crmd on node2.hostname.com_0
> 
> 2 nodes configured
> 11 resources configured
> 
> Online: [ node1.hostname.com_0 node2.hostname.com_0 ]
> 
> Full list of resources:
> 
>  Master/Slave Set: drbd.master [drbd.slave]
>      Masters: [ node1.hostname.com_0 ]
>      Slaves: [ node2.hostname.com_0 ]
>  drbdfs (ocf::heartbeat:Filesystem):    Stopped
>  Master/Slave Set: inside-interface-sameip.master [inside-interface-sameip.slave]
>      Masters: [ node2.hostname.com_0 ]
>      Stopped: [ node1.hostname.com_0 ]
>  Master/Slave Set: outside-interface-sameip.master [outside-interface-sameip.slave]
>      Masters: [ node1.hostname.com_0 ]
>      Slaves: [ node2.hostname.com_0 ]
>  Clone Set: monitor-eth1-clone [monitor-eth1]
>      Started: [ node1.hostname.com_0 node2.hostname.com_0 ]
>  Clone Set: monitor-eth2-clone [monitor-eth2]
>      Started: [ node1.hostname.com_0 node2.hostname.com_0 ]
> 
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: inactive/disabled
> 
> This is the "constraints" section of my CIB (full CIB is attached):
>       <rsc_colocation id="pcs_rsc_colocation_set_drbdfs_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master" score="INFINITY">
>         <resource_set id="pcs_rsc_set_drbdfs" sequential="false">
>           <resource_ref id="drbdfs"/>
>         </resource_set>
>         <resource_set id="pcs_rsc_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master" role="Master" sequential="false">
>           <resource_ref id="drbd.master"/>
>           <resource_ref id="inside-interface-sameip.master"/>
>           <resource_ref id="outside-interface-sameip.master"/>
>         </resource_set>
>       </rsc_colocation>

Resource sets can be confusing in the best of cases.

The above constraint says: place drbdfs only on a node where the master
instances of drbd.master and the two IPs are running. Because the second
set has sequential="false", it creates no dependencies among those three
masters themselves.

That explains why the master instances were able to end up on different
nodes, and why drbdfs was stopped when they did.
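
If the intent is that drbdfs and all three masters always stay together,
one sketch (with illustrative IDs, and assuming you really do want all
the masters forced onto a single node) is to keep drbdfs in the first
set and let the second set use the default sequential="true", so the
masters are also colocated with each other:

```xml
<rsc_colocation id="colocation-drbdfs-with-masters" score="INFINITY">
  <!-- drbdfs is placed only where everything in the next set is active -->
  <resource_set id="set-drbdfs">
    <resource_ref id="drbdfs"/>
  </resource_set>
  <!-- sequential defaults to true, so these masters are also
       colocated with each other, not just with drbdfs -->
  <resource_set id="set-all-masters" role="Master">
    <resource_ref id="drbd.master"/>
    <resource_ref id="inside-interface-sameip.master"/>
    <resource_ref id="outside-interface-sameip.master"/>
  </resource_set>
</rsc_colocation>
```

The trade-off: with everything chained together at INFINITY, if any one
master has nowhere it can be promoted, the dependent resources are
dragged down with it.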

>       <rsc_order id="pcs_rsc_order_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master_set_drbdfs" kind="Serialize" symmetrical="false">
>         <resource_set action="promote" id="pcs_rsc_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master-1" role="Master">
>           <resource_ref id="drbd.master"/>
>           <resource_ref id="inside-interface-sameip.master"/>
>           <resource_ref id="outside-interface-sameip.master"/>
>         </resource_set>
>         <resource_set id="pcs_rsc_set_drbdfs-1">
>           <resource_ref id="drbdfs"/>
>         </resource_set>
>       </rsc_order>

The above constraint says: when promoting any of drbd.master and the two
interfaces, and/or starting drbdfs, perform each of those actions one at
a time, in any relative order (that is what kind="Serialize" means).
Because symmetrical="false", the reverse actions (demoting and stopping)
are not ordered at all.
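
If the goal was actually "promote everything before mounting the
filesystem", a Mandatory ordering (the default kind) is probably closer
to the intent. A sketch, again with illustrative IDs:

```xml
<rsc_order id="order-promote-then-drbdfs" kind="Mandatory">
  <!-- promote all three masters (in no particular order
       among themselves) ... -->
  <resource_set action="promote" id="order-set-masters" role="Master" sequential="false">
    <resource_ref id="drbd.master"/>
    <resource_ref id="inside-interface-sameip.master"/>
    <resource_ref id="outside-interface-sameip.master"/>
  </resource_set>
  <!-- ... before starting the filesystem -->
  <resource_set action="start" id="order-set-drbdfs">
    <resource_ref id="drbdfs"/>
  </resource_set>
</rsc_order>
```

Since symmetrical defaults to true, the reverse ordering (stop drbdfs
before demoting the masters) is implied.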

>       <rsc_location id="location-inside-interface-sameip.master" rsc="inside-interface-sameip.master">
>         <rule id="location-inside-interface-sameip.master-rule" score="-INFINITY">
>           <expression attribute="ethmon_result-eth1" id="location-inside-interface-sameip.master-rule-expr" operation="ne" value="1"/>
>         </rule>
>       </rsc_location>
>       <rsc_location id="location-outside-interface-sameip.master" rsc="outside-interface-sameip.master">
>         <rule id="location-outside-interface-sameip.master-rule" score="-INFINITY">
>           <expression attribute="ethmon_result-eth2" id="location-outside-interface-sameip.master-rule-expr" operation="ne" value="1"/>
>         </rule>
>       </rsc_location>

The above constraints keep inside-interface on a node where eth1 is
good, and outside-interface on a node where eth2 is good.

I'm guessing you want to keep these two constraints and start over on
the others. What are the intended relationships between the various
resources?

>     </constraints>
> -- 
> Sam Gardner  
> Trustwave | SMART SECURITY ON DEMAND
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.
> pdf
> Bugs: http://bugs.clusterlabs.org
-- 
Ken Gaillot <kgaillot at redhat.com>

