[ClusterLabs] Resource seems to not obey constraint

Ken Gaillot kgaillot at redhat.com
Fri May 20 22:01:13 UTC 2016


On 05/20/2016 10:29 AM, Leon Botes wrote:
> I push the following config.
> The iscsi-target fails as it tries to start on iscsiA-node1
> This is because I have no target installed on iscsiA-node1, which is by
> design. All services listed here should only start on iscsiA-san1 or
> iscsiA-san2.
> I am using iscsiA-node1 basically for quorum and some other minor
> functions.
> 
> Can someone please show me where I am going wrong?
> All services should start on the same node, in the order drbd-master,
> vip-blue, vip-green, iscsi-target, iscsi-lun.
> 
> pcs -f ha_config property set symmetric-cluster="true"
> pcs -f ha_config property set no-quorum-policy="stop"
> pcs -f ha_config property set stonith-enabled="false"
> pcs -f ha_config resource defaults resource-stickiness="200"
> 
> pcs -f ha_config resource create drbd ocf:linbit:drbd drbd_resource=r0
> op monitor interval=60s
> pcs -f ha_config resource master drbd master-max=1 master-node-max=1
> clone-max=2 clone-node-max=1 notify=true
> pcs -f ha_config resource create vip-blue ocf:heartbeat:IPaddr2
> ip=192.168.101.100 cidr_netmask=32 nic=blue op monitor interval=20s
> pcs -f ha_config resource create vip-green ocf:heartbeat:IPaddr2
> ip=192.168.102.100 cidr_netmask=32 nic=green op monitor interval=20s
> pcs -f ha_config resource create iscsi-target ocf:heartbeat:iSCSITarget
> params iqn="iqn.2016-05.trusc.net" implementation="lio-t" op monitor
> interval="30s"
> pcs -f ha_config resource create iscsi-lun
> ocf:heartbeat:iSCSILogicalUnit params target_iqn="iqn.2016-05.trusc.net"
> lun="1" path="/dev/drbd0"
> 
> pcs -f ha_config constraint colocation add vip-blue drbd-master INFINITY
> with-rsc-role=Master
> pcs -f ha_config constraint colocation add vip-green drbd-master
> INFINITY with-rsc-role=Master
> 
> pcs -f ha_config constraint location drbd-master prefers stor-san1=500
> pcs -f ha_config constraint location drbd-master avoids stor-node1=INFINITY

The above constraint is an example of how to ban a resource from a node.
However, stor-node1 is not a valid node name in your setup (maybe left
over from an earlier design?), so this particular constraint has no
effect. If you want to ban certain resources from iscsiA-node1, add a
constraint like the above for each of those resources, using the correct
node name.
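
For example (a sketch based on the resource and node names in your
config; adjust as needed):

pcs -f ha_config constraint location drbd-master avoids iscsiA-node1=INFINITY
pcs -f ha_config constraint location iscsi-target avoids iscsiA-node1=INFINITY
pcs -f ha_config constraint location iscsi-lun avoids iscsiA-node1=INFINITY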

> pcs -f ha_config constraint order promote drbd-master then start vip-blue
> pcs -f ha_config constraint order start vip-blue then start vip-green
> pcs -f ha_config constraint order start vip-green then start iscsi-target
> pcs -f ha_config constraint order start iscsi-target then start iscsi-lun
> 
> Results:
> 
> [root at san1 ~]# pcs status
> Cluster name: storage_cluster
> Last updated: Fri May 20 17:21:10 2016          Last change: Fri May 20
> 17:19:43 2016 by root via cibadmin on iscsiA-san1
> Stack: corosync
> Current DC: iscsiA-san1 (version 1.1.13-10.el7_2.2-44eb2dd) - partition
> with quorum
> 3 nodes and 6 resources configured
> 
> Online: [ iscsiA-node1 iscsiA-san1 iscsiA-san2 ]
> 
> Full list of resources:
> 
>  Master/Slave Set: drbd-master [drbd]
>      Masters: [ iscsiA-san1 ]
>      Slaves: [ iscsiA-san2 ]
>  vip-blue       (ocf::heartbeat:IPaddr2):       Started iscsiA-san1
>  vip-green      (ocf::heartbeat:IPaddr2):       Started iscsiA-san1
>  iscsi-target   (ocf::heartbeat:iSCSITarget):   FAILED iscsiA-node1
> (unmanaged)
>  iscsi-lun      (ocf::heartbeat:iSCSILogicalUnit):      Stopped
> 
> Failed Actions:
> * drbd_monitor_0 on iscsiA-node1 'not installed' (5): call=6, status=Not
> installed, exitreason='none',
>     last-rc-change='Fri May 20 17:19:44 2016', queued=0ms, exec=0ms
> * iscsi-target_stop_0 on iscsiA-node1 'not installed' (5): call=24,
> status=complete, exitreason='Setup problem: couldn't find command:
> targetcli',
>     last-rc-change='Fri May 20 17:19:45 2016', queued=0ms, exec=18ms
> * iscsi-lun_monitor_0 on iscsiA-node1 'not installed' (5): call=22,
> status=complete, exitreason='Undefined iSCSI target implementation',
>     last-rc-change='Fri May 20 17:19:44 2016', queued=0ms, exec=27ms

The above failures will still occur even if you add the proper
constraints, because they come from probes: before starting a resource,
Pacemaker probes it on every node to make sure it isn't already running
somewhere. If you know the resource cannot possibly be running on a
particular node, you can prevent the probe there by adding
resource-discovery=never to the constraint that bans it from that node.
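
For example, if your pcs version supports the resource-discovery option
on "constraint location add" (a sketch; the constraint ids here are just
illustrative names):

pcs -f ha_config constraint location add ban-target-node1 iscsi-target iscsiA-node1 -INFINITY resource-discovery=never
pcs -f ha_config constraint location add ban-lun-node1 iscsi-lun iscsiA-node1 -INFINITY resource-discovery=never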

> 
> PCSD Status:
>   iscsiA-san1: Online
>   iscsiA-san2: Online
>   iscsiA-node1: Online
> 
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/disabled
> 
