[ClusterLabs] Help with service banning on a node

Leon Botes leon at trusc.net
Mon May 16 02:36:10 EDT 2016


Hi List.

I have the following configuration:

pcs -f ha_config property set symmetric-cluster="true"
pcs -f ha_config property set no-quorum-policy="stop"
pcs -f ha_config property set stonith-enabled="false"
pcs -f ha_config resource defaults resource-stickiness="200"

pcs -f ha_config resource create drbd ocf:linbit:drbd drbd_resource=r0 op monitor interval=60s
pcs -f ha_config resource master drbd master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
pcs -f ha_config resource create vip-blue ocf:heartbeat:IPaddr2 ip=192.168.101.100 cidr_netmask=32 nic=blue op monitor interval=20s
pcs -f ha_config resource create vip-green ocf:heartbeat:IPaddr2 ip=192.168.102.100 cidr_netmask=32 nic=green op monitor interval=20s

pcs -f ha_config constraint colocation add vip-blue drbd-master INFINITY with-rsc-role=Master
pcs -f ha_config constraint colocation add vip-green drbd-master INFINITY with-rsc-role=Master

pcs -f ha_config constraint location drbd-master prefers stor-san1=50
pcs -f ha_config constraint location drbd-master avoids stor-node1=INFINITY
pcs -f ha_config constraint location vip-blue prefers stor-san1=50
pcs -f ha_config constraint location vip-blue avoids stor-node1=INFINITY
pcs -f ha_config constraint location vip-green prefers stor-san1=50
pcs -f ha_config constraint location vip-green avoids stor-node1=INFINITY

pcs -f ha_config constraint order promote drbd-master then start vip-blue
pcs -f ha_config constraint order start vip-blue then start vip-green

Which results in:

[root@san1 ~]# pcs status
Cluster name: ha_cluster
Last updated: Mon May 16 08:21:28 2016          Last change: Mon May 16 08:21:25 2016 by root via crm_resource on iscsiA-san1
Stack: corosync
Current DC: iscsiA-node1 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 4 resources configured

Online: [ iscsiA-node1 iscsiA-san1 iscsiA-san2 ]

Full list of resources:

  Master/Slave Set: drbd-master [drbd]
      drbd       (ocf::linbit:drbd):     FAILED iscsiA-node1 (unmanaged)
      Masters: [ iscsiA-san1 ]
      Stopped: [ iscsiA-san2 ]
  vip-blue       (ocf::heartbeat:IPaddr2):       Started iscsiA-san1
  vip-green      (ocf::heartbeat:IPaddr2):       Started iscsiA-san1

Failed Actions:
* drbd_stop_0 on iscsiA-node1 'not installed' (5): call=18, status=complete, exitreason='none',
     last-rc-change='Mon May 16 08:20:16 2016', queued=0ms, exec=45ms


PCSD Status:
   iscsiA-san1: Online
   iscsiA-san2: Online
   iscsiA-node1: Online

Daemon Status:
   corosync: active/disabled
   pacemaker: active/disabled
   pcsd: active/enabled


Is there any way in the configuration to have the drbd resources ignored completely on iscsiA-node1, so as to avoid this:
  drbd (ocf::linbit:drbd): FAILED iscsiA-node1 (unmanaged)
and
Failed Actions:
* drbd_stop_0 on iscsiA-node1 'not installed' (5): call=18, status=complete, exitreason='none',
last-rc-change='Mon May 16 08:20:16 2016', queued=0ms, exec=45ms
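
One idea I have not verified: location constraints apparently accept a resource-discovery option (the Pacemaker 1.1.13 here should be new enough), which as I understand it stops the cluster from even probing the resource on that node. A sketch, with an illustrative constraint id:

pcs constraint location add drbd-no-node1 drbd-master iscsiA-node1 -INFINITY resource-discovery=never

Would that be the right approach?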

I also tried the ban statements, but that seems to have the same result.
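
For illustration, the ban was roughly the following (exact form may have differed):

pcs resource ban drbd-master iscsiA-node1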

Also, is there a better way to write the configuration so that drbd starts first, then the VIPs, with everything colocated together? And is there a way to ensure they run only on san1 or san2? I tried grouping, but that seems to fail with master/slave resources (see the sketch below).
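
What I had in mind was roughly the following (untested; the group id is illustrative). As I understand it, a master/slave resource itself cannot be a member of a group, so only the VIPs are grouped and the group is then tied to the master:

pcs -f ha_config resource group add vip-group vip-blue vip-green
pcs -f ha_config constraint colocation add vip-group drbd-master INFINITY with-rsc-role=Master
pcs -f ha_config constraint order promote drbd-master then start vip-group

Is that the recommended pattern, or is there something cleaner?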

-- 
Regards
Leon



