[ClusterLabs] Help: Cluster resource relocating to rebooted node automatically

Ken Gaillot kgaillot at redhat.com
Wed Feb 10 12:52:49 EST 2021


On Wed, 2021-02-10 at 17:21 +0300, Ben .T.George wrote:
> Hi,
> 
> I have created a PCS-based 2-node cluster on CentOS 7, and almost
> everything is working fine.
> 
> My client machines are on VMware, and when I reboot the active node,
> the service group relocates to the passive node and the resources
> start fine (one IP and one application).
> 
> But whenever the rebooted node joins back into the cluster, the
> resources are moved back to that node.
> 
> Please find the config below:
> --------------------------------------------
> Cluster Name: EMS
> Corosync Nodes:
>  zkwemsapp01.example.com zkwemsapp02.example.com
> Pacemaker Nodes:
>  zkwemsapp01.example.com zkwemsapp02.example.com
> 
> Resources:
>  Group: ems_rg
>   Resource: ems_vip (class=ocf provider=heartbeat type=IPaddr2)
>    Attributes: cidr_netmask=24 ip=10.96.11.39
>    Meta Attrs: resource-stickiness=1
>    Operations: monitor interval=30s (ems_vip-monitor-interval-30s)
>                start interval=0s timeout=20s (ems_vip-start-interval-
> 0s)
>                stop interval=0s timeout=20s (ems_vip-stop-interval-
> 0s)
>   Resource: ems_app (class=systemd type=ems-app)
>    Meta Attrs: resource-stickiness=1
>    Operations: monitor interval=60 timeout=100 (ems_app-monitor-
> interval-60)
>                start interval=0s timeout=100 (ems_app-start-interval-
> 0s)
>                stop interval=0s timeout=100 (ems_app-stop-interval-
> 0s)
> 
> Stonith Devices:
>  Resource: ems_vmware_fence (class=stonith type=fence_vmware_soap)
>   Attributes: ip=10.151.37.110 password=!CM4!!6j7yiApFT
> pcmk_host_map=zkwemsapp01.example.com:ZKWEMSAPP01;zkwemsapp02.example
> .com:ZKWEMSAPP02 ssl_insecure=1 username=mtc_tabs\redhat.fadmin
>   Operations: monitor interval=60s (ems_vmware_fence-monitor-
> interval-60s)
> Fencing Levels:
>   Target: zkwemsapp01.example.com
>     Level 1 - ems_vmware_fence
>   Target: zkwemsapp02.example.com
>     Level 1 - ems_vmware_fence
> 
> Location Constraints:
>   Resource: ems_rg
>     Enabled on: zkwemsapp01.example.com (score:INFINITY) (role:
> Started) (id:cli-prefer-ems_rg)

The above constraint says to prefer that node whenever it is available.
The id starting with "cli-" means that it was added by a command-line
tool (most likely "pcs resource move"). When you "move" a resource,
you're actually telling the cluster to prefer a specific node, and it
remembers that preference until you tell it otherwise. You can remove
the preference with "pcs resource clear" (or equivalently crm_resource
--clear).
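
For example, using the group name from your config (run on any cluster
node):

# List all constraints with their ids; look for the "cli-prefer-*" entry
pcs constraint --full

# Drop the preference recorded by the earlier "pcs resource move"
pcs resource clear ems_rg

# The same thing with the lower-level tool
crm_resource --clear --resource ems_rg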

I see your resources have resource-stickiness=1. That is how much
preference an active resource has for the node that it is currently on.
You can also see the above constraint has a score of INFINITY. If the
scores were set such that the stickiness was higher than the
constraint, then the stickiness would win and the resource would stay
put. With an INFINITY constraint, though, no finite stickiness can
outweigh it, so clearing the constraint is the fix here.
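
To see exactly which scores the scheduler is comparing, you can dump
them from the live cluster (a sketch; the finite score of 50 below is
just an illustrative value):

# Show the allocation scores the scheduler computes for each
# resource/node pair, including stickiness and constraint scores
crm_simulate -sL

# If you do want a permanent preference that stickiness can override,
# use a finite score instead of INFINITY, e.g.:
pcs constraint location ems_rg prefers zkwemsapp01.example.com=50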

> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:
> 
> Alerts:
>  No alerts defined
> 
> Resources Defaults:
>  resource-stickiness=1000
> Operations Defaults:
>  No defaults set
> 
> Cluster Properties:
>  cluster-infrastructure: corosync
>  cluster-name: EMS
>  dc-version: 2.0.2-3.el8-744a30d655
>  have-watchdog: false
>  last-lrm-refresh: 1612951127
>  symmetric-cluster: true
> 
> Quorum:
>   Options:
> 
> --------------------------
> 
> Regards,
> Ben
> 
> 
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot <kgaillot at redhat.com>


