[ClusterLabs] Antw: [EXT] Re: Help: Cluster resource relocating to rebooted node automatically

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Thu Feb 11 03:34:38 EST 2021


>>> "Ben .T.George" <bentech4you at gmail.com> schrieb am 10.02.2021 um 16:14 in
Nachricht
<CA+C_GOUmkREd9HrzKOHmV4r_Q6tdsyrjk8N9SS1LWVALdTh76A at mail.gmail.com>:
> HI
> 
> thanks for the help. I have done "pcs resource clear" and tried the same
> method again; now the resource is not moving back.
> 
> One more thing I noticed is that my service is managed by systemd; I have
> created a custom systemd .service file for it.
> 
> If I freeze the resource group and start and stop the service using
> systemctl, it happens immediately.
> 
> When I reboot the active node, the cluster tries to stop the service, and
> it takes around 1 minute to do so. At the same time, if I check the VM
> console, the shutdown of the VM is stuck for some time on stopping the
> high availability services.

To give any advice on that we need details, typically logs.
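
For example, the exact unit file, the output of "pcs status", and the
journal/syslog from around the reboot would show where the time is spent.
A rough sketch of what to collect (assuming the unit is named
ems-app.service, matching the "type=ems-app" systemd resource below, and
the usual CentOS log locations):

    pcs status --full
    systemctl cat ems-app.service
    journalctl -u ems-app.service --since "2021-02-10 15:00"
    grep -iE 'ems_app|pacemaker' /var/log/messages
    crm_report -f "2021-02-10 15:00" /tmp/ems-report

crm_report bundles the cluster logs and configuration from all nodes into
a single archive, which is usually the easiest thing to share on the list.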

> 
> Is this the expected behaviour? Sorry for asking; I am very new to
> clustering.
> 
> Regards,
> Ben
> 
> On Wed, Feb 10, 2021 at 8:53 PM Ken Gaillot <kgaillot at redhat.com> wrote:
> 
>> On Wed, 2021-02-10 at 17:21 +0300, Ben .T.George wrote:
>> > HI
>> >
>> > I have created a PCS-based 2-node cluster on CentOS 7, and almost
>> > everything is working fine.
>> >
>> > My client machine is on VMware, and when I reboot the active node,
>> > the service group relocates to the passive node and the resources
>> > start fine (one IP and one application).
>> >
>> > But whenever the other node reboots and joins back to the cluster,
>> > the resources are moved back to that node.
>> >
>> > please find below config :
>> > --------------------------------------------
>> > Cluster Name: EMS
>> > Corosync Nodes:
>> >  zkwemsapp01.example.com zkwemsapp02.example.com
>> > Pacemaker Nodes:
>> >  zkwemsapp01.example.com zkwemsapp02.example.com
>> >
>> > Resources:
>> >  Group: ems_rg
>> >   Resource: ems_vip (class=ocf provider=heartbeat type=IPaddr2)
>> >    Attributes: cidr_netmask=24 ip=10.96.11.39
>> >    Meta Attrs: resource-stickiness=1
>> >    Operations: monitor interval=30s (ems_vip-monitor-interval-30s)
>> >                start interval=0s timeout=20s (ems_vip-start-interval-
>> > 0s)
>> >                stop interval=0s timeout=20s (ems_vip-stop-interval-
>> > 0s)
>> >   Resource: ems_app (class=systemd type=ems-app)
>> >    Meta Attrs: resource-stickiness=1
>> >    Operations: monitor interval=60 timeout=100 (ems_app-monitor-
>> > interval-60)
>> >                start interval=0s timeout=100 (ems_app-start-interval-
>> > 0s)
>> >                stop interval=0s timeout=100 (ems_app-stop-interval-
>> > 0s)
>> >
>> > Stonith Devices:
>> >  Resource: ems_vmware_fence (class=stonith type=fence_vmware_soap)
>> >   Attributes: ip=10.151.37.110 password=!CM4!!6j7yiApFT
>> > pcmk_host_map=zkwemsapp01.example.com:ZKWEMSAPP01;zkwemsapp02.example
>> > .com:ZKWEMSAPP02 ssl_insecure=1 username=mtc_tabs\redhat.fadmin
>> >   Operations: monitor interval=60s (ems_vmware_fence-monitor-
>> > interval-60s)
>> > Fencing Levels:
>> >   Target: zkwemsapp01.example.com
>> >     Level 1 - ems_vmware_fence
>> >   Target: zkwemsapp02.example.com
>> >     Level 1 - ems_vmware_fence
>> >
>> > Location Constraints:
>> >   Resource: ems_rg
>> >     Enabled on: zkwemsapp01.example.com (score:INFINITY) (role:
>> > Started) (id:cli-prefer-ems_rg)
>>
>> The above constraint says to prefer that node whenever it is available.
>> The id starting with "cli-" means that it was added by a command-line
>> tool (most likely "pcs resource move"). When you "move" a resource,
>> you're actually telling the cluster to prefer a specific node, and it
>> remembers that preference until you tell it otherwise. You can remove
>> the preference with "pcs resource clear" (or equivalently crm_resource
>> --clear).
>>
>> I see your resources have resource-stickiness=1. That is how much
>> preference an active resource has for the node that it is currently on.
>> You can also see the above constraint has a score of INFINITY. If the
>> scores were set such that the stickiness was higher than the
>> constraint, then the stickiness would win and the resource would stay
>> put.
>>
>> > Ordering Constraints:
>> > Colocation Constraints:
>> > Ticket Constraints:
>> >
>> > Alerts:
>> >  No alerts defined
>> >
>> > Resources Defaults:
>> >  resource-stickiness=1000
>> > Operations Defaults:
>> >  No defaults set
>> >
>> > Cluster Properties:
>> >  cluster-infrastructure: corosync
>> >  cluster-name: EMS
>> >  dc-version: 2.0.2-3.el8-744a30d655
>> >  have-watchdog: false
>> >  last-lrm-refresh: 1612951127
>> >  symmetric-cluster: true
>> >
>> > Quorum:
>> >   Options:
>> >
>> > --------------------------
>> >
>> > Regards,
>> > Ben
>> >
>> >
>> > _______________________________________________
>> > Manage your subscription:
>> > https://lists.clusterlabs.org/mailman/listinfo/users 
>> >
>> > ClusterLabs home: https://www.clusterlabs.org/ 
>> --
>> Ken Gaillot <kgaillot at redhat.com>
>>
>>
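
For reference, here is a minimal sketch of the sequence Ken describes
above, using the resource and node names from this configuration (exact
pcs syntax and output can vary a bit between versions):

    pcs resource move ems_rg zkwemsapp01.example.com
        # adds a "cli-prefer-ems_rg" location constraint with score INFINITY
    pcs constraint --full
        # lists all constraints with their ids; leftover ones start with "cli-"
    pcs resource clear ems_rg
        # removes the cli-* constraint (same as: crm_resource --resource ems_rg --clear)

Note that the per-resource "Meta Attrs: resource-stickiness=1" overrides
the resource default of 1000 shown above, and no finite stickiness can
outweigh an INFINITY constraint anyway, so clearing the leftover
constraint is the important step.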





More information about the Users mailing list