Hi,

Thanks for the help. I ran "pcs resource clear" and tried the same test again, and now the resource is no longer moving back.

One more thing I noticed: my service is managed by systemd, and I created a custom systemd .service file for it.

If I freeze the resource group and start or stop the service with systemctl, it happens immediately.

But when I reboot the active node, the cluster takes around one minute to stop the service, and at the same time the VM console shows the shutdown stuck for a while on "stopping high availability services".

Is this the expected behaviour? Sorry for asking, I am very new to this cluster. (I have put the commands I was planning to use to check the timeouts at the bottom of this mail.)

Regards,
Ben

On Wed, Feb 10, 2021 at 8:53 PM Ken Gaillot <kgaillot@redhat.com> wrote:
On Wed, 2021-02-10 at 17:21 +0300, Ben .T.George wrote:
> Hi,
> 
> I have created a PCS-based 2-node cluster on CentOS 7 and almost
> everything is working fine.
> 
> My client machine is on VMware, and when I reboot the active node, the
> service group relocates to the passive node and the resources start
> fine (one IP and one application).
> 
> But whenever the other node reboots and joins back to the cluster,
> the resources are moved back to that node.
> 
> Please find the config below:
> --------------------------------------------
> Cluster Name: EMS
> Corosync Nodes:
>  zkwemsapp01.example.com zkwemsapp02.example.com
> Pacemaker Nodes:
>  zkwemsapp01.example.com zkwemsapp02.example.com
> 
> Resources:
>  Group: ems_rg
>   Resource: ems_vip (class=ocf provider=heartbeat type=IPaddr2)
>    Attributes: cidr_netmask=24 ip=10.96.11.39
>    Meta Attrs: resource-stickiness=1
>    Operations: monitor interval=30s (ems_vip-monitor-interval-30s)
>                start interval=0s timeout=20s (ems_vip-start-interval-0s)
>                stop interval=0s timeout=20s (ems_vip-stop-interval-0s)
>   Resource: ems_app (class=systemd type=ems-app)
>    Meta Attrs: resource-stickiness=1
>    Operations: monitor interval=60 timeout=100 (ems_app-monitor-interval-60)
>                start interval=0s timeout=100 (ems_app-start-interval-0s)
>                stop interval=0s timeout=100 (ems_app-stop-interval-0s)
> 
> Stonith Devices:
>  Resource: ems_vmware_fence (class=stonith type=fence_vmware_soap)
>   Attributes: ip=10.151.37.110 password=!CM4!!6j7yiApFT
>    pcmk_host_map=zkwemsapp01.example.com:ZKWEMSAPP01;zkwemsapp02.example.com:ZKWEMSAPP02
>    ssl_insecure=1 username=mtc_tabs\redhat.fadmin
>   Operations: monitor interval=60s (ems_vmware_fence-monitor-interval-60s)
> Fencing Levels:
>  Target: zkwemsapp01.example.com
>   Level 1 - ems_vmware_fence
>  Target: zkwemsapp02.example.com
>   Level 1 - ems_vmware_fence
> 
> Location Constraints:
>  Resource: ems_rg
>   Enabled on: zkwemsapp01.example.com (score:INFINITY) (role: Started)
>               (id:cli-prefer-ems_rg)

The above constraint says to prefer that node whenever it is available.
The id starting with "cli-" means that it was added by a command-line
tool (most likely "pcs resource move"). When you "move" a resource,
you're actually telling the cluster to prefer a specific node, and it
remembers that preference until you tell it otherwise. You can remove
the preference with "pcs resource clear" (or equivalently crm_resource
--clear).
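
For example, something like this (using the group name from your config
above) lets you see and then drop that preference:

  # list all location constraints, including the cli-prefer one
  pcs constraint location show --full

  # remove the node preference that "move" left behind
  pcs resource clear ems_rg
  # equivalent lower-level command
  crm_resource --clear --resource ems_rg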

I see your resources have resource-stickiness=1. That is how much
preference an active resource has for the node that it is currently on.
You can also see the above constraint has a score of INFINITY. If the
scores were set such that the stickiness was higher than the
constraint, then the stickiness would win and the resource would stay
put.
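
If you want to see the scores the scheduler is actually working with,
crm_simulate can print them from the live cluster without changing
anything, e.g.:

  # show current placement scores (read-only)
  crm_simulate --live-check --show-scores | grep ems

  # illustration only: stickiness like this would outweigh a *finite*
  # location score, but no finite value can outweigh the INFINITY
  # constraint above, so clearing that constraint is still the fix here
  pcs resource meta ems_vip resource-stickiness=2000
  pcs resource meta ems_app resource-stickiness=2000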
> 
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:
> 
> Alerts:
>  No alerts defined
> 
> Resources Defaults:
>  resource-stickiness=1000
> Operations Defaults:
>  No defaults set
> 
> Cluster Properties:
>  cluster-infrastructure: corosync
>  cluster-name: EMS
>  dc-version: 2.0.2-3.el8-744a30d655
>  have-watchdog: false
>  last-lrm-refresh: 1612951127
>  symmetric-cluster: true
> 
> Quorum:
>  Options:
> 
> --------------------------
> 
> Regards,
> Ben
> 
> 
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot <kgaillot@redhat.com>
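
These are the commands I mentioned above that I was planning to run to see
where the one-minute stop goes (I am guessing the unit name is
ems-app.service based on the resource definition, so please correct me if
that is the wrong place to look):

  # the operations (including the stop timeout) configured for the app resource
  pcs resource show ems_app      # "pcs resource config ems_app" on newer pcs

  # what pacemaker/corosync logged during the slow shutdown on the previous boot
  journalctl -b -1 -u pacemaker -u corosync

  # the stop timeout of the custom unit itself
  systemctl show -p TimeoutStopUSec ems-app.service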