<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Feb 11, 2021 at 12:35 AM Ulrich Windl <<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">>>> "Ben .T.George" <<a href="mailto:bentech4you@gmail.com" target="_blank">bentech4you@gmail.com</a>> wrote on 10.02.2021 at 16:14 in<br>
message<br>
<<a href="mailto:CA%2BC_GOUmkREd9HrzKOHmV4r_Q6tdsyrjk8N9SS1LWVALdTh76A@mail.gmail.com" target="_blank">CA+C_GOUmkREd9HrzKOHmV4r_Q6tdsyrjk8N9SS1LWVALdTh76A@mail.gmail.com</a>>:<br>
> HI<br>
> <br>
> thanks for the help. I have done "pcs resource clear" and tried the same<br>
> method again; now the resource is not going back.<br></blockquote><div><br></div><div>To be perfectly clear, did you run `pcs resource clear ems_rg`? That's the full command line to remove the cli-prefer-ems_rg constraint.</div><div><br> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
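</blockquote><div><br></div><div>For example — assuming the group name ems_rg from the config below — you can first list the constraints to confirm whether the cli-prefer one is still present, and then clear it:</div>

```shell
# List all constraints with their IDs; look for cli-prefer-ems_rg
pcs constraint --full

# Remove the node preference left behind by "pcs resource move"
pcs resource clear ems_rg

# Equivalent lower-level command
crm_resource --clear --resource ems_rg
```

<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">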
> <br>
> One more thing I noticed is that my service was from systemd and I have<br>
> created a custom systemd.service file.<br>
> <br>
> If I freeze the resource group and start and stop the service using<br>
> systemctl, it happens immediately.<br>
> <br>
> When I reboot the active node, the cluster tries to stop the service,<br>
> which takes around 1 minute. At the same time, if I check the VM<br>
> console, the VM shutdown process hangs for some time on stopping the<br>
> high availability services.<br>
<br>
To give any advice on that we need details, typically logs.<br></blockquote><div><br></div><div>+1. Generally, a snippet from /var/log/pacemaker/pacemaker.log (on pacemaker version 2) or /var/log/cluster/corosync.log (on pacemaker version 1) is ideal. In some cases, system logs (e.g., /var/log/messages or journalctl output) can also be helpful. <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
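</blockquote><div><br></div><div>If it helps, a few starting points for gathering those — the commands below are generic sketches; adjust the resource name and the time window to when the reboot actually happened:</div>

```shell
# Pacemaker 2.x detail log: pull context around the resource stop
grep -B2 -A2 'ems_app' /var/log/pacemaker/pacemaker.log

# System journal for the cluster services around the reboot
journalctl -u pacemaker -u corosync --since "2021-02-10 16:00"

# Or gather everything into one archive to share with the list
crm_report --from "2021-02-10 16:00" /tmp/ems-report
```

<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">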
<br>
> <br>
> Sorry for asking this; I am very new to this cluster.<br>
> <br>
> Is this the expected behaviour?<br>
> <br>
> Regards,<br>
> Ben<br>
> <br>
> On Wed, Feb 10, 2021 at 8:53 PM Ken Gaillot <<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>> wrote:<br>
> <br>
>> On Wed, 2021-02-10 at 17:21 +0300, Ben .T.George wrote:<br>
>> > HI<br>
>> ><br>
>> > I have created a PCS-based 2-node cluster on CentOS 7, and almost<br>
>> > everything is working fine.<br>
>> ><br>
>> > My client machine is on vmware and when I reboot the active node, the<br>
>> > service group is relocating to the passive node and the resources are<br>
>> > starting fine(one IP and application).<br>
>> ><br>
>> > But whenever the other node reboots and joins back to the cluster,<br>
>> > the resources are moved back to that node.<br>
>> ><br>
>> > please find below config :<br>
>> > --------------------------------------------<br>
>> > Cluster Name: EMS<br>
>> > Corosync Nodes:<br>
>> > <a href="http://zkwemsapp01.example.com" rel="noreferrer" target="_blank">zkwemsapp01.example.com</a> <a href="http://zkwemsapp02.example.com" rel="noreferrer" target="_blank">zkwemsapp02.example.com</a><br>
>> > Pacemaker Nodes:<br>
>> > <a href="http://zkwemsapp01.example.com" rel="noreferrer" target="_blank">zkwemsapp01.example.com</a> <a href="http://zkwemsapp02.example.com" rel="noreferrer" target="_blank">zkwemsapp02.example.com</a><br>
>> ><br>
>> > Resources:<br>
>> > Group: ems_rg<br>
>> > Resource: ems_vip (class=ocf provider=heartbeat type=IPaddr2)<br>
>> > Attributes: cidr_netmask=24 ip=10.96.11.39<br>
>> > Meta Attrs: resource-stickiness=1<br>
>> > Operations: monitor interval=30s (ems_vip-monitor-interval-30s)<br>
>> > start interval=0s timeout=20s (ems_vip-start-interval-0s)<br>
>> > stop interval=0s timeout=20s (ems_vip-stop-interval-0s)<br>
>> > Resource: ems_app (class=systemd type=ems-app)<br>
>> > Meta Attrs: resource-stickiness=1<br>
>> > Operations: monitor interval=60 timeout=100 (ems_app-monitor-interval-60)<br>
>> > start interval=0s timeout=100 (ems_app-start-interval-0s)<br>
>> > stop interval=0s timeout=100 (ems_app-stop-interval-0s)<br>
>> ><br>
>> > Stonith Devices:<br>
>> > Resource: ems_vmware_fence (class=stonith type=fence_vmware_soap)<br>
>> > Attributes: ip=10.151.37.110 password=!CM4!!6j7yiApFT pcmk_host_map=zkwemsapp01.example.com:ZKWEMSAPP01;zkwemsapp02.example.com:ZKWEMSAPP02 ssl_insecure=1 username=mtc_tabs\redhat.fadmin<br>
>> > Operations: monitor interval=60s (ems_vmware_fence-monitor-interval-60s)<br>
>> > Fencing Levels:<br>
>> > Target: <a href="http://zkwemsapp01.example.com" rel="noreferrer" target="_blank">zkwemsapp01.example.com</a><br>
>> > Level 1 - ems_vmware_fence<br>
>> > Target: <a href="http://zkwemsapp02.example.com" rel="noreferrer" target="_blank">zkwemsapp02.example.com</a><br>
>> > Level 1 - ems_vmware_fence<br>
>> ><br>
>> > Location Constraints:<br>
>> > Resource: ems_rg<br>
>> > Enabled on: <a href="http://zkwemsapp01.example.com" rel="noreferrer" target="_blank">zkwemsapp01.example.com</a> (score:INFINITY) (role: Started) (id:cli-prefer-ems_rg)<br>
>><br>
>> The above constraint says to prefer that node whenever it is available.<br>
>> The id starting with "cli-" means that it was added by a command-line<br>
>> tool (most likely "pcs resource move"). When you "move" a resource,<br>
>> you're actually telling the cluster to prefer a specific node, and it<br>
>> remembers that preference until you tell it otherwise. You can remove<br>
>> the preference with "pcs resource clear" (or equivalently crm_resource<br>
>> --clear).<br>
>><br>
>> I see your resources have resource-stickiness=1. That is how much<br>
>> preference an active resource has for the node that it is currently on.<br>
>> You can also see the above constraint has a score of INFINITY. If the<br>
>> scores were set such that the stickiness was higher than the<br>
>> constraint, then the stickiness would win and the resource would stay<br>
>> put.<br>
>><br>
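</blockquote><div><br></div><div>To add to that: you can inspect the scores the scheduler actually computes, and raise the stickiness if you prefer score-based behavior. The value below is only illustrative — and note that no finite stickiness can outweigh the INFINITY constraint above, so clearing it is still the right fix here:</div>

```shell
# Show the allocation scores on the live cluster
crm_simulate -sL

# Raise the default stickiness for all resources (illustrative value)
pcs resource defaults resource-stickiness=2000
```

<div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">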
>> > Ordering Constraints:<br>
>> > Colocation Constraints:<br>
>> > Ticket Constraints:<br>
>> ><br>
>> > Alerts:<br>
>> > No alerts defined<br>
>> ><br>
>> > Resources Defaults:<br>
>> > resource-stickiness=1000<br>
>> > Operations Defaults:<br>
>> > No defaults set<br>
>> ><br>
>> > Cluster Properties:<br>
>> > cluster-infrastructure: corosync<br>
>> > cluster-name: EMS<br>
>> > dc-version: 2.0.2-3.el8-744a30d655<br>
>> > have-watchdog: false<br>
>> > last-lrm-refresh: 1612951127<br>
>> > symmetric-cluster: true<br>
>> ><br>
>> > Quorum:<br>
>> > Options:<br>
>> ><br>
>> > --------------------------<br>
>> ><br>
>> > Regards,<br>
>> > Ben<br>
>> ><br>
>> ><br>
>> > _______________________________________________<br>
>> > Manage your subscription:<br>
>> > <a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a> <br>
>> ><br>
>> > ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noreferrer" target="_blank">https://www.clusterlabs.org/</a> <br>
>> --<br>
>> Ken Gaillot <<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>><br>
>><br>
>><br>
<br>
<br>
<br>
<br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div>Regards,<br><br></div>Reid Wahl, RHCA<br></div><div>Senior Software Maintenance Engineer, Red Hat<br></div>CEE - Platform Support Delivery - ClusterHA</div></div></div></div></div></div></div></div></div></div></div></div></div></div></div>