<div dir="auto">+Ayush</div><div dir="auto"><br></div><div dir="auto">Thanks</div><div dir="auto"><br></div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, 15 Mar 2023 at 8:17 PM, Ken Gaillot <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;padding-left:1ex;border-left-color:rgb(204,204,204)">Hi,<br>

If you can reproduce the problem, the following info would be helpful:

* "cibadmin -Q | grep standby" : to show whether it was successfully
recorded in the CIB (will show info for any node with standby, but the
XML ID likely has the node name or ID in it)

* "attrd_updater -Q -n standby -N FILE-2" : to show whether the
attribute manager has the right value in memory for the affected node
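
For reference, a run of those two checks on the affected node might look
roughly like this (the output shown is illustrative only, not captured from
the failing cluster):

    # on FILE-2, after "crm node standby FILE-2" reported success
    cibadmin -Q | grep standby
    #   <nvpair id="num-2-instance_attributes-standby" name="standby" value="on"/>

    attrd_updater -Q -n standby -N FILE-2
    #   name="standby" host="FILE-2" value="on"

If the CIB shows value="on" but the attribute manager disagrees (or the
other way around), that narrows down which daemon lost the update.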


On Wed, 2023-03-15 at 15:51 +0530, Ayush Siddarath wrote:
> Hi All,
>
> We are seeing an issue as part of crm maintenance operations. As part
> of the upgrade process, the crm nodes are put into standby mode, but
> one of the nodes fails to go into standby despite the "crm node
> standby" command returning success.
>
> Commands issued to put the nodes into maintenance:
>
> > [2023-03-15 06:07:08 +0000] [468] [INFO] changed: [FILE-1] =>
> > {"changed": true, "cmd": "/usr/sbin/crm node standby FILE-1",
> > "delta": "0:00:00.442615", "end": "2023-03-15 06:07:08.150375",
> > "rc": 0, "start": "2023-03-15 06:07:07.707760", "stderr": "",
> > "stderr_lines": [], "stdout": "\u001b[32mINFO\u001b[0m: standby
> > node FILE-1", "stdout_lines": ["\u001b[32mINFO\u001b[0m: standby
> > node FILE-1"]}
> > .
> > [2023-03-15 06:07:08 +0000] [468] [INFO] changed: [FILE-2] =>
> > {"changed": true, "cmd": "/usr/sbin/crm node standby FILE-2",
> > "delta": "0:00:00.459407", "end": "2023-03-15 06:07:08.223749",
> > "rc": 0, "start": "2023-03-15 06:07:07.764342", "stderr": "",
> > "stderr_lines": [], "stdout": "\u001b[32mINFO\u001b[0m: standby
> > node FILE-2", "stdout_lines": ["\u001b[32mINFO\u001b[0m: standby
> > node FILE-2"]}
>
> ........
>
> crm status output after the above command execution:
>
> > FILE-2:/var/log # crm status
> > Cluster Summary:
> > * Stack: corosync
> > * Current DC: FILE-1 (version 2.1.2+20211124.ada5c3b36-
> > 150400.2.43-2.1.2+20211124.ada5c3b36) - partition with quorum
> > * Last updated: Wed Mar 15 08:32:27 2023
> > * Last change: Wed Mar 15 06:07:08 2023 by root via cibadmin on
> > FILE-4
> > * 4 nodes configured
> > * 11 resource instances configured (5 DISABLED)
> > Node List:
> > * Node FILE-1: standby (with active resources)
> > * Node FILE-3: standby (with active resources)
> > * Node FILE-4: standby (with active resources)
> > * Online: [ FILE-2 ]
>
> pacemaker logs indicate that FILE-2 received the commands to put it
> into standby.
>
> > FILE-2:/var/log # grep standby /var/log/pacemaker/pacemaker.log
> > Mar 15 06:07:08.098 FILE-2 pacemaker-based [8635]
> > (cib_perform_op) info: ++
> > <nvpair id="num-1-instance_attributes-standby" name="standby"
> > value="on"/>
> > Mar 15 06:07:08.166 FILE-2 pacemaker-based [8635]
> > (cib_perform_op) info: ++
> > <nvpair id="num-3-instance_attributes-standby" name="standby"
> > value="on"/>
> > Mar 15 06:07:08.170 FILE-2 pacemaker-based [8635]
> > (cib_perform_op) info: ++
> > <nvpair id="num-2-instance_attributes-standby" name="standby"
> > value="on"/>
> > Mar 15 06:07:08.230 FILE-2 pacemaker-based [8635]
> > (cib_perform_op) info: ++
> > <nvpair id="num-4-instance_attributes-standby" name="standby"
> > value="on"/>
>
>
> The issue is quite intermittent and has been observed on other nodes as
> well. We have seen a similar problem when we try to remove nodes from
> standby mode (using the "crm node online" command): one or more nodes
> fail to leave standby.
>
> We suspect it could be an issue with the parallel execution of the node
> standby/online commands for all nodes, but the issue wasn't observed
> with the pacemaker packaged with the SLES15 SP2 OS.
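
If the concurrent updates are the trigger, one way to test that theory is to
force the standby requests to run one node at a time and verify each change
before moving on. A minimal sketch (node names are taken from the cluster
above; using crm_attribute as the post-check is purely illustrative):

    #!/bin/sh
    # put the nodes into standby sequentially instead of in parallel
    for n in FILE-1 FILE-2 FILE-3 FILE-4; do
        /usr/sbin/crm node standby "$n"
        # confirm the permanent "standby" node attribute before continuing
        crm_attribute --type nodes --node "$n" --name standby --query
    done

If the problem disappears when the calls are serialized, that would support
the concurrent-update suspicion.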
>
> I'm attaching the pacemaker.log from FILE-2 for analysis. Let us know
> if any additional information is required.
>
> OS: SLES15 SP4
> Pacemaker version -->
> crmadmin --version
> Pacemaker 2.1.2+20211124.ada5c3b36-150400.2.43
>
> Thanks,
> Ayush
>
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
-- 
Ken Gaillot <kgaillot@redhat.com>

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/