<div dir="ltr">Ken, very much appreciate your help on this - I am wondering what you might have gleaned from the logs.<div>Thanks!</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Feb 8, 2021 at 2:43 PM Stuart Massey <<a href="mailto:djangoschef@gmail.com">djangoschef@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr">Wonderful, thank you for looking at this!<div>I have posted uncompressed "saving inputs" files at the links below - 3241 is the immediately preceding one that exists, and 3242 is the one created upon encountering the problem state. In both cases, it looks to me like node02 is DC. There are none of these on node01 for the intervening time period. I've also posted a patch diff of the two with xml formatted for one attribute per line, and am reiterating the link to the related pacemaker.log extract.</div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div><a href="https://project.ibss.net/samples/pe-input-3242.txt" target="_blank">https://project.ibss.net/samples/pe-input-3242.txt</a> (upon encountering the problem demotion)<br></div><div><a href="https://project.ibss.net/samples/pe-input-3241.txt" target="_blank">https://project.ibss.net/samples/pe-input-3241.txt</a> (most recent previous pe-input-*)<br></div><div><a href="https://project.ibss.net/samples/pe-input-diff.txt" target="_blank">https://project.ibss.net/samples/pe-input-diff.txt</a><br></div><div><div><a href="https://project.ibss.net/samples/deidPacemakerLog.2021-01-25.txt" target="_blank">https://project.ibss.net/samples/deidPacemakerLog.2021-01-25.txt</a> </div></div></blockquote>Thank you,</div><div dir="ltr">Stuart<br><blockquote style="margin:0px 0px 0px 
40px;border:none;padding:0px"><div><br></div></blockquote></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Feb 8, 2021 at 12:36 PM Ken Gaillot <<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">On Mon, 2021-02-08 at 12:01 -0500, Stuart Massey wrote:<br>
> I'm wondering if anyone can advise us on next steps here and/or<br>
> correct our understanding. This seems like a race condition that<br>
> causes resources to be stopped unnecessarily. Is there a way to<br>
> prevent a node from processing cib updates from a peer while DC<br>
> negotiations are underway? Our "node02" is running resources fine, <br>
<br>
It shouldn't be necessary -- when node02 becomes DC, it shouldn't see<br>
itself as unable to run resources, it should probe the current state of<br>
everything, and then come to the right conclusion.<br>
<br>
If you look in the detail log, there should be "saving inputs" messages<br>
on the DC at any given time, with a file name. If you can attach the<br>
file from when node02 first becomes DC, I can check whether probes are<br>
being scheduled.<br>
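[A quick way to locate those files, for anyone following along - a sketch only: the log line below is a fabricated sample in the usual format, and the paths are the stock defaults; on a real node you would grep /var/log/pacemaker/pacemaker.log (or corosync.log) itself.]

```shell
# Extract the pe-input path from a "saving inputs" message.
# The sample log line is made up for illustration; grep your real
# detail log instead of the temp file used here.
log=$(mktemp)
cat > "$log" <<'EOF'
Jan 28 14:48:05 [21937] node02.example.com pengine: notice: process_pe_message: Calculated transition 123, saving inputs in /var/lib/pacemaker/pengine/pe-input-3242.bz2
EOF
grep -o '/var/lib/pacemaker/pengine/pe-input-[0-9]*\.bz2' "$log"
rm -f "$log"
```

The extracted file can then be replayed offline, e.g. crm_simulate --simulate --xml-file /var/lib/pacemaker/pengine/pe-input-3242.bz2, to see what actions (including probes) the scheduler would take.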
<br>
> and since it winds up winning the DC election, would continue to run<br>
> them uninterrupted if it just ignored or deferred the cib updates it<br>
> receives in the middle of the negotiation.<br>
> Very much appreciate all the help and discussion available on this<br>
> board.<br>
> Regards,<br>
> Stuart<br>
> <br>
> On Mon, Feb 1, 2021 at 11:43 AM Stuart Massey <<a href="mailto:djangoschef@gmail.com" target="_blank">djangoschef@gmail.com</a>><br>
> wrote:<br>
> > Sequence seems to be:<br>
> > node02 is DC and master/primary, node01 is in maintenance mode and<br>
> > slave/secondary<br>
> > comms go down<br>
> > node01 elects itself master, and deletes node02's status from its cib<br>
> > comms come up<br>
> > cluster starts reforming<br>
> > node01 sends cib updates to node02<br>
> > DC negotiations start, both nodes unset DC<br>
> > node02 receives the cib updates and processes them, deleting its own status<br>
> > DC negotiations complete with node02 winning<br>
> > node02, having lost its status, believes it cannot host resources and stops them all<br>
> > for whatever reason, perhaps somehow due to the completely missing transient_attributes, node02 never schedules a probe for itself<br>
> > we have to "refresh" manually<br>
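[For the archives, the manual refresh step can be sketched like this. Command spellings assume a recent pacemaker CLI (older releases use crm_resource --cleanup instead of --refresh); the script only prints the commands, since they need a live cluster, and the node and attribute names are the ones from this thread.]

```shell
# Print the recovery commands to run on node02. Printed rather than
# executed because they require a live cluster; names come from this thread.
node=node02.example.com
cat <<EOF
# Re-probe all resources on $node; the probes should recreate the deleted
# transient attributes (promotion score, pingd):
crm_resource --refresh --node $node
# Confirm the score is back (transient attributes live in the reboot lifetime):
crm_attribute --node $node --name master-drbd_ourApp --lifetime reboot --query
EOF
```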
> > <br>
> > On Mon, Feb 1, 2021 at 11:31 AM Ken Gaillot <<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>><br>
> > wrote:<br>
> > > On Mon, 2021-02-01 at 11:09 -0500, Stuart Massey wrote:<br>
> > > > Hi Ken,<br>
> > > > Thanks. In this case, transient_attributes for node02 in the cib on node02, which never lost quorum, seem to be deleted by a request from node01 when node01 rejoins the cluster - IF I understand the pacemaker.log correctly. This causes node02 to stop resources, which will not be restarted until we manually refresh on node02.<br>
> > > <br>
> > > Good point, it depends on which node is DC. When a cluster<br>
> > > splits, each<br>
> > > side sees the other side as the one that left. When the split<br>
> > > heals,<br>
> > > whichever side has the newly elected DC is the one that clears<br>
> > > the<br>
> > > other.<br>
> > > <br>
> > > However the DC should schedule probes for the other side, and<br>
> > > probes<br>
> > > generally set the promotion score, so manual intervention<br>
> > > shouldn't be<br>
> > > needed. I'd make sure that probes were scheduled, then<br>
> > > investigate how<br>
> > > the agent sets the score.<br>
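[One way to check how the agent sets the score - a sketch that assumes the stock ocf:linbit:drbd agent path, and falls back to a note when the agent is not installed on the machine where you run it.]

```shell
# Promotable agents set their promotion score as a *transient* node
# attribute (master-<resource>, reboot lifetime) - the same kind of
# attribute that was deleted here. The linbit drbd agent does this via
# crm_master; grep it to see which values it chooses in which DRBD states.
agent=/usr/lib/ocf/resource.d/linbit/drbd
grep -n 'crm_master' "$agent" 2>/dev/null ||
    echo "agent not found at $agent; it invokes crm_master (a crm_attribute -l reboot wrapper) to set the score"
```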
> > > <br>
> > > > On Mon, Feb 1, 2021 at 10:59 AM Ken Gaillot <<br>
> > > <a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>><br>
> > > > wrote:<br>
> > > > > On Fri, 2021-01-29 at 12:37 -0500, Stuart Massey wrote:<br>
> > > > > > Can someone help me with this?<br>
> > > > > > Background:<br>
> > > > > > > "node01" is failing, and has been placed in "maintenance" mode. It occasionally loses connectivity.<br>
> > > > > > > "node02" is able to run our resources<br>
> > > > > > <br>
> > > > > > Consider the following messages from pacemaker.log on "node02", just after "node01" has rejoined the cluster (per "node02"):<br>
> > > > > > > Jan 28 14:48:03 [21933] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> cib: info: cib_perform_op: -- /cib/status/node_state[@id='2']/transient_attributes[@id='2']<br>
> > > > > > > Jan 28 14:48:03 [21933] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> cib: info: cib_perform_op: + /cib: @num_updates=309<br>
> > > > > > > Jan 28 14:48:03 [21933] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='<a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a>']/transient_attributes: OK (rc=0, origin=<a href="http://node01.example.com/crmd/3784" rel="noreferrer" target="_blank">node01.example.com/crmd/3784</a>, version=0.94.309)<br>
> > > > > > > Jan 28 14:48:04 [21938] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> crmd: info: abort_transition_graph: Transition aborted by deletion of transient_attributes[@id='2']: Transient attribute change | cib=0.94.309 source=abort_unless_down:357 path=/cib/status/node_state[@id='2']/transient_attributes[@id='2'] complete=true<br>
> > > > > > > Jan 28 14:48:05 [21937] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> pengine: info: master_color: ms_drbd_ourApp: Promoted 0 instances of a possible 1 to master<br>
> > > > > > > <br>
> > > > > > The implication, it seems to me, is that "node01" has asked "node02" to delete the transient-attributes for "node02". The transient-attributes should normally be:<br>
> > > > > > <transient_attributes id="2"><br>
> > > > > >     <instance_attributes id="status-2"><br>
> > > > > >         <nvpair id="status-2-master-drbd_ourApp" name="master-drbd_ourApp" value="10000"/><br>
> > > > > >         <nvpair id="status-2-pingd" name="pingd" value="100"/><br>
> > > > > >     </instance_attributes><br>
> > > > > > </transient_attributes><br>
> > > > > > <br>
> > > > > > These attributes are necessary for "node02" to be Master/Primary, correct? <br>
> > > > > > <br>
> > > > > > Why might this be happening and how do we prevent it?<br>
> > > > > <br>
> > > > > Transient attributes are always cleared when a node leaves the cluster (that's what makes them transient ...). It's probably coincidence it went through as the node rejoined.<br>
> > > > > <br>
> > > > > When the node rejoins, it will trigger another run of the scheduler, which will schedule a probe of all resources on the node. Those probes should reset the promotion score.<br>
> > > > > _______________________________________________<br>
> > > > > Manage your subscription:<br>
> > > > > <a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br>
> > > > > <br>
> > > > > ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noreferrer" target="_blank">https://www.clusterlabs.org/</a><br>
-- <br>
Ken Gaillot <<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>><br>
<br>
</blockquote></div>
</blockquote></div>