<div dir="ltr">Ok, that is exactly what one might expect -- and: Note that only the failing node is in maintenance mode. The current master/primary is not in maintenance mode, and on that node we continue to see messages in pacemaker.log that seem to indicate that it is doing monitor operations. <div><div>Logically, if one has a multi-node cluster and puts only one of the nodes in maintenance mode while there are no managed resources running on it, wouldn't the other nodes continue to manage the resources among themselves? </div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jan 25, 2021 at 2:07 AM Ulrich Windl <<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">>>> Stuart Massey <<a href="mailto:djangoschef@gmail.com" target="_blank">djangoschef@gmail.com</a>> schrieb am 22.01.2021 um 14:08 in<br>
Nachricht<br>
<<a href="mailto:CABQ68NTGDmxVo_uVLXg0HYtLgsMRGUCvCssa3eRGQfOv%2BCJ9zQ@mail.gmail.com" target="_blank">CABQ68NTGDmxVo_uVLXg0HYtLgsMRGUCvCssa3eRGQfOv+CJ9zQ@mail.gmail.com</a>>:<br>
> Hi Ulrich,<br>
> Thank you for your response.<br>
> It makes sense that this would be happening on the failing, secondary/slave<br>
> node, in which case we might expect drbd to be restarted (the service<br>
> entirely, since it is already demoted) on the slave. I don't understand how<br>
> it would affect the master, unless the failing secondary is causing some<br>
> issue with drbd on the primary that causes the monitor on the master to<br>
> time out for some reason. That does not (so far) seem to be the case: the<br>
> failing node has now been in maintenance mode for a couple of days with<br>
> drbd still running as secondary, so if drbd failures on the secondary were<br>
> causing the monitor on the Master/Primary to time out, we should still be<br>
> seeing those timeouts; we are not. The master has yet to demote the drbd<br>
> resource since we put the failing node in maintenance.<br>
<br>
When you are in maintenance mode, monitor operations won't run AFAIK.<br>
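For what it's worth, the scope of maintenance mode differs depending on how it is set: the per-node form should only stop the cluster from acting on that one node, while the cluster property stops all management (including recurring monitors) everywhere. A rough sketch with pcs (command availability varies by pcs version; the node name is hypothetical, and these only make sense against a live cluster):

```
# Per-node: only node01 is left alone; the other nodes keep managing
# resources, and monitors keep running on them.
pcs node maintenance node01.example.com
pcs node unmaintenance node01.example.com

# Cluster-wide: all resource management, including monitors, stops.
pcs property set maintenance-mode=true

# Check which form is currently in effect.
pcs property list --all | grep -i maintenance
crm_mon -1
```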
<br>
> We will watch for a bit longer.<br>
> Thanks again<br>
> <br>
> On Thu, Jan 21, 2021 at 2:23 AM Ulrich Windl <<br>
> <a href="mailto:Ulrich.Windl@rz.uni-regensburg.de" target="_blank">Ulrich.Windl@rz.uni-regensburg.de</a>> wrote:<br>
> <br>
>> >>> Stuart Massey <<a href="mailto:stuart.e.massey@gmail.com" target="_blank">stuart.e.massey@gmail.com</a>> wrote on 20.01.2021 at<br>
>> 03:41<br>
>> in<br>
>> message<br>
>> <<a href="mailto:CAJfrB75UPUmZJPjXCoACRDGoG-BqDcJHff5c_OmVBFya53D-dw@mail.gmail.com" target="_blank">CAJfrB75UPUmZJPjXCoACRDGoG-BqDcJHff5c_OmVBFya53D-dw@mail.gmail.com</a>>:<br>
>> > Strahil,<br>
>> > That is very kind of you, thanks.<br>
>> > I see that in your (feature set 3.4.1) cib, drbd is in a <clone> with<br>
>> some<br>
>> > meta_attributes and operations having to do with promotion, while in our<br>
>> > (feature set 3.0.14) cib, drbd is in a <master> which does not have<br>
those<br>
>> > (maybe since promotion is implicit).<br>
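The schema difference being described can be made concrete. A sketch of the two CIB forms (ids are made up; attribute details vary by schema version): the older schema wraps the primitive in a <master> element, where promotion is implicit, while the newer schema uses a <clone> carrying a promotable meta attribute:

```xml
<!-- Older schema (feature set ~3.0.x): promotion implied by the wrapper -->
<master id="ms_drbd_ourApp">
  <primitive id="drbd_ourApp" class="ocf" provider="linbit" type="drbd"/>
</master>

<!-- Newer schema (feature set ~3.4.x): promotable clone -->
<clone id="drbd_ourApp-clone">
  <meta_attributes id="drbd_ourApp-clone-meta">
    <nvpair id="drbd-promotable" name="promotable" value="true"/>
  </meta_attributes>
  <primitive id="drbd_ourApp" class="ocf" provider="linbit" type="drbd"/>
</clone>
```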
>> > Our cluster has been working quite well for some time, too. I wonder<br>
what<br>
>> > would happen if you could hang the os in one of your nodes? If a VM,<br>
>> maybe<br>
>><br>
>> Unless some other fencing mechanism (like watchdog timeout) kicks in, the<br>
>> monitor operation is the only thing that can detect a problem (from the<br>
>> cluster's view): The monitor operation would timeout. Then the cluster<br>
>> would<br>
>> try to restart the resource (stop, then start). If stop also times out the<br>
>> node<br>
>> will be fenced.<br>
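The escalation described here (monitor timeout, then recover via stop/start, then fencing if the stop also times out) is driven by the operation definitions on the resource. A sketch of where those knobs usually live, with pcs (resource name and values are illustrative, not taken from this cluster):

```
# Role-specific monitors need distinct intervals; a monitor timeout here
# is what triggers the stop/start recovery sequence.
pcs resource update drbd_ourApp op monitor interval=29s role=Master timeout=30s
pcs resource update drbd_ourApp op monitor interval=31s role=Slave timeout=30s

# If this stop also times out, the node is fenced (when stonith is enabled).
pcs resource update drbd_ourApp op stop timeout=60s
pcs property set stonith-enabled=true
```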
>><br>
>> > the constrained secondary could be starved by setting disk IOPs to<br>
>> > something really low. Of course, you are using different versions of<br>
just<br>
>> > about everything, as we're on centos7.<br>
>> > Regards,<br>
>> > Stuart<br>
>> ><br>
>> > On Tue, Jan 19, 2021 at 6:20 PM Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com" target="_blank">hunter86_bg@yahoo.com</a>><br>
>> > wrote:<br>
>> ><br>
>> >> I have just built a test cluster (centOS 8.3) for testing DRBD and it<br>
>> >> works quite fine.<br>
>> >> Actually I followed my notes from<br>
>> >> <a href="https://forums.centos.org/viewtopic.php?t=65539" rel="noreferrer" target="_blank">https://forums.centos.org/viewtopic.php?t=65539</a> with the exception of<br>
>> >> point 8 due to the "promotable" stuff.<br>
>> >><br>
>> >> I'm attaching the output of 'pcs cluster cib file' and I hope it helps<br>
>> you<br>
>> >> fix your issue.<br>
>> >><br>
>> >> Best Regards,<br>
>> >> Strahil Nikolov<br>
>> >><br>
>> >><br>
>> >> At 09:32 -0500 on 19.01.2021 (Tue), Stuart Massey wrote:<br>
>> >><br>
>> >> Ulrich,<br>
>> >> Thank you for that observation. We share that concern.<br>
>> >> We have four 1G NICs active, bonded in pairs. One bonded pair serves<br>
the<br>
>> >> "public" (to the intranet) IPs, and the other bonded pair is private to<br>
>> the<br>
>> >> cluster, used for drbd replication. HA will, I hope, be using the<br>
>> "public"<br>
>> >> IP, since that is the route to the IP addresses resolved for the host<br>
>> >> names; that will certainly be the only route to the quorum device. I<br>
can<br>
>> >> say that this cluster has run reasonably well for quite some time with<br>
>> this<br>
>> >> configuration prior to the recently developed hardware issues on one of<br>
>> the<br>
>> >> nodes.<br>
>> >> Regards,<br>
>> >> Stuart<br>
>> >><br>
>> >> On Tue, Jan 19, 2021 at 2:49 AM Ulrich Windl <<br>
>> >> <a href="mailto:Ulrich.Windl@rz.uni-regensburg.de" target="_blank">Ulrich.Windl@rz.uni-regensburg.de</a>> wrote:<br>
>> >><br>
>> >> >>> Stuart Massey <<a href="mailto:djangoschef@gmail.com" target="_blank">djangoschef@gmail.com</a>> wrote on 19.01.2021 at<br>
>> 04:46<br>
>> >> in<br>
>> >> message<br>
>> >> <<a href="mailto:CABQ68NQuTyYXcYgwcUpg5TxxaJjwhSp%2Bc6GCOKfOwGyRQSAAjQ@mail.gmail.com" target="_blank">CABQ68NQuTyYXcYgwcUpg5TxxaJjwhSp+c6GCOKfOwGyRQSAAjQ@mail.gmail.com</a>>:<br>
>> >> > So, we have a 2-node cluster with a quorum device. One of the nodes<br>
>> >> (node1)<br>
>> >> > is having some trouble, so we have added constraints to prevent any<br>
>> >> > resources migrating to it, but have not put it in standby, so that<br>
>> drbd<br>
>> >> in<br>
>> >> > secondary on that node stays in sync. The problems it is having lead<br>
>> to<br>
>> >> OS<br>
>> >> > lockups that eventually resolve themselves - but that causes it to be<br>
>> >> > temporarily dropped from the cluster by the current master (node2).<br>
>> >> > Sometimes when node1 rejoins, then node2 will demote the drbd ms<br>
>> >> resource.<br>
>> >> > That causes all resources that depend on it to be stopped, leading to<br>
>> a<br>
>> >> > service outage. They are then restarted on node2, since they can't<br>
run<br>
>> on<br>
>> >> > node1 (due to constraints).<br>
>> >> > We are having a hard time understanding why this happens. It seems<br>
>> like<br>
>> >> > there may be some sort of DC contention happening. Does anyone have<br>
>> any<br>
>> >> > idea how we might prevent this from happening?<br>
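The setup described, keeping the node online so its DRBD secondary stays in sync while banning the managed services from it, is typically expressed as location constraints. A sketch (resource and node names are assumed, and rule syntax varies between pcs versions):

```
# Never run the application resources on node01, but leave the node
# online so DRBD keeps replicating to it.
pcs constraint location g_ourApp avoids node01.example.com

# Allow the DRBD resource itself on node01 (as secondary), but never
# let it be promoted there; '#uname' is escaped from the shell.
pcs constraint location ms_drbd_ourApp rule role=master score=-INFINITY \#uname eq node01.example.com
```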
>> >><br>
>> >> I think if you are routing high-volume DRBD traffic through "the same<br>
>> >> pipe" as the cluster communication, cluster communication may fail if<br>
>> the<br>
>> >> pipe is saturated.<br>
>> >> I'm not happy with that, but it seems to be that way.<br>
>> >><br>
>> >> Maybe running a combination of iftop and iotop could help you<br>
understand<br>
>> >> what's going on...<br>
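As a concrete starting point for that suggestion, something along these lines (the bond interface names are assumptions based on the two bonded pairs described earlier):

```
# Watch the replication link and the cluster/"public" link separately,
# to see whether either is saturated during the lockups.
iftop -i bond1    # DRBD replication pair
iftop -i bond0    # "public" pair carrying cluster traffic

# Show only processes actually doing I/O, to catch disk/DRBD stalls.
iotop -o -d 5
```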
>> >><br>
>> >> Regards,<br>
>> >> Ulrich<br>
>> >><br>
>> >> > Selected messages (de-identified) from pacemaker.log that illustrate<br>
>> >> > suspicion re DC confusion are below. The update_dc and<br>
>> >> > abort_transition_graph re deletion of lrm seem to always precede the<br>
>> >> > demotion, and a demotion seems to always follow (when not already<br>
>> >> demoted).<br>
>> >> ><br>
>> >> > Jan 18 16:52:17 [21938] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> crmd: info:<br>
>> >> > do_dc_takeover: Taking over DC status for this partition<br>
>> >> > Jan 18 16:52:17 [21938] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> crmd: info:<br>
>> >> update_dc:<br>
>> >> > Set DC to <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> (3.0.14)<br>
>> >> > Jan 18 16:52:17 [21938] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> crmd: info:<br>
>> >> > abort_transition_graph: Transition aborted by deletion of<br>
>> >> > lrm[@id='1']: Resource state removal | cib=0.89.327<br>
>> >> > source=abort_unless_down:357<br>
>> >> > path=/cib/status/node_state[@id='1']/lrm[@id='1'] complete=true<br>
>> >> > Jan 18 16:52:19 [21937] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> pengine: info:<br>
>> >> > master_color: ms_drbd_ourApp: Promoted 0 instances of a possible 1<br>
to<br>
>> >> > master<br>
>> >> > Jan 18 16:52:19 [21937] <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> pengine: notice:<br>
>> >> LogAction:<br>
>> >> > * Demote drbd_ourApp:1 ( Master -> Slave<br>
>> >> > <a href="http://node02.example.com" rel="noreferrer" target="_blank">node02.example.com</a> )<br>
>> >><br>
>> >><br>
>> >><br>
>> >><br>
>> >> _______________________________________________<br>
>> >> Manage your subscription:<br>
>> >> <a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a> <br>
>> >><br>
>> >> ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noreferrer" target="_blank">https://www.clusterlabs.org/</a> <br>
>> >><br>
>><br>
>><br>
>><br>
>><br>
<br>
<br>
<br>
</blockquote></div>