<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Feb 26, 2022 at 7:14 AM Strahil Nikolov via Users <<a href="mailto:users@clusterlabs.org">users@clusterlabs.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">I always used this one for triggering kdump when using sbd:<div><a href="https://www.suse.com/support/kb/doc/?id=000019873" target="_blank">https://www.suse.com/support/kb/doc/?id=000019873</a><br> <br> <blockquote style="margin:0px 0px 20px"> <div style="font-family:Roboto,sans-serif;color:rgb(109,0,246)"> <div>On Fri, Feb 25, 2022 at 21:34, Reid Wahl</div><div><<a href="mailto:nwahl@redhat.com" target="_blank">nwahl@redhat.com</a>> wrote:</div> </div> <div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246)"> On Fri, Feb 25, 2022 at 3:47 AM Andrei Borzenkov <<a shape="rect" href="mailto:arvidjaar@gmail.com" target="_blank">arvidjaar@gmail.com</a>> wrote:<br clear="none">><br clear="none">> On Fri, Feb 25, 2022 at 2:23 PM Reid Wahl <<a shape="rect" href="mailto:nwahl@redhat.com" target="_blank">nwahl@redhat.com</a>> wrote:<br clear="none">> ><br clear="none">> > On Fri, Feb 25, 2022 at 3:22 AM Reid Wahl <<a shape="rect" href="mailto:nwahl@redhat.com" target="_blank">nwahl@redhat.com</a>> wrote:<br clear="none">> > ><br clear="none">> ...<br clear="none">> > > ><br clear="none">> > > > So what happens most likely is that the watchdog terminates the kdump.<br clear="none">> > > > In that case all the mess with fence_kdump won't help, right?<br clear="none">> > ><br clear="none">> > > You can configure extra_modules in your /etc/kdump.conf file to<br clear="none">> > > include the watchdog module, and then restart kdump.service. 
For<br clear="none">> > > example:<br clear="none">> > ><br clear="none">> > > # grep ^extra_modules /etc/kdump.conf<br clear="none">> > > extra_modules i6300esb<br clear="none">> > ><br clear="none">> > > If you're not sure of the name of your watchdog module, wdctl can help<br clear="none">> > > you find it. sbd needs to be stopped first, because it keeps the<br clear="none">> > > watchdog device timer busy.<br clear="none">> > ><br clear="none">> > > # pcs cluster stop --all<br clear="none">> > > # wdctl | grep Identity<br clear="none">> > > Identity: i6300ESB timer [version 0]<br clear="none">> > > # lsmod | grep -i i6300ESB<br clear="none">> > > i6300esb 13566 0<br clear="none">> > ><br clear="none">> > ><br clear="none">> > > If you're also using fence_sbd (poison-pill fencing via block device),<br clear="none">> > > then you should be able to protect yourself from that during a dump by<br clear="none">> > > configuring fencing levels so that fence_kdump is level 1 and<br clear="none">> > > fence_sbd is level 2.<br clear="none">> ><br clear="none">> > RHKB, for anyone interested:<br clear="none">> > - sbd watchdog timeout causes node to reboot during crash kernel<br clear="none">> > execution (<a shape="rect" href="https://access.redhat.com/solutions/3552201" target="_blank">https://access.redhat.com/solutions/3552201</a>)<br clear="none">><br clear="none">> What is not clear from this KB (and quotes from it above) - what<br clear="none">> instance updates watchdog? Quoting (emphasis mine)<br clear="none">><br clear="none">> --><--<br clear="none">> With the module loaded, the timer *CAN* be updated so that it does not<br clear="none">> expire and force a reboot in the middle of vmcore generation.<br clear="none">> --><--<br clear="none">><br clear="none">> Sure it can, but what program exactly updates the watchdog during<br clear="none">> kdump execution? 
I am pretty sure that sbd does not run at this point.<br clear="none"><br clear="none">That's a valid question. I found this approach to work back in 2018<br clear="none">after a fair amount of frustration, and didn't question it too deeply<br clear="none">at the time.<br clear="none"><br clear="none">The answer seems to be that the kernel does it.<br clear="none"> - <a shape="rect" href="https://stackoverflow.com/a/2020717" target="_blank">https://stackoverflow.com/a/2020717</a><br clear="none"> - <a shape="rect" href="https://stackoverflow.com/a/42589110" target="_blank">https://stackoverflow.com/a/42589110</a></div></blockquote></div></blockquote><div>I think in most cases nothing would be triggering the running watchdog,</div><div>except maybe in the case of the two drivers mentioned.</div><div>The behavior is: if no watchdog-timeout is defined for the crashdump case,</div><div>sbd will (at least try to) disable the watchdog.</div><div>If disabling is possible and not prohibited for a given watchdog, this should</div><div>leave the hardware watchdog truly disabled, with nothing needing</div><div>to trigger it anymore.</div><div>If the crashdump watchdog-timeout is configured to the same value as the</div><div>watchdog-timeout engaged before, sbd won't touch the watchdog</div><div>(it closes the device without stopping it).</div><div>That being said, I'd suppose that the only somewhat production-safe</div><div>configuration is setting both watchdog-timeouts to the same</div><div>value.</div><div>I doubt we can assume that all I/O from the host that was initiated</div><div>prior to triggering the transition to the crashdump kernel is stopped</div><div>immediately. All other nodes will, however, assume that I/O stops within</div><div>watchdog-timeout. 
When we disable the watchdog, we can't</div><div>be sure that the subsequent transition to the crashdump kernel will even happen.</div><div>So leaving watchdog-timeout at the previous value seems to be</div><div>the only way to really ensure that the node is silenced by a</div><div>hardware reset within the timeout assumed by the rest of the nodes.</div><div>In case the watchdog driver has the running-detection mentioned</div><div>in the links above, the safe approach would probably be to have the</div><div>module removed from the crash kernel.</div><div><br></div><div>Klaus</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><blockquote style="margin:0px 0px 20px"><div style="padding:10px 0px 0px 20px;margin:10px 0px 0px;border-left:1px solid rgb(109,0,246)"><br clear="none">> _______________________________________________<br clear="none">> Manage your subscription:<br clear="none">> <a shape="rect" href="https://lists.clusterlabs.org/mailman/listinfo/users" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br clear="none">><br clear="none">> ClusterLabs home: <a shape="rect" href="https://www.clusterlabs.org/" target="_blank">https://www.clusterlabs.org/</a><br clear="none">><br clear="none"><br clear="none"><br clear="none">-- <br clear="none">Regards,<br clear="none"><br clear="none">Reid Wahl (He/Him), RHCA<br clear="none">Senior Software Maintenance Engineer, Red Hat<br clear="none">CEE - Platform Support Delivery - ClusterHA<div id="gmail-m_145643614668747502yqtfd54824"><br clear="none"><br clear="none">_______________________________________________<br clear="none">Manage your subscription:<br clear="none"><a shape="rect" href="https://lists.clusterlabs.org/mailman/listinfo/users" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br clear="none"><br clear="none">ClusterLabs home: <a shape="rect" href="https://www.clusterlabs.org/" 
target="_blank">https://www.clusterlabs.org/</a><br clear="none"></div> </div> </blockquote></div>
</blockquote></div></div>
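[Editor's note: the steps suggested in the thread above, gathered into one rough shell sketch. This reflects Reid's approach only; the watchdog module name (i6300esb), node name (node1), and stonith resource ids (fence_kdump_dev, fence_sbd_dev) are illustrative placeholders, not prescriptions, and per Klaus's caveats you may prefer to keep both watchdog-timeouts equal instead of loading the watchdog module into the crash kernel.]

```shell
# 1. Identify the watchdog module. sbd must be stopped first,
#    because it keeps the watchdog device timer busy.
pcs cluster stop --all
wdctl | grep Identity        # e.g. "Identity: i6300ESB timer [version 0]"
lsmod | grep -i i6300esb

# 2. Have kdump load the watchdog module in the crash kernel so its
#    timer can be serviced during vmcore generation.
echo "extra_modules i6300esb" >> /etc/kdump.conf
systemctl restart kdump.service

# 3. If fence_sbd (poison-pill fencing) is also in use, try fence_kdump
#    first so a node is not rebooted in the middle of a dump.
pcs stonith level add 1 node1 fence_kdump_dev
pcs stonith level add 2 node1 fence_sbd_dev
```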