<html><head></head><body><div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Sorry, sometimes I try to make it simpler, and maybe I'm leaving out some information.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">I think I found what happened, and actually xstha2 DID NOT mount the zpool, nor start the IP address.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Let's make a step back.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">We have two ip resources, one is normally for xstha1, the other is normally for xstha2.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">The zpool is normally for xstha1.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">The two IPs are used to share NFS resources to a Proxmox cluster (that's why I call them NFS IPs).</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">The logic moves the zpool and then the IP to node xstha2, when xstha1 is not available, and vice versa.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">I was confused by the "duplicated IP" message I've seen on xstha1 while xstha2 was going to be stonished.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">I was worried that xstha2 may have mounted the zpool when xstha1 had already mounted it.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Actually, reading again the "duplicated IP" message, it was xstha1 that (having the pool mounted and not seeing xstha2 anymore) got the xstha2 IP for NFS.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">So I think there is no worry about the zpool!</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">I will now try to play with the "no-quorum-policy=ignore" to see if that actually works correctly with stonith enabled.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Thanks for your help!</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Gabriele </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div id="wt-mailcard">
<div> </div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>Sonicle S.r.l. </strong>: <a href="http://www.sonicle.com/" target="_new">http://www.sonicle.com</a></span></div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>Music: </strong><a href="http://www.gabrielebulfon.com/" target="_new">http://www.gabrielebulfon.com</a></span></div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>eXoplanets : </strong><a href="https://gabrielebulfon.bandcamp.com/album/exoplanets">https://gabrielebulfon.bandcamp.com/album/exoplanets</a></span></div>
<div> </div>
</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"><tt><br /><br /><br />----------------------------------------------------------------------------------<br /><br />Da: Andrei Borzenkov <arvidjaar@gmail.com><br />A: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org> <br />Data: 17 dicembre 2020 9.50.54 CET<br />Oggetto: Re: [ClusterLabs] Antw: [EXT] delaying start of a resource<br /><br /></tt></div>
<blockquote style="border-left: #000080 2px solid; margin-left: 5px; padding-left: 5px;"><tt>On Thu, Dec 17, 2020 at 11:11 AM Gabriele Bulfon <gbulfon@sonicle.com> wrote:<br />><br />> Yes, sorry took same bash by mistake...here are the correct logs.<br />><br />> Yes, xstha1 has delay 10s so that I'm giving him precedence, xstha2 has delay 1s and will be stonished earlier.<br />> During the short time before xstha2 got powered off, I saw it had time to turn on NFS IP (I saw duplicated IP on xstha1).<br /><br />Again - please write so that others can understand you. How should we<br />know what "NFS IP" is supposed to be? You have two resources that<br />looks like they are IP related and neither of them has NFS in its<br />name: xstha1_san0_IP, xstha2_san0_IP. And even if they had NFS in<br />their names - which of two resources are you talking about?<br /><br />According to logs from xstha1, it started to activate resources only<br />after stonith was confirmed<br /><br />Dec 16 15:08:12 [708] stonith-ng: notice: log_operation:<br />Operation 'off' [1273] (call 4 from crmd.712) for host 'xstha2' with<br />device 'xstha2-stonith' returned: 0 (OK)<br />Dec 16 15:08:12 [708] stonith-ng: notice: remote_op_done:<br />Operation 'off' targeting xstha2 on xstha1 for<br />crmd.712@xstha1.e487e7cc: OK<br /><br />It is possible that your IPMI/BMC/whatever implementation responds<br />with success before it actually completes this action. I have seen at<br />least some delays in the past. There is not really much that can be<br />done here except adding artificial delay to stonith resource agent.<br />You need to test IPMI functionality before using it in pacemaker.<br /><br />In this case xstha1 may have configured xstha2_san0_IP resource before<br />xstha2 was down. This would explain duplicated IP.<br /><br />> And becase configuration has "order zpool_data_order inf: zpool_data ( xstha1_san0_IP )", that means xstha2 had imported the zpool for a small time before being stonished, and this must never happen.<br /><br />There is no indication in logs that pacemaker started or attempted to<br />start either of xstha1_san0_IP or zpool_sata on xstha2.<br /><br />><br />> What suggests me that resources were started on xstha2 (and duplicated IP is an effect) are these logs portions of xstha2.<br /><br />The resources xstha2_san0_IP *remained* started on xstha2. 
Pacemaker<br />did not try to stop it at all; it had no reason to do so.<br /><br />> These tell me it could not turn off resources on xstha1 (correct, it couldn't contact xstha1):<br />><br />> Dec 16 15:08:56 [667] pengine: warning: custom_action: Action xstha1_san0_IP_stop_0 on xstha1 is unrunnable (offline)<br />> Dec 16 15:08:56 [667] pengine: warning: custom_action: Action zpool_data_stop_0 on xstha1 is unrunnable (offline)<br />> Dec 16 15:08:56 [667] pengine: warning: custom_action: Action xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)<br />> Dec 16 15:08:56 [667] pengine: warning: custom_action: Action xstha2-stonith_stop_0 on xstha1 is unrunnable (offline)<br />><br />> These tell me xstha2 took control of resources that were actually running on xstha1:<br />><br />> Dec 16 15:08:56 [667] pengine: notice: LogAction: * Move xstha1_san0_IP ( xstha1 -> xstha2 )<br />> Dec 16 15:08:56 [667] pengine: info: LogActions: Leave xstha2_san0_IP (Started xstha2)<br />> Dec 16 15:08:56 [667] pengine: notice: LogAction: * Move zpool_data ( xstha1 -> xstha2 )<br />> Dec 16 15:08:56 [667] pengine: info: LogActions: Leave xstha1-stonith (Started xstha2)<br />> Dec 16 15:08:56 [667] pengine: notice: LogAction: * Stop xstha2-stonith ( xstha1 ) due to node availability<br />><br /><br />These lines are only the action plan - what pacemaker *will* do.<br /></tt></blockquote></body></html>