<html><head></head><body><div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Thanks!<br /><br />I tried the first option, adding pcmk_delay_base to the two stonith primitives.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">The first has a 1-second delay, the second a 5-second delay.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">It didn't work :( the two nodes still killed each other.</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Is there anything wrong with the way I did it?</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">Here's the config:</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"><span style="font-family: 'courier new', courier;">node 1: xstha1 \</span><br /><span style="font-family: 'courier new', courier;"> attributes standby=off maintenance=off</span><br /><span style="font-family: 'courier new', courier;">node 2: xstha2 \</span><br /><span style="font-family: 'courier new', courier;"> attributes standby=off maintenance=off</span><br /><span style="font-family: 'courier new', courier;">primitive xstha1-stonith stonith:external/ipmi \</span><br /><span style="font-family: 'courier new', courier;"> params hostname=xstha1 ipaddr=192.168.221.18 userid=ADMIN passwd="***" interface=lanplus pcmk_delay_base=1 \</span><br /><span style="font-family: 'courier new', courier;"> op monitor interval=25 timeout=25 start-delay=25 \</span><br /><span style="font-family: 'courier new', courier;"> meta target-role=Started</span><br /><span style="font-family: 'courier new', courier;">primitive xstha1_san0_IP IPaddr \</span><br /><span style="font-family: 'courier new', courier;"> params ip=10.10.10.1 cidr_netmask=255.255.255.0 nic=san0</span><br /><span style="font-family: 'courier new', courier;">primitive xstha2-stonith stonith:external/ipmi \</span><br /><span style="font-family: 'courier new', courier;"> params hostname=xstha2 ipaddr=192.168.221.19 userid=ADMIN passwd="***" interface=lanplus pcmk_delay_base=5 \</span><br /><span style="font-family: 'courier new', courier;"> op monitor interval=25 timeout=25 start-delay=25 \</span><br /><span style="font-family: 'courier new', courier;"> meta target-role=Started</span><br /><span style="font-family: 'courier new', courier;">primitive xstha2_san0_IP IPaddr \</span><br /><span style="font-family: 'courier new', courier;"> params ip=10.10.10.2 cidr_netmask=255.255.255.0 nic=san0</span><br /><span style="font-family: 'courier new', courier;">primitive zpool_data ZFS \</span><br /><span style="font-family: 'courier new', courier;"> 
params pool=test \</span><br /><span style="font-family: 'courier new', courier;"> op start timeout=90 interval=0 \</span><br /><span style="font-family: 'courier new', courier;"> op stop timeout=90 interval=0 \</span><br /><span style="font-family: 'courier new', courier;"> meta target-role=Started</span><br /><span style="font-family: 'courier new', courier;">location xstha1-stonith-pref xstha1-stonith -inf: xstha1</span><br /><span style="font-family: 'courier new', courier;">location xstha1_san0_IP_pref xstha1_san0_IP 100: xstha1</span><br /><span style="font-family: 'courier new', courier;">location xstha2-stonith-pref xstha2-stonith -inf: xstha2</span><br /><span style="font-family: 'courier new', courier;">location xstha2_san0_IP_pref xstha2_san0_IP 100: xstha2</span><br /><span style="font-family: 'courier new', courier;">order zpool_data_order inf: zpool_data ( xstha1_san0_IP )</span><br /><span style="font-family: 'courier new', courier;">location zpool_data_pref zpool_data 100: xstha1</span><br /><span style="font-family: 'courier new', courier;">colocation zpool_data_with_IPs inf: zpool_data xstha1_san0_IP</span><br /><span style="font-family: 'courier new', courier;">property cib-bootstrap-options: \</span><br /><span style="font-family: 'courier new', courier;"> have-watchdog=false \</span><br /><span style="font-family: 'courier new', courier;"> dc-version=1.1.15-e174ec8 \</span><br /><span style="font-family: 'courier new', courier;"> cluster-infrastructure=corosync \</span><br /><span style="font-family: 'courier new', courier;"> stonith-action=poweroff \</span><br /><span style="font-family: 'courier new', courier;"> no-quorum-policy=stop</span></div>
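<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;">In case the plain numeric values are part of the problem, this is the variant I would try next (just a sketch: the explicit "s" units and the wider gap between the two delays are my own guess, not something I have verified on this cluster):</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"><span style="font-family: 'courier new', courier;"># xstha1-stonith (fences xstha1, so it runs on xstha2):</span><br /><span style="font-family: 'courier new', courier;">#   ... pcmk_delay_base=1s ...</span><br /><span style="font-family: 'courier new', courier;"># xstha2-stonith (fences xstha2, so it runs on xstha1):</span><br /><span style="font-family: 'courier new', courier;">#   ... pcmk_delay_base=10s ...</span></div>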
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"> </div>
<div id="wt-mailcard">
<div> </div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>Sonicle S.r.l. </strong>: <a href="http://www.sonicle.com/" target="_new">http://www.sonicle.com</a></span></div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>Music: </strong><a href="http://www.gabrielebulfon.com/" target="_new">http://www.gabrielebulfon.com</a></span></div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>eXoplanets : </strong><a href="https://gabrielebulfon.bandcamp.com/album/exoplanets">https://gabrielebulfon.bandcamp.com/album/exoplanets</a></span></div>
<div> </div>
</div>
<div style="font-family: tahoma,arial,helvetica,sans-serif; font-size: 14px;"><tt><br /><br /><br />----------------------------------------------------------------------------------<br /><br />Da: Andrei Borzenkov <arvidjaar@gmail.com><br />A: users@clusterlabs.org <br />Data: 13 dicembre 2020 7.50.57 CET<br />Oggetto: Re: [ClusterLabs] Antw: [EXT] Recoveing from node failure<br /><br /></tt></div>
<blockquote style="border-left: #000080 2px solid; margin-left: 5px; padding-left: 5px;"><tt>12.12.2020 20:30, Gabriele Bulfon wrote:<br />> Thanks, I will experiment this.<br />> <br />> Now, I have a last issue about stonith.<br />> I tried to reproduce a stonith situation, by disabling the network interface used for HA on node 1.<br />> Stonith is configured with ipmi poweroff.<br />> What happens, is that once the interface is down, both nodes tries to stonith the other node, causing both to poweroff...<br /><br />Yes, this is expected. The options are basically<br /><br />1. Have separate stonith resource for each node and configure static<br />(pcmk_delay_base) or random dynamic (pcmk_delay_max) delays to avoid<br />both nodes starting stonith at the same time. This does not take<br />resources in account.<br /><br />2. Use fencing topology and create pseudo-stonith agent that does not<br />attempt to do anything but just delays for some time before continuing<br />with actual fencing agent. Delay can be based on anything including<br />resources running on node.<br /><br />3. 
If you are using pacemaker 2.0.3+, you could use new<br />priority-fencing-delay feature that implements resource-based priority<br />fencing:<br /><br />+ controller/fencing/scheduler: add new feature 'priority-fencing-delay'<br />Optionally derive the priority of a node from the<br />resource-priorities<br />of the resources it is running.<br />In a fencing-race the node with the highest priority has a certain<br />advantage over the others as fencing requests for that node are<br />executed with an additional delay.<br />controlled via cluster option priority-fencing-delay (default = 0)<br /><br /><br />See also https://www.mail-archive.com/users@clusterlabs.org/msg10328.html<br /><br />> I would like the node running all resources (zpool and nfs ip) to be the first trying to stonith the other node.<br />> Or is there anything else better?<br />> <br />> Here is the current crm config show:<br />> <br /><br />It is unreadable<br /><br />> node 1: xstha1 \ attributes standby=off maintenance=offnode 2: xstha2 \ attributes standby=off maintenance=offprimitive xstha1-stonith stonith:external/ipmi \ params hostname=xstha1 ipaddr=192.168.221.18 userid=ADMIN passwd="******" interface=lanplus \ op monitor interval=25 timeout=25 start-delay=25 \ meta target-role=Startedprimitive xstha1_san0_IP IPaddr \ params ip=10.10.10.1 cidr_netmask=255.255.255.0 nic=san0primitive xstha2-stonith stonith:external/ipmi \ params hostname=xstha2 ipaddr=192.168.221.19 userid=ADMIN passwd="******" interface=lanplus \ op monitor interval=25 timeout=25 start-delay=25 \ meta target-role=Startedprimitive xstha2_san0_IP IPaddr \ params ip=10.10.10.2 cidr_netmask=255.255.255.0 nic=san0primitive zpool_data ZFS \ params pool=test \ op start timeout=90 interval=0 \ op stop timeout=90 interval=0 \ meta target-role=Startedlocation xstha1-stonith-pref xstha1-stonith -inf: xstha1location xstha1_san0_IP_pref xstha1_san0_IP 100: xstha1location xstha2-stonith-pref xstha2-stonith -inf: xstha2location 
xstha2_san0_IP_pref xstha2_san0_IP 100: xstha2order zpool_data_order inf: zpool_data ( xstha1_san0_IP )location zpool_data_pref zpool_data 100: xstha1colocation zpool_data_with_IPs inf: zpool_data xstha1_san0_IPproperty cib-bootstrap-options: \ have-watchdog=false \ dc-version=1.1.15-e174ec8 \ cluster-infrastructure=corosync \ stonith-action=poweroff \ no-quorum-policy=stop<br />> <br />> Thanks!<br />> Gabriele<br />> <br />> <br />> Sonicle S.r.l. : http://www.sonicle.com<br />> Music: http://www.gabrielebulfon.com<br />> eXoplanets : https://gabrielebulfon.bandcamp.com/album/exoplanets<br />> <br />> <br />> <br />> <br />> <br />> ----------------------------------------------------------------------------------<br />> <br />> Da: Andrei Borzenkov <arvidjaar@gmail.com><br />> A: users@clusterlabs.org <br />> Data: 11 dicembre 2020 18.30.29 CET<br />> Oggetto: Re: [ClusterLabs] Antw: [EXT] Recoveing from node failure<br />> <br />> <br />> 11.12.2020 18:37, Gabriele Bulfon wrote:<br />>> I found I can do this temporarily:<br />>> <br />>> crm config property cib-bootstrap-options: no-quorum-policy=ignore<br />>> <br />> <br />> All two node clusters I remember run with setting forever :)<br />> <br />>> then once node 2 is up again:<br />>> <br />>> crm config property cib-bootstrap-options: no-quorum-policy=stop<br />>> <br />>> so that I make sure nodes will not mount in another strange situation.<br />>> <br />>> Is there any better way? <br />> <br />> "better" is subjective, but ...<br />> <br />>> (such as ignore until everything is back to normal then consider stop again)<br />>> <br />> <br />> That is what stonith does. Because quorum is pretty much useless in two<br />> node cluster, as I already said all clusters I have seen used<br />> no-quorum-policy=ignore and stonith-enabled=true. 
It means when node<br />> boots and other node is not available stonith is attempted; if stonith<br />> succeeds pacemaker continues with starting resources; if stonith fails,<br />> node is stuck.<br />> <br />> _______________________________________________<br />> Manage your subscription:<br />> https://lists.clusterlabs.org/mailman/listinfo/users<br />> <br />> ClusterLabs home: https://www.clusterlabs.org/<br /><br /></tt></blockquote></body></html>