Hi, it's a single controller, shared between both nodes, on an SM server.
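
(If we end up following the sbd suggestion below: on Linux Pacemaker clusters
a shared-disk sbd setup looks roughly like the sketch that follows. The LUN
path and resource name are placeholders, and I don't know yet whether sbd is
available on our illumos build, so take it as a pointer, not a recipe.)

  # initialize the sbd slot table on a small shared LUN (placeholder path)
  sbd -d /dev/shared_sbd_lun create
  # verify both nodes can read the slot table
  sbd -d /dev/shared_sbd_lun list
  # on Linux, point the sbd daemon at the device (e.g. /etc/sysconfig/sbd):
  #   SBD_DEVICE="/dev/shared_sbd_lun"
  # then add a fencing resource that uses it, e.g. with pcs:
  pcs stonith create fence-sbd fence_sbd devices=/dev/shared_sbd_lun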
<div style="font-family: Tahoma; font-size: 14px; color: #000000;"> </div>
<div style="font-family: Tahoma; font-size: 14px; color: #000000;">Thanks!</div>
<div style="font-family: Tahoma; font-size: 14px; color: #000000;">Gabriele<br /><br />
<div id="wt-mailcard">
<div> </div>
<div> </div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>Sonicle S.r.l. </strong>: <a href="http://www.sonicle.com/" target="_new">http://www.sonicle.com</a></span></div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>Music: </strong><a href="http://www.gabrielebulfon.com/" target="_new">http://www.gabrielebulfon.com</a></span></div>
<div><span style="font-size: 14px; font-family: Helvetica;"><strong>Quantum Mechanics : </strong><a href="http://www.cdbaby.com/cd/gabrielebulfon" target="_new">http://www.cdbaby.com/cd/gabrielebulfon</a></span></div>
</div>

----------------------------------------------------------------------------------

From: Ulrich Windl <Ulrich.Windl@rz.uni-regensburg.de>
To: users@clusterlabs.org
Date: 29 July 2020, 9:26:39 CEST
Subject: [ClusterLabs] Antw: Re: Antw: [EXT] Stonith failing
>>> Gabriele Bulfon <gbulfon@sonicle.com> wrote on 29.07.2020 at 08:01 in message
<603366395.379.1596002482554@www>:
> That one was taken from a specific implementation on Solaris 11.
> The situation is a dual-node server with a shared storage controller: both
> nodes see the same disks concurrently.

You mean you have a dual-controller setup (one controller on each node, both
connected to the same bus)? If so, use sbd!

> Here we must be sure that the two nodes are not going to import/mount the
> same zpool at the same time, or we will encounter data corruption: node 1
> is preferred for pool 1 and node 2 for pool 2. Only when one of the nodes
> goes down or is taken offline should the resources first be freed by the
> leaving node and then taken over by the other node.
>
> Would you suggest one of the available stonith agents in this case?
>
> Thanks!
> Gabriele
>
> ----------------------------------------------------------------------------
> From: Strahil Nikolov
> To: Cluster Labs - All topics related to open-source clustering welcomed;
> Gabriele Bulfon
> Date: 29 July 2020, 6:39:08 CEST
> Subject: Re: [ClusterLabs] Antw: [EXT] Stonith failing
>
> Do you have a reason not to use any stonith already available?
>
> Best Regards,
> Strahil Nikolov
>
> On 28 July 2020 at 13:26:52 GMT+03:00, Gabriele Bulfon wrote:
> Thanks, I attach here the script.
> It basically runs ssh on the other node with no password (must be
> preconfigured via authorized keys) with commands.
> This was taken from a script by OpenIndiana (I think).
> As stated in the comments, we don't want to halt or boot via ssh, only
> reboot. Maybe this is the problem: we should at least have it shut down
> when asked to.
>
> Actually, if I stop corosync on node 2, I don't want it to shut down the
> system, but just let node 1 keep control of all resources.
> The same if I just shut down node 2 manually: node 1 should keep control
> of all resources and release them back on reboot.
> Instead, when I stopped corosync on node 2, the log showed an attempt to
> stonith node 2: why?
>
> Thanks!
> Gabriele
>
> From: Reid Wahl
> To: Cluster Labs - All topics related to open-source clustering welcomed
> Date: 28 July 2020, 12:03:46 CEST
> Subject: Re: [ClusterLabs] Antw: [EXT] Stonith failing
>
> Gabriele,
>
> "No route to host" is a somewhat generic error message when we can't
> find anyone to fence the node. It doesn't mean there's necessarily a
> network routing issue at fault; no need to focus on that error message.
>
> I agree with Ulrich about needing to know what the script does.
> But based on your initial message, it sounds like your custom fence agent
> returns 1 in response to "on" and "off" actions. Am I understanding
> correctly? If so, why does it behave that way? Pacemaker is trying to run
> a poweroff action based on the logs, so it needs your script to support
> an off action.
>
> On Tue, Jul 28, 2020 at 2:47 AM Ulrich Windl
> <Ulrich.Windl@rz.uni-regensburg.de> wrote:
> Gabriele Bulfon <gbulfon@sonicle.com> wrote on 28.07.2020 at 10:56 in
> message:
> Hi, now I have my two nodes (xstha1 and xstha2) with IPs configured by
> Corosync.
> To check how stonith would work, I turned off the Corosync service on the
> second node.
> The first node attempts to stonith the second node and take over its
> resources, but this fails.
> The stonith action is configured to run a custom script that runs ssh
> commands.
> I think you should explain what that script does exactly.
> [...]
>
> --
> Regards,
> Reid Wahl, RHCA
> Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
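
Following Reid's point above, the core fix is that the script must implement
a real "off" action instead of returning 1. Below is a rough dispatch
skeleton for an ssh-based plugin in the cluster-glue "external" style
(invoked as <plugin> <action> [host], exit 0 on success); the node names,
ssh options and remote commands are placeholders, not the actual attached
script:

  #!/bin/sh
  # Sketch of an ssh-based stonith "external" plugin.
  # WARNING: ssh fencing cannot reach a hung or dead node; it is for
  # testing only, which is why sbd / shared-disk fencing was suggested.

  action="$1"    # gethosts | reset | off | on | status | getconfignames | getinfo-*
  victim="$2"    # node to fence (for reset/off/on)

  run_ssh() {
      # BatchMode fails fast if passwordless key login is broken
      ssh -o BatchMode=yes -o ConnectTimeout=10 "root@$victim" "$1"
  }

  case "$action" in
  gethosts)       echo "xstha1 xstha2"; exit 0 ;;   # nodes we can fence
  reset)          run_ssh "reboot"   && exit 0; exit 1 ;;
  off)            # Pacemaker's poweroff request lands here: really shut
                  # the node down instead of returning 1 unconditionally.
                  run_ssh "poweroff" && exit 0; exit 1 ;;
  on)             exit 1 ;;                         # ssh cannot power a node on
  status)         exit 0 ;;
  getconfignames) echo "hostlist"; exit 0 ;;
  getinfo-devid|getinfo-devname)   echo "external/ssh-custom"; exit 0 ;;
  getinfo-devdescr|getinfo-devurl) echo "ssh-based fencing (testing only)"; exit 0 ;;
  getinfo-xml)    echo "<parameters></parameters>"; exit 0 ;;
  *)              exit 1 ;;
  esac

Even with off implemented, node 1 can only fence a peer it can still reach
over the network, so for the shared-zpool setup the sbd route sketched at
the top remains the safer option.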