[ClusterLabs] Antw: [EXT] Stonith failing

Gabriele Bulfon gbulfon at sonicle.com
Fri Aug 14 09:09:04 EDT 2020


Thanks to all your suggestions, I now have both systems with stonith configured via IPMI.
 
Two questions:
- how can I simulate a stonith event to check that everything is OK?
- given that each node has a stonith device targeting the other, how can I be sure the two nodes will not try to stonith each other once they lose communication?
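Both checks can be sketched with standard Pacemaker tooling. The node names (node1/node2) and the device name (ipmi-node2) below are placeholders; pcs may not be packaged for illumos, but crmsh offers equivalents:

```shell
# 1) Ask the cluster to fence a node directly; the surviving node's
#    IPMI stonith device should power-cycle the target:
stonith_admin --reboot node2
# (equivalently: pcs stonith fence node2, or: crm node fence node2)

# 2) Simulate a real failure: break cluster communication on node2
#    and watch node1 fence it. Run on node2:
killall -9 corosync        # or block the cluster interconnect

# For the fence-race concern: give the stonith devices a random or
# fixed delay so one node wins the shoot-out instead of both firing
# simultaneously during a split:
pcs stonith update ipmi-node2 pcmk_delay_max=10
# (or set a fixed pcmk_delay_base on the device protecting the
#  preferred node)
```

With `pcmk_delay_max` set, the node whose device draws the shorter delay fences first, and the loser is dead before its own fence action fires.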
 
:)
Thanks!
Gabriele
 
 
Sonicle S.r.l.: http://www.sonicle.com
Music: http://www.gabrielebulfon.com
Quantum Mechanics: http://www.cdbaby.com/cd/gabrielebulfon
From: Gabriele Bulfon
To: Cluster Labs - All topics related to open-source clustering welcomed
Date: 29 July 2020 14:22:42 CEST
Subject: Re: [ClusterLabs] Antw: [EXT] Stonith failing
 
It is a ZFS-based illumos system.
I don't think SBD is an option.
Is there a reliable ZFS-based stonith?
 
Gabriele
 
 
From: Andrei Borzenkov
To: Cluster Labs - All topics related to open-source clustering welcomed
Date: 29 July 2020 9:46:09 CEST
Subject: Re: [ClusterLabs] Antw: [EXT] Stonith failing
 
On Wed, Jul 29, 2020 at 9:01 AM Gabriele Bulfon <gbulfon at sonicle.com> wrote:
That one was taken from a specific implementation on Solaris 11.
The situation is a dual-node server with a shared storage controller: both nodes see the same disks concurrently.
Here we must be sure that the two nodes are not going to import/mount the same zpool at the same time, or we will encounter data corruption.
 
An ssh-based "stonith" cannot guarantee that.
 
Node 1 will be preferred for pool 1 and node 2 for pool 2; only when one of the nodes goes down or is taken offline should the resources first be freed by the leaving node and then taken over by the other node.
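That preference can be expressed with location constraints. A minimal sketch, assuming the zpool resources are named pool1/pool2 and the nodes node1/node2 (all placeholder names; crmsh equivalents exist where pcs is unavailable):

```shell
# Prefer node1 for pool1 and node2 for pool2; on failover the survivor
# takes over, and stonith guarantees the leaving node is really gone
# before the zpool is imported elsewhere.
pcs constraint location pool1 prefers node1=100
pcs constraint location pool2 prefers node2=100
```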
 
Would you suggest one of the available stonith agents in this case?
 
 
IPMI, managed PDU, SBD ...
In practice, the only stonith method that keeps working during a complete node outage, including loss of all power supplies, is SBD.
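For the IPMI option, a typical layout is one fence_ipmilan device per node, each constrained away from the node it fences, so a node never relies on itself to kill its peer. The IP addresses, credentials, and names below are placeholders:

```shell
# One fence_ipmilan device per node, runnable only on the *other* node.
pcs stonith create fence-node1 fence_ipmilan \
    ip=10.0.0.101 username=admin password=secret lanplus=1 \
    pcmk_host_list=node1
pcs stonith create fence-node2 fence_ipmilan \
    ip=10.0.0.102 username=admin password=secret lanplus=1 \
    pcmk_host_list=node2
pcs constraint location fence-node1 avoids node1
pcs constraint location fence-node2 avoids node2
```

Note the limitation Andrei points at: an IPMI BMC usually shares the chassis power with its host, so a total power loss takes the fence device down with the node, and the fence action can never confirm success.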
_______________________________________________
Manage your subscription: https://lists.clusterlabs.org/mailman/listinfo/users
ClusterLabs home: https://www.clusterlabs.org/