[ClusterLabs] Two nodes cluster issue
Klaus Wenninger
kwenning at redhat.com
Mon Jul 24 11:38:01 EDT 2017
On 07/24/2017 05:32 PM, Tomer Azran wrote:
> So your suggestion is to use sbd with or without qdevice? What is the
> point of having a qdevice in a two-node cluster if it doesn't help in
> this situation?
If you have a qdevice setup that is already working (meaning that if
the two nodes are split, one of them is quorate and the other is not),
I would use that.
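(To verify that the qdevice is actually contributing its vote you
could, for example, look at something like:

    corosync-quorumtool -s     # overall votes/quorum state, Qdevice should show up
    pcs quorum status          # roughly the same information via pcs
    pcs quorum device status   # qdevice/qnetd connection details, if your pcs supports it

the exact commands and output depend on your corosync and pcs versions.)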
And if you add sbd with just a watchdog (no shared disk) - which should
be supported in CentOS 7.3 (you said somewhere further down that that
is what you are running, iirc) - it is assured that the node that loses
quorum goes down reliably, and that the other node assumes it to be
down after the timeout you configure via the cluster property
stonith-watchdog-timeout.
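(Roughly, and assuming the sbd shipped with CentOS 7.3 or later -
adjust device names and timeouts to your setup - the pieces involved
would look like:

    # on both nodes: load a watchdog driver if there is no hardware one
    modprobe softdog

    # /etc/sysconfig/sbd - watchdog-only mode, so no SBD_DEVICE line
    SBD_WATCHDOG_DEV=/dev/watchdog
    SBD_WATCHDOG_TIMEOUT=5

    # enable sbd and restart the cluster stack so it is picked up
    systemctl enable sbd

    # how long pacemaker waits before assuming a lost node has self-fenced;
    # should be well above SBD_WATCHDOG_TIMEOUT, e.g. twice that
    pcs property set stonith-watchdog-timeout=10s

Not a complete recipe, just to give an idea.)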
>
>
> From: Klaus Wenninger
> Sent: Monday, July 24, 18:28
> Subject: Re: [ClusterLabs] Two nodes cluster issue
> To: Cluster Labs - All topics related to open-source clustering
> welcomed, Tomer Azran
>
>
> On 07/24/2017 05:15 PM, Tomer Azran wrote:
>> I still don't understand why the qdevice concept doesn't help in
>> this situation. Since the master node is down, I would expect the
>> quorum to declare it as dead.
>> Why doesn't that happen?
>
> That is not how quorum works. It just limits the decision-making to
> the quorate subset of the cluster.
> Still, the nodes in unknown state are not guaranteed to be down.
> That is why I suggested quorum-based watchdog fencing with sbd.
> That assures that within a certain time all nodes of the non-quorate
> part of the cluster are down.
>
>>
>>
>>
>> On Mon, Jul 24, 2017 at 4:15 PM +0300, "Dmitri Maziuk"
>> <dmitri.maziuk at gmail.com> wrote:
>>
>>> On 2017-07-24 07:51, Tomer Azran wrote:
>>> > We don't have the ability to use it.
>>> > Is that the only solution?
>>>
>>> No, but I'd recommend thinking about it first. Are you sure you will
>>> care about your cluster working when your server room is on fire?
>>> 'Cause unless you have halon suppression, your server room is a
>>> complete write-off anyway. (Think water from sprinklers hitting rich
>>> chunky volts in the servers.)
>>>
>>> Dima
>>
>>
>> _______________________________________________
>> Users mailing list: Users at clusterlabs.org
>> http://lists.clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>
>