[ClusterLabs] Simple clarifications regarding pacemaker
lists at alteeve.ca
Tue Apr 26 11:09:41 EDT 2016
On 26/04/16 08:15 AM, RaSca wrote:
> On 26/04/2016 13:39, K Aravind wrote:
>> Hi all
>> Have some doubts regarding pacemaker
>> It would be of great help if you could help me out
>> 1. Pacemaker handles split-brains using quorum and stonith only, right?
>> It is not a case where a split-brain is resolved without the use of
>> stonith and quorum, am I right?
> Not by default. But if you set no-quorum-policy according to what you
> want to achieve you may obtain something similar to a split brain
Note that quorum has nothing to do with split-brain protection, only
stonith does. Quorum is used to prevent a sole node (that is operating
properly, say after a reboot) from trying to run cluster services when
it is inquorate.
Said another way:
Stonith/fencing is a tool to protect your cluster when things go wrong.
Quorum is a tool to help your nodes make decisions when things are
working properly.
>> 2. Resource stickiness/location cannot be used to solve split-brain, right?
> Again, it depends. For example if you set a location based upon the
> network connectivity, then you may obtain that a disconnected node will
> not run any resource.
> That said if you want to be 100% sure you must use stonith.
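As a sketch of how the properties discussed above are set with the pcs CLI (stonith-enabled and no-quorum-policy are real Pacemaker cluster options; the fence device name, IP, and credentials below are hypothetical placeholders):

```shell
# Fencing is the only reliable split-brain protection: enable it.
pcs property set stonith-enabled=true

# Example fence device (fence_ipmilan is a common agent; the device
# name, host, IP, and credentials here are made-up placeholders).
pcs stonith create fence_node1 fence_ipmilan \
    pcmk_host_list="node1" ip="10.0.0.1" \
    username="admin" password="secret"

# Tell the cluster what to do with resources when it loses quorum
# (valid values include: stop, ignore, freeze, suicide).
pcs property set no-quorum-policy=stop
```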
>> 3. Let's say I have quorum enabled but not stonith for a two-node
>> cluster. Can it be used for solving split-brain? If so, can the user
>> application add rules for quorum to elect the appropriate/preferred
>> master, similar to resource stickiness?
> Again, see no-quorum-policy.
To clarify again: no. Quorum does not help prevent split-brain, and
quorum cannot be used on 2-node clusters in any case. Quorum only works
on 3+ node clusters because quorum is calculated as:
50% of total votes, plus 1, rounded down.
2 nodes == 2 / 2 = 1 + 1 = 2 rounded down to 2.
3 Nodes == 3 / 2 = 1.5 + 1 = 2.5 rounded down to 2.
4 Nodes == 4 / 2 = 2 + 1 = 3 rounded down to 3.
5 Nodes == 5 / 2 = 2.5 + 1 = 3.5 rounded down to 3.
So, quorum in a 2-node cluster is '2', meaning the cluster will shut
down if one node fails. Not very HA :). 3 and 4 node clusters can each
lose 1 node, 5 nodes can lose 2 nodes, and so on.
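The arithmetic above can be sketched as a small shell helper (an illustration only, not part of Pacemaker or corosync):

```shell
# Quorum votes needed: floor(n/2) + 1 (integer division rounds down).
quorum_votes() {
    echo $(( $1 / 2 + 1 ))
}

# How many nodes can fail while the remainder still holds quorum.
tolerated_failures() {
    echo $(( $1 - ($1 / 2 + 1) ))
}

# Reproduce the table from the post above.
for n in 2 3 4 5; do
    echo "$n nodes: quorum = $(quorum_votes $n), can lose $(tolerated_failures $n)"
done
```

This makes the 2-node problem visible at a glance: quorum for 2 nodes is 2, so the cluster tolerates zero failures.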
>> 4. Can I configure the default role for a node?
>> Meaning, let's say I have a two-node master/slave setup:
>> the master node goes down and the slave is elected as master. So far so
>> good. Now let's say the older master which was down comes up. Is there
>> a way I can configure it to be slave by default, meaning whenever a
>> node comes back up it becomes a slave and not a master?
> You can set the default resource-stickiness to INFINITY in this case.
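With pcs, the stickiness suggestion above looks like this (a sketch; resource-stickiness is a real Pacemaker resource meta-attribute):

```shell
# Make every resource prefer to stay where it is currently running,
# so a recovered node does not pull the master role back to itself.
pcs resource defaults resource-stickiness=INFINITY
```

Infinite stickiness means a recovered node only takes over again after the current master actually fails, which is exactly the "come back as slave" behaviour asked about.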
>> 5. Can I have node roles other than master or slave in a master/slave
>> cluster, say "unknown"?
> No, AFAIK.
>> 6. Is there a notification from pacemaker when a split-brain happens,
>> when an election begins,
>> and when an election is done?
> Just in the logs, but you can set up something like email notification
> for each action. I don't know how things are now, but in the past those
> were a lot of mail (and when I say a lot I mean a LOT).
With stonith enabled, a failed fence will leave the cluster hung, by
design. The logic is that, as bad as a hung cluster is, it is better
than risking a split-brain (which can lead to data loss / corruption,
confused switches, etc).
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?