[Pacemaker] Split Site 2-way clusters

Miki Shapiro Miki.Shapiro at coles.com.au
Thu Jan 14 17:44:41 EST 2010


Confused.

I *am* running DRBD in dual-master mode (apologies, I should have mentioned that earlier), and there will be both WAN clients and local-to-datacenter clients writing to both nodes on both ends. It's safe to assume the clients will not know of the split.

In a WAN split I need to ensure that the node whose copy of the DRBD volume will be kept once resync happens stays up, and that the node which will get blown away and re-synced/overwritten becomes dead as soon as possible.

A node successfully taking on data from clients while quorumless (frozen, but still providing service), and then discarding the data it collected in the meantime once it realizes the other node has quorum, is not acceptable.
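
For what it's worth, one mechanism aimed at exactly this is DRBD's resource-level fencing hooked into the CRM. A minimal drbd.conf sketch, assuming DRBD 8.3+ (which ships the crm-fence-peer.sh handler scripts); the resource name r0 is just a placeholder:

    resource r0 {
      disk {
        # freeze I/O and call the fence-peer handler when the peer is lost
        fencing resource-and-stonith;
      }
      handlers {
        # adds a CIB constraint that blocks promotion on the outdated peer
        fence-peer            "/usr/lib/drbd/crm-fence-peer.sh";
        # removes that constraint again once resync has completed
        after-resync-target   "/usr/lib/drbd/crm-unfence-peer.sh";
      }
    }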

To recap what I understood so far:

1.       CRM availability on the multicast channel drives DC election, but DC election is irrelevant to us here.

2.       CRM availability on the multicast channel (rather than resource failure) drives the who-is-in-quorum-and-who-is-not decisions [not sure here... correct? Or does resource failure drive quorum?] (see the commands sketched just after this list for how to observe the outcome).

3.       Steve to clarify what happens quorum-wise if one of the three nodes sees both others, but the other two only see that first node ("broken triangle"), and whether this behaviour may differ depending on whether that first node happens to be the DC at the time or not.
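
On point 2, the quorum state each partition has settled on can at least be observed directly; a quick sketch, assuming Pacemaker 1.0 on corosync (with classic openais the messaging-layer tool names differ):

    # does this partition believe it has quorum?
    crm_mon -1 | grep -i quorum        # "partition with quorum" vs "partition WITHOUT quorum"
    cibadmin -Q | grep have-quorum     # have-quorum="1" or "0" on the <cib> element

    # membership/ring state as seen by the messaging layer
    corosync-cfgtool -s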

Given that anyone who goes about building a production cluster would want to identify all likely failure modes and be able to predict how the cluster behaves in each one, is there any user-targeted doco/rtfm material one could read regarding how quorum establishment works in such scenarios?

Setting up a 3-way with intermittent WAN links without getting a clear understanding in advance of how the software will behave is ... scary :)


From: Andrew Beekhof [mailto:andrew at beekhof.net]
Sent: Thursday, 14 January 2010 7:56 PM
To: pacemaker at oss.clusterlabs.org
Subject: Re: [Pacemaker] Split Site 2-way clusters


On Thu, Jan 14, 2010 at 1:40 AM, Miki Shapiro <Miki.Shapiro at coles.com.au> wrote:
When you suggest:
>>> What about setting no-quorum-policy to freeze and making the third node a full cluster member (that just doesn't run any resources)?
>>> That way, if you get a 1-1-1 split the nodes will leave all services running where they were while they wait for quorum.
>>> And if it heals into a 1-2 split, then the majority will terminate the rogue node and acquire all the services.
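
A minimal crm shell sketch of that suggestion (the third node's name "arbiter" is only a placeholder):

    # leave resources where they are when quorum is lost
    crm configure property no-quorum-policy=freeze

    # the third node stays a full member and still counts for quorum/membership,
    # but in standby it will never be asked to run resources
    crm node standby arbiter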

No-quorum-policy 'Freeze' rather than 'Stop' pretty much ASSURES me of getting a split brain for my fileserver cluster. Sounds like the last thing I want to have. Any data local clients write to the cut-off node (and its DRBD split-brain volume) cannot later be reconciled and will need to be discarded. I'd rather not give the local clients that can still reach that node a false sense of security that their data has been written to disk (to a DRBD volume that will be blown away and resynced with the quorum side once connectivity is re-established). The 'Stop' policy sounds safer.
Are you using DRBD in dual-master mode?
Because freeze won't start anything new, so if you had one writer before the split, you'll still only have one during it.
Then when the split heals DRBD can resync as normal. Right?
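
And if a split brain between two primaries does happen anyway, DRBD's automatic recovery policies decide whose changes get thrown away on reconnect. A drbd.conf net-section sketch, where the chosen values are only one possible policy, not a recommendation:

    net {
      allow-two-primaries;
      # neither side primary on reconnect: if only one side changed data, keep that side
      after-sb-0pri discard-zero-changes;
      # one side primary: the current secondary's modifications are discarded
      after-sb-1pri discard-secondary;
      # both sides still primary: no automatic resolution, stay disconnected (manual recovery)
      after-sb-2pri disconnect;
    }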

Question 1: Am I missing anything re stop/ignore no-quorum policy?

Possibly for the freeze option.

Further, I'm having more trouble working out a list of tabulated failure modes for this 3-way scenario, where outage-prone WAN links between the three sites get introduced.
Question 2: If one WAN link is broken - (A) can speak to (B), (B) can speak to (C), but (A) CANNOT speak to (C), what drives the quorum decision and what would happen? In particular, what would happen if the node that can see both is the DC?

I'm not sure how totem works in this situation, but whether the node that can see both is the DC is irrelevant.
Membership happens at a much lower level.

I _think_ you'd end up with a 2-1 split, but I'm not sure how it decides whether it's A-B or B-C.
Steve could probably tell you.

Question 3: Just to verify I got this right - what drives pacemaker's STONITH events,
[a] RESOURCE monitoring failure,
or
[b] CRM's crosstalk that establishes quorum-state / DC-election?

It's not an XOR condition.
Membership events (from openais) AND resource failures can both result in fencing.
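
Both triggers can be seen in the configuration. A crm shell sketch, where the stonith plugin and all parameter values are placeholders:

    # fencing driven by membership/quorum events needs stonith enabled
    # and at least one working stonith device
    crm configure property stonith-enabled=true
    crm configure primitive st-node1 stonith:external/ipmi \
        params hostname=node1 ipaddr=10.0.0.101 userid=admin passwd=secret \
        op monitor interval=60s

    # a failed monitor on an ordinary resource can also escalate to fencing
    crm configure primitive fs0 ocf:heartbeat:Filesystem \
        params device=/dev/drbd0 directory=/export fstype=ext3 \
        op monitor interval=30s on-fail=fence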

But please forget about DCs and DC elections - they are really not relevant to any of this.
