[Pacemaker] Fwd: [Openais] cpg behavior on transitional membership change

Jiaju Zhang jjzhang.linux at gmail.com
Sat Sep 3 17:33:15 UTC 2011


It seems this mail was truncated while sending, so I'm posting it again ;)
This time I have also CCed the Pacemaker mailing list.

---------- Forwarded message ----------
From: Jiaju Zhang <jjzhang.linux at gmail.com>
Date: Sun, Sep 4, 2011 at 12:52 AM
Subject: Re: [Openais] cpg behavior on transitional membership change
To: Vladislav Bogdanov <bubble at hoster-ok.com>
Cc: David Teigland <teigland at redhat.com>,
"Openais at lists.linux-foundation.org"
<Openais at lists.linux-foundation.org>


On Fri, Sep 02, 2011 at 10:12:11PM +0300, Vladislav Bogdanov wrote:
> 02.09.2011 20:55, David Teigland wrote:
> [snip]
> >
> > I really can't make any sense of the report, sorry.  Maybe reproduce it
> > without pacemaker, and then describe the specific steps to create the
> > issue and resulting symptoms.  After that we can determine what logs, if
> > any, would be useful.
> >
>
> I just tried to ask a question about cluster components logic based on
> information I discovered from both logs and code analysis. I'm sorry if
> I was unclear in that, probably some language barrier still exists.
>
> Please see my previous mail, I tried to add some explanations why I
> think current logic is not complete.

Hi Vladislav, I think I understand the problem you described ;)
I'd like to give an example to make things clearer.

In a 3-node cluster, for whatever reason, especially under heavy
workload, corosync may detect one node disappearing and then
reappearing. So the membership changes are as follows:
membership 1: nodeA, nodeB, nodeC
membership 2: nodeB, nodeC
membership 3: nodeA, nodeB, nodeC
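
For reference, a daemon like dlm_controld observes these changes
through corosync's CPG confchg callback, which reports the current
members plus who left and who joined since the previous configuration.
A minimal sketch (just the callback shape from corosync/cpg.h, not
dlm_controld's real code):

#include <corosync/cpg.h>
#include <stdio.h>

/* Called by libcpg on every configuration change; for 1 -> 2 nodeA
 * shows up in left_list, for 2 -> 3 it shows up in joined_list again. */
static void confchg_cb(cpg_handle_t handle,
                       const struct cpg_name *group_name,
                       const struct cpg_address *member_list, size_t member_list_entries,
                       const struct cpg_address *left_list, size_t left_list_entries,
                       const struct cpg_address *joined_list, size_t joined_list_entries)
{
    size_t i;

    for (i = 0; i < left_list_entries; i++)
        printf("node %u left (reason %u)\n",
               left_list[i].nodeid, left_list[i].reason);

    for (i = 0; i < joined_list_entries; i++)
        printf("node %u joined\n", joined_list[i].nodeid);
}

/* Registration via cpg_initialize()/cpg_join()/cpg_dispatch() omitted. */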

From the membership change 1 -> 2, dlm_controld knows nodeA is down
and has many things to do, like check_fs_done, check_fencing_done ...
The key point here is that dlm needs to wait until the fencing is
really done before it proceeds. If we employ a cluster filesystem here,
like ocfs2, it also needs the fencing to be really done. I believe that
in the normal case, pacemaker will fence nodeA and then everything
should be OK.
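
In other words, the daemon has to gate its recovery on fencing
completion. A rough sketch of that gating, assuming a hypothetical
fencing_complete() helper that reports whether and when a node was
actually fenced (it is only a placeholder, not an existing API):

#include <stdint.h>
#include <stdbool.h>

struct change {
    uint32_t nodeid;      /* node that left in this membership change */
    uint64_t left_time;   /* when we saw it leave (membership 1 -> 2)  */
};

/* Hypothetical: returns true and fills *fence_time once the node has
 * actually been fenced; stands in for asking fenced/stonith. */
bool fencing_complete(uint32_t nodeid, uint64_t *fence_time);

/* Recovery may proceed only when the node was fenced at or after the
 * moment it left; a later rejoin (2 -> 3) does not make this true. */
static bool check_fencing_done(const struct change *cg)
{
    uint64_t fence_time;

    if (!fencing_complete(cg->nodeid, &fence_time))
        return false;
    return fence_time >= cg->left_time;
}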

However, there is a possibility here that pacemaker won't fence nodeA.
Say nodeA is the original DC of the cluster; when nodeA goes down, the
cluster should elect a new DC. But if the time window between
membership changes 2 and 3 is too small, nodeA comes up again and
joins the election too; then nodeA may be elected DC again and it
won't fence itself.
Andrew, correct me if my understanding of pacemaker is wrong ;)

So I think a membership change should behave like a transaction in the
database or filesystem world: for the membership change 1 -> 2,
everything should be done (e.g. fencing nodeA), no matter whether the
following change 2 -> 3 happens or not. Between a node that magically
disappears and reappears and a node that goes down normally and then
comes back up, ocfs2 and dlm should not be able to see any difference;
all they can do is wait for the fencing to be done.
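
As an illustration of what I mean by "transaction": the daemon could
queue every confchg and refuse to start on change N+1 until the
recovery for change N (including fencing) has finished. All names
below are illustrative, building on the sketch above, not an existing
API:

/* Illustrative only: process membership changes strictly in order, and
 * block on change 1 -> 2 until nodeA's fencing is done, even though
 * change 2 -> 3 (nodeA rejoining) is already queued behind it. */
struct queued_change {
    struct change cg;             /* from the sketch above           */
    struct queued_change *next;   /* next confchg, in arrival order  */
};

void apply_change(const struct change *cg);  /* hypothetical */

static struct queued_change *queue_head;

static void process_changes(void)
{
    struct queued_change *qc;

    while ((qc = queue_head) != NULL) {
        if (!check_fencing_done(&qc->cg))
            return;               /* wait; retry when fencing finishes */
        apply_change(&qc->cg);    /* only now act on the rejoin etc.   */
        queue_head = qc->next;
    }
}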

Any comments or thoughts?

Thanks,
Jiaju
