[ClusterLabs] questions about startup fencing
Andrei Borzenkov
arvidjaar at gmail.com
Thu Nov 30 06:47:20 EST 2017
On Thu, Nov 30, 2017 at 1:39 PM, Gao,Yan <ygao at suse.com> wrote:
> On 11/30/2017 09:14 AM, Andrei Borzenkov wrote:
>>
>> On Wed, Nov 29, 2017 at 6:54 PM, Ken Gaillot <kgaillot at redhat.com> wrote:
>>>
>>>
>>> The same scenario is why a single node can't have quorum at start-up in
>>> a cluster with "two_node" set. Both nodes have to see each other at
>>> least once before they can assume it's safe to do anything.
>>>
>>
>> Unless we set no-quorum-policy=ignore, in which case it will proceed
>> after fencing the other node. As far as I understand, this is the only
>> way to get the number of active cluster nodes below quorum, right?
>
> To be safe, "two_node: 1" automatically enables "wait_for_all". Of course
> one can explicitly disable "wait_for_all" if they know what they are doing.
>
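For reference, a sketch of the relevant corosync.conf stanza (values
taken from my two-node test setup, so treat it as illustrative):

    quorum {
            provider: corosync_votequorum
            expected_votes: 2
            two_node: 1
            # two_node: 1 implies wait_for_all: 1; override it
            # explicitly only if you know what you are doing:
            # wait_for_all: 0
    }

no-quorum-policy is a Pacemaker cluster property rather than a corosync
setting; with crmsh it is set via e.g.:

    ha1:~ # crm configure property no-quorum-policy=ignore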
Well ... wait_for_all does not seem to save us here:
ha1:~ # crm corosync status
Printing ring status.
Local node ID 1084766299
RING ID 0
        id      = 192.168.56.91
        status  = ring 0 active with no faults

Quorum information
------------------
Date:             Thu Nov 30 19:09:57 2017
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          1084766299
Ring ID:          412
Quorate:          No

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           1 Activity blocked
Flags:            2Node WaitForAll

Membership information
----------------------
    Nodeid      Votes Name
1084766299          1 ha1 (local)
ha1:~ #
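(The same votequorum state can also be read directly, without crmsh; as
far as I know "crm corosync status" wraps this tool:

    ha1:~ # corosync-quorumtool -s
)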
ha1:~ # crm_mon -1r
Stack: corosync
Current DC: ha1 (version 1.1.16-4.8-77ea74d) - partition WITHOUT quorum
Last updated: Thu Nov 30 19:08:03 2017
Last change: Thu Nov 30 11:05:03 2017 by root via cibadmin on ha1

2 nodes configured
3 resources configured

Online: [ ha1 ]
OFFLINE: [ ha2 ]

Full list of resources:

 stonith-sbd    (stonith:external/sbd): Started ha1
 Master/Slave Set: ms_Stateful_1 [rsc_Stateful_1]
     Masters: [ ha1 ]
     Stopped: [ ha2 ]
ha1:~ #
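So: wait_for_all is enabled (Flags: 2Node WaitForAll), votequorum
reports activity blocked, and the partition is without quorum, yet
Pacemaker has promoted a master on ha1. That only makes sense if
no-quorum-policy=ignore is in effect here, which can be checked with
something like:

    ha1:~ # crm configure show | grep no-quorum-policy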