[ClusterLabs] Fast-failover on 2 nodes + qnetd: qdevice connection disrupted.
Ken Gaillot
kgaillot at redhat.com
Mon May 6 11:11:05 EDT 2024
On Mon, 2024-05-06 at 10:05 -0500, Ken Gaillot wrote:
> On Fri, 2024-05-03 at 16:18 +0300, alexey at pavlyuts.ru wrote:
> > Hi,
> >
> > > > Thanks a lot for your suggestion; I probably need to think about
> > > > this approach too. However, the project environment is not a good
> > > > one to rely on fencing in, and moreover, we can't control the
> > > > bottom layer in a trusted way.
> > >
> > > That is a problem. A VM being gone is not the only possible failure
> > > scenario. For example, a kernel or device driver issue could
> > > temporarily freeze the node, or networking could temporarily drop
> > > out, causing the node to appear lost to Corosync, but the node could
> > > be responsive again (with the app running) after the app has been
> > > started on the other node.
> > >
> > > If there's no problem with the app running on both nodes at the same
> > > time, then that's fine, but that's rarely the case. If an IP address
> > > is needed, or shared storage is used, simultaneous access will cause
> > > problems that only fencing can avoid.
> > Pacemaker takes a very pessimistic approach if you set resources to
> > require quorum. If a network outage triggers changes, it will ruin
> > quorum first and only after that try to rebuild it. Therefore there
> > are two questions:
> > 1. How to keep the active app running?
> > 2. How to prevent two copies from being started?
> > As far as I can tell, quorum-dependent resource management performs
> > well on both points.
>
> That's fine as long as the cluster is behaving properly. Fencing is
> for when it's not.
>
> Quorum prevents multiple copies only if the nodes can communicate and
> operate normally. There are many situations when that's not true: a
> device driver or kernel bug locks up a node for more than the Corosync
> token timeout, CPU or I/O load gets high enough for a node to become
> unresponsive for long stretches of time, a failing network controller
> randomly drops large numbers of packets, etc.
>
> In such situations, a node that appears lost to the other nodes may
> actually just be temporarily unreachable, and may come back at any
> moment (with its resources still active).
>
> If an IP address is active in more than one location, packets will be
> randomly routed to one node or another, rendering all communication
> via that IP useless. If an application that uses shared storage is
> active in more than one location, data can be corrupted. And so forth.
>
> Fencing ensures that the lost node is *definitely* not running
> resources before recovering them elsewhere.
>
> > > > my goal is to keep the app from moving (i.e. restarting) for as
> > > > long as possible. This means only two kinds of moves are accepted:
> > > > a failure of the current host (move to the other host with a
> > > > restart) or an admin move (a managed move at a chosen time, with a
> > > > restart). Any other trouble should NOT trigger an app
> > > > down/restart, except a total connectivity loss where there is no
> > > > second node and no arbiter => stop the service.
> > >
> > > Total connectivity loss may not be permanent. Fencing ensures the
> > > connectivity will not be restored after the app is started
> > > elsewhere.
> > Nothing bad happens if connectivity is restored and the node is
> > alive but has taken the app down because it lost quorum.
>
> Again, that assumes it is operating normally. HA is all about the
> times when it's not.
>
> > > Pacemaker 2.0.4 and later supports priority-fencing-delay which
> > > allows the node currently running the app to survive. The node not
> > > running the app will wait the configured amount of time before
> > > trying to fence the other node. Of course that does add more time
> > > to the recovery if the node running the app is really gone.
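To illustrate the above: a minimal sketch with pcs, where the resource
name "app" and the 15-second delay are only examples, might look like:

    # Give the resource a priority so the node currently running it is
    # preferred to survive a fence race
    pcs resource meta app priority=10
    # The node NOT running a priority resource waits 15s before fencing
    pcs property set priority-fencing-delay=15s

Exact syntax may differ slightly depending on your pcs version.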
> > I'm not sure I understand how that works. Imagine connectivity is
> > lost only between the nodes, not to the other parts, and Node1 runs
> > the app. Everything is fine, Node2 is off. So we start Node2 with the
> > intention of restoring the cluster. Node2 starts, tries to find its
> > partner, fails, and fences Node1 out, while Node1 doesn't even know
> > that Node2 has started.
> >
> > Is that correct?
>
> Before connectivity is lost, the cluster is presumably in a normal
> state. That means the two nodes are passing the Corosync token between
> them.
>
> When connectivity is lost, both nodes see the token lost at roughly
> the same time. With priority-fencing-delay, node2 waits the specified
> amount of time before fencing node1. node1 does not wait, so it fences
> node2, and node2 never gets the chance to fence node1.
I forgot to mention:
The fence_heuristics_ping fence agent can be used instead of or in
addition to priority-fencing-delay. This agent has to be combined with
a "real" fencing agent in a fencing topology. If a node can't ping an
IP (generally the gateway or something just beyond it), it will refuse
to execute the "real" fencing. This is intended for services that need
to be externally available.
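As a rough sketch, assuming pcs, fence_xvm as the "real" agent, and the
gateway 192.168.100.254 as the ping target (all of those are just
examples, and option names can vary with the fence-agents and pcs
versions):

    # Heuristics agent: refuses to act if the ping target is unreachable
    pcs stonith create ping-check fence_heuristics_ping ping_targets=192.168.100.254
    # "Real" fencing agent for node1
    pcs stonith create fence-node1 fence_xvm port=node1 pcmk_host_list=node1
    # Same topology level: devices are tried in order and both must
    # succeed, so the ping check gates the real fencing
    pcs stonith level add 1 node1 ping-check fence-node1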
If you do have quorum in addition to fencing, only the node with quorum
will initiate fencing, so you don't need any fencing delay or
heuristics in that case.
> > > > Therefore, quorum-based management seems like the better way for
> > > > my exact case.
> > >
> > > Unfortunately it's unsafe without fencing.
> > You may say I am stupid, but I really can't understand why quorum-
> > based resource management is unreliable without fencing.
> > Could one host hold quorum a bit longer, while another host has
> > already gained quorum and started the app? Probably it could.
> > But fencing is not immediate either, so it can't protect 100% against
> > short-lived parallel runs.
>
> Certainly -- with fencing enabled, the cluster will not recover
> resources elsewhere until fencing succeeds.
>
> > > That does complicate the situation. Ideally there would be some way
> > > to request the VM to be immediately destroyed (whether via
> > > fence_xvm, a cloud provider API, or similar).
> > What do you mean by "destroyed"? Do you mean taken down?
>
> Correct. For fencing purposes, it should not be a clean shutdown but
> an immediate halt.
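(With Proxmox underneath, that distinction is roughly "qm shutdown <vmid>",
which attempts a clean guest shutdown, versus "qm stop <vmid>", which halts
the VM immediately; the latter is what a fence agent would want to trigger.)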
>
> > > > Please keep in mind that all of the above comes from common sense
> > > > and rather poor fundamental knowledge of clustering. Please be so
> > > > kind as to correct me if I am wrong on any point.
> > > >
> > > > Sincerely,
> > > >
> > > > Alex
> > > > -----Original Message-----
> > > > From: Users <users-bounces at clusterlabs.org> On Behalf Of Ken
> > > > Gaillot
> > > > Sent: Thursday, May 2, 2024 5:55 PM
> > > > To: Cluster Labs - All topics related to open-source clustering
> > > > welcomed <users at clusterlabs.org>
> > > > Subject: Re: [ClusterLabs] Fast-failover on 2 nodes + qnetd:
> > > > qdevice connection disrupted.
> > > >
> > > > I don't see fencing times in here -- fencing is absolutely
> > > > essential.
> > > >
> > > > With the setup you describe, I would drop qdevice. With fencing,
> > > > quorum is not strictly required in a two-node cluster (two_node
> > > > should be set in corosync.conf). You can set priority-fencing-
> > > > delay to reduce the chance of simultaneous fencing. For VMs, you
> > > > can use fence_xvm, which is extremely quick.
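For illustration only: the two_node flag goes in the quorum section of
corosync.conf, and with pcs a fence_xvm device could look roughly like
this ("node1" is just an example guest name, and fence_xvm also relies
on fence_virtd running on the VM hosts):

    quorum {
        provider: corosync_votequorum
        two_node: 1
    }

    pcs stonith create fence-node1 fence_xvm port=node1 pcmk_host_list=node1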
> > > >
> > > > On Thu, 2024-05-02 at 02:56 +0300, alexey at pavlyuts.ru wrote:
> > > > > Hi All,
> > > > >
> > > > > I am trying to build an application-specific 2-node failover
> > > > > cluster using Ubuntu 22, Pacemaker 2.1.2 + Corosync 3.1.6 and
> > > > > DRBD 9.2.9, with knet transport.
> > > > >
> > > > > For some reason I can't use a 3-node setup, so I have to use
> > > > > qnetd+qdevice 3.0.1.
> > > > >
> > > > > The main goal is to protect a custom app which is not cluster-
> > > > > aware by itself. It is quite stateful, can't store its state
> > > > > outside memory, and takes some time to converge with other parts
> > > > > of the system, so the best scenario is "failover is a restart
> > > > > with the same config", but each unnecessary restart is painful.
> > > > > So, once a failover is done, the app must remain on the backup
> > > > > node until it fails or an admin pushes it back; this works well
> > > > > with the stickiness parameter.
> > > > >
> > > > > So, the goal is to detect a serving-node failure ASAP and
> > > > > restart the app ASAP on the other node, using DRBD-synced
> > > > > config/data. ASAP means within 5-7 sec, not 30 or more.
> > > > >
> > > > > I tried different combinations of timings and finally got an
> > > > > acceptable result, within 5 sec in the best case. But! The
> > > > > behavior is very unstable.
> > > > >
> > > > > My setup is simple: two nodes as VMs, and one more VM as the
> > > > > arbiter (qnetd). The VMs run under Proxmox and are connected via
> > > > > an external Ethernet switch, to get closer to the real
> > > > > deployment where the "node VMs" will be located on different
> > > > > physical hosts in one rack.
> > > > >
> > > > > Then it was tuned for faster detection and failover.
> > > > > In Corosync, I left the token at the default 1000ms but added
> > > > > "heartbeat_failures_allowed: 3"; this made Corosync catch a node
> > > > > failure in about 200ms (4 x 50ms heartbeat).
> > > > > Both qnetd and qdevice were run with
> > > > > net_heartbeat_interval_min=200 to allow playing with faster
> > > > > heartbeats and detection. Also, quorum.device.net has timeout:
> > > > > 500, sync_timeout: 3000, algo: LMS.
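If I read that correctly, the corosync.conf side of that tuning would
look roughly like the following (the qnetd/qdevice minimum-heartbeat
override is set separately on those daemons, and "arbiter" is just the
qnetd host name from the logs below):

    totem {
        # token left at the default 1000ms
        token: 1000
        heartbeat_failures_allowed: 3
    }
    quorum {
        provider: corosync_votequorum
        device {
            timeout: 500
            sync_timeout: 3000
            model: net
            net {
                host: arbiter
                algorithm: lms
            }
        }
    }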
> > > > >
> > > > > The test is to issue "date +%M:%S.%N && qm stop 201" and then
> > > > > check the logs for the timestamp when the app started on the
> > > > > "backup" host. And, when the backup host boots again, the test
> > > > > is to check the logs to confirm the app was not restarted.
> > > > >
> > > > > Sometimes the switchover works like a charm, but sometimes it
> > > > > is delayed for dozens of seconds.
> > > > > Sometimes when the primary host boots up again, the secondary
> > > > > holds quorum well and keeps the app running; sometimes quorum is
> > > > > lost first (and Pacemaker takes the app down) and then regained,
> > > > > and Pacemaker brings the app up again, so an unwanted restart
> > > > > happens.
> > > > >
> > > > > My investigation shows the difference between the "good" and
> > > > > "bad" cases:
> > > > >
> > > > > Good case: all the logs are clear and reasonable.
> > > > >
> > > > > Bad case: qnetd loses the connection to the second node just
> > > > > after the loss of the "failed" node is detected, and then it may
> > > > > take dozens of seconds to restore it. All this time qdevice is
> > > > > trying to connect to qnetd and failing:
> > > > >
> > > > > Example: host 192.168.100.1 is sent to failure, 100.2 is the one
> > > > > failed over to:
> > > > >
> > > > > From qnetd:
> > > > > May 01 23:30:39 arbiter corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.1:60686 doesn't sent any message during
> > > > > 600ms.
> > > > > Disconnecting
> > > > > May 01 23:30:39 arbiter corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.1:60686 (init_received 1, cluster bsc-
> > > > > test-
> > > > > cluster, node_id 1) disconnect May 01 23:30:39 arbiter
> > > > > corosync-qnetd[6338]: algo-lms: Client
> > > > > 0x55a6fc6785b0 (cluster bsc-test-cluster, node_id 1)
> > > > > disconnect
> > > > > May 01 23:30:39 arbiter corosync-qnetd[6338]: algo-
> > > > > lms: server
> > > > > going down 0
> > > > > > > > This is unexpected down, at normal scenario connection
> > > > > > > > persist
> > > > > May 01 23:30:40 arbiter corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.2:32790 doesn't sent any message during
> > > > > 600ms.
> > > > > Disconnecting
> > > > > May 01 23:30:40 arbiter corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.2:32790 (init_received 1, cluster bsc-
> > > > > test-
> > > > > cluster, node_id 2) disconnect May 01 23:30:40 arbiter
> > > > > corosync-qnetd[6338]: algo-lms: Client
> > > > > 0x55a6fc6363d0 (cluster bsc-test-cluster, node_id 2)
> > > > > disconnect
> > > > > May 01 23:30:40 arbiter corosync-qnetd[6338]: algo-
> > > > > lms: server
> > > > > going down 0
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: New client
> > > > > connected
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: cluster name
> > > > > =
> > > > > bsc-
> > > > > test-cluster
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: tls started =
> > > > > 0
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: tls peer
> > > > > certificate
> > > > > verified = 0
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: node_id = 2
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: pointer =
> > > > > 0x55a6fc6363d0
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: addr_str =
> > > > > ::ffff:192.168.100.2:57736
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: ring id =
> > > > > (2.801)
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: cluster dump:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: client =
> > > > > ::ffff:192.168.100.2:57736, node_id = 2 May 01 23:30:56
> > > > > arbiter
> > > > > corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.2:57736 (cluster bsc-test-cluster, node_id
> > > > > 2)
> > > > > sent initial node list.
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: msg seq num =
> > > > > 99
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: Node list:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: 0 node_id =
> > > > > 1,
> > > > > data_center_id = 0, node_state = not set
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: 1 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = not set May 01 23:30:56
> > > > > arbiter
> > > > > corosync-qnetd[6338]: algo-lms: cluster bsc- test-cluster
> > > > > config_list has 2 nodes May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]:
> > > > > Algorithm
> > > > > result vote is No change May 01 23:30:56 arbiter corosync-
> > > > > qnetd[6338]:
> > > > > Client
> > > > > ::ffff:192.168.100.2:57736 (cluster bsc-test-cluster, node_id
> > > > > 2)
> > > > > sent membership node list.
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: msg seq num =
> > > > > 100
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: ring id =
> > > > > (2.801)
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: heuristics =
> > > > > Undefined
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: Node list:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: 0 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = not set May 01 23:30:56
> > > > > arbiter
> > > > > corosync-qnetd[6338]:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: algo-lms:
> > > > > membership
> > > > > list from node 2 partition (2.801) May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: algo-util: partition
> > > > > (2.801) (0x55a6fc67e110) has 1 nodes May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: algo-lms: Only 1 partition. This is
> > > > > votequorum's problem, not ours May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: Algorithm result vote is ACK May 01
> > > > > 23:30:56 arbiter corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.2:57736 (cluster bsc-test-cluster, node_id
> > > > > 2)
> > > > > sent quorum node list.
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: msg seq num =
> > > > > 101
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: quorate = 0
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: Node list:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: 0 node_id =
> > > > > 1,
> > > > > data_center_id = 0, node_state = dead
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: 1 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = member May 01 23:30:56
> > > > > arbiter
> > > > > corosync-qnetd[6338]:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: algo-lms:
> > > > > quorum
> > > > > node
> > > > > list from node 2 partition (2.801) May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: algo-util: partition
> > > > > (2.801) (0x55a6fc697e70) has 1 nodes May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: algo-lms: Only 1 partition. This is
> > > > > votequorum's problem, not ours May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: Algorithm result vote is ACK May 01
> > > > > 23:30:56 arbiter corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.2:57736 (cluster bsc-test-cluster, node_id
> > > > > 2)
> > > > > sent quorum node list.
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: msg seq num =
> > > > > 102
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: quorate = 1
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: Node list:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: 0 node_id =
> > > > > 1,
> > > > > data_center_id = 0, node_state = dead
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: 1 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = member May 01 23:30:56
> > > > > arbiter
> > > > > corosync-qnetd[6338]:
> > > > > May 01 23:30:56 arbiter corosync-qnetd[6338]: algo-lms:
> > > > > quorum
> > > > > node
> > > > > list from node 2 partition (2.801) May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: algo-util: partition
> > > > > (2.801) (0x55a6fc669dc0) has 1 nodes May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: algo-lms: Only 1 partition. This is
> > > > > votequorum's problem, not ours May 01 23:30:56 arbiter
> > > > > corosync-qnetd[6338]: Algorithm result vote is ACK May 01
> > > > > 23:30:58 arbiter corosync-qnetd[6338]: Client closed
> > > > > connection
> > > > > May
> > > > > 01
> > > > > 23:30:58 arbiter corosync-qnetd[6338]: Client
> > > > > ::ffff:192.168.100.2:57674 (init_received 0, cluster bsc-
> > > > > test-
> > > > > cluster, node_id 0) disconnect
> > > > > > > At this point resource start on backup host
> > > > >
> > > > > From qdevice:
> > > > >
> > > > > May 01 23:30:40 node2 corosync-qdevice[781]: Server closed
> > > > > connection May 01 23:30:40 node2 corosync-qdevice[781]: algo-
> > > > > lms:
> > > > > disconnected.
> > > > > quorate = 1, WFA = 0
> > > > > May 01 23:30:40 node2 corosync-qdevice[781]: algo-lms:
> > > > > disconnected.
> > > > > reason = 22, WFA = 0
> > > > > May 01 23:30:40 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is NACK May 01 23:30:40 node2 corosync-qdevice[781]: Cast
> > > > > vote
> > > > > timer
> > > > > remains scheduled every 250ms voting NACK.
> > > > > May 01 23:30:40 node2 corosync-qdevice[781]: Sleeping for 161
> > > > > ms
> > > > > before reconnect May 01 23:30:40 node2 corosync-qdevice[781]:
> > > > > Trying
> > > > > connect to qnetd server arbiter:5403 (timeout = 400ms) May 01
> > > > > 23:30:41
> > > > > node2 corosync-qdevice[781]: Connect timeout May 01 23:30:41
> > > > > node2
> > > > > corosync-qdevice[781]: algo-lms: disconnected.
> > > > > quorate = 1, WFA = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: algo-lms:
> > > > > disconnected.
> > > > > reason = 27, WFA = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is NACK May 01 23:30:41 node2 corosync-qdevice[781]: Cast
> > > > > vote
> > > > > timer
> > > > > remains scheduled every 250ms voting NACK.
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Trying connect
> > > > > to
> > > > > qnetd
> > > > > server arbiter:5403 (timeout = 400ms) May 01 23:30:41 node2
> > > > > corosync-qdevice[781]: Votequorum nodelist notify callback:
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Ring_id =
> > > > > (2.801)
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Node list
> > > > > (size
> > > > > =
> > > > > 1):
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: 0 nodeid = 2
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Algorithm
> > > > > decided
> > > > > to
> > > > > pause cast vote timer and result vote is No change May 01
> > > > > 23:30:41
> > > > > node2 corosync-qdevice[781]: Cast vote timer is now paused.
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: worker:
> > > > > qdevice_heuristics_worker_cmd_process_exec: Received exec
> > > > > command
> > > > > with seq_no "24" and timeout "1500"
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Received
> > > > > heuristics
> > > > > exec result command with seq_no "24" and result "Disabled"
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Votequorum
> > > > > heuristics
> > > > > exec result callback:
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: seq_number =
> > > > > 24,
> > > > > exec_result = Disabled
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Algorithm
> > > > > decided
> > > > > to
> > > > > not send list, result vote is No change and heuristics is
> > > > > Undefined
> > > > > May
> > > > > 01
> > > > > 23:30:41 node2 corosync-qdevice[781]: Cast vote timer is no
> > > > > longer
> > > > > paused.
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Not scheduling
> > > > > heuristics timer because mode is not enabled May 01 23:30:41
> > > > > node2
> > > > > corosync-qdevice[781]: Votequorum quorum notify
> > > > > callback:
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Quorate = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Node list
> > > > > (size
> > > > > =
> > > > > 3):
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: 0 nodeid =
> > > > > 1,
> > > > > state
> > > > > = 2
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: 1 nodeid =
> > > > > 2,
> > > > > state
> > > > > = 1
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: 2 nodeid =
> > > > > 0,
> > > > > state
> > > > > = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: algo-lms:
> > > > > quorum_notify.
> > > > > quorate = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Algorithm
> > > > > decided
> > > > > to
> > > > > not send list and result vote is No change May 01 23:30:41
> > > > > node2
> > > > > corosync-qdevice[781]: Connect timeout May 01 23:30:41 node2
> > > > > corosync-qdevice[781]: algo-lms: disconnected.
> > > > > quorate = 0, WFA = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: algo-lms:
> > > > > disconnected.
> > > > > reason = 27, WFA = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is NACK May 01 23:30:41 node2 corosync-qdevice[781]: Cast
> > > > > vote
> > > > > timer
> > > > > remains scheduled every 250ms voting NACK.
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Trying connect
> > > > > to
> > > > > qnetd
> > > > > server arbiter:5403 (timeout = 400ms)
> > > > > > > > At this point quorum reported lost
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Connect timeout
> > > > > May
> > > > > 01
> > > > > 23:30:41 node2 corosync-qdevice[781]: algo-lms: disconnected.
> > > > > quorate = 0, WFA = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: algo-lms:
> > > > > disconnected.
> > > > > reason = 27, WFA = 0
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is NACK May 01 23:30:41 node2 corosync-qdevice[781]: Cast
> > > > > vote
> > > > > timer
> > > > > remains scheduled every 250ms voting NACK.
> > > > >
> > > > > > > > This failure pattern repeats 31 times
> > > > > May 01 23:30:41 node2 corosync-qdevice[781]: Trying connect
> > > > > to
> > > > > qnetd
> > > > > server arbiter:5403 (timeout = 400ms) May 01 23:30:42 node2
> > > > > corosync-qdevice[781]: Connect timeout May 01 23:30:42 node2
> > > > > corosync-qdevice[781]: algo-lms: disconnected.
> > > > > quorate = 0, WFA = 0
> > > > > May 01 23:30:42 node2 corosync-qdevice[781]: algo-lms:
> > > > > disconnected.
> > > > > reason = 27, WFA = 0
> > > > > May 01 23:30:42 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is NACK May 01 23:30:42 node2 corosync-qdevice[781]: Cast
> > > > > vote
> > > > > timer
> > > > > remains scheduled every 250ms voting NACK.
> > > > > > > > End of pattern repeat, continue
> > > > >
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Trying connect
> > > > > to
> > > > > qnetd
> > > > > server arbiter:5403 (timeout = 400ms) May 01 23:30:56 node2
> > > > > corosync-qdevice[781]: Sending preinit msg to qnetd May 01
> > > > > 23:30:56
> > > > > node2 corosync-qdevice[781]: Received preinit reply msg May
> > > > > 01
> > > > > 23:30:56 node2 corosync-qdevice[781]: Received init reply msg
> > > > > May
> > > > > 01
> > > > > 23:30:56 node2 corosync-qdevice[781]: Scheduling send of
> > > > > heartbeat
> > > > > every 400ms May 01 23:30:56 node2 corosync-qdevice[781]:
> > > > > Executing
> > > > > after-connect heuristics.
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: worker:
> > > > > qdevice_heuristics_worker_cmd_process_exec: Received exec
> > > > > command
> > > > > with seq_no "25" and timeout "250"
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Received
> > > > > heuristics
> > > > > exec result command with seq_no "25" and result "Disabled"
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Algorithm
> > > > > decided
> > > > > to
> > > > > send config node list, send membership node list, send quorum
> > > > > node
> > > > > list, heuristics is Undefined and result vote is Wait for
> > > > > reply
> > > > > May
> > > > > 01
> > > > > 23:30:56 node2 corosync-qdevice[781]: Sending set option seq
> > > > > =
> > > > > 98,
> > > > > HB(0) = 0ms, KAP Tie-breaker(1) = Enabled May 01 23:30:56
> > > > > node2
> > > > > corosync-qdevice[781]: Sending config node list seq = 99
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Node list:
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 0 node_id =
> > > > > 1,
> > > > > data_center_id = 0, node_state = not set
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 1 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = not set May 01 23:30:56
> > > > > node2
> > > > > corosync-qdevice[781]: Sending membership node list seq =
> > > > > 100,
> > > > > ringid = (2.801), heuristics = Undefined.
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Node list:
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 0 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = not set May 01 23:30:56
> > > > > node2
> > > > > corosync-qdevice[781]: Sending quorum node list seq = 101,
> > > > > quorate =
> > > > > 0
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Node list:
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 0 node_id =
> > > > > 1,
> > > > > data_center_id = 0, node_state = dead
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 1 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = member May 01 23:30:56 node2
> > > > > corosync-qdevice[781]: Cast vote timer is now stopped.
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Received set
> > > > > option
> > > > > reply
> > > > > seq(1) = 98, HB(0) = 0ms, KAP Tie-breaker(1) = Enabled May 01
> > > > > 23:30:56
> > > > > node2 corosync-qdevice[781]: Received initial config node
> > > > > list
> > > > > reply
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: seq = 99
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: vote = No
> > > > > change
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: ring id =
> > > > > (2.801)
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is No change May 01 23:30:56 node2 corosync-qdevice[781]:
> > > > > Received
> > > > > membership node list reply
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: seq = 100
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: vote = ACK
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: ring id =
> > > > > (2.801)
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is ACK May 01 23:30:56 node2 corosync-qdevice[781]: Cast vote
> > > > > timer
> > > > > is now scheduled every 250ms voting ACK.
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Received quorum
> > > > > node
> > > > > list reply
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: seq = 101
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: vote = ACK
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: ring id =
> > > > > (2.801)
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is ACK May 01 23:30:56 node2 corosync-qdevice[781]: Cast vote
> > > > > timer
> > > > > remains scheduled every 250ms voting ACK.
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Votequorum
> > > > > quorum
> > > > > notify
> > > > > callback:
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Quorate = 1
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Node list
> > > > > (size
> > > > > =
> > > > > 3):
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 0 nodeid =
> > > > > 1,
> > > > > state
> > > > > = 2
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 1 nodeid =
> > > > > 2,
> > > > > state
> > > > > = 1
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 2 nodeid =
> > > > > 0,
> > > > > state
> > > > > = 0
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: algo-lms:
> > > > > quorum_notify.
> > > > > quorate = 1
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Algorithm
> > > > > decided
> > > > > to
> > > > > send list and result vote is No change May 01 23:30:56 node2
> > > > > corosync-qdevice[781]: Sending quorum node list seq = 102,
> > > > > quorate =
> > > > > 1
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Node list:
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 0 node_id =
> > > > > 1,
> > > > > data_center_id = 0, node_state = dead
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: 1 node_id =
> > > > > 2,
> > > > > data_center_id = 0, node_state = member May 01 23:30:56 node2
> > > > > corosync-qdevice[781]: Received quorum node list reply
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: seq = 102
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: vote = ACK
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: ring id =
> > > > > (2.801)
> > > > > May 01 23:30:56 node2 corosync-qdevice[781]: Algorithm result
> > > > > vote
> > > > > is ACK May 01 23:30:56 node2 corosync-qdevice[781]: Cast vote
> > > > > timer
> > > > > remains scheduled every 250ms voting ACK.
> > > > > > > > Here everything become OK and resource started on Node2
> > > > >
> > > > > Also, I've done a Wireshark capture and found a great mess in
> > > > > the TCP stream; it looks like the connection between qdevice and
> > > > > qnetd really stops for some time and packets are not delivered.
> > > > >
> > > > > My guess is that it matches Corosync's syncing activity, and I
> > > > > suspect that Corosync prevents any other traffic on the
> > > > > interface it uses for its rings.
> > > > >
> > > > > As soon as I switch qnetd and qdevice to a different interface,
> > > > > it seems to work fine.
> > > > >
> > > > > So, the question is: does Corosync really temporarily block
> > > > > other traffic on the interface it uses, or is it just a
> > > > > coincidence? If it does block, is there a way to manage it?
> > > > >
> > > > > Thank you for any suggestions on this!
> > > > >
> > > > > Sincerely,
> > > > >
> > > > > Alex
> > > > >
> > > > >
> > > > --
> > > > Ken Gaillot <kgaillot at redhat.com>
> > > >
> > > --
> > > Ken Gaillot <kgaillot at redhat.com>
> > >
> >
--
Ken Gaillot <kgaillot at redhat.com>