[ClusterLabs] Strange lost quorum with qdevice
Олег Самойлов
splarv at ya.ru
Fri Aug 9 21:11:47 EDT 2019
> On 9 Aug 2019, at 9:25, Jan Friesse <jfriesse at redhat.com> wrote:
> Please do not set dpd_interval that high. dpd_interval on the qnetd side is not about how often the ping is sent. Could you please retry your test with dpd_interval=1000? I'm pretty sure it will work then.
>
> Honza
Yep. As far as I understand, the dpd_interval of qnetd and the timeout and sync_timeout of qdevice are somehow linked. By default they are dpd_interval=10, timeout=10, sync_timeout=30, and you advised changing them proportionally:
https://github.com/ClusterLabs/sbd/pull/76#issuecomment-486952369
But the mechanism by which they depend on each other is mysterious and not documented.
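For reference, this is how I map these knobs to the actual configuration, with the time values in milliseconds as the man pages want them (the host name and algorithm are just placeholders from my setup, and I am assuming from corosync-qnetd(8) that dpd_interval is one of the -S advanced settings):

  # corosync.conf on the cluster nodes (qdevice side)
  quorum {
      provider: corosync_votequorum
      device {
          model: net
          timeout: 10000         # qdevice timeout, default 10 s
          sync_timeout: 30000    # qdevice sync_timeout, default 30 s
          net {
              host: qnetd-host   # placeholder for the qnetd server
              algorithm: ffsplit
          }
      }
  }

  # on the qnetd host (dpd_interval also in milliseconds), something like:
  # corosync-qnetd -S dpd_interval=10000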
I rechecked the test with the 20-60 combination and hit the same problem on the 16th failure simulation. Qnetd returned the vote in exactly the second qdevice expected it, but slightly too late. So the node lost quorum, got the vote slightly later, but did not regain quorum, possibly because of the 'wait for all' option.
I retried the default 10-30 combination and got the same problem on the very first failure simulation. Qnetd sent the vote one second later than expected.
Then the 1-3 combination (dpd_interval=1, timeout=1, sync_timeout=3): the same problem on the 11th failure simulation. Qnetd returned the vote in exactly the second qdevice expected it, but slightly too late. So the node lost quorum, got the vote slightly later, but did not regain quorum, possibly because of the 'wait for all' option. The node was later killed by the watchdog due to lack of quorum.
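For completeness, this is roughly how I watch the vote coming back during a failure simulation: just the standard status tools in a loop on the surviving node (the grep patterns are only my guess at the interesting lines of the output):

  # poll quorum and qdevice state once a second
  while true; do
      date '+%H:%M:%S'
      corosync-quorumtool -s | grep -E 'Quorate|Flags'
      corosync-qdevice-tool -sv | grep -E 'State|Vote'
      sleep 1
  done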
So, my conclusions:
1. IMHO this bug may depend not on the absolute value of dpd_interval but on the proportion between the dpd_interval of qnetd and the timeout and sync_timeout of qdevice. Because of this, I cannot predict how to change these options to work around this behaviour.
2. IMHO "wait for all" also bugged. According on documentation it must fire only on the start of cluster, but looked like it fire every time when quorum (or all votes) is lost.