[ClusterLabs] Antw: [EXT] Re: QDevice vs 3rd host for majority node quorum

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Thu Jul 15 06:46:10 EDT 2021

>>> Jehan-Guillaume de Rorthais <jgdr at dalibo.com> wrote on 15.07.2021 at
10:09 in
message <20210715100930.06b45f5b at firost>:
> Hi all,
> On Tue, 13 Jul 2021 19:55:30 +0000 (UTC)
> Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
>> In some cases the third location has a single IP and it makes sense to use
>> it as QDevice. If it has multiple network connections to that location ‑ use
>> a full blown node.
> By the way, what's the point of multiple rings in corosync when we can do
> bonding or teaming at the OS layer?

Good question: back in the times of HP-UX and ServiceGuard we had two
networks, each using bonding, to ensure cluster communication.
With Linux and pacemaker we have the same, BUT corosync (as of SLES15 SP2)
seems to use the rings not for redundancy, but in parallel.
That is most noticeable if your rings run at different network speeds (like
100 vs. 1000, or 10000 vs. 1000): the slower net slows down ALL cluster
communication.
(In contrast, HP-UX ServiceGuard would _switch_ to the secondary network when
the primary looked failed, and back again.)

It seems there was a similar idea for Linux, but the implementation falls
short.

> I remember some time ago bonding was recommended over corosync rings,
> because the totem protocol on multiple rings wasn't as flexible as bonding,
> and multiple rings were only useful to corosync/pacemaker, whereas bonding
> was useful for all other services on the server.
> ...But that was before the knet era. Did it change?

Sorry, I don't know knet yet.


> Regards,
