[ClusterLabs] DRBD diskless quorum vs DRBD "handlers" and "fencing" in two-node Pacemaker storage metro-cluster
Windl, Ulrich
u.windl at ukr.de
Tue Mar 17 09:31:02 UTC 2026
I think it’s challenging to fence the other node via network when the network is down.
Kind regards,
Ulrich Windl
From: Users <users-bounces at clusterlabs.org> On Behalf Of Anton Gavriliuk via Users
Sent: Wednesday, March 11, 2026 6:45 PM
To: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
Cc: Anton Gavriliuk <Anton.Gavriliuk at hpe.ua>
Subject: [EXT] [EXT] [ClusterLabs] DRBD diskless quorum vs DRBD "handlers" and "fencing" in two-node Pacemaker storage metro-cluster
Hello
We have a two-node, shared-nothing, DRBD-based synchronous-replication active/standby Pacemaker storage metro-cluster.
DRBD replication links are directly (no switch) connected.
Both Corosync quorum and DRBD diskless quorum are provided by an additional qdevice host.
Heuristics (parallel fping) and STONITH/fencing are configured: fence_ipmilan plus diskless SBD (hpwdt, /dev/watchdog).
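For readers unfamiliar with qdevice heuristics, a minimal corosync.conf sketch of what such a setup might look like (the hostname, target IP, and exec name are placeholders, not taken from the poster's configuration):

```
quorum {
    provider: corosync_votequorum
    device {
        model: net
        votes: 1
        net {
            host: qdevice.example.com   # placeholder qdevice host
            algorithm: ffsplit          # fifty-fifty split algorithm for two nodes
        }
        heuristics {
            mode: on
            # heuristics pass only if the ping succeeds; name "exec_ping" is arbitrary
            exec_ping: /usr/bin/fping -q 192.0.2.1
        }
    }
}
```

With ffsplit and heuristics enabled, the qdevice prefers the partition whose heuristics succeed, which matches the parallel-fping approach described above.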
Data consistency and integrity are the absolute priority during any node or network failure.
In such a setup, should I add "handlers" and "fencing" to the DRBD resource files:
handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.9.sh";
    unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
}
and
fencing resource-and-stonith;   (or: fencing resource-only;)
Or is it unnecessary, and perhaps even counterproductive?
Anton