On Wed, Feb 4, 2026 at 4:36 PM Anton Gavriliuk via Users <users@clusterlabs.org> wrote:
> Hello
>
<p class="MsoNormal">There is two-node (HPE DL345 Gen12 servers) shared-nothing DRBD-based sync (Protocol C) replication, distributed active/standby pacemaker storage metro-cluster. The distributed active/standby pacemaker storage metro-cluster configured with
qdevice, heuristics (parallel fping) and fencing - fence_ipmilan and diskless sbd (hpwdt, /dev/watchdog). All cluster resources are configured to always run together on the same node.<u></u><u></u></p>
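>
> For reference, a quorum/qdevice section with heuristics in corosync.conf looks
> roughly like the sketch below (the qnetd host, algorithm and fping target are
> placeholders, not our real values):
>
>     quorum {
>         provider: corosync_votequorum
>         device {
>             model: net
>             votes: 1
>             net {
>                 host: qnetd.example.com
>                 algorithm: ffsplit
>             }
>             heuristics {
>                 mode: sync
>                 # placeholder check: heuristics pass only while the gateway answers pings
>                 exec_ping: /usr/sbin/fping -q 192.168.0.1
>             }
>         }
>     }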
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">The two storage cluster nodes and qdevice running on Rocky Linux 10.1<u></u><u></u></p>
<p class="MsoNormal">Pacemaker version 3.0.1<u></u><u></u></p>
<p class="MsoNormal">Corosync version 3.1.9<u></u><u></u></p>
<p class="MsoNormal">DRBD version 9.3.0<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">So, the question is – what is the most correct way of implementing STONITH/fencing with fence_iomilan + diskless sbd (hpwdt, /dev/watchdog) ?</p></div></div></div></blockquote><div><br></div><div>The correct way of using diskless sbd with a two-node cluster is not to use it ;-)</div><div><br></div><div>diskless sbd (watchdog-fencing) requires 'real' quorum and quorum provided by corosync in two-node mode would introduce split-brain which</div><div>is the reason why sbd recognizes the two-node operation and replaces quorum from corosync by the information that the peer node is currently</div><div>in the cluster. This is fine for working with poison-pill fencing - a single single shared disk then doesn't become a single-point-of-failure as long</div><div>as the peer is there. But for watchdog-fencing that doesn't help because the peer going away would mean you have to commit suicide.</div><div><br></div><div>and alternative with a two-node cluster is to step away from the actual two-node design and go with qdevice for 'real' quorum.<br>You'll need some kind of 3rd node but it doesn't have to be a full cluster node.</div><div><br></div><div>Regards,</div><div>Klaus</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="msg1686579835300625889"><div lang="EN-US" style="overflow-wrap: break-word;"><div class="m_1686579835300625889WordSection1"><p class="MsoNormal"><u></u><u></u></p>
<p class="MsoNormal">I’m not sure about two-level fencing topology, because diskless sbd is not an external agent/resource…<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Currently it works without fencing topology, and both running in “parallel”. Really no matter who wins. I just want to make sure fenced node is powered off of rebooted.<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Here is log how it works now in “parallel”,<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">[root@memverge2 ~]# cat /var/log/messages|grep -i fence<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:07 memverge2 pacemaker-fenced[3902]: notice: Node memverge state is now lost<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:07 memverge2 pacemaker-fenced[3902]: notice: Removed 1 inactive node with cluster layer ID 27 from the membership cache<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-schedulerd[3905]: warning: Cluster node memverge will be fenced: peer is no longer part of the cluster<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-schedulerd[3905]: warning: ipmi-fence-memverge2_stop_0 on memverge is unrunnable (node is offline)<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-schedulerd[3905]: warning: ipmi-fence-memverge2_stop_0 on memverge is unrunnable (node is offline)<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-schedulerd[3905]: notice: Actions: Fence (reboot) memverge 'peer is no longer part of the cluster'<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-schedulerd[3905]: notice: Actions: Stop ipmi-fence-memverge2 ( memverge ) due to node availability<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-fenced[3902]: notice: Client pacemaker-controld.3906 wants to fence (reboot) memverge using any device<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-fenced[3902]: notice: Requesting peer fencing (reboot) targeting memverge<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-fenced[3902]: notice: Requesting that memverge2 perform 'reboot' action targeting memverge<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-fenced[3902]: notice: Waiting 25s for memverge to self-fence (reboot) for client pacemaker-controld.3906<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:10 memverge2 pacemaker-fenced[3902]: notice: Delaying 'reboot' action targeting memverge using ipmi-fence-memverge for 5s<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 pacemaker-fenced[3902]: notice: Self-fencing (reboot) by memverge for pacemaker-controld.3906 assumed complete<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 pacemaker-fenced[3902]: notice: Operation 'reboot' targeting memverge by memverge2 for
<a href="mailto:pacemaker-controld.3906@memverge2" target="_blank">pacemaker-controld.3906@memverge2</a>: OK (Done)<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 kernel: drbd ha-nfs memverge: helper command: /sbin/drbdadm fence-peer<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 kernel: drbd ha-iscsi memverge: helper command: /sbin/drbdadm fence-peer<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 <a href="http://crm-fence-peer.9.sh" target="_blank">crm-fence-peer.9.sh</a>[7332]: DRBD_BACKING_DEV_1=/dev/mapper/object_block_nfs_vg-ha_nfs_exports_lv_with_vdo_1x8 DRBD_BACKING_DEV_2=/dev/mapper/object_block_nfs_vg-ha_nfs_internal_lv_without_vdo DRBD_BACKING_DEV_5=/dev/mapper/object_block_nfs_vg-ha_samba_exports_lv_with_vdo_1x8
DRBD_CONF=/etc/drbd.conf DRBD_CSTATE=Connecting DRBD_LL_DISK=/dev/mapper/object_block_nfs_vg-ha_nfs_exports_lv_with_vdo_1x8\ /dev/mapper/object_block_nfs_vg-ha_nfs_internal_lv_without_vdo\ /dev/mapper/object_block_nfs_vg-ha_samba_exports_lv_with_vdo_1x8 DRBD_MINOR=1\
2\ 5 DRBD_MINOR_1=1 DRBD_MINOR_2=2 DRBD_MINOR_5=5 DRBD_MY_ADDRESS=192.168.0.8 DRBD_MY_AF=ipv4 DRBD_MY_NODE_ID=28 DRBD_NODE_ID_27=memverge DRBD_NODE_ID_28=memverge2 DRBD_PEER_ADDRESS=192.168.0.6 DRBD_PEER_AF=ipv4 DRBD_PEER_NODE_ID=27 DRBD_RESOURCE=ha-nfs DRBD_VOLUME=1\
2\ 5 UP_TO_DATE_NODES=0x10000000 /usr/lib/drbd/<a href="http://crm-fence-peer.9.sh" target="_blank">crm-fence-peer.9.sh</a><u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 <a href="http://crm-fence-peer.9.sh" target="_blank">crm-fence-peer.9.sh</a>[7333]: DRBD_BACKING_DEV_3=/dev/mapper/object_block_nfs_vg-ha_block_exports_lv_with_vdo_1x8 DRBD_BACKING_DEV_4=/dev/mapper/object_block_nfs_vg-ha_block_exports_lv_without_vdo DRBD_CONF=/etc/drbd.conf
DRBD_CSTATE=Connecting DRBD_LL_DISK=/dev/mapper/object_block_nfs_vg-ha_block_exports_lv_with_vdo_1x8\ /dev/mapper/object_block_nfs_vg-ha_block_exports_lv_without_vdo DRBD_MINOR=3\ 4 DRBD_MINOR_3=3 DRBD_MINOR_4=4 DRBD_MY_ADDRESS=192.168.0.8 DRBD_MY_AF=ipv4
DRBD_MY_NODE_ID=28 DRBD_NODE_ID_27=memverge DRBD_NODE_ID_28=memverge2 DRBD_PEER_ADDRESS=192.168.0.6 DRBD_PEER_AF=ipv4 DRBD_PEER_NODE_ID=27 DRBD_RESOURCE=ha-iscsi DRBD_VOLUME=3\ 4 UP_TO_DATE_NODES=0x10000000 /usr/lib/drbd/<a href="http://crm-fence-peer.9.sh" target="_blank">crm-fence-peer.9.sh</a><u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 <a href="http://crm-fence-peer.9.sh" target="_blank">crm-fence-peer.9.sh</a>[7333]: INFO Concurrency check: Peer is already marked clean/fenced by another resource. Returning success (Exit 4).<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 <a href="http://crm-fence-peer.9.sh" target="_blank">crm-fence-peer.9.sh</a>[7332]: INFO Concurrency check: Peer is already marked clean/fenced by another resource. Returning success (Exit 4).<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 kernel: drbd ha-iscsi memverge: helper command: /sbin/drbdadm fence-peer exit code 4 (0x400)<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 kernel: drbd ha-iscsi memverge: fence-peer helper returned 4 (peer was fenced)<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 kernel: drbd ha-nfs memverge: helper command: /sbin/drbdadm fence-peer exit code 4 (0x400)<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:36 memverge2 kernel: drbd ha-nfs memverge: fence-peer helper returned 4 (peer was fenced)<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:37 memverge2 pacemaker-fenced[3902]: notice: Operation 'reboot' [7068] targeting memverge using ipmi-fence-memverge returned 0<u></u><u></u></p>
<p class="MsoNormal">Feb 2 12:46:37 memverge2 pacemaker-fenced[3902]: notice: Operation 'reboot' targeting memverge by memverge2 for
<a href="mailto:pacemaker-controld.3906@memverge2" target="_blank">pacemaker-controld.3906@memverge2</a>: Result arrived too late<u></u><u></u></p>
<p class="MsoNormal">[root@memverge2 ~]#<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
<p class="MsoNormal">Anton<u></u><u></u></p>
<p class="MsoNormal"><u></u> <u></u></p>
</div>
</div>