[ClusterLabs] Trying to understand the default action of a fence agent
Bryan K. Walton
bwalton+1546953805 at leepfrog.com
Tue Jan 8 08:35:14 EST 2019
Hi,
I'm building a two-node cluster with CentOS 7.6 and DRBD. These nodes
are connected upstream to two Brocade switches. I'm trying to enable
fencing by using Digimer's fence_dlink_snmp script
(https://github.com/digimer/fence_dlink_snmp).
I've renamed the script to fence_brocade_snmp and have
created my stonith resources using the following syntax:
pcs -f stonith_cfg stonith create fenceStorage1-centipede \
fence_brocade_snmp pcmk_host_list=storage1-drbd ipaddr=10.40.1.1 \
community=xxxxxxx port=193 pcmk_off_action="off" \
pcmk_monitor_timeout=120s
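For what it's worth, I believe the agent can also be exercised directly on
the command line, outside the cluster, to confirm which actions actually
toggle the ports. This is only a rough sketch on my part, assuming the
script follows the usual FenceAgentAPI convention of reading name=value
pairs on stdin (same parameter names I pass to pcs above):

  # ask the agent for port status (should not change anything)
  printf 'ipaddr=10.40.1.1\ncommunity=xxxxxxx\nport=193\naction=status\n' | fence_brocade_snmp

  # explicitly request "off" (this is what actually disables the port)
  printf 'ipaddr=10.40.1.1\ncommunity=xxxxxxx\nport=193\naction=off\n' | fence_brocade_snmp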
When I run "stonith_admin storage1-drbd" from my other node, the
switch ports do not get disabled. However, when I run
"stonith_admin -F storage1-drbd", the switch ports DO get disabled.
If I run "pcs stonith fence storage1-drbd" from the other node, the
response is "Node: storage1-drbd fenced", but, again, the switch ports
are still enabled. I'm forced to instead run "pcs stonith fence
storage1-drbd --off" to get the ports disabled.
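If I understand the defaults correctly, the difference comes down to which
action each request maps to (reboot being the default fence action):

  stonith_admin -F storage1-drbd          # explicit "off" request -> ports get disabled
  pcs stonith fence storage1-drbd         # default "reboot" request -> ports stay enabled
  pcs stonith fence storage1-drbd --off   # explicit "off" request -> ports get disabled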
What I'm trying to figure out is under what scenario I should see the
ports actually get disabled. My concern is that I can, for example,
stop the cluster on storage1-drbd, the logs will show that fencing was
successful, and my resources will get moved, but when I check the
switch ports connected to storage1-drbd, they are still enabled. So the
node does not appear to be really fenced.
Do I need to create my stonith resource differently to actually disable
those ports?
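For instance, would something along these lines be the right approach for a
switch-based device, mapping the default reboot request onto an "off"? This
is only a guess on my part, based on the pcmk_reboot_action device option
and the stonith-action cluster property described in the Pacemaker
documentation:

  # guess 1: per device, translate reboot requests into "off" on the switch port
  pcs -f stonith_cfg stonith update fenceStorage1-centipede pcmk_reboot_action="off"

  # guess 2: cluster-wide, make "off" the default fence action
  pcs property set stonith-action=off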
Thank you for your time. I am greatly appreciative.
Sincerely,
Bryan Walton
--
Bryan K. Walton 319-337-3877
Linux Systems Administrator Leepfrog Technologies, Inc