<div dir="ltr" data-setdir="false">Have you checked this article: <a href="https://access.redhat.com/articles/530533" rel="nofollow" target="_blank" class="enhancr_card_2770446698">Using SCSI Persistent Reservation Fencing (fence_scsi) with pacemaker in a Red Hat High Availability cluster - Red Hat Customer Portal</a></div><div><br></div><div id="ydp9810fc88enhancr_card_2770446698" class="ydp9810fc88yahoo-link-enhancr-card ydp9810fc88ymail-preserve-class ydp9810fc88ymail-preserve-style" style="max-width:400px;font-family:Helvetica Neue, Segoe UI, Helvetica, Arial, sans-serif" data-url="https://access.redhat.com/articles/530533" data-type="YENHANCER" data-size="MEDIUM" contenteditable="false"><a href="https://access.redhat.com/articles/530533" style="text-decoration-line: none !important; text-decoration-style: solid !important; text-decoration-color: currentcolor !important; color: rgb(0, 0, 0) !important;" class="ydp9810fc88yahoo-enhancr-cardlink" rel="nofollow" target="_blank"><table class="ydp9810fc88card-wrapper ydp9810fc88yahoo-ignore-table" style="max-width:400px" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td width="400"><table class="ydp9810fc88card ydp9810fc88yahoo-ignore-table" style="max-width:400px;border-width:1px;border-style:solid;border-color:rgb(224, 228, 233);border-radius:2px" width="100%" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td class="ydp9810fc88card-primary-image-cell" style="background-color: rgb(0, 0, 0); background-repeat: no-repeat; background-size: cover; position: relative; border-radius: 2px 2px 0px 0px; min-height: 175px;" valign="top" height="175" bgcolor="#000000" background="https://s.yimg.com/lo/api/res/1.2/vxBKvP6po8XIu1oShb1Sug--~A/Zmk9ZmlsbDt3PTQwMDtoPTIwMDthcHBpZD1pZXh0cmFjdA--/https://access.redhat.com/webassets/avalon/g/shadowman-200.png.cf.jpg"><!--[if gte mso 9]><v:rect fill="true" stroke="false" style="width:396px;height:175px;position:absolute;top:0;left:0;"><v:fill type="frame" color="#000000" src="https://s.yimg.com/lo/api/res/1.2/vxBKvP6po8XIu1oShb1Sug--~A/Zmk9ZmlsbDt3PTQwMDtoPTIwMDthcHBpZD1pZXh0cmFjdA--/https://access.redhat.com/webassets/avalon/g/shadowman-200.png.cf.jpg"/></v:rect><![endif]--><table class="ydp9810fc88card-overlay-container-table ydp9810fc88yahoo-ignore-table" style="width:100%" cellspacing="0" cellpadding="0" border="0"><tbody><tr><td class="ydp9810fc88card-overlay-cell" style="background-color: transparent; border-radius: 2px 2px 0px 0px; min-height: 175px;" valign="top" bgcolor="transparent" background="https://s.yimg.com/cv/ae/nq/storm/assets/enhancrV21/1/enhancr_gradient-400x175.png"><!--[if gte mso 9]><v:rect fill="true" stroke="false" style="width:396px;height:175px;position:absolute;top:-18px;left:0;"><v:fill type="pattern" color="#000000" src="https://s.yimg.com/cv/ae/nq/storm/assets/enhancrV21/1/enhancr_gradient-400x175.png"/><v:textbox inset="0,0,20px,0"><![endif]--><table class="ydp9810fc88yahoo-ignore-table" style="width: 100%; min-height: 175px;" height="175" border="0"><tbody><tr><td class="ydp9810fc88card-richInfo2" style="text-align:left;padding:15px 0 0 15px;vertical-align:top"></td><td class="ydp9810fc88card-actions" style="text-align:right;padding:15px 15px 0 0;vertical-align:top"><div class="ydp9810fc88card-share-container"></div></td></tr></tbody></table><!--[if gte mso 9]></v:textbox></v:rect><![endif]--></td></tr></tbody></table></td></tr><tr><td><table class="ydp9810fc88card-info ydp9810fc88yahoo-ignore-table" style="background-color: rgb(255, 255, 255); background-repeat: 
repeat; background-attachment: scroll; background-image: none; background-size: auto; position: relative; z-index: 2; width: 100%; max-width: 400px; border-radius: 0px 0px 2px 2px; border-top: 1px solid rgb(224, 228, 233);" cellspacing="0" cellpadding="0" border="0" align="center"><tbody><tr><td style="background-color:#ffffff;padding:16px 0 16px 12px;vertical-align:top;border-radius:0 0 0 2px"></td><td style="vertical-align:middle;padding:12px 24px 16px 12px;width:99%;font-family:Helvetica Neue, Segoe UI, Helvetica, Arial, sans-serif;border-radius:0 0 2px 0"><h2 class="ydp9810fc88card-title" style="font-size: 14px; line-height: 19px; margin: 0px 0px 6px; font-family: Helvetica Neue, Segoe UI, Helvetica, Arial, sans-serif; color: rgb(38, 40, 42);">Using SCSI Persistent Reservation Fencing (fence_scsi) with pacemaker in...</h2><p class="ydp9810fc88card-description" style="font-size: 12px; line-height: 16px; margin: 0px; color: rgb(151, 155, 167);">This article describes how to properly configure fence_scsi and the requirements for using it.</p></td></tr></tbody></table></td></tr></tbody></table></td></tr></tbody></table></a></div><div><br></div><div><br></div><div dir="ltr" data-setdir="false">Have you checked if your storage supports persistent reservations?</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">Best Regards,</div><div dir="ltr" data-setdir="false">Strahil Nikolov<br></div><div><br></div>
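
P.S. A quick way to check is to query the LUN directly with sg_persist
from sg3_utils (a rough sketch; substitute the path of your shared LUN,
the one below is just taken from your pcs commands, and on Debian the
package is named sg3-utils):

# List the keys currently registered on the shared LUN. An "illegal
# request"-style error usually means the array does not accept SCSI-3
# persistent reservations, while an empty key list on a PR-capable LUN
# is normal before unfencing has ever succeeded.
sg_persist --in --read-keys --device=/dev/disk/by-id/wwn-0x600c0ff0001e8e3c89601b5801000000

# Show the current reservation, if any.
sg_persist --in --read-reservation --device=/dev/disk/by-id/wwn-0x600c0ff0001e8e3c89601b5801000000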
</div><div id="ydp25591ef1yahoo_quoted_3431476847" class="ydp25591ef1yahoo_quoted">
<div style="font-family:'Helvetica Neue', Helvetica, Arial, sans-serif;font-size:13px;color:#26282a;">
<div>

On Wednesday, 30 October 2019 at 8:42:16 GMT-4, RAM PRASAD TWISTED ILLUSIONS <ramdesp@gmail.com> wrote:

<div><div id="ydp25591ef1yiv9209660548"><div dir="ltr"><div class="ydp25591ef1yiv9209660548gmail-moz-text-flowed" style="font-family:-moz-fixed;font-size:12px;" lang="x-unicode">Hi everyone,
<br>
<br>I am trying to set up a storage cluster with two nodes, both running
debian buster. The two nodes called, duke and miles, have a LUN residing
on a SAN box as their shared storage device between them. As you can see
in the output of pcs status, all the demons are active and I can get the
nodes online without any issues. However, I cannot get the fencing
resources to start.
<br>
<br>These two nodes were running debian jessie before and had access to the
same LUN in a storage cluster configuration. Now, I am trying to
recreate a similar setup with both nodes now running the latest debian. I am not sure if this is relevant, but this LUN already has shared VG with data on it. I am wondering if this could be the cause of the trouble? Should I be creating my stonith device on a different/fresh LUN?</div><div class="ydp25591ef1yiv9209660548gmail-moz-text-flowed" style="font-family:-moz-fixed;font-size:12px;" lang="x-unicode"><br>####### pcs status
Cluster name: jazz
Stack: corosync
Current DC: duke (version 2.0.1-9e909a5bdd) - partition with quorum
Last updated: Wed Oct 30 11:58:19 2019
Last change: Wed Oct 30 11:28:28 2019 by root via cibadmin on duke

2 nodes configured
2 resources configured

Online: [ duke miles ]

Full list of resources:

 fence_duke   (stonith:fence_scsi):   Stopped
 fence_miles  (stonith:fence_scsi):   Stopped

Failed Fencing Actions:
* unfencing of duke failed: delegate=, client=pacemaker-controld.1703, origin=duke,
    last-failed='Wed Oct 30 11:43:29 2019'
* unfencing of miles failed: delegate=, client=pacemaker-controld.1703, origin=duke,
    last-failed='Wed Oct 30 11:43:29 2019'

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
#######

I used the following commands to add the two fencing devices and set
their location constraints:

#######
sudo pcs cluster cib test_cib_cfg
pcs -f test_cib_cfg stonith create fence_duke fence_scsi \
    pcmk_host_list=duke pcmk_reboot_action="off" \
    devices="/dev/disk/by-id/wwn-0x600c0ff0001e8e3c89601b5801000000" \
    meta provides="unfencing"
pcs -f test_cib_cfg stonith create fence_miles fence_scsi \
    pcmk_host_list=miles pcmk_reboot_action="off" \
    devices="/dev/disk/by-id/wwn-0x600c0ff0001e8e3c89601b5801000000" \
    delay=15 meta provides="unfencing"
pcs -f test_cib_cfg constraint location fence_duke avoids duke=INFINITY
pcs -f test_cib_cfg constraint location fence_miles avoids miles=INFINITY
pcs cluster cib-push test_cib_cfg
#######
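
Side note: the same pieces can also be exercised by hand outside
Pacemaker. A rough sketch, assuming the standard fence-agents and
stonith_admin command-line options and reusing the node/device names
from the commands above:

# Ask the agent whether duke's key is registered on the shared LUN
fence_scsi --devices=/dev/disk/by-id/wwn-0x600c0ff0001e8e3c89601b5801000000 \
    --plug=duke --action=status

# List the stonith devices the local fencer has registered
stonith_admin --list-registered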

Here is the output in /var/log/pacemaker/pacemaker.log after adding the
fencing resources:

Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (determine_online_status_fencing) info: Node miles is active
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (determine_online_status) info: Node miles is online
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (determine_online_status_fencing) info: Node duke is active
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (determine_online_status) info: Node duke is online
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 2 is already processed
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 1 is already processed
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 2 is already processed
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 1 is already processed
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (common_print) info: fence_duke (stonith:fence_scsi): Stopped
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (common_print) info: fence_miles (stonith:fence_scsi): Stopped
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (RecurringOp) info: Start recurring monitor (60s) for fence_duke on miles
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (RecurringOp) info: Start recurring monitor (60s) for fence_miles on duke
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (LogNodeActions) notice: * Fence (on) miles 'required by fence_duke monitor'
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (LogNodeActions) notice: * Fence (on) duke 'required by fence_duke monitor'
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (LogAction) notice: * Start fence_duke ( miles )
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (LogAction) notice: * Start fence_miles ( duke )
Oct 30 12:06:02 duke pacemaker-schedulerd[1702] (process_pe_message) notice: Calculated transition 63, saving inputs in /var/lib/pacemaker/pengine/pe-input-23.bz2
Oct 30 12:06:02 duke pacemaker-controld [1703] (do_state_transition) info: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Oct 30 12:06:02 duke pacemaker-controld [1703] (do_te_invoke) info: Processing graph 63 (ref=pe_calc-dc-1572433562-101) derived from /var/lib/pacemaker/pengine/pe-input-23.bz2
Oct 30 12:06:02 duke pacemaker-controld [1703] (te_fence_node) notice: Requesting fencing (on) of node miles | action=5 timeout=60000
Oct 30 12:06:02 duke pacemaker-controld [1703] (te_fence_node) notice: Requesting fencing (on) of node duke | action=2 timeout=60000
Oct 30 12:06:02 duke pacemaker-fenced [1699] (handle_request) notice: Client pacemaker-controld.1703.470f8b4e wants to fence (on) 'miles' with device '(any)'
Oct 30 12:06:02 duke pacemaker-fenced [1699] (initiate_remote_stonith_op) notice: Requesting peer fencing (on) of miles | id=a0ac6e3a-0296-4aff-85e3-c591f75f38d3 state=0
Oct 30 12:06:02 duke pacemaker-fenced [1699] (handle_request) notice: Client pacemaker-controld.1703.470f8b4e wants to fence (on) 'duke' with device '(any)'
Oct 30 12:06:02 duke pacemaker-fenced [1699] (initiate_remote_stonith_op) notice: Requesting peer fencing (on) of duke | id=261d9311-0553-48ff-864f-41d53d12b152 state=0
Oct 30 12:06:02 duke pacemaker-fenced [1699] (can_fence_host_with_device) notice: fence_miles can not fence (on) duke: static-list
Oct 30 12:06:02 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 1 of 2 from duke for miles/on (0 devices) a0ac6e3a-0296-4aff-85e3-c591f75f38d3
Oct 30 12:06:02 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 1 of 2 from duke for duke/on (0 devices) 261d9311-0553-48ff-864f-41d53d12b152
Oct 30 12:06:02 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 2 of 2 from miles for miles/on (0 devices) a0ac6e3a-0296-4aff-85e3-c591f75f38d3
Oct 30 12:06:02 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: All query replies have arrived, continuing (2 expected/2 received)
Oct 30 12:06:02 duke pacemaker-fenced [1699] (stonith_choose_peer) notice: Couldn't find anyone to fence (on) miles with any device
Oct 30 12:06:02 duke pacemaker-fenced [1699] (call_remote_stonith) info: Total timeout set to 60 for peer's fencing of miles for pacemaker-controld.1703|id=a0ac6e3a-0296-4aff-85e3-c591f75f38d3
Oct 30 12:06:02 duke pacemaker-fenced [1699] (call_remote_stonith) info: No peers (out of 2) have devices capable of fencing (on) miles for pacemaker-controld.1703 (0)
Oct 30 12:06:02 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 2 of 2 from miles for duke/on (0 devices) 261d9311-0553-48ff-864f-41d53d12b152
Oct 30 12:06:02 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: All query replies have arrived, continuing (2 expected/2 received)
Oct 30 12:06:02 duke pacemaker-fenced [1699] (stonith_choose_peer) notice: Couldn't find anyone to fence (on) duke with any device
Oct 30 12:06:02 duke pacemaker-fenced [1699] (call_remote_stonith) info: Total timeout set to 60 for peer's fencing of duke for pacemaker-controld.1703|id=261d9311-0553-48ff-864f-41d53d12b152
Oct 30 12:06:02 duke pacemaker-fenced [1699] (call_remote_stonith) info: No peers (out of 2) have devices capable of fencing (on) duke for pacemaker-controld.1703 (0)
Oct 30 12:06:02 duke pacemaker-fenced [1699] (remote_op_done) error: Operation on of miles by <no-one> for pacemaker-controld.1703@duke.a0ac6e3a: No such device
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 15/5:63:0:5e3e0ef6-02a5-4f9a-b999-806413a3da12: No such device (-19)
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 15 for miles failed (No such device): aborting transition.
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_callback) warning: No devices found in cluster to fence miles, giving up
Oct 30 12:06:02 duke pacemaker-controld [1703] (abort_transition_graph) notice: Transition 63 aborted: Stonith failed | source=abort_for_stonith_failure:776 complete=false
Oct 30 12:06:02 duke pacemaker-fenced [1699] (remote_op_done) error: Operation on of duke by <no-one> for pacemaker-controld.1703@duke.261d9311: No such device
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_notify) error: Unfencing of miles by <anyone> failed: No such device (-19)
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 16/2:63:0:5e3e0ef6-02a5-4f9a-b999-806413a3da12: No such device (-19)
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 16 for duke failed (No such device): aborting transition.
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_callback) warning: No devices found in cluster to fence duke, giving up
Oct 30 12:06:02 duke pacemaker-controld [1703] (abort_transition_graph) info: Transition 63 aborted: Stonith failed | source=abort_for_stonith_failure:776 complete=false
Oct 30 12:06:02 duke pacemaker-controld [1703] (tengine_stonith_notify) error: Unfencing of duke by <anyone> failed: No such device (-19)
Oct 30 12:06:02 duke pacemaker-controld [1703] (run_graph) notice: Transition 63 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pacemaker/pengine/pe-input-23.bz2): Complete
Oct 30 12:06:02 duke pacemaker-controld [1703] (do_log) info: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Oct 30 12:06:02 duke pacemaker-controld [1703] (do_state_transition) notice: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Oct 30 12:06:06 duke pacemaker-based [1698] (cib_process_ping) info: Reporting our current digest to duke: c75a23192109201a5ceaa896d6c313cc for 0.28.6 (0x55a5ab8ff1f0 0)

#######

When I tried without explicitly specifying the device in the stonith
commands, this is what I end up with in pacemaker.log:

Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (determine_online_status_fencing) info: Node miles is active
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (determine_online_status) info: Node miles is online
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (determine_online_status_fencing) info: Node duke is active
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (determine_online_status) info: Node duke is online
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 2 is already processed
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 1 is already processed
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 2 is already processed
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (unpack_node_loop) info: Node 1 is already processed
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (common_print) info: fence_duke (stonith:fence_scsi): Stopped
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (common_print) info: fence_miles (stonith:fence_scsi): Stopped
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (RecurringOp) info: Start recurring monitor (60s) for fence_duke on miles
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (RecurringOp) info: Start recurring monitor (60s) for fence_miles on duke
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (LogNodeActions) notice: * Fence (on) miles 'required by fence_duke monitor'
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (LogNodeActions) notice: * Fence (on) duke 'required by fence_duke monitor'
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (LogAction) notice: * Start fence_duke ( miles )
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (LogAction) notice: * Start fence_miles ( duke )
Oct 30 12:22:34 duke pacemaker-schedulerd[1702] (process_pe_message) notice: Calculated transition 69, saving inputs in /var/lib/pacemaker/pengine/pe-input-28.bz2
Oct 30 12:22:34 duke pacemaker-controld [1703] (do_state_transition) info: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Oct 30 12:22:34 duke pacemaker-controld [1703] (do_te_invoke) info: Processing graph 69 (ref=pe_calc-dc-1572434554-114) derived from /var/lib/pacemaker/pengine/pe-input-28.bz2
Oct 30 12:22:34 duke pacemaker-controld [1703] (te_fence_node) notice: Requesting fencing (on) of node miles | action=5 timeout=60000
Oct 30 12:22:34 duke pacemaker-controld [1703] (te_fence_node) notice: Requesting fencing (on) of node duke | action=2 timeout=60000
Oct 30 12:22:34 duke pacemaker-fenced [1699] (handle_request) notice: Client pacemaker-controld.1703.470f8b4e wants to fence (on) 'miles' with device '(any)'
Oct 30 12:22:34 duke pacemaker-fenced [1699] (initiate_remote_stonith_op) notice: Requesting peer fencing (on) of miles | id=4d360268-d290-42e6-b28f-fd4d7649613b state=0
Oct 30 12:22:34 duke pacemaker-fenced [1699] (handle_request) notice: Client pacemaker-controld.1703.470f8b4e wants to fence (on) 'duke' with device '(any)'
Oct 30 12:22:34 duke pacemaker-fenced [1699] (initiate_remote_stonith_op) notice: Requesting peer fencing (on) of duke | id=90ca3294-5eb5-4c66-a298-cd5afcbbbd77 state=0
Oct 30 12:22:34 duke pacemaker-fenced [1699] (can_fence_host_with_device) notice: fence_miles can not fence (on) duke: static-list
Oct 30 12:22:34 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 1 of 2 from duke for miles/on (0 devices) 4d360268-d290-42e6-b28f-fd4d7649613b
Oct 30 12:22:34 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 2 of 2 from miles for miles/on (0 devices) 4d360268-d290-42e6-b28f-fd4d7649613b
Oct 30 12:22:34 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: All query replies have arrived, continuing (2 expected/2 received)
Oct 30 12:22:34 duke pacemaker-fenced [1699] (stonith_choose_peer) notice: Couldn't find anyone to fence (on) miles with any device
Oct 30 12:22:34 duke pacemaker-fenced [1699] (call_remote_stonith) info: Total timeout set to 60 for peer's fencing of miles for pacemaker-controld.1703|id=4d360268-d290-42e6-b28f-fd4d7649613b
Oct 30 12:22:34 duke pacemaker-fenced [1699] (call_remote_stonith) info: No peers (out of 2) have devices capable of fencing (on) miles for pacemaker-controld.1703 (0)
Oct 30 12:22:34 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 1 of 2 from miles for duke/on (0 devices) 90ca3294-5eb5-4c66-a298-cd5afcbbbd77
Oct 30 12:22:34 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: Query result 2 of 2 from duke for duke/on (0 devices) 90ca3294-5eb5-4c66-a298-cd5afcbbbd77
Oct 30 12:22:34 duke pacemaker-fenced [1699] (process_remote_stonith_query) info: All query replies have arrived, continuing (2 expected/2 received)
Oct 30 12:22:34 duke pacemaker-fenced [1699] (stonith_choose_peer) notice: Couldn't find anyone to fence (on) duke with any device
Oct 30 12:22:34 duke pacemaker-fenced [1699] (call_remote_stonith) info: Total timeout set to 60 for peer's fencing of duke for pacemaker-controld.1703|id=90ca3294-5eb5-4c66-a298-cd5afcbbbd77
Oct 30 12:22:34 duke pacemaker-fenced [1699] (call_remote_stonith) info: No peers (out of 2) have devices capable of fencing (on) duke for pacemaker-controld.1703 (0)
Oct 30 12:22:34 duke pacemaker-fenced [1699] (remote_op_done) error: Operation on of miles by <no-one> for pacemaker-controld.1703@duke.4d360268: No such device
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 25/5:69:0:5e3e0ef6-02a5-4f9a-b999-806413a3da12: No such device (-19)
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 25 for miles failed (No such device): aborting transition.
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_callback) warning: No devices found in cluster to fence miles, giving up
Oct 30 12:22:34 duke pacemaker-controld [1703] (abort_transition_graph) notice: Transition 69 aborted: Stonith failed | source=abort_for_stonith_failure:776 complete=false
Oct 30 12:22:34 duke pacemaker-fenced [1699] (remote_op_done) error: Operation on of duke by <no-one> for pacemaker-controld.1703@duke.90ca3294: No such device
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_notify) error: Unfencing of miles by <anyone> failed: No such device (-19)
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 26/2:69:0:5e3e0ef6-02a5-4f9a-b999-806413a3da12: No such device (-19)
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_callback) notice: Stonith operation 26 for duke failed (No such device): aborting transition.
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_callback) warning: No devices found in cluster to fence duke, giving up
Oct 30 12:22:34 duke pacemaker-controld [1703] (abort_transition_graph) info: Transition 69 aborted: Stonith failed | source=abort_for_stonith_failure:776 complete=false
Oct 30 12:22:34 duke pacemaker-controld [1703] (tengine_stonith_notify) error: Unfencing of duke by <anyone> failed: No such device (-19)
Oct 30 12:22:34 duke pacemaker-controld [1703] (run_graph) notice: Transition 69 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=8, Source=/var/lib/pacemaker/pengine/pe-input-28.bz2): Complete
Oct 30 12:22:34 duke pacemaker-controld [1703] (do_log) info: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Oct 30 12:22:34 duke pacemaker-controld [1703] (do_state_transition) notice: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Oct 30 12:22:37 duke pacemaker-based [1698] (cib_process_ping) info: Reporting our current digest to duke: 2eb5c8ee7e7df17c5737befc7d93de76 for 0.37.6 (0x55a5ab900f70 0)

#######

Here is my corosync config for your reference:

# Please read the corosync.conf.5 manual page
totem {
    version: 2
    cluster_name: debian
    token: 3000
    token_retransmits_before_loss_const: 10
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: 130.237.191.255
    }
}

logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    logfile: /var/log/corosync/corosync.log
    to_syslog: yes
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}

quorum {
    provider: corosync_votequorum
    two_node: 1
}

nodelist {
    node {
        name: duke
        nodeid: 1
        ring0_addr: XXXXXXXXXX
    }
    node {
        name: miles
        nodeid: 2
        ring0_addr: XXXXXXXXXX
    }
}
#######

I am completely out of ideas at this point, and I would appreciate any
help. Let me know if you guys need more details.

Thanks in advance!
Ram

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/