[ClusterLabs] stonith device locate on same host in active/passive cluster

Albert Weng weng.albert at gmail.com
Tue May 2 01:30:06 EDT 2017


Hi All,

The following corosync.log entries might help:

Apr 25 10:29:32 [15334] gmlcdbw02    pengine:     info: native_print:
ipmi-fence-db01    (stonith:fence_ipmilan):    Started gmlcdbw01
Apr 25 10:29:32 [15334] gmlcdbw02    pengine:     info: native_print:
ipmi-fence-db02    (stonith:fence_ipmilan):    Started gmlcdbw02

Apr 25 10:29:32 [15334] gmlcdbw02    pengine:     info: RecurringOp:
 Start recurring monitor (60s) for ipmi-fence-db01 on gmlcdbw02
Apr 25 10:29:32 [15334] gmlcdbw02    pengine:   notice: LogActions:
Move    ipmi-fence-db01    (Started gmlcdbw01 -> gmlcdbw02)
Apr 25 10:29:32 [15334] gmlcdbw02    pengine:     info: LogActions:
Leave   ipmi-fence-db02    (Started gmlcdbw02)
Apr 25 10:29:32 [15335] gmlcdbw02       crmd:   notice: te_rsc_command:
Initiating action 11: stop ipmi-fence-db01_stop_0 on gmlcdbw01
Apr 25 10:29:32 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_stop_0, @operation=stop,
@transition-key=11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=75, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=0
Apr 25 10:29:32 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_stop_0, @operation=stop,
@transition-key=11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=75, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=0
Apr 25 10:29:32 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_stop_0, @operation=stop,
@transition-key=11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;11:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=75, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=0
Apr 25 10:29:32 [15335] gmlcdbw02       crmd:     info:
match_graph_event:    Action ipmi-fence-db01_stop_0 (11) confirmed on
gmlcdbw01 (rc=0)
Apr 25 10:29:32 [15335] gmlcdbw02       crmd:   notice: te_rsc_command:
Initiating action 12: start ipmi-fence-db01_start_0 on gmlcdbw02 (local)
Apr 25 10:29:32 [15335] gmlcdbw02       crmd:     info: do_lrm_rsc_op:
Performing key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850
op=ipmi-fence-db01_start_0
Apr 25 10:29:32 [15332] gmlcdbw02       lrmd:     info: log_execute:
executing - rsc:ipmi-fence-db01 action:start call_id:65
Apr 25 10:29:32 [15332] gmlcdbw02       lrmd:     info: log_finished:
finished - rsc:ipmi-fence-db01 action:start call_id:65  exit-code:0
exec-time:45ms queue-time:0ms
Apr 25 10:29:33 [15335] gmlcdbw02       crmd:   notice:
process_lrm_event:    Operation ipmi-fence-db01_start_0: ok
(node=gmlcdbw02, call=65, rc=0, cib-update=2571, confirmed=true)
Apr 25 10:29:33 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_start_0, @operation=start,
@transition-key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=65, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=45
Apr 25 10:29:33 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_start_0, @operation=start,
@transition-key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=65, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=45
Apr 25 10:29:33 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_last_0']:
@operation_key=ipmi-fence-db01_start_0, @operation=start,
@transition-key=12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;12:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=65, @last-run=1493087372, @last-rc-change=1493087372, @exec-time=45
Apr 25 10:29:33 [15335] gmlcdbw02       crmd:     info:
match_graph_event:    Action ipmi-fence-db01_start_0 (12) confirmed on
gmlcdbw02 (rc=0)
Apr 25 10:29:33 [15335] gmlcdbw02       crmd:   notice: te_rsc_command:
Initiating action 13: monitor ipmi-fence-db01_monitor_60000 on gmlcdbw02
(local)
Apr 25 10:29:33 [15335] gmlcdbw02       crmd:     info: do_lrm_rsc_op:
Performing key=13:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850
op=ipmi-fence-db01_monitor_60000
Apr 25 10:29:33 [15335] gmlcdbw02       crmd:     info:
process_lrm_event:    Operation ipmi-fence-db01_monitor_60000: ok
(node=gmlcdbw02, call=66, rc=0, cib-update=2577, confirmed=false)
Apr 25 10:29:33 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_monitor_60000']:
@transition-key=13:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;13:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=66, @last-rc-change=1493087373, @exec-time=39
Apr 25 10:29:33 [15330] gmlcdbw02        cib:     info: cib_perform_op:
+
/cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources/lrm_resource[@id='ipmi-fence-db01']/lrm_rsc_op[@id='ipmi-fence-db01_monitor_60000']:
@transition-key=13:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@transition-magic=0:0;13:2485:0:27a91aab-060a-4de9-80b1-18abeb7bd850,
@call-id=66, @last-rc-change=1493087373, @exec-time=39
Apr 25 10:29:33 [15335] gmlcdbw02       crmd:     info:
match_graph_event:    Action ipmi-fence-db01_monitor_60000 (13) confirmed
on gmlcdbw02 (rc=0)
Apr 25 10:35:37 [15333] gmlcdbw02      attrd:     info: write_attribute:
Sent update 6 with 1 changes for fail-count-ipmi-fence-db02, id=<n/a>,
set=(null)
Apr 25 10:35:37 [15333] gmlcdbw02      attrd:     info:
attrd_cib_callback:    Update 6 for fail-count-ipmi-fence-db02: OK (0)
Apr 25 10:35:37 [15333] gmlcdbw02      attrd:     info:
attrd_cib_callback:    Update 6 for
fail-count-ipmi-fence-db02[gmlcdbw02]=(null): OK (0)

Apr 25 10:35:37 [15334] gmlcdbw02    pengine:     info: native_print:
ipmi-fence-db01    (stonith:fence_ipmilan):    Started gmlcdbw02
Apr 25 10:35:37 [15334] gmlcdbw02    pengine:     info: native_print:
ipmi-fence-db02    (stonith:fence_ipmilan):    Started gmlcdbw02

Apr 25 10:35:37 [15334] gmlcdbw02    pengine:     info: native_color:
Resource ipmi-fence-db01 cannot run anywhere
Apr 25 10:35:37 [15334] gmlcdbw02    pengine:     info: native_color:
Resource ipmi-fence-db02 cannot run anywhere
Apr 25 10:35:37 [15334] gmlcdbw02    pengine:   notice: LogActions:
Stop    ipmi-fence-db01    (gmlcdbw02)
Apr 25 10:35:37 [15334] gmlcdbw02    pengine:   notice: LogActions:
Stop    ipmi-fence-db02    (gmlcdbw02)
Apr 25 10:35:37 [15335] gmlcdbw02       crmd:   notice: te_rsc_command:
Initiating action 10: stop ipmi-fence-db01_stop_0 on gmlcdbw02 (local)

If I create location constraints, can I force ipmi-fence-db01 to stay on
gmlcdbw01?
# pcs constraint location ipmi-fence-db01 prefers gmlcdbw01
# pcs constraint location ipmi-fence-db02 prefers gmlcdbw02
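
For reference, the same preference could also be written with an explicit
score and then checked afterwards (just a sketch assuming the stock pcs
location syntax on RHEL 7; "prefers" defaults to a score of INFINITY when
none is given):

# pcs constraint location ipmi-fence-db01 prefers gmlcdbw01=INFINITY
# pcs constraint location ipmi-fence-db02 prefers gmlcdbw02=INFINITY
# pcs constraint --full

As far as I understand, this is only a preference, not a ban, so the fence
device could still fail over to the surviving node if its preferred node
goes down.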

Thanks.



On Tue, May 2, 2017 at 9:39 AM, Albert Weng <weng.albert at gmail.com> wrote:

> Hi All,
>
> I have created an active/passive Pacemaker cluster on RHEL 7.
>
> here is my environment:
> clustera : 192.168.11.1
> clusterb : 192.168.11.2
> clustera-ilo4 : 192.168.11.10
> clusterb-ilo4 : 192.168.11.11
>
> Both nodes are connected to SAN storage for shared storage.
>
> I used the following commands to create the stonith device for each node:
> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan
> lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.10" login=administrator passwd=1234322
> op monitor interval=60s
>
> # pcs -f stonith_cfg stonith create ipmi-fence-node02 fence_ipmilan
> lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list"
> action="reboot" ipaddr="192.168.11.11" login=USERID passwd=password
> op monitor interval=60s
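>
> (As a side note, since these were built with -f against the stonith_cfg
> file, I would push the staged CIB and then double-check the device
> options -- assuming standard pcs behaviour on RHEL 7 -- with:)
>
> # pcs cluster cib-push stonith_cfg
> # pcs stonith show --full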
>
> # pcs status
> ipmi-fence-node1                     clustera
> ipmi-fence-node2                     clusterb
>
> But after failing over to the passive node, I ran:
> # pcs status
>
> ipmi-fence-node1                    clusterb
> ipmi-fence-node2                    clusterb
>
> Why do both fence devices end up on the same node?
>
>
> --
> Kind regards,
> Albert Weng
>
>



-- 
Kind regards,
Albert Weng