[ClusterLabs] Issue with Stonith Resource parameters

vija ar vjav78 at gmail.com
Tue Mar 8 23:29:22 EST 2016


I have configured an SLE HA cluster on Cisco UCS boxes with IPMI set up, and
I have tested IPMI access using ipmitool. However, for ipmitool to work
against these BMCs I have to pass the -y parameter, i.e. the Kg <hex key>,
along with the username and password.
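
For reference, the working manual test looks roughly like this (the BMC
address and credentials are placeholders, and the key value is site-specific):

    # IPMI v2.0 (lanplus) session, authenticated with the BMC Kg key via -y
    ipmitool -I lanplus -H <bmc-address> -U <username> -P <password> \
             -y <hex key> chassis power status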

However, when configuring STONITH there does not seem to be any parameter in
the external/ipmi plugin (or elsewhere in pacemaker) for passing the
<hex key>, and because of that the STONITH resources are failing.

Can you please let me know if there is a way to add it, or is this a bug?
As a workaround I am considering wrapping ipmitool; see the sketch below.
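
A minimal sketch of the workaround, assuming the external/ipmi plugin accepts
an ipmitool parameter naming the binary it should run (I have not verified
that this parameter exists on my cluster-glue version):

    #!/bin/sh
    # /usr/local/sbin/ipmitool-kg (hypothetical wrapper): inject the Kg key
    # into every ipmitool invocation the stonith plugin performs.
    exec /usr/bin/ipmitool -y "<hex key>" "$@"

with a resource definition along these lines (address and credentials are
placeholders):

    crm configure primitive STONITH-server01 stonith:external/ipmi \
        params hostname=server01 ipaddr=<bmc-address> \
               userid=<username> passwd=<password> interface=lanplus \
               ipmitool=/usr/local/sbin/ipmitool-kg \
        op monitor interval=60s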

*******************

Log excerpt:

Mar  9 00:26:28 server02 stonith: external_status: 'ipmi status' failed
with rc 1
Mar  9 00:26:28 server02 stonith: external/ipmi device not accessible.
Mar  9 00:26:28 server02 stonith-ng[99114]:   notice: log_operation:
Operation 'monitor' [99200] for device 'STONITH-server02' returned: -201
(Generic Pacemaker error)
Mar  9 00:26:28 server02 stonith-ng[99114]:  warning: log_operation:
STONITH-server02:99200 [ Performing: stonith -t external/ipmi -S ]
Mar  9 00:26:28 server02 stonith-ng[99114]:  warning: log_operation:
STONITH-server02:99200 [ logd is not runningfailed:  1 ]
Mar  9 00:26:28 server02 crmd[99118]:    error: process_lrm_event: LRM
operation STONITH-server02_start_0 (call=13, status=4, cib-update=13,
confirmed=true) Error
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 35: fail-count-STONITH-server02=INFINITY
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: last-failure-STONITH-server02
(1457463388)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 37: last-failure-STONITH-server02=1457463388
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 39: fail-count-STONITH-server02=INFINITY
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: last-failure-STONITH-server02
(1457463388)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 41: last-failure-STONITH-server02=1457463388
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 43: fail-count-STONITH-server02=INFINITY
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: last-failure-STONITH-server02
(1457463388)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 45: last-failure-STONITH-server02=1457463388
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 47: fail-count-STONITH-server02=INFINITY
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update
relayed from server01
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: last-failure-STONITH-server02
(1457463388)
Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent
update 49: last-failure-STONITH-server02=1457463388
Mar  9 00:26:28 server02 crmd[99118]:   notice: process_lrm_event: LRM
operation STONITH-server02_stop_0 (call=14, rc=0, cib-update=14,
confirmed=true) ok
Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 9
(STONITH-server02_start_0) on server02 failed (target: 0 vs. rc: 1): Error
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server02 on server02 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server02 on server02 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 9
(STONITH-server02_start_0) on server02 failed (target: 0 vs. rc: 1): Error
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server02 on server02 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server02 on server02 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 stonith: external_status: 'ipmi status' failed
with rc 1
Mar  9 00:26:28 server01 stonith: external/ipmi device not accessible.
Mar  9 00:26:28 server01 stonith-ng[16805]:   notice: log_operation:
Operation 'monitor' [16891] for device 'STONITH-server01' returned: -201
(Generic Pacemaker error)
Mar  9 00:26:28 server01 stonith-ng[16805]:  warning: log_operation:
STONITH-server01:16891 [ Performing: stonith -t external/ipmi -S ]
Mar  9 00:26:28 server01 stonith-ng[16805]:  warning: log_operation:
STONITH-server01:16891 [ logd is not runningfailed:  1 ]
Mar  9 00:26:28 server01 crmd[16809]:    error: process_lrm_event: LRM
operation STONITH-server01_start_0 (call=13, status=4, cib-update=49,
confirmed=true) Error
Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 7
(STONITH-server01_start_0) on server01 failed (target: 0 vs. rc: 1): Error
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server01 on server01 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: fail-count-STONITH-server01 (INFINITY)
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server01 on server01 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 7
(STONITH-server01_start_0) on server01 failed (target: 0 vs. rc: 1): Error
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server01 on server01 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating
failcount for STONITH-server01 on server01 after failed start: rc=1
(update=INFINITY, time=1457463388)
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent
update 47: fail-count-STONITH-server01=INFINITY
Mar  9 00:26:28 server01 crmd[16809]:   notice: run_graph: Transition 3
(Complete=5, Pending=0, Fired=0, Skipped=2, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-70.bz2): Stopped
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: last-failure-STONITH-server01
(1457463388)
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent
update 49: last-failure-STONITH-server01=1457463388
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: fail-count-STONITH-server01 (INFINITY)
Mar  9 00:26:28 server01 pengine[16808]:   notice: unpack_config: On loss
of CCM Quorum: Ignore
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server02 away from server01 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server01 away from server02 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server02 away from server02 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Recover
STONITH-server01    (Started server01)
Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop
STONITH-server02    (server02)
Mar  9 00:26:28 server01 pengine[16808]:   notice: process_pe_message:
Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-71.bz2
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent
update 51: fail-count-STONITH-server01=INFINITY
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: last-failure-STONITH-server01
(1457463388)
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent
update 53: last-failure-STONITH-server01=1457463388
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: fail-count-STONITH-server01 (INFINITY)
Mar  9 00:26:28 server01 pengine[16808]:   notice: unpack_config: On loss
of CCM Quorum: Ignore
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server01 away from server01 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server02 away from server01 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server01 away from server02 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server02 away from server02 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop
STONITH-server01    (server01)
Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop
STONITH-server02    (server02)
Mar  9 00:26:28 server01 pengine[16808]:   notice: process_pe_message:
Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-72.bz2
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent
update 55: fail-count-STONITH-server01=INFINITY
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update:
Sending flush op to all hosts for: last-failure-STONITH-server01
(1457463388)
Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent
update 57: last-failure-STONITH-server01=1457463388
Mar  9 00:26:28 server01 pengine[16808]:   notice: unpack_config: On loss
of CCM Quorum: Ignore
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server01: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server02 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure:
Processing failed op start for STONITH-server01 on server02: unknown error
(1)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server01 away from server01 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server02 away from server01 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server01 away from server02 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness:
Forcing STONITH-server02 away from server02 after 1000000 failures (max=3)
Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop
STONITH-server01    (server01)
Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop
STONITH-server02    (server02)
Mar  9 00:26:28 server01 pengine[16808]:   notice: process_pe_message:
Calculated Transition 6: /var/lib/pacemaker/pengine/pe-input-73.bz2
Mar  9 00:26:28 server01 crmd[16809]:   notice: do_te_invoke: Processing
graph 6 (ref=pe_calc-dc-1457463388-32) derived from
/var/lib/pacemaker/pengine/pe-input-73.bz2
Mar  9 00:26:28 server01 crmd[16809]:   notice: te_rsc_command: Initiating
action 1: stop STONITH-server01_stop_0 on server01 (local)
Mar  9 00:26:28 server01 crmd[16809]:   notice: te_rsc_command: Initiating
action 2: stop STONITH-server02_stop_0 on server02
Mar  9 00:26:28 server01 crmd[16809]:   notice: process_lrm_event: LRM
operation STONITH-server01_stop_0 (call=14, rc=0, cib-update=55,
confirmed=true) ok
Mar  9 00:26:28 server01 crmd[16809]:   notice: run_graph: Transition 6
(Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-73.bz2): Complete
Mar  9 00:26:28 server01 crmd[16809]:   notice: do_state_transition: State
transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
cause=C_FSA_INTERNAL origin=notify_crmd ]
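
The fail counts have already reached INFINITY, so I assume that once the key
problem is solved I will also need to clean the resources up before pacemaker
will retry them, e.g.:

    # clear the accumulated start failures recorded above
    crm resource cleanup STONITH-server01
    crm resource cleanup STONITH-server02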