<div dir="ltr">here is the config ....<br><br><br><cib epoch="20" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.2" cib-last-written="Wed Mar  9 00:56:57 2016" update-origin="server02" update-client="cibadmin" update-user="hacluster" crm_feature_set="3.0.8" have-quorum="1" dc-uuid="server01"><br>  <configuration><br>    <crm_config><br>      <cluster_property_set id="cib-bootstrap-options"><br>        <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/><br>        <nvpair name="no-quorum-policy" value="ignore" id="cib-bootstrap-options-no-quorum-policy"/><br>        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-3ca8c3b"/><br>        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="classic openais (with plugin)"/><br>        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/><br>        <nvpair name="stonith-action" value="reboot" id="cib-bootstrap-options-stonith-action"/><br>        <nvpair name="stonith-timeout" value="150s" id="cib-bootstrap-options-stonith-timeout"/><br>      </cluster_property_set><br>    </crm_config><br>    <nodes><br>      <node id="server02" uname="server02"/><br>      <node id="server01" uname="server01"/><br>    </nodes><br>    <resources><br>      <primitive id="STONITH-server01" class="stonith" type="external/ipmi"><br>        <operations><br>          <op name="monitor" interval="0" timeout="60s" id="STONITH-server01-monitor-0"/><br>          <op name="monitor" interval="300s" timeout="60s" on-fail="restart" id="STONITH-server01-monitor-300s"/><br>          <op name="start" interval="0" timeout="60s" on-fail="restart" id="STONITH-server01-start-0"/><br>        </operations><br>        <instance_attributes id="STONITH-server01-instance_attributes"><br>          <nvpair name="hostname" value="server01" id="STONITH-server01-instance_attributes-hostname"/><br>          <nvpair name="ipaddr" value="server01-ipmi" id="STONITH-server01-instance_attributes-ipaddr"/><br>          <nvpair name="userid" value="administrator" id="STONITH-server01-instance_attributes-userid"/><br>          <nvpair name="passwd" value="To12" id="STONITH-server01-instance_attributes-passwd"/><br>          <nvpair name="interface" value="lanplus" id="STONITH-server01-instance_attributes-interface"/><br>        </instance_attributes><br>      </primitive><br>      <primitive id="STONITH-server02" class="stonith" type="external/ipmi"><br>        <operations><br>          <op name="monitor" interval="0" timeout="60s" id="STONITH-server02-monitor-0"/><br>          <op name="monitor" interval="300s" timeout="60s" on-fail="restart" id="STONITH-server02-monitor-300s"/><br>          <op name="start" interval="0" timeout="60s" on-fail="restart" id="STONITH-server02-start-0"/><br>        </operations><br>        <instance_attributes id="STONITH-server02-instance_attributes"><br>          <nvpair name="hostname" value="server02" id="STONITH-server02-instance_attributes-hostname"/><br>          <nvpair name="ipaddr" value="server02-ipmi" id="STONITH-server02-instance_attributes-ipaddr"/><br>          <nvpair name="userid" value="administrator" id="STONITH-server02-instance_attributes-userid"/><br>          <nvpair name="passwd" value="To12" id="STONITH-server02-instance_attributes-passwd"/><br>          <nvpair name="interface" value="lanplus" id="STONITH-server02-instance_attributes-interface"/><br>        </instance_attributes><br>  
    </primitive><br>      <primitive id="VIRTUAL-IP" class="ocf" provider="heartbeat" type="IPaddr2"><br>        <instance_attributes id="VIRTUAL-IP-instance_attributes"><br>          <nvpair name="ip" value="10.0.0.44" id="VIRTUAL-IP-instance_attributes-ip"/><br>        </instance_attributes><br>        <operations><br>          <op name="monitor" timeout="20s" interval="10s" id="VIRTUAL-IP-monitor-10s"/><br>        </operations><br>        <meta_attributes id="VIRTUAL-IP-meta_attributes"><br>          <nvpair name="is-managed" value="true" id="VIRTUAL-IP-meta_attributes-is-managed"/><br>          <nvpair name="target-role" value="Started" id="VIRTUAL-IP-meta_attributes-target-role"/><br>        </meta_attributes><br>      </primitive><br>    </resources><br>    <constraints><br>      <rsc_location id="LOC_STONITH_server01" rsc="STONITH-server01" score="INFINITY" node="server02"/><br>      <rsc_location id="LOC_STONITH_server02" rsc="STONITH-server02" score="INFINITY" node="server01"/><br>    </constraints><br>    <rsc_defaults><br>      <meta_attributes id="rsc-options"><br>        <nvpair name="migration-threshold" value="5000" id="rsc-options-migration-threshold"/><br>        <nvpair name="resource-stickiness" value="1000" id="rsc-options-resource-stickiness"/><br>      </meta_attributes><br>    </rsc_defaults><br>    <op_defaults><br>      <meta_attributes id="op-options"><br>        <nvpair name="timeout" value="600" id="op-options-timeout"/><br>        <nvpair name="record-pending" value="false" id="op-options-record-pending"/><br>      </meta_attributes><br>    </op_defaults><br>  </configuration><br></cib><br><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Mar 9, 2016 at 1:25 PM, emmanuel segura <span dir="ltr"><<a href="mailto:emi2fast@gmail.com" target="_blank">emi2fast@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I think you should give the parameters to the stonith agent, anyway<br>
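
For what it's worth, the status check behind the failing 'monitor' in the logs below can be run by hand with cluster-glue's stonith(8) utility, passing the same parameter values as the primitive, along these lines:

    # probe the fencing device the same way stonith-ng's monitor does
    # (values taken from the STONITH-server02 primitive above)
    stonith -t external/ipmi -S \
        hostname=server02 ipaddr=server02-ipmi \
        userid=administrator passwd=To12 interface=lanplus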
On Wed, Mar 9, 2016 at 1:25 PM, emmanuel segura <emi2fast@gmail.com> wrote:

I think you should give the parameters to the stonith agent; anyway,
show your config.
<div><div class="h5"><br>
2016-03-09 5:29 GMT+01:00 vija ar <vjav78@gmail.com>:
> I have configured a SLE HA cluster on Cisco UCS boxes with IPMI
> configured. I have tested IPMI using ipmitool; however, for ipmitool to
> function properly I have to pass the -y parameter, i.e. the <hex key>,
> along with the username and password.
>
> However, to configure STONITH there is no parameter in Pacemaker to
> pass the <hex key>, and because of that STONITH is failing.
>
> Can you please let me know if there is any way to add it, or is this a bug?
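
For reference, the ipmitool invocation being described looks like the
following; the host and credentials match the config above, and <hexkey>
stands in for the site-specific Kg key:

    # IPMIv2 (lanplus) session authenticated with the BMC's Kg key;
    # -y takes the key as plain hex digits
    ipmitool -I lanplus -H server02-ipmi -U administrator -P To12 \
        -y <hexkey> chassis power status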
>
> *******************
>
> Mar  9 00:26:28 server02 stonith: external_status: 'ipmi status' failed with rc 1
> Mar  9 00:26:28 server02 stonith: external/ipmi device not accessible.
> Mar  9 00:26:28 server02 stonith-ng[99114]:   notice: log_operation: Operation 'monitor' [99200] for device 'STONITH-server02' returned: -201 (Generic Pacemaker error)
> Mar  9 00:26:28 server02 stonith-ng[99114]:  warning: log_operation: STONITH-server02:99200 [ Performing: stonith -t external/ipmi -S ]
> Mar  9 00:26:28 server02 stonith-ng[99114]:  warning: log_operation: STONITH-server02:99200 [ logd is not runningfailed:  1 ]
> Mar  9 00:26:28 server02 crmd[99118]:    error: process_lrm_event: LRM operation STONITH-server02_start_0 (call=13, status=4, cib-update=13, confirmed=true) Error
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 35: fail-count-STONITH-server02=INFINITY
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-STONITH-server02 (1457463388)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 37: last-failure-STONITH-server02=1457463388
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 39: fail-count-STONITH-server02=INFINITY
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-STONITH-server02 (1457463388)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 41: last-failure-STONITH-server02=1457463388
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 43: fail-count-STONITH-server02=INFINITY
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-STONITH-server02 (1457463388)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 45: last-failure-STONITH-server02=1457463388
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-STONITH-server02 (INFINITY)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 47: fail-count-STONITH-server02=INFINITY
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_cs_dispatch: Update relayed from server01
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-STONITH-server02 (1457463388)
> Mar  9 00:26:28 server02 attrd[99116]:   notice: attrd_perform_update: Sent update 49: last-failure-STONITH-server02=1457463388
> Mar  9 00:26:28 server02 crmd[99118]:   notice: process_lrm_event: LRM operation STONITH-server02_stop_0 (call=14, rc=0, cib-update=14, confirmed=true) ok
> Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 9 (STONITH-server02_start_0) on server02 failed (target: 0 vs. rc: 1): Error
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server02 on server02 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server02 on server02 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 9 (STONITH-server02_start_0) on server02 failed (target: 0 vs. rc: 1): Error
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server02 on server02 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server02 on server02 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 stonith: external_status: 'ipmi status' failed with rc 1
> Mar  9 00:26:28 server01 stonith: external/ipmi device not accessible.
> Mar  9 00:26:28 server01 stonith-ng[16805]:   notice: log_operation: Operation 'monitor' [16891] for device 'STONITH-server01' returned: -201 (Generic Pacemaker error)
> Mar  9 00:26:28 server01 stonith-ng[16805]:  warning: log_operation: STONITH-server01:16891 [ Performing: stonith -t external/ipmi -S ]
> Mar  9 00:26:28 server01 stonith-ng[16805]:  warning: log_operation: STONITH-server01:16891 [ logd is not runningfailed:  1 ]
> Mar  9 00:26:28 server01 crmd[16809]:    error: process_lrm_event: LRM operation STONITH-server01_start_0 (call=13, status=4, cib-update=49, confirmed=true) Error
> Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 7 (STONITH-server01_start_0) on server01 failed (target: 0 vs. rc: 1): Error
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server01 on server01 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-STONITH-server01 (INFINITY)
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server01 on server01 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 crmd[16809]:  warning: status_from_rc: Action 7 (STONITH-server01_start_0) on server01 failed (target: 0 vs. rc: 1): Error
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server01 on server01 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 crmd[16809]:  warning: update_failcount: Updating failcount for STONITH-server01 on server01 after failed start: rc=1 (update=INFINITY, time=1457463388)
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent update 47: fail-count-STONITH-server01=INFINITY
> Mar  9 00:26:28 server01 crmd[16809]:   notice: run_graph: Transition 3 (Complete=5, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-70.bz2): Stopped
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-STONITH-server01 (1457463388)
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent update 49: last-failure-STONITH-server01=1457463388
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-STONITH-server01 (INFINITY)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: unpack_config: On loss of CCM Quorum: Ignore
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server02 away from server01 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server01 away from server02 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server02 away from server02 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Recover STONITH-server01    (Started server01)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop STONITH-server02    (server02)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-71.bz2
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent update 51: fail-count-STONITH-server01=INFINITY
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-STONITH-server01 (1457463388)
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent update 53: last-failure-STONITH-server01=1457463388
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-STONITH-server01 (INFINITY)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: unpack_config: On loss of CCM Quorum: Ignore
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server01 away from server01 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server02 away from server01 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server01 away from server02 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server02 away from server02 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop STONITH-server01    (server01)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop STONITH-server02    (server02)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-72.bz2
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent update 55: fail-count-STONITH-server01=INFINITY
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-STONITH-server01 (1457463388)
> Mar  9 00:26:28 server01 attrd[16807]:   notice: attrd_perform_update: Sent update 57: last-failure-STONITH-server01=1457463388
> Mar  9 00:26:28 server01 pengine[16808]:   notice: unpack_config: On loss of CCM Quorum: Ignore
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server01: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server02 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: unpack_rsc_op_failure: Processing failed op start for STONITH-server01 on server02: unknown error (1)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server01 away from server01 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server02 away from server01 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server01 away from server02 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:  warning: common_apply_stickiness: Forcing STONITH-server02 away from server02 after 1000000 failures (max=3)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop STONITH-server01    (server01)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: LogActions: Stop STONITH-server02    (server02)
> Mar  9 00:26:28 server01 pengine[16808]:   notice: process_pe_message: Calculated Transition 6: /var/lib/pacemaker/pengine/pe-input-73.bz2
> Mar  9 00:26:28 server01 crmd[16809]:   notice: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1457463388-32) derived from /var/lib/pacemaker/pengine/pe-input-73.bz2
> Mar  9 00:26:28 server01 crmd[16809]:   notice: te_rsc_command: Initiating action 1: stop STONITH-server01_stop_0 on server01 (local)
> Mar  9 00:26:28 server01 crmd[16809]:   notice: te_rsc_command: Initiating action 2: stop STONITH-server02_stop_0 on server02
> Mar  9 00:26:28 server01 crmd[16809]:   notice: process_lrm_event: LRM operation STONITH-server01_stop_0 (call=14, rc=0, cib-update=55, confirmed=true) ok
> Mar  9 00:26:28 server01 crmd[16809]:   notice: run_graph: Transition 6 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-73.bz2): Complete
> Mar  9 00:26:28 server01 crmd[16809]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
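
Since there is no Kg-key attribute to put in the external/ipmi
instance_attributes above, one possible workaround is a private copy of
the plugin that forwards an extra attribute to ipmitool's -y option.
What follows is a minimal sketch only: the plugin name ipmi-kg, its
path, and the hexkey parameter are all made up for illustration. The
external/* plugins receive their instance attributes as environment
variables and the requested action as $1:

    #!/bin/sh
    # Hypothetical /usr/lib64/stonith/plugins/external/ipmi-kg
    # Sketch of a custom external STONITH plugin that adds an IPMIv2 Kg
    # key on top of the stock external/ipmi behaviour.  The "hexkey"
    # parameter is an assumption, not part of stock cluster-glue.
    # Instance attributes arrive as environment variables:
    #   $hostname $ipaddr $userid $passwd $interface $hexkey

    ipmi() {
        # same invocation as the manual test above, Kg key passed via -y
        ipmitool -I "${interface:-lanplus}" -H "$ipaddr" -U "$userid" \
                 -P "$passwd" -y "$hexkey" chassis power "$1"
    }

    case "$1" in
    gethosts)
        echo "$hostname"
        exit 0
        ;;
    on|off|reset)
        # the node to fence is passed as $2; this device only controls
        # $hostname, which the gethosts answer above advertises
        ipmi "$1"
        ;;
    status)
        # this is the check behind the failing 'monitor'/'start' ops above
        ipmi status >/dev/null 2>&1
        ;;
    getconfignames)
        for name in hostname ipaddr userid passwd interface hexkey; do
            echo "$name"
        done
        exit 0
        ;;
    getinfo-devid|getinfo-devname)
        echo "ipmi-kg STONITH device"
        exit 0
        ;;
    getinfo-devdescr)
        echo "ipmitool-based STONITH agent with IPMIv2 Kg key (-y) support"
        exit 0
        ;;
    getinfo-devurl)
        echo "http://www.clusterlabs.org"
        exit 0
        ;;
    getinfo-xml)
        # abbreviated here; copy the <parameters> block from the stock
        # external/ipmi plugin and add a hexkey entry
        echo "<parameters></parameters>"
        exit 0
        ;;
    *)
        exit 1
        ;;
    esac

The STONITH primitives would then reference type="external/ipmi-kg" and
carry one extra nvpair for hexkey; treat that attribute like the
password, since the Kg key grants the same level of access.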
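
Note also that the failed starts above have already driven the
fail-count-STONITH-* attributes to INFINITY, which is why the pengine
keeps forcing both resources away from both nodes. Once the agent
actually works, the recorded failures have to be cleared before
Pacemaker will try the devices again, e.g. with crmsh:

    # forget the start failures so the fencing devices can run again
    crm resource cleanup STONITH-server01
    crm resource cleanup STONITH-server02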

--
  .~.
  /V\
 //  \\
/(   )\
^`~'^
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org