Oct 14 13:10:21 node1 stonithd: [12327]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:10:21 node1 stonithd: [12327]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:10:21 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12327] exited, its exit code: 0 when signo=0.
Oct 14 13:10:21 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:10:21 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12326 (pid=12326) signed off
Oct 14 13:10:23 node1 vmrdra[12339]: INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:10:23 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:10:23 INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:10:23 node1 vmrdra[12339]: INFO: vmrd status: STANDBY
Oct 14 13:10:23 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:10:23 INFO: vmrd status: STANDBY
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.41 -> 0.69.42 (S_IDLE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 crmd: [4034]: info: process_graph_event: Detected action testdummy-res:0_monitor_10000 from a different transition: 93 vs. 101
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: process_graph_event:462 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=testdummy-res:0_monitor_10000, magic=0:1;9:93:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8, cib=0.69.42) : Old event
Oct 14 13:10:24 node1 crmd: [4034]: WARN: update_failcount: Updating failcount for testdummy-res:0 on node2 after failed monitor: rc=1 (update=value++, time=1255547424)
Oct 14 13:10:24 node1 crmd: [4034]: debug: attrd_update: Sent update: fail-count-testdummy-res:0=value++ for node2
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: attrd_update: Sent update: last-failure-testdummy-res:0=1255547424 for node2
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 14 13:10:24 node1 crmd: [4034]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 14 13:10:24 node1 crmd: [4034]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke: Query 292: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 193 
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.42): ok (rc=0)
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 19b 
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1255547424-273, seq=91292, quorate=1
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.42 -> 0.69.43 (S_POLICY_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=node2, magic=NA, cib=0.69.43) : Transient attribute: update
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-node2-fail-count-testdummy-res:0" name="fail-count-testdummy-res:0" value="1" __crm_diff_marker__="added:top" />
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke: Query 293: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH timeout: 60000
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH of failed nodes is enabled
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Stop all active resources: false
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Default stickiness: 0
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node1 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node2 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: testdummy-res:0_monitor_10000 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: unpack_rsc_op: Processing failed op testdummy-res:0_monitor_10000 on node2: unknown error
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node2 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: testdummy-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: native_print:     testdummy-res:0#011(ocf::peakpoint:testdummy):#011Started node2 FAILED
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: ipmi-stonith-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node2 node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Master/Slave Set: vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Masters: [ node2 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Slaves: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: native_print: vsstvm-res#011(ocf::peakpoint:vsstvm):#011Started node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_rsc_location: Constraint (vmrd-master-prefer-location-rule) is not active (role : Master)
Oct 14 13:10:24 node1 pengine: last message repeated 2 times
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: info: get_failcount: ipmi-stonith-clone has failed 1 times on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: common_apply_stickiness: ipmi-stonith-clone can fail 999999 more times on node1 before being forced off
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to testdummy-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to testdummy-res:0
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 2 testdummy-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to ipmi-stonith-res:0
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to ipmi-stonith-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 2 ipmi-stonith-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to vmrd-res:0
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vmrd-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 2 vmrd-master-res instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 200
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: Promoting vmrd-res:0 (Master node2)
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 105
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 200
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: -1000000
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to vsstvm-res
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (10s) for testdummy-res:0 on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 10 times
Oct 14 13:10:24 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Recover resource testdummy-res:0#011(Started node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:0#011(Started node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource vmrd-res:0#011(Master node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource vmrd-res:1#011(Slave node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource vsstvm-res#011(Started node2)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 15 times
Oct 14 13:10:24 node1 crmd: [4034]: info: handle_response: pe_calc calculation pe_calc-dc-1255547424-273 is obsolete
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.43 -> 0.69.44 (S_POLICY_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=node2, magic=NA, cib=0.69.44) : Transient attribute: update
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-node2-last-failure-testdummy-res:0" name="last-failure-testdummy-res:0" value="1255547424" __crm_diff_marker__="added:top" />
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke: Query 294: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:10:24 node1 cib: [4030]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='node1']//nvpair[@name='fail-count-testdummy-res:0'] does not exist
Oct 14 13:10:24 node1 attrd: [4032]: debug: attrd_cib_callback: Update -22 for fail-count-testdummy-res:0=(null) passed
Oct 14 13:10:24 node1 pengine: [4033]: info: process_pe_message: Transition 102: PEngine Input stored in: /var/lib/pengine/pe-input-5842.bz2
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.44): ok (rc=0)
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 1a2 
Oct 14 13:10:24 node1 cib: [4030]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='node1']//nvpair[@name='last-failure-testdummy-res:0'] does not exist
Oct 14 13:10:24 node1 attrd: [4032]: debug: attrd_cib_callback: Update -22 for last-failure-testdummy-res:0=(null) passed
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1255547424-274, seq=91292, quorate=1
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH timeout: 60000
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH of failed nodes is enabled
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Stop all active resources: false
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Default stickiness: 0
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node1 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node2 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: testdummy-res:0_monitor_10000 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: unpack_rsc_op: Processing failed op testdummy-res:0_monitor_10000 on node2: unknown error
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node2 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: testdummy-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: native_print:     testdummy-res:0#011(ocf::peakpoint:testdummy):#011Started node2 FAILED
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: ipmi-stonith-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node2 node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Master/Slave Set: vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:0 active on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Masters: [ node2 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Slaves: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: native_print: vsstvm-res#011(ocf::peakpoint:vsstvm):#011Started node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_rsc_location: Constraint (vmrd-master-prefer-location-rule) is not active (role : Master)
Oct 14 13:10:24 node1 pengine: last message repeated 2 times
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: info: get_failcount: testdummy-clone has failed 1 times on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: common_apply_stickiness: testdummy-clone can fail 999999 more times on node2 before being forced off
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: info: get_failcount: ipmi-stonith-clone has failed 1 times on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: common_apply_stickiness: ipmi-stonith-clone can fail 999999 more times on node1 before being forced off
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to testdummy-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to testdummy-res:0
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 2 testdummy-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to ipmi-stonith-res:0
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to ipmi-stonith-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 2 ipmi-stonith-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to vmrd-res:0
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vmrd-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 2 vmrd-master-res instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 200
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: Promoting vmrd-res:0 (Master node2)
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 105
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 200
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: -1000000
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node2 to vsstvm-res
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (10s) for testdummy-res:0 on node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 10 times
Oct 14 13:10:24 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Recover resource testdummy-res:0#011(Started node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:0#011(Started node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource vmrd-res:0#011(Master node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource vmrd-res:1#011(Slave node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource vsstvm-res#011(Started node2)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 15 times
Oct 14 13:10:24 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_LOG   
Oct 14 13:10:24 node1 crmd: [4034]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_TE_INVOKE
Oct 14 13:10:24 node1 crmd: [4034]: info: unpack_graph: Unpacked transition 103: 8 actions in 8 synapses
Oct 14 13:10:24 node1 crmd: [4034]: info: do_te_invoke: Processing graph 103 (ref=pe_calc-dc-1255547424-274) derived from /var/lib/pengine/pe-input-5843.bz2
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 103 (Complete=0, Pending=0, Fired=1, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-input-5843.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: info: te_rsc_command: Initiating action 6: stop testdummy-res:0_stop_0 on node2
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 103 (Complete=1, Pending=1, Fired=1, Skipped=0, Incomplete=6, Source=/var/lib/pengine/pe-input-5843.bz2): In-progress
Oct 14 13:10:24 node1 pengine: [4033]: info: process_pe_message: Transition 103: PEngine Input stored in: /var/lib/pengine/pe-input-5843.bz2
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.44 -> 0.69.45 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: WARN: status_from_rc: Action 6 (testdummy-res:0_stop_0) on node2 failed (target: 0 vs. rc: 1): Error
Oct 14 13:10:24 node1 crmd: [4034]: WARN: update_failcount: Updating failcount for testdummy-res:0 on node2 after failed stop: rc=1 (update=INFINITY, time=1255547424)
Oct 14 13:10:24 node1 crmd: [4034]: debug: attrd_update: Sent update: fail-count-testdummy-res:0=INFINITY for node2
Oct 14 13:10:24 node1 crmd: [4034]: debug: attrd_update: Sent update: last-failure-testdummy-res:0=1255547424 for node2
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=testdummy-res:0_stop_0, magic=0:1;6:103:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8, cib=0.69.45) : Event failed
Oct 14 13:10:24 node1 crmd: [4034]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Oct 14 13:10:24 node1 crmd: [4034]: info: update_abort_priority: Abort action done superceeded by restart
Oct 14 13:10:24 node1 crmd: [4034]: info: match_graph_event: Action testdummy-res:0_stop_0 (6) confirmed on node2 (rc=4)
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 18 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 103 (Complete=2, Pending=0, Fired=1, Skipped=4, Incomplete=1, Source=/var/lib/pengine/pe-input-5843.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: info: run_graph: ====================================================
Oct 14 13:10:24 node1 crmd: [4034]: notice: run_graph: Transition 103 (Complete=3, Pending=0, Fired=0, Skipped=4, Incomplete=1, Source=/var/lib/pengine/pe-input-5843.bz2): Stopped
Oct 14 13:10:24 node1 crmd: [4034]: info: te_graph_trigger: Transition 103 is now complete
Oct 14 13:10:24 node1 crmd: [4034]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Oct 14 13:10:24 node1 crmd: [4034]: debug: notify_crmd: Transition 103 status: restart - Event failed
Oct 14 13:10:24 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:10:24 node1 crmd: [4034]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:10:24 node1 crmd: [4034]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke: Query 295: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.45): ok (rc=0)
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1255547424-276, seq=91292, quorate=1
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.45 -> 0.69.46 (S_POLICY_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=node2, magic=NA, cib=0.69.46) : Transient attribute: update
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause     <nvpair value="INFINITY" id="status-node2-fail-count-testdummy-res:0" />
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke: Query 296: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:10:24 node1 cib: [4030]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='node1']//nvpair[@name='fail-count-testdummy-res:0'] does not exist
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH timeout: 60000
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH of failed nodes is enabled
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Stop all active resources: false
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Default stickiness: 0
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node1 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node2 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: testdummy-res:0_monitor_10000 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: unpack_rsc_op: Processing failed op testdummy-res:0_monitor_10000 on node2: unknown error
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: testdummy-res:0_stop_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: unpack_rsc_op: Processing failed op testdummy-res:0_stop_0 on node2: unknown error
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node2 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: testdummy-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:0: node node2 is unclean
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Stopped: [ testdummy-res:0 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: ipmi-stonith-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:0: node node2 is unclean
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Stopped: [ ipmi-stonith-res:0 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Master/Slave Set: vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:0: node node2 is unclean
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Slaves: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Stopped: [ vmrd-res:0 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: native_print: vsstvm-res#011(ocf::peakpoint:vsstvm):#011Started node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_rsc_location: Constraint (vmrd-master-prefer-location-rule) is not active (role : Master)
Oct 14 13:10:24 node1 pengine: last message repeated 2 times
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: info: get_failcount: testdummy-clone has failed 1 times on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: common_apply_stickiness: testdummy-clone can fail 999999 more times on node2 before being forced off
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: info: get_failcount: ipmi-stonith-clone has failed 1 times on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: common_apply_stickiness: ipmi-stonith-clone can fail 999999 more times on node1 before being forced off
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to testdummy-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource testdummy-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_color: Resource testdummy-res:0 cannot run anywhere
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 1 testdummy-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to ipmi-stonith-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource ipmi-stonith-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_color: Resource ipmi-stonith-res:0 cannot run anywhere
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 1 ipmi-stonith-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vmrd-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource vmrd-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_color: Resource vmrd-res:0 cannot run anywhere
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 1 vmrd-master-res instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 105
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: Promoting vmrd-res:1 (Slave node1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 205
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vsstvm-res
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 5 times
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: WARN: stage6: Scheduling Node node2 for STONITH
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_stop_constraints: Stop of failed resource testdummy-res:0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: info: native_start_constraints: Ordering testdummy-res:1_start_0 after node2 recovery
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: ipmi-stonith-res:0_stop_0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: vmrd-res:0_stop_0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: Creating secondary notification for vmrd-res:0_stop_0
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: info: native_start_constraints: Ordering vmrd-res:1_start_0 after node2 recovery
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: vsstvm-res_stop_0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:10:24 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Stop resource testdummy-res:0#011(node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Stop resource ipmi-stonith-res:0#011(node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Demote vmrd-res:0#011(Master -> Stopped node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Stop resource vmrd-res:0#011(node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Promote vmrd-res:1#011(Slave -> Master node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Move resource vsstvm-res#011(Started node2 -> node1)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 5 times
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 3 times
Oct 14 13:10:24 node1 attrd: [4032]: debug: attrd_cib_callback: Update -22 for fail-count-testdummy-res:0=(null) passed
Oct 14 13:10:24 node1 crmd: [4034]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1255547424-277, seq=91292, quorate=1
Oct 14 13:10:24 node1 pengine: [4033]: WARN: process_pe_message: Transition 104: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-26645.bz2
Oct 14 13:10:24 node1 pengine: [4033]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Oct 14 13:10:24 node1 crmd: [4034]: info: handle_response: pe_calc calculation pe_calc-dc-1255547424-276 is obsolete
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Oct 14 13:10:24 node1 pengine: [4033]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH timeout: 60000
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: STONITH of failed nodes is enabled
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Stop all active resources: false
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Oct 14 13:10:24 node1 pengine: [4033]: debug: unpack_config: Default stickiness: 0
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
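
The cluster_option lines above mean none of these options is set explicitly in the CIB's crm_config section; the PE falls back to its compiled-in defaults and unpack_config then summarizes the effective values. Overriding one is a single property update; a sketch with crm_attribute, where the chosen option and value are purely illustrative:

    # Set an explicit stonith-timeout instead of relying on the 60s default
    crm_attribute -t crm_config -n stonith-timeout -v 90s
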
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node1 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: determine_online_status: Node node2 is online
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: testdummy-res:0_monitor_10000 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: unpack_rsc_op: Processing failed op testdummy-res:0_monitor_10000 on node2: unknown error
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: testdummy-res:0_stop_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: unpack_rsc_op: Processing failed op testdummy-res:0_stop_0 on node2: unknown error
Oct 14 13:10:24 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node2 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:10:24 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node2
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: testdummy-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:0: node node2 is unclean
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Stopped: [ testdummy-res:0 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Clone Set: ipmi-stonith-clone
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:0: node node2 is unclean
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Stopped: [ ipmi-stonith-res:0 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: clone_print: Master/Slave Set: vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:0: node node2 is unclean
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Slaves: [ node1 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: print_list: #011Stopped: [ vmrd-res:0 ]
Oct 14 13:10:24 node1 pengine: [4033]: notice: native_print: vsstvm-res#011(ocf::peakpoint:vsstvm):#011Started node2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_rsc_location: Constraint (vmrd-master-prefer-location-rule) is not active (role : Master)
Oct 14 13:10:24 node1 pengine: last message repeated 2 times
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: info: get_failcount: testdummy-clone has failed 1000000 times on node2
Oct 14 13:10:24 node1 pengine: [4033]: WARN: common_apply_stickiness: Forcing testdummy-clone away from node2 after 1000000 failures (max=1000000)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:10:24 node1 pengine: [4033]: info: get_failcount: ipmi-stonith-clone has failed 1 times on node1
Oct 14 13:10:24 node1 pengine: [4033]: notice: common_apply_stickiness: ipmi-stonith-clone can fail 999999 more times on node1 before being forced off
Oct 14 13:10:24 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:1: preferring current location (node=node1, weight=1)
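
The fail-count recorded for testdummy-res:0 is what forces testdummy-clone away from node2 above: 1000000 is Pacemaker's representation of INFINITY, so the migration threshold is permanently exceeded. Once the underlying fault is repaired, the counter has to be cleared before the PE will place the resource there again; a sketch using the crm shell (resource and node names from the log):

    # Inspect the accumulated failures for the clone on node2
    crm resource failcount testdummy-res show node2

    # Clear the failure history and re-probe the resource
    crm resource cleanup testdummy-res node2
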
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to testdummy-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource testdummy-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_color: Resource testdummy-res:0 cannot run anywhere
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 1 testdummy-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to ipmi-stonith-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource ipmi-stonith-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_color: Resource ipmi-stonith-res:0 cannot run anywhere
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 1 ipmi-stonith-clone instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vmrd-res:1
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource vmrd-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_color: Resource vmrd-res:0 cannot run anywhere
Oct 14 13:10:24 node1 pengine: [4033]: debug: clone_color: Allocated 1 vmrd-master-res instances of a possible 2
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 105
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: Promoting vmrd-res:1 (Slave node1)
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 205
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:10:24 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:10:24 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vsstvm-res
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 5 times
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: WARN: stage6: Scheduling Node node2 for STONITH
Oct 14 13:10:24 node1 pengine: [4033]: WARN: native_stop_constraints: Stop of failed resource testdummy-res:0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: info: native_start_constraints: Ordering testdummy-res:1_start_0 after node2 recovery
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: ipmi-stonith-res:0_stop_0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: vmrd-res:0_stop_0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: Creating secondary notification for vmrd-res:0_stop_0
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:10:24 node1 pengine: [4033]: info: RecurringOp: Cancelling action vmrd-res:1_monitor_9000 (Slave vs. Master)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: info: native_start_constraints: Ordering vmrd-res:1_start_0 after node2 recovery
Oct 14 13:10:24 node1 pengine: [4033]: info: native_stop_constraints: vsstvm-res_stop_0 is implicit after node2 is fenced
Oct 14 13:10:24 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:10:24 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Stop resource testdummy-res:0#011(node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Stop resource ipmi-stonith-res:0#011(node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:1#011(Started node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Demote vmrd-res:0#011(Master -> Stopped node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Stop resource vmrd-res:0#011(node2)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Promote vmrd-res:1#011(Slave -> Master node1)
Oct 14 13:10:24 node1 pengine: [4033]: notice: LogActions: Move resource vsstvm-res#011(Started node2 -> node1)
Oct 14 13:10:24 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:10:24 node1 pengine: last message repeated 9 times
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.46): ok (rc=0)
Oct 14 13:10:24 node1 pengine: [4033]: WARN: process_pe_message: Transition 105: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-26646.bz2
Oct 14 13:10:24 node1 pengine: [4033]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
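
The RecurringOp lines in this transition show two role-qualified monitors on the master/slave set: the 9s Slave-role monitor is cancelled and a 7s Master-role monitor is started as vmrd-res:1 is promoted. That behaviour comes from per-role op definitions in the resource; a sketch of the corresponding crm shell configuration (resource parameters omitted, intervals taken from the log, meta values inferred from the "Promoted 1 instances of a possible 1" and notify lines):

    primitive vmrd-res ocf:peakpoint:vmrdra \
        op monitor interval=9s role=Slave \
        op monitor interval=7s role=Master
    ms vmrd-master-res vmrd-res \
        meta master-max=1 clone-max=2 notify=true
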
Oct 14 13:10:24 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_LOG   
Oct 14 13:10:24 node1 crmd: [4034]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_TE_INVOKE
Oct 14 13:10:24 node1 crmd: [4034]: info: unpack_graph: Unpacked transition 105: 46 actions in 46 synapses
Oct 14 13:10:24 node1 crmd: [4034]: info: do_te_invoke: Processing graph 105 (ref=pe_calc-dc-1255547424-277) derived from /var/lib/pengine/pe-warn-26646.bz2
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 21 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_rsc_command: Initiating action 2: cancel vmrd-res:1_monitor_9000 on node1 (local)
Oct 14 13:10:24 node1 crmd: [4034]: debug: do_lrm_invoke: PE requested op vmrd-res:1_monitor_9000 (call=NA) be cancelled
Oct 14 13:10:24 node1 crmd: [4034]: debug: cancel_op: Scheduling vmrd-res:1:59 for removal
Oct 14 13:10:24 node1 crmd: [4034]: debug: cancel_op: Cancelling op 59 for vmrd-res:1 (vmrd-res:1:59)
Oct 14 13:10:24 node1 lrmd: [4031]: debug: cancel_op: operation monitor[59] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_interval=[9000] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_start_resource=[vmrd-res:0 vmrd-res:1 ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_notify_master_resource=[ ] CRM_meta cancelled
Oct 14 13:10:24 node1 crmd: [4034]: info: send_direct_ack: ACK'ing resource op vmrd-res:1_monitor_9000 from 2:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8: lrm_invoke-lrmd-1255547424-279
Oct 14 13:10:24 node1 crmd: [4034]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1255547424-279 from node1
Oct 14 13:10:24 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_monitor_9000 (2) confirmed on node1 (rc=0)
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 50 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 56 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_fence_node: Executing reboot fencing operation (58) on node2 (timeout=60000)
Oct 14 13:10:24 node1 crmd: [4034]: debug: waiting for the stonith reply msg.
Oct 14 13:10:24 node1 stonithd: [4029]: info: client tengine [pid: 4034] requests a STONITH operation RESET on node node2
Oct 14 13:10:24 node1 stonithd: [4029]: debug: get_local_stonithobj_can_stonith:2820: next stonith resource ipmi-stonith-res:1, priority 0
Oct 14 13:10:24 node1 stonithd: [12365]: debug: external_reset_req: called.
Oct 14 13:10:24 node1 stonithd: [12365]: debug: Host external-reset initiating on node2
Oct 14 13:10:24 node1 stonithd: [12365]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi reset node2'
Oct 14 13:10:24 node1 stonithd: [4029]: info: stonith_operate_locally::2688: sending fencing op RESET for node2 to ipmi-stonith-res:1 (external/ipmi) (pid=12365)
Oct 14 13:10:24 node1 stonithd: [4029]: debug: inserted optype=RESET, key=12365
Oct 14 13:10:24 node1 stonithd: [4029]: debug: stonithd_node_fence: sent back a synchronous reply.
Oct 14 13:10:24 node1 crmd: [4034]: debug: stonithd_node_fence:582: stonithd's synchronous answer is ST_APIOK
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=0, Pending=1, Fired=6, Skipped=0, Incomplete=40, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: debug: delete_op_entry: async: Sending delete op for vmrd-res:1_monitor_9000 (call=59)
Oct 14 13:10:24 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_monitor_9000 (call=59, rc=-2, cib-update=0, confirmed=true) Cancelled unknown exec error
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 22 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_rsc_command: Initiating action 72: notify vmrd-res:0_pre_notify_demote_0 on node2
Oct 14 13:10:24 node1 crmd: [4034]: info: te_rsc_command: Initiating action 74: notify vmrd-res:1_pre_notify_demote_0 on node1 (local)
Oct 14 13:10:24 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=74:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_notify_0 )
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation notify[60] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_notify_stop_resource=[vmrd-res:0 ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_operation=[demote] CRM_meta_notify_slave_resource=[vmrd-res:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[node2 ] CRM_meta_notify_master_uname=[node2 ] CRM_meta_notify_demote_uname=[silvertho to the operation list.
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 lrmd: [4031]: info: rsc:vmrd-res:1:60: notify
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=5, Pending=3, Fired=4, Skipped=0, Incomplete=36, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_delete): 0.69.46 -> 0.69.47 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Deleted lrm_rsc_op vmrd-res:1_monitor_9000 on node1 was for graph event 2
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=7, Pending=3, Fired=0, Skipped=0, Incomplete=36, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 vmrdra[12367]: INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:10:24 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:10:24 INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 vmrdra[12367]: INFO:  notify: pre for demote - counts: active 0 - starting 0 - stopping 1
Oct 14 13:10:24 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:10:24 INFO:  notify: pre for demote - counts: active 0 - starting 0 - stopping 1
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.47): ok (rc=0)
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 1c0 
Oct 14 13:10:24 node1 lrmd: [4031]: info: Managed vmrd-res:1:notify process 12367 exited with return code 0.
Oct 14 13:10:24 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_notify_0 (call=60, rc=0, cib-update=298, confirmed=true) complete ok
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.47 -> 0.69.48 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_pre_notify_demote_0 (74) confirmed on node1 (rc=0)
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=8, Pending=2, Fired=0, Skipped=0, Incomplete=36, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 1c4 
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.48): ok (rc=0)
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 1ca 
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.48 -> 0.69.49 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:0_pre_notify_demote_0 (72) confirmed on node2 (rc=0)
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 51 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=9, Pending=1, Fired=1, Skipped=0, Incomplete=35, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=10, Pending=1, Fired=0, Skipped=0, Incomplete=35, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.49): ok (rc=0)
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 1d2 
Oct 14 13:10:24 node1 stonithd: [12365]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi reset node2' output: Chassis Power Control: Reset
Oct 14 13:10:24 node1 stonithd: [12365]: debug: external_reset_req: running 'ipmi reset' returned 0
Oct 14 13:10:24 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_1 [12365] exited, its exit code: 0 when signo=0.
Oct 14 13:10:24 node1 stonithd: [4029]: info: Succeeded to STONITH the node node2: optype=RESET. whodoit: node1
Oct 14 13:10:24 node1 stonithd: [4029]: debug: stonithop_result_to_local_client: succeed in sending back final result message.
Oct 14 13:10:24 node1 crmd: [4034]: debug: stonithd_receive_ops_result: begin
Oct 14 13:10:24 node1 crmd: [4034]: info: tengine_stonith_callback: call=12365, optype=1, node_name=node2, result=0, node_list=node1, action=58:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8
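
The full fencing round trip is visible above: tengine requests RESET, stonithd selects ipmi-stonith-res:1, the external/ipmi plugin issues the chassis reset, and the callback reports result=0. The same device can be exercised outside the cluster with the stonith(8) utility from cluster-glue; a sketch where the parameter names are the usual external/ipmi ones and every value is a placeholder:

    # Confirm the device answers before trusting it for fencing
    stonith -t external/ipmi hostname=node2 ipaddr=192.0.2.10 \
            userid=admin passwd=secret -S

    # Fire a real reset against node2 (this power-cycles the machine)
    stonith -t external/ipmi hostname=node2 ipaddr=192.0.2.10 \
            userid=admin passwd=secret -T reset node2
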
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 9 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 59 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 54 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 57 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=11, Pending=0, Fired=5, Skipped=0, Incomplete=30, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=16, Pending=0, Fired=1, Skipped=0, Incomplete=29, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: info: te_rsc_command: Initiating action 23: demote vmrd-res:0_demote_0 on node2
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=17, Pending=1, Fired=1, Skipped=0, Incomplete=28, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.49 -> 0.69.50 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 cib: [4030]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='node2']/lrm (/cib/status/node_state[2]/lrm)
Oct 14 13:10:24 node1 crmd: [4034]: debug: match_down_event: Match found for action 0: stonith on node2
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: notice: fail_incompletable_actions: Action 23 (23) is scheduled for node2 (offline)
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/300, version=0.69.51): ok (rc=0)
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: notice: fail_incompletable_actions: Action 68 (68) is scheduled for node2 (offline)
Oct 14 13:10:24 node1 crmd: [4034]: notice: fail_incompletable_actions: Action 73 (73) is scheduled for node2 (offline)
Oct 14 13:10:24 node1 crmd: [4034]: WARN: fail_incompletable_actions: Node node2 shutdown resulted in un-runnable actions
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: fail_incompletable_actions:103 - Triggered transition abort (complete=0, tag=rsc_op, id=73, magic=NA) : Node failure
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause <rsc_op id="73" operation="notify" operation_key="vmrd-res:0_post_notify_demote_0" on_node="node2" on_node_uuid="node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   <primitive id="vmrd-res:0" long-id="vmrd-master-res:vmrd-res:0" class="ocf" provider="peakpoint" type="vmrdra" />
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 cib: [4030]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='node2']/transient_attributes (/cib/status/node_state[2]/transient_attributes)
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   <attributes CRM_meta_clone="0" CRM_meta_clone_max="2" CRM_meta_globally_unique="false" CRM_meta_notify_active_resource=" " CRM_meta_notify_active_uname=" " CRM_meta_notify_demote_resource="vmrd-res:0 " CRM_meta_notify_demote_uname="node2 " CRM_meta_notify_inactive_resource=" " CRM_meta_notify_master_resource="vmrd-res:0 " CRM_meta_notify_master_uname="node2 " CRM_meta_notify_operation="demote" CRM_meta_notify_promote_resource="vmrd-res:1 " CRM_meta_notify_promote_uname="node1 " CRM_meta_notify_slave_resource="vmrd-res:1 " CRM_meta_notify_slave_uname="node1 " CRM_meta_notify_start_resource=" " CRM_meta_notify_start_uname=" " CRM_meta_notify_stop_resource="vmrd-res:0 " CRM_meta_notify_stop_uname="node2 " CRM_meta_notify_type="post" CRM_meta_timeout="6000" crm_feature_set="3.0.1" />
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/transient_attributes (origin=local/crmd/301, version=0.69.52): ok (rc=0)
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause </rsc_op>
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 crmd: [4034]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Oct 14 13:10:24 node1 crmd: [4034]: info: update_abort_priority: Abort action done superceeded by restart
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_delete): 0.69.50 -> 0.69.51 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.52): ok (rc=0)
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='ipmi-stonith-res:0_monitor_0'] (ipmi-stonith-res:0_monitor_0 on node2)
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ipmi-stonith-res:0_monitor_0, magic=0:7;8:92:7:5796e0cd-bf36-4e41-afc7-335e064a4ec8, cib=0.69.51) : Resource op removal
Oct 14 13:10:24 node1 crmd: [4034]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_delete): 0.69.51 -> 0.69.52 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 crmd: [4034]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=0, tag=transient_attributes, id=node2, magic=NA, cib=0.69.52) : Transient attribute: removal
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause <transient_attributes id="node2" __crm_diff_marker__="removed:top" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   <instance_attributes id="status-node2" >
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-node2-probe_complete" name="probe_complete" value="true" />
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-node2-master-vmrd-res:0" name="master-vmrd-res:0" value="200" />
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-node2-fail-count-testdummy-res:0" name="fail-count-testdummy-res:0" value="INFINITY" />
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause     <nvpair id="status-node2-last-failure-testdummy-res:0" name="last-failure-testdummy-res:0" value="1255547424" />
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause   </instance_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: debug: log_data_element: abort_transition_graph: Cause </transient_attributes>
Oct 14 13:10:24 node1 crmd: [4034]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/transient_attributes": ok (rc=0)
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 52 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=18, Pending=0, Fired=2, Skipped=14, Incomplete=12, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: info: te_rsc_command: Initiating action 73: notify vmrd-res:0_post_notify_demote_0 on node2
Oct 14 13:10:24 node1 crmd: [4034]: info: te_rsc_command: Initiating action 75: notify vmrd-res:1_post_notify_demote_0 on node1 (local)
Oct 14 13:10:24 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=75:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_notify_0 )
Oct 14 13:10:24 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation notify[61] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_notify_stop_resource=[vmrd-res:0 ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_operation=[demote] CRM_meta_notify_slave_resource=[vmrd-res:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[node2 ] CRM_meta_notify_master_uname=[node2 ] CRM_meta_notify_demote_uname=[silvertho to the operation list.
Oct 14 13:10:24 node1 lrmd: [4031]: info: rsc:vmrd-res:1:61: notify
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=20, Pending=2, Fired=2, Skipped=14, Incomplete=10, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 1e2 
Oct 14 13:10:24 node1 vmrdra[12387]: INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:10:24 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:10:24 INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:10:24 node1 vmrdra[12387]: INFO:  notify: post for demote - counts: active 0 - starting 0 - stopping 1
Oct 14 13:10:24 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:10:24 INFO:  notify: post for demote - counts: active 0 - starting 0 - stopping 1
Oct 14 13:10:24 node1 lrmd: [4031]: info: Managed vmrd-res:1:notify process 12387 exited with return code 0.
Oct 14 13:10:24 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_notify_0 (call=61, rc=0, cib-update=302, confirmed=true) complete ok
Oct 14 13:10:24 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.52 -> 0.69.53 (S_TRANSITION_ENGINE)
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:24 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:24 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_post_notify_demote_0 (75) confirmed on node1 (rc=0)
Oct 14 13:10:24 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:24 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 53 fired and confirmed
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=21, Pending=1, Fired=1, Skipped=14, Incomplete=9, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=22, Pending=1, Fired=0, Skipped=14, Incomplete=9, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:24 node1 openais[4023]: [totemsrp.c:2365] Retransmit List: 1eb 
Oct 14 13:10:24 node1 cib: [4030]: debug: sync_our_cib: Syncing CIB to node2
Oct 14 13:10:24 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=node2/node2/(null), version=0.69.53): ok (rc=0)
Oct 14 13:10:25 node1 kernel: [78178.348038] bnx2x: eth2 NIC Link is Down
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: cib_query#012cib
Oct 14 13:10:25 node1 mgmtd: [4035]: info: CIB query: cib
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012<cib epoch="69" num_updates="53" admin_epoch="0" validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="1" cib-last-written="Tue Oct 13 15:10:01 2009" dc-uuid="node1">#012  <configuration>#012    <crm_config>#012      <cluster_property_set id="cib-bootstrap-options">#012        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7"/>#012        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>#012        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>#012        <nvpair id="cib-bootstrap-options-dc-deadtime" name="dc-deadtime" value="6s"/>#012        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>#012        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1255479423"/>#012        <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="6s"/>#012      </cluster_property_set>#012    </crm_config>#012    <nodes>#012      <node id="node2" uname="node2" type="normal"/>#012      <node id="node1" uname="node1" type="normal"/>#012    </nodes>#012    <resources>#012      <clone id="testdummy-clone">#012        <meta_attributes id="testdummy-clone-meta_attributes">#012          <nvpair id="testdummy-clone-meta_attributes-target-role" name="target-role" value="started"/>#012        </meta_attributes>#012        <primitive class="ocf" id="testdummy-res" provider="peakpoint" type="testdummy">#012          <operations id="testdummy-res-operations">#012            <op id="testdummy-res-op-monitor-10" interval="10" name="monitor" start-delay="0" timeout="20"/>#012          </operations>#012          <meta_attributes id="testdummy-res-meta_attributes">#012            <nvpair id="t
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: active_cib
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: all_nodes
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: f
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: crm_nodes
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012node2#012node1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: active_nodes
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: cluster_type
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012openais
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: node_config#012node2
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012node2#012False#012False#012False#012False#012False#012False#012member#012False#012False
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: node_config#012node1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012node1#012True#012False#012False#012False#012True#012True#012member#012False#012False
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: all_rsc
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-clone#012ipmi-stonith-clone#012vmrd-master-res#012vsstvm-res
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-clone
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012testdummy-clone
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-res:0#012testdummy-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-clone
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012ipmi-stonith-clone
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012ipmi-stonith-res:0#012ipmi-stonith-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-master-res
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012master
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012vmrd-master-res
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012vmrd-res:0#012vmrd-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:0
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012running (Slave)#0121000000
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vsstvm-res
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vsstvm-res
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vsstvm-res
Oct 14 13:10:25 node1 mgmtd: [4035]: debug: send msg: o

Oct 14 13:10:27 node1 kernel: [78180.353314] bnx2x: eth2 NIC Link is Down
Oct 14 13:10:27 node1 kernel: [78180.354522] bnx2x: eth2 NIC Link is Down
Oct 14 13:10:27 node1 kernel: [78180.408009] bnx2x: eth2 NIC Link is Down

Oct 14 13:10:27 node1 kernel: [78180.758038] drbd1: PingAck did not arrive in time.
Oct 14 13:10:27 node1 kernel: [78180.758048] drbd1: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown ) 
Oct 14 13:10:27 node1 kernel: [78180.758057] drbd1: asender terminated
Oct 14 13:10:27 node1 kernel: [78180.758078] drbd1: Terminating asender thread
Oct 14 13:10:27 node1 kernel: [78180.758093] drbd1: short read expecting header on sock: r=-512
Oct 14 13:10:27 node1 kernel: [78180.758096] drbd1: drbd disconnecting, buffered 0.
Oct 14 13:10:27 node1 kernel: [78180.758098] drbd1: discard buffered blocks then disable buffering.
Oct 14 13:10:27 node1 kernel: [78180.758100] drbd1: ckpt discard done, to be discarded 0.
Oct 14 13:10:27 node1 kernel: [78180.758104] drbd1: ckpt buffering is disabled, total_barrier_cnt 1332, total_commit_cnt 1332.
Oct 14 13:10:27 node1 kernel: [78180.758125] drbd1: drbd disconnect, all the lists are empty.
Oct 14 13:10:27 node1 kernel: [78180.758148] drbd1: Creating new current UUID
Oct 14 13:10:27 node1 kernel: [78180.758273] drbd1: conn( NetworkFailure -> Unconnected ) 
Oct 14 13:10:27 node1 kernel: [78180.758276] drbd1: receiver terminated
Oct 14 13:10:27 node1 kernel: [78180.758278] drbd1: Restarting receiver thread
Oct 14 13:10:27 node1 kernel: [78180.758280] drbd1: receiver (re)started
Oct 14 13:10:27 node1 kernel: [78180.758282] drbd1: conn( Unconnected -> WFConnection ) 
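
drbd1 loses its peer the moment node2 is power-cycled: the PingAck times out, the connection drops through NetworkFailure to Unconnected, and the device then sits in WFConnection waiting for the peer to return. Its state can be watched directly while node2 comes back; a sketch (the DRBD resource name behind minor 1 is not shown in the log, so r1 below is a placeholder):

    # One line per minor device: connection state, roles, disk states
    cat /proc/drbd

    # Connection, role and disk state for a single resource
    drbdadm cstate r1
    drbdadm state r1
    drbdadm dstate r1
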

Oct 14 13:10:28 node1 openais[4023]: [totemsrp.c:1425] The token was lost in the OPERATIONAL state.
Oct 14 13:10:28 node1 openais[4023]: [totemnet.c:0995] Receive multicast socket recv buffer size (262142 bytes).
Oct 14 13:10:28 node1 openais[4023]: [totemnet.c:1001] Transmit multicast socket send buffer size (262142 bytes).
Oct 14 13:10:28 node1 openais[4023]: [totemnet.c:0995] Receive multicast socket recv buffer size (262142 bytes).
Oct 14 13:10:28 node1 openais[4023]: [totemnet.c:1001] Transmit multicast socket send buffer size (262142 bytes).
Oct 14 13:10:28 node1 openais[4023]: [totemsrp.c:1732] entering GATHER state from 2.
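
The token loss and the GATHER transition above are the totem protocol reforming the ring after the fenced node2 stops answering. The relevant timing knobs live in the totem stanza of openais.conf; a sketch with illustrative values (not read from this cluster):

    # /etc/ais/openais.conf (excerpt)
    totem {
        version: 2
        token: 5000                            # ms without the token before declaring it lost
        token_retransmits_before_loss_const: 10
        consensus: 7500                        # ms to reach consensus before a new round
        join: 1000                             # ms to wait for join messages
    }
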

Oct 14 13:10:29 node1 kernel: [78182.730481] drbd1: drbd_ckpt_disable called.
Oct 14 13:10:29 node1 kernel: [78182.730485] drbd1: discard the buffered disk blocks and disable buffering.

Oct 14 13:10:29 node1 crmd: [4034]: notice: ais_dispatch: Membership 91296: quorum lost
Oct 14 13:10:29 node1 crmd: [4034]: info: ais_status_callback: status: node2 is now lost (was member)
Oct 14 13:10:29 node1 crmd: [4034]: info: crm_update_peer: Node node2: id=369797312 state=lost (new) addr=r(0) ip(192.168.10.22) r(1) ip(10.1.1.27)  votes=1 born=91292 seen=91292 proc=00000000000000000000000000053312
Oct 14 13:10:29 node1 crmd: [4034]: debug: post_cache_update: Updated cache after membership event 91296.
Oct 14 13:10:29 node1 crmd: [4034]: info: erase_node_from_join: Removed node node2 from join calculations: welcomed=0 itegrated=0 finalized=0 confirmed=1
Oct 14 13:10:29 node1 crmd: [4034]: debug: ghash_update_cib_node: Updating node1: true (overwrite=false) hash_size=1
Oct 14 13:10:29 node1 crmd: [4034]: debug: ghash_update_cib_node: Updating node2: false (overwrite=false) hash_size=1
Oct 14 13:10:29 node1 crmd: [4034]: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA
Oct 14 13:10:29 node1 crmd: [4034]: info: crm_update_quorum: Updating quorum status to false (call=305)
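
Losing node2 costs the two-node cluster its quorum, yet node1 carries on managing resources because no-quorum-policy is set to ignore in cib-bootstrap-options (visible in the CIB dump earlier in this log). That is the conventional two-node arrangement; a sketch of configuring it via the crm shell:

    # Keep a two-node cluster operating when the peer (and quorum) is gone
    crm configure property no-quorum-policy=ignore
    crm configure property expected-quorum-votes=2
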

Oct 14 13:10:29 node1 cib: [4030]: notice: ais_dispatch: Membership 91296: quorum lost

Oct 14 13:10:29 node1 cib: [4030]: info: crm_update_peer: Node node2: id=369797312 state=lost (new) addr=r(0) ip(192.168.10.22) r(1) ip(10.1.1.27)  votes=1 born=91292 seen=91292 proc=00000000000000000000000000053312

Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1732] entering GATHER state from 0.
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:2788] Creating commit token because I am the rep.

Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1303] Saving state aru 1f1 high seq received 1f1
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:2949] Storing new sequence id for ring 164a0
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1771] entering COMMIT state.
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1803] entering RECOVERY state.
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1832] position [0] member 192.168.10.21:
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1836] previous ring seq 91292 rep 192.168.10.21
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1842] aru 1f1 high delivered 1f1 received flag 1
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1950] Did not need to originate any messages in recovery.
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:4084] Sending initial ORF token
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0519] CLM CONFIGURATION CHANGE
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0520] New Configuration:
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0522] #011r(0) ip(192.168.10.21) r(1) ip(10.1.1.25) 
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0524] Members Left:
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0526] #011r(0) ip(192.168.10.22) r(1) ip(10.1.1.27) 
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0529] Members Joined:
Oct 14 13:10:29 node1 openais[4023]: [plugin.c:0633] notice: pcmk_peer_update: Transitional membership event on ring 91296: memb=1, new=0, lost=1
Oct 14 13:10:29 node1 openais[4023]: [plugin.c:0644] info: pcmk_peer_update: memb: node1 353020096
Oct 14 13:10:29 node1 openais[4023]: [plugin.c:0649] info: pcmk_peer_update: lost: node2 369797312
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0519] CLM CONFIGURATION CHANGE
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0520] New Configuration:
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0522] #011r(0) ip(192.168.10.21) r(1) ip(10.1.1.25) 
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0524] Members Left:
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0529] Members Joined:
Oct 14 13:10:29 node1 openais[4023]: [plugin.c:0633] notice: pcmk_peer_update: Stable membership event on ring 91296: memb=1, new=0, lost=0
Oct 14 13:10:29 node1 openais[4023]: [plugin.c:0679] info: pcmk_peer_update: MEMB: node1 353020096
Oct 14 13:10:29 node1 openais[4023]: [plugin.c:0596] info: ais_mark_unseen_peer_dead: Node node2 was not seen in the previous transition
Oct 14 13:10:29 node1 openais[4023]: [utils.c:0287] info: update_member: Node 369797312/node2 is now: lost
Oct 14 13:10:29 node1 openais[4023]: [plugin.c:1187] info: send_member_notification: Sending membership update 91296 to 2 children
Oct 14 13:10:29 node1 openais[4023]: [sync.c:0321] This node is within the primary component and will provide service.
Oct 14 13:10:29 node1 openais[4023]: [totemsrp.c:1678] entering OPERATIONAL state.
Oct 14 13:10:29 node1 openais[4023]: [clm.c:0601] got nodejoin message 192.168.10.21
Oct 14 13:10:29 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/303, version=0.69.53): ok (rc=0)
Oct 14 13:10:29 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:29 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:29 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:29 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:29 node1 cib: [4030]: debug: activateCibXml: Triggering CIB write for cib_modify op
Oct 14 13:10:29 node1 cib: [4030]: info: log_data_element: cib:diff: - <cib have-quorum="1" admin_epoch="0" epoch="69" num_updates="54" />
Oct 14 13:10:29 node1 cib: [4030]: info: log_data_element: cib:diff: + <cib have-quorum="0" admin_epoch="0" epoch="70" num_updates="1" />
Oct 14 13:10:29 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/305, version=0.70.1): ok (rc=0)
Oct 14 13:10:29 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:10:29 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:10:29 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:10:29 node1 haclient: on_event:evt:cib_changed
Oct 14 13:10:29 node1 cib: [4030]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='expected-quorum-votes'] (/cib/configuration/crm_config/cluster_property_set/nvpair[5])
Oct 14 13:10:29 node1 crmd: [4034]: debug: log_data_element: find_nvpair_attr: Match <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2" />
Oct 14 13:10:29 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_ELECTION_CHECK
Oct 14 13:10:29 node1 crmd: [4034]: debug: do_election_check: Ignore election check: we not in an election
Oct 14 13:10:29 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.53 -> 0.69.54 (S_TRANSITION_ENGINE)
Oct 14 13:10:29 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:29 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.69.54 -> 0.70.1 (S_TRANSITION_ENGINE)
Oct 14 13:10:29 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:10:29 node1 crmd: [4034]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Oct 14 13:10:29 node1 crmd: [4034]: info: need_abort: Aborting on change to have-quorum
Oct 14 13:10:29 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=22, Pending=1, Fired=0, Skipped=14, Incomplete=9, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:10:29 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/307, version=0.70.1): ok (rc=0)
Oct 14 13:10:29 node1 cib: [4030]: debug: Forking temp process write_cib_contents

Oct 14 13:10:29 node1 cib: [12413]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-69.raw
Oct 14 13:10:29 node1 cib: [12413]: info: write_cib_contents: Wrote version 0.70.0 of the CIB to disk (digest: 5d9b73e96025bbe33be0ad6068ec15c8)
Oct 14 13:10:29 node1 cib: [12413]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.gco19u (digest: /var/lib/heartbeat/crm/cib.lo8Nbc)
Oct 14 13:10:29 node1 cib: [4030]: info: Managed write_cib_contents process 12413 exited with return code 0.
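
The have-quorum flip bumps the CIB from 0.69.x to 0.70.1, and the cib daemon archives the previous on-disk version before writing the new one. Both copies are easy to inspect; a sketch (paths taken from the log):

    # Live CIB root element: admin_epoch/epoch/num_updates and the quorum flag
    cibadmin -Q | head -n 1

    # Archived pre-change copy written by the cib daemon
    ls -l /var/lib/heartbeat/crm/cib-69.raw
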
Oct 14 13:10:29 node1 kernel: [78182.915797] blkback: ring-ref 8, event-channel 15, protocol 1 (x86_32-abi)
Oct 14 13:10:29 node1 kernel: [78182.926445] netback/xenbus (frontend_changed:228) Connected.
Oct 14 13:10:29 node1 kernel: [78182.926913] netback/xenbus (connect_rings:368) .
Oct 14 13:10:29 node1 kernel: [78182.929046] ADDRCONF(NETDEV_CHANGE): vif3.0: link becomes ready
Oct 14 13:10:29 node1 kernel: [78182.929550] eth0: topology change detected, propagating
Oct 14 13:10:29 node1 kernel: [78182.929562] eth0: port 2(vif3.0) entering forwarding state
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: cib_query#012cib
Oct 14 13:10:30 node1 mgmtd: [4035]: info: CIB query: cib
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012<cib epoch="70" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="0" cib-last-written="Tue Oct 13 15:10:01 2009" dc-uuid="node1">#012  <configuration>#012    <crm_config>#012      <cluster_property_set id="cib-bootstrap-options">#012        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7"/>#012        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>#012        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>#012        <nvpair id="cib-bootstrap-options-dc-deadtime" name="dc-deadtime" value="6s"/>#012        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>#012        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1255479423"/>#012        <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="6s"/>#012      </cluster_property_set>#012    </crm_config>#012    <nodes>#012      <node id="node2" uname="node2" type="normal"/>#012      <node id="node1" uname="node1" type="normal"/>#012    </nodes>#012    <resources>#012      <clone id="testdummy-clone">#012        <meta_attributes id="testdummy-clone-meta_attributes">#012          <nvpair id="testdummy-clone-meta_attributes-target-role" name="target-role" value="started"/>#012        </meta_attributes>#012        <primitive class="ocf" id="testdummy-res" provider="peakpoint" type="testdummy">#012          <operations id="testdummy-res-operations">#012            <op id="testdummy-res-op-monitor-10" interval="10" name="monitor" start-delay="0" timeout="20"/>#012          </operations>#012          <meta_attributes id="testdummy-res-meta_attributes">#012            <nvpair id="te
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: active_cib
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: all_nodes
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: f
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: crm_nodes
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012node2#012node1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: active_nodes
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: cluster_type
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012openais
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: node_config#012node2
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012node2#012False#012False#012False#012False#012False#012False#012member#012False#012False
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: node_config#012node1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012node1#012True#012False#012False#012False#012True#012True#012member#012False#012False
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: all_rsc
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-clone#012ipmi-stonith-clone#012vmrd-master-res#012vsstvm-res
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-clone
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012testdummy-clone
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-res:0#012testdummy-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-clone
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012ipmi-stonith-clone
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012ipmi-stonith-res:0#012ipmi-stonith-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-master-res
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012master
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012vmrd-master-res
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012vmrd-res:0#012vmrd-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:0
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012running (Slave)#0121000000
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vsstvm-res
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vsstvm-res
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vsstvm-res
Oct 14 13:10:30 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:10:30 node1 testdummy[12415]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:10:31 node1 avahi-daemon[3267]: New relevant interface vif3.0.IPv6 for mDNS.
Oct 14 13:10:31 node1 avahi-daemon[3267]: Joining mDNS multicast group on interface vif3.0.IPv6 with address fe80::fcff:ffff:feff:ffff.
Oct 14 13:10:31 node1 avahi-daemon[3267]: Registering new address record for fe80::fcff:ffff:feff:ffff on vif3.0.
Oct 14 13:10:31 node1 lrmd: [12423]: debug: stonithd_signon: creating connection
Oct 14 13:10:31 node1 lrmd: [12423]: debug: sending out the signon msg.
Oct 14 13:10:31 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12423 (pid=12423) succeeded to signon to stonithd.
Oct 14 13:10:31 node1 lrmd: [12423]: debug: signed on to stonithd.
Oct 14 13:10:31 node1 lrmd: [12423]: debug: waiting for the stonithRA reply msg.
Oct 14 13:10:31 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12423 [pid: 12423] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:10:31 node1 stonithd: [12424]: debug: external_status: called.
Oct 14 13:10:31 node1 stonithd: [12424]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:10:31 node1 lrmd: [12423]: debug: a stonith RA operation queue to run, call_id=12424.
Oct 14 13:10:31 node1 lrmd: [12423]: debug: stonithd_receive_ops_result: begin
Oct 14 13:10:31 node1 stonithd: [12424]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:10:31 node1 stonithd: [12424]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:10:31 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12424] exited, its exit code: 0 when signo=0.
Oct 14 13:10:31 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:10:31 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12423 (pid=12423) signed off
Oct 14 13:10:40 node1 kernel: [78193.535033] vif3.0: no IPv6 routers present
Oct 14 13:10:40 node1 testdummy[12433]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:10:41 node1 lrmd: [12441]: debug: stonithd_signon: creating connection
Oct 14 13:10:41 node1 lrmd: [12441]: debug: sending out the signon msg.
Oct 14 13:10:41 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12441 (pid=12441) succeeded to signon to stonithd.
Oct 14 13:10:41 node1 lrmd: [12441]: debug: signed on to stonithd.
Oct 14 13:10:41 node1 lrmd: [12441]: debug: waiting for the stonithRA reply msg.
Oct 14 13:10:41 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12441 [pid: 12441] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:10:41 node1 lrmd: [12441]: debug: a stonith RA operation queue to run, call_id=12442.
Oct 14 13:10:41 node1 lrmd: [12441]: debug: stonithd_receive_ops_result: begin
Oct 14 13:10:41 node1 stonithd: [12442]: debug: external_status: called.
Oct 14 13:10:41 node1 stonithd: [12442]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:10:41 node1 stonithd: [12442]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:10:41 node1 stonithd: [12442]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:10:41 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12442] exited, its exit code: 0 when signo=0.
Oct 14 13:10:41 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:10:41 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12441 (pid=12441) signed off
Oct 14 13:10:50 node1 testdummy[12450]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:10:51 node1 lrmd: [12458]: debug: stonithd_signon: creating connection
Oct 14 13:10:51 node1 lrmd: [12458]: debug: sending out the signon msg.
Oct 14 13:10:51 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12458 (pid=12458) succeeded to signon to stonithd.
Oct 14 13:10:51 node1 lrmd: [12458]: debug: signed on to stonithd.
Oct 14 13:10:51 node1 lrmd: [12458]: debug: waiting for the stonithRA reply msg.
Oct 14 13:10:51 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12458 [pid: 12458] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:10:51 node1 lrmd: [12458]: debug: a stonith RA operation queue to run, call_id=12459.
Oct 14 13:10:51 node1 lrmd: [12458]: debug: stonithd_receive_ops_result: begin
Oct 14 13:10:51 node1 stonithd: [12459]: debug: external_status: called.
Oct 14 13:10:51 node1 stonithd: [12459]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:10:52 node1 stonithd: [12459]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:10:52 node1 stonithd: [12459]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:10:52 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12459] exited, its exit code: 0 when signo=0.
Oct 14 13:10:52 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:10:52 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12458 (pid=12458) signed off
Oct 14 13:11:00 node1 testdummy[12467]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:11:02 node1 lrmd: [12475]: debug: stonithd_signon: creating connection
Oct 14 13:11:02 node1 lrmd: [12475]: debug: sending out the signon msg.
Oct 14 13:11:02 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12475 (pid=12475) succeeded to signon to stonithd.
Oct 14 13:11:02 node1 lrmd: [12475]: debug: signed on to stonithd.
Oct 14 13:11:02 node1 lrmd: [12475]: debug: waiting for the stonithRA reply msg.
Oct 14 13:11:02 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12475 [pid: 12475] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:11:02 node1 stonithd: [12476]: debug: external_status: called.
Oct 14 13:11:02 node1 stonithd: [12476]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:11:02 node1 lrmd: [12475]: debug: a stonith RA operation queue to run, call_id=12476.
Oct 14 13:11:02 node1 lrmd: [12475]: debug: stonithd_receive_ops_result: begin
Oct 14 13:11:02 node1 stonithd: [12476]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:11:02 node1 stonithd: [12476]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:11:02 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12476] exited, its exit code: 0 when signo=0.
Oct 14 13:11:02 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:11:02 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12475 (pid=12475) signed off
Oct 14 13:11:10 node1 testdummy[12484]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:11:12 node1 lrmd: [12492]: debug: stonithd_signon: creating connection
Oct 14 13:11:12 node1 lrmd: [12492]: debug: sending out the signon msg.
Oct 14 13:11:12 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12492 (pid=12492) succeeded to signon to stonithd.
Oct 14 13:11:12 node1 lrmd: [12492]: debug: signed on to stonithd.
Oct 14 13:11:12 node1 lrmd: [12492]: debug: waiting for the stonithRA reply msg.
Oct 14 13:11:12 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12492 [pid: 12492] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:11:12 node1 stonithd: [12493]: debug: external_status: called.
Oct 14 13:11:12 node1 stonithd: [12493]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:11:12 node1 lrmd: [12492]: debug: a stonith RA operation queue to run, call_id=12493.
Oct 14 13:11:12 node1 lrmd: [12492]: debug: stonithd_receive_ops_result: begin
Oct 14 13:11:12 node1 stonithd: [12493]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:11:12 node1 stonithd: [12493]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:11:12 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12493] exited, its exit code: 0 when signo=0.
Oct 14 13:11:12 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:11:12 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12492 (pid=12492) signed off
Oct 14 13:11:20 node1 testdummy[12501]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:11:22 node1 lrmd: [12509]: debug: stonithd_signon: creating connection
Oct 14 13:11:22 node1 lrmd: [12509]: debug: sending out the signon msg.
Oct 14 13:11:22 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12509 (pid=12509) succeeded to signon to stonithd.
Oct 14 13:11:22 node1 lrmd: [12509]: debug: signed on to stonithd.
Oct 14 13:11:22 node1 lrmd: [12509]: debug: waiting for the stonithRA reply msg.
Oct 14 13:11:22 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12509 [pid: 12509] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:11:22 node1 stonithd: [12510]: debug: external_status: called.
Oct 14 13:11:22 node1 stonithd: [12510]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:11:22 node1 lrmd: [12509]: debug: a stonith RA operation queue to run, call_id=12510.
Oct 14 13:11:22 node1 lrmd: [12509]: debug: stonithd_receive_ops_result: begin
Oct 14 13:11:22 node1 stonithd: [12510]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:11:22 node1 stonithd: [12510]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:11:22 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12510] exited, its exit code: 0 when signo=0.
Oct 14 13:11:22 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:11:22 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12509 (pid=12509) signed off
Oct 14 13:11:27 node1 crmd: [4034]: WARN: action_timer_callback: Timer popped (timeout=3000, abort_level=1000000, complete=false)
Oct 14 13:11:27 node1 crmd: [4034]: ERROR: print_elem: Aborting transition, action lost: [Action 2]: Completed (id: vmrd-res:1_monitor_9000, loc: node1, priority: 0)
Oct 14 13:11:27 node1 crmd: [4034]: info: abort_transition_graph: action_timer_callback:482 - Triggered transition abort (complete=0) : Action lost
Oct 14 13:11:27 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=22, Pending=1, Fired=0, Skipped=14, Incomplete=9, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:11:27 node1 crmd: [4034]: WARN: action_timer_callback: Timer popped (timeout=3000, abort_level=1000000, complete=false)
Oct 14 13:11:27 node1 crmd: [4034]: ERROR: print_elem: Aborting transition, action lost: [Action 23]: Failed (id: vmrd-res:0_demote_0, loc: node2, priority: 0)
Oct 14 13:11:27 node1 crmd: [4034]: info: abort_transition_graph: action_timer_callback:482 - Triggered transition abort (complete=0) : Action lost
Oct 14 13:11:27 node1 crmd: [4034]: WARN: cib_action_update: rsc_op 23: vmrd-res:0_demote_0 on node2 timed out
Oct 14 13:11:27 node1 crmd: [4034]: debug: cib_action_update: Calculated digest f2317cad3d54cec5d7d7aa7d0bf35cf8 for vmrd-res:0_demote_0 (2:1;23:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8)
Oct 14 13:11:27 node1 crmd: [4034]: debug: log_data_element: cib_action_update: digest:source <parameters />
Oct 14 13:11:27 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=22, Pending=1, Fired=0, Skipped=14, Incomplete=9, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:11:27 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:27 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.1 -> 0.70.2 (S_TRANSITION_ENGINE)
Oct 14 13:11:27 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:27 node1 crmd: [4034]: WARN: status_from_rc: Action 23 (vmrd-res:0_demote_0) on node2 failed (target: 0 vs. rc: 1): Error
Oct 14 13:11:27 node1 crmd: [4034]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=vmrd-res:0_demote_0, magic=2:1;23:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8, cib=0.70.2) : Event failed
Oct 14 13:11:27 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:0_demote_0 (23) confirmed on node2 (rc=4)
Oct 14 13:11:27 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:27 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:27 node1 crmd: [4034]: debug: run_graph: Transition 105 (Complete=22, Pending=1, Fired=0, Skipped=14, Incomplete=9, Source=/var/lib/pengine/pe-warn-26646.bz2): In-progress
Oct 14 13:11:27 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: cib_query#012cib
Oct 14 13:11:28 node1 mgmtd: [4035]: info: CIB query: cib
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012<cib epoch="70" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="0" cib-last-written="Tue Oct 13 15:10:01 2009" dc-uuid="node1">#012  <configuration>#012    <crm_config>#012      <cluster_property_set id="cib-bootstrap-options">#012        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7"/>#012        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>#012        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>#012        <nvpair id="cib-bootstrap-options-dc-deadtime" name="dc-deadtime" value="6s"/>#012        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>#012        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1255479423"/>#012        <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="6s"/>#012      </cluster_property_set>#012    </crm_config>#012    <nodes>#012      <node id="node2" uname="node2" type="normal"/>#012      <node id="node1" uname="node1" type="normal"/>#012    </nodes>#012    <resources>#012      <clone id="testdummy-clone">#012        <meta_attributes id="testdummy-clone-meta_attributes">#012          <nvpair id="testdummy-clone-meta_attributes-target-role" name="target-role" value="started"/>#012        </meta_attributes>#012        <primitive class="ocf" id="testdummy-res" provider="peakpoint" type="testdummy">#012          <operations id="testdummy-res-operations">#012            <op id="testdummy-res-op-monitor-10" interval="10" name="monitor" start-delay="0" timeout="20"/>#012          </operations>#012          <meta_attributes id="testdummy-res-meta_attributes">#012            <nvpair id="te
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: active_cib
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: all_nodes
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: f
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: crm_nodes
Oct 14 13:11:28 node1 mgmtd: [4035]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:28 node1 mgmtd: [4035]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node2#012node1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: active_nodes
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: cluster_type
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012openais
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: node_config#012node2
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node2#012False#012False#012True#012False#012False#012False#012member#012False#012False
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: node_config#012node1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node1#012True#012False#012False#012False#012True#012True#012member#012False#012False
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: all_rsc
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-clone#012ipmi-stonith-clone#012vmrd-master-res#012vsstvm-res
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-clone
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012testdummy-clone
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-res:0#012testdummy-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-clone
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012ipmi-stonith-clone
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012ipmi-stonith-res:0#012ipmi-stonith-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-master-res
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012master
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012vmrd-master-res
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012vmrd-res:0#012vmrd-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012failed#0121000000
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:0
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node2
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012running (Slave)#0121000000
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vsstvm-res
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vsstvm-res
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vsstvm-res
Oct 14 13:11:28 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:30 node1 testdummy[12519]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:11:30 node1 crmd: [4034]: WARN: action_timer_callback: Timer popped (timeout=6000, abort_level=1000000, complete=false)
Oct 14 13:11:30 node1 crmd: [4034]: ERROR: print_elem: Aborting transition, action lost: [Action 73]: Failed (id: vmrd-res:0_post_notify_demote_0, loc: node2, priority: 1000000)
Oct 14 13:11:30 node1 crmd: [4034]: info: abort_transition_graph: action_timer_callback:482 - Triggered transition abort (complete=0) : Action lost
Oct 14 13:11:30 node1 crmd: [4034]: WARN: cib_action_update: rsc_op 73: vmrd-res:0_post_notify_demote_0 on node2 timed out
Oct 14 13:11:30 node1 crmd: [4034]: debug: cib_action_update: Calculated digest f2317cad3d54cec5d7d7aa7d0bf35cf8 for vmrd-res:0_notify_0 (2:1;73:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8)
Oct 14 13:11:30 node1 crmd: [4034]: debug: log_data_element: cib_action_update: digest:source <parameters />
Oct 14 13:11:30 node1 crmd: [4034]: info: run_graph: ====================================================
Oct 14 13:11:30 node1 crmd: [4034]: notice: run_graph: Transition 105 (Complete=23, Pending=0, Fired=0, Skipped=14, Incomplete=9, Source=/var/lib/pengine/pe-warn-26646.bz2): Stopped
Oct 14 13:11:30 node1 crmd: [4034]: info: te_graph_trigger: Transition 105 is now complete
Oct 14 13:11:30 node1 crmd: [4034]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Oct 14 13:11:30 node1 crmd: [4034]: debug: notify_crmd: Transition 105 status: restart - Node failure
Oct 14 13:11:30 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:11:30 node1 crmd: [4034]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:11:30 node1 crmd: [4034]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:11:30 node1 crmd: [4034]: info: do_pe_invoke: Query 310: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:11:30 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:30 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.2 -> 0.70.3 (S_POLICY_ENGINE)
Oct 14 13:11:30 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:30 node1 crmd: [4034]: info: process_graph_event: Action vmrd-res:0_notify_0 arrived after a completed transition
Oct 14 13:11:30 node1 crmd: [4034]: info: abort_transition_graph: process_graph_event:467 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=vmrd-res:0_notify_0, magic=2:1;73:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8, cib=0.70.3) : Inactive graph
Oct 14 13:11:30 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:30 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:11:30 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:30 node1 crmd: [4034]: info: do_pe_invoke: Query 311: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:11:30 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:30 node1 crmd: [4034]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1255547490-286, seq=91296, quorate=0
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Oct 14 13:11:30 node1 pengine: [4033]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Oct 14 13:11:30 node1 pengine: [4033]: debug: unpack_config: STONITH timeout: 60000
Oct 14 13:11:30 node1 pengine: [4033]: debug: unpack_config: STONITH of failed nodes is enabled
Oct 14 13:11:30 node1 pengine: [4033]: debug: unpack_config: Stop all active resources: false
Oct 14 13:11:30 node1 pengine: [4033]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Oct 14 13:11:30 node1 pengine: [4033]: debug: unpack_config: Default stickiness: 0
Oct 14 13:11:30 node1 pengine: [4033]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 13:11:30 node1 pengine: [4033]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 14 13:11:30 node1 pengine: [4033]: info: determine_online_status: Node node1 is online
Oct 14 13:11:30 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:11:30 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node1
Oct 14 13:11:30 node1 pengine: [4033]: info: determine_online_status_fencing: Node node2 is down
Oct 14 13:11:30 node1 pengine: [4033]: debug: determine_online_status_fencing: #011ha_state=active, ccm_state=false, crm_state=online, join_state=down, expected=down
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: WARN: unpack_rsc_op: Processing failed op vmrd-res:0_demote_0 on node2: unknown error
Oct 14 13:11:30 node1 pengine: [4033]: WARN: unpack_rsc_op: Forcing vmrd-res:0 to stop after a failed demote action
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: WARN: custom_action: Action vmrd-res:0_stop_0 on node2 is unrunnable (offline)
Oct 14 13:11:30 node1 pengine: [4033]: WARN: custom_action: Marking node node2 unclean
Oct 14 13:11:30 node1 pengine: [4033]: debug: unpack_lrm_rsc_state: vmrd-res:0: Overwriting calculated next role Stopped with requested next role Started
Oct 14 13:11:30 node1 pengine: [4033]: notice: clone_print: Clone Set: testdummy-clone
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:11:30 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:11:30 node1 pengine: [4033]: notice: print_list: #011Stopped: [ testdummy-res:0 ]
Oct 14 13:11:30 node1 pengine: [4033]: notice: clone_print: Clone Set: ipmi-stonith-clone
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:11:30 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:11:30 node1 pengine: [4033]: notice: print_list: #011Stopped: [ ipmi-stonith-res:0 ]
Oct 14 13:11:30 node1 pengine: [4033]: notice: clone_print: Master/Slave Set: vmrd-master-res
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:0: node node2 is offline
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:11:30 node1 pengine: [4033]: notice: print_list: #011Slaves: [ node1 ]
Oct 14 13:11:30 node1 pengine: [4033]: notice: print_list: #011Stopped: [ vmrd-res:0 ]
Oct 14 13:11:30 node1 pengine: [4033]: notice: native_print: vsstvm-res#011(ocf::peakpoint:vsstvm):#011Stopped 
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_rsc_location: Constraint (vmrd-master-prefer-location-rule) is not active (role : Master)
Oct 14 13:11:30 node1 pengine:last message repeated 2 times
Oct 14 13:11:30 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:0: preferring current location (node=node2, weight=1)
Oct 14 13:11:30 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:11:30 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:11:30 node1 pengine: [4033]: info: get_failcount: ipmi-stonith-clone has failed 1 times on node1
Oct 14 13:11:30 node1 pengine: [4033]: notice: common_apply_stickiness: ipmi-stonith-clone can fail 999999 more times on node1 before being forced off
Oct 14 13:11:30 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to testdummy-res:1
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource testdummy-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:11:30 node1 pengine: [4033]: WARN: native_color: Resource testdummy-res:0 cannot run anywhere
Oct 14 13:11:30 node1 pengine: [4033]: debug: clone_color: Allocated 1 testdummy-clone instances of a possible 2
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to ipmi-stonith-res:1
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource ipmi-stonith-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:11:30 node1 pengine: [4033]: WARN: native_color: Resource ipmi-stonith-res:0 cannot run anywhere
Oct 14 13:11:30 node1 pengine: [4033]: debug: clone_color: Allocated 1 ipmi-stonith-clone instances of a possible 2
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vmrd-res:1
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource vmrd-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:11:30 node1 pengine: [4033]: WARN: native_color: Resource vmrd-res:0 cannot run anywhere
Oct 14 13:11:30 node1 pengine: [4033]: debug: clone_color: Allocated 1 vmrd-master-res instances of a possible 2
Oct 14 13:11:30 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 105
Oct 14 13:11:30 node1 pengine: [4033]: info: master_color: Promoting vmrd-res:1 (Slave node1)
Oct 14 13:11:30 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:11:30 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:11:30 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 205
Oct 14 13:11:30 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:11:30 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:11:30 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vsstvm-res
Oct 14 13:11:30 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:11:30 node1 pengine: [4033]: WARN: custom_action: Action vmrd-res:0_stop_0 on node2 is unrunnable (offline)
Oct 14 13:11:30 node1 pengine: [4033]: WARN: custom_action: Marking node node2 unclean
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine:last message repeated 3 times
Oct 14 13:11:30 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: WARN: custom_action: Action vmrd-res:0_stop_0 on node2 is unrunnable (offline)
Oct 14 13:11:30 node1 pengine: [4033]: WARN: custom_action: Marking node node2 unclean
Oct 14 13:11:30 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: WARN: stage6: Scheduling Node node2 for STONITH
Oct 14 13:11:30 node1 pengine: [4033]: info: native_start_constraints: Ordering testdummy-res:1_start_0 after node2 recovery
Oct 14 13:11:30 node1 pengine: [4033]: WARN: native_stop_constraints: Stop of failed resource vmrd-res:0 is implicit after node2 is fenced
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: info: native_stop_constraints: Creating secondary notification for vmrd-res:0_stop_0
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:11:30 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine: [4033]: info: native_start_constraints: Ordering vmrd-res:1_start_0 after node2 recovery
Oct 14 13:11:30 node1 pengine: [4033]: info: native_start_constraints: Ordering vsstvm-res_start_0 after node2 recovery
Oct 14 13:11:30 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:11:30 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine:last message repeated 2 times
Oct 14 13:11:30 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:0#011(Stopped)
Oct 14 13:11:30 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:1#011(Started node1)
Oct 14 13:11:30 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:0#011(Stopped)
Oct 14 13:11:30 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:1#011(Started node1)
Oct 14 13:11:30 node1 pengine: [4033]: notice: LogActions: Stop resource vmrd-res:0#011(node2)
Oct 14 13:11:30 node1 pengine: [4033]: notice: LogActions: Promote vmrd-res:1#011(Slave -> Master node1)
Oct 14 13:11:30 node1 pengine: [4033]: notice: LogActions: Start vsstvm-res#011(node1)
Oct 14 13:11:30 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:30 node1 pengine:last message repeated 6 times
Oct 14 13:11:30 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_LOG   
Oct 14 13:11:30 node1 crmd: [4034]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:11:30 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_TE_INVOKE
Oct 14 13:11:30 node1 crmd: [4034]: info: unpack_graph: Unpacked transition 106: 26 actions in 26 synapses
Oct 14 13:11:30 node1 crmd: [4034]: info: do_te_invoke: Processing graph 106 (ref=pe_calc-dc-1255547490-286) derived from /var/lib/pengine/pe-warn-26647.bz2
Oct 14 13:11:30 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 32 fired and confirmed
Oct 14 13:11:30 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
Oct 14 13:11:30 node1 crmd: [4034]: info: te_fence_node: Executing reboot fencing operation (51) on node2 (timeout=60000)
Oct 14 13:11:30 node1 crmd: [4034]: debug: waiting for the stonith reply msg.
Oct 14 13:11:30 node1 stonithd: [4029]: info: client tengine [pid: 4034] requests a STONITH operation RESET on node node2
Oct 14 13:11:30 node1 stonithd: [4029]: debug: get_local_stonithobj_can_stonith:2820: next stonith resource ipmi-stonith-res:1, priority 0
Oct 14 13:11:30 node1 stonithd: [12527]: debug: external_reset_req: called.
Oct 14 13:11:30 node1 stonithd: [12527]: debug: Host external-reset initiating on node2
Oct 14 13:11:30 node1 stonithd: [12527]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi reset node2'
Oct 14 13:11:30 node1 pengine: [4033]: WARN: process_pe_message: Transition 106: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-26647.bz2
Oct 14 13:11:30 node1 pengine: [4033]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Oct 14 13:11:30 node1 stonithd: [4029]: info: stonith_operate_locally::2688: sending fencing op RESET for node2 to ipmi-stonith-res:1 (external/ipmi) (pid=12527)
Oct 14 13:11:30 node1 stonithd: [4029]: debug: inserted optype=RESET, key=12527
Oct 14 13:11:30 node1 stonithd: [4029]: debug: stonithd_node_fence: sent back a synchronous reply.
Oct 14 13:11:30 node1 crmd: [4034]: debug: stonithd_node_fence:582: stonithd's synchronous answer is ST_APIOK
Oct 14 13:11:30 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=0, Pending=1, Fired=3, Skipped=0, Incomplete=23, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:30 node1 crmd: [4034]: info: te_rsc_command: Initiating action 65: notify vmrd-res:1_pre_notify_stop_0 on node1 (local)
Oct 14 13:11:30 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=65:106:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_notify_0 )
Oct 14 13:11:30 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation notify[62] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_notify_stop_resource=[vmrd-res:0 ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_operation=[stop] CRM_meta_notify_slave_resource=[vmrd-res:0 vmrd-res:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[node2 ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_notify_master_ to the operation list.
Oct 14 13:11:30 node1 lrmd: [4031]: info: rsc:vmrd-res:1:62: notify
Oct 14 13:11:30 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=2, Pending=2, Fired=1, Skipped=0, Incomplete=22, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 vmrdra[12529]: INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 vmrdra[12529]: INFO:  notify: pre for stop - counts: active 0 - starting 0 - stopping 1
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO:  notify: pre for stop - counts: active 0 - starting 0 - stopping 1
Oct 14 13:11:31 node1 lrmd: [4031]: info: Managed vmrd-res:1:notify process 12529 exited with return code 0.
Oct 14 13:11:31 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_notify_0 (call=62, rc=0, cib-update=312, confirmed=true) complete ok
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.3 -> 0.70.4 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_pre_notify_stop_0 (65) confirmed on node1 (rc=0)
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=3, Pending=1, Fired=1, Skipped=0, Incomplete=21, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 30 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=4, Pending=1, Fired=1, Skipped=0, Incomplete=20, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=5, Pending=1, Fired=0, Skipped=0, Incomplete=20, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 stonithd: [12527]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi reset node2' output: Chassis Power Control: Reset
Oct 14 13:11:31 node1 stonithd: [12527]: debug: external_reset_req: running 'ipmi reset' returned 0
Oct 14 13:11:31 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_1 [12527] exited, its exit code: 0 when signo=0.
Oct 14 13:11:31 node1 stonithd: [4029]: info: Succeeded to STONITH the node node2: optype=RESET. whodoit: node1
Oct 14 13:11:31 node1 stonithd: [4029]: debug: stonithop_result_to_local_client: succeed in sending back final result message.
Oct 14 13:11:31 node1 crmd: [4034]: debug: stonithd_receive_ops_result: begin
Oct 14 13:11:31 node1 crmd: [4034]: info: tengine_stonith_callback: call=12527, optype=1, node_name=node2, result=0, node_list=node1, action=51:106:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 3 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 52 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 31 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 50 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=6, Pending=0, Fired=5, Skipped=0, Incomplete=15, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: te_rsc_command: Initiating action 54: notify vmrd-res:1_post_notify_stop_0 on node1 (local)
Oct 14 13:11:31 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=54:106:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_notify_0 )
Oct 14 13:11:31 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation notify[63] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_notify_stop_resource=[vmrd-res:0 ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_operation=[stop] CRM_meta_notify_slave_resource=[vmrd-res:0 vmrd-res:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[node2 ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_notify_master_ to the operation list.
Oct 14 13:11:31 node1 lrmd: [4031]: info: rsc:vmrd-res:1:63: notify
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=11, Pending=1, Fired=1, Skipped=0, Incomplete=14, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.4 -> 0.70.5 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: debug: match_down_event: Match found for action 0: stonith on node2
Oct 14 13:11:31 node1 cib: [4030]: debug: cib_process_xpath: Processing cib_delete op for //node_state[@uname='node2']/lrm (/cib/status/node_state[2]/lrm)
Oct 14 13:11:31 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/314, version=0.70.6): ok (rc=0)
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 cib: [4030]: debug: cib_process_xpath: //node_state[@uname='node2']/transient_attributes was already removed
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_delete): 0.70.5 -> 0.70.6 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='vmrd-res:0_demote_0'] (vmrd-res:0_demote_0 on node2)
Oct 14 13:11:31 node1 crmd: [4034]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=vmrd-res:0_demote_0, magic=2:1;23:105:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8, cib=0.70.6) : Resource op removal
Oct 14 13:11:31 node1 crmd: [4034]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Oct 14 13:11:31 node1 crmd: [4034]: info: update_abort_priority: Abort action done superceeded by restart
Oct 14 13:11:31 node1 crmd: [4034]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=11, Pending=1, Fired=0, Skipped=8, Incomplete=6, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 vmrdra[12549]: INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 cib: [4030]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/transient_attributes (origin=local/crmd/315, version=0.70.6): ok (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/transient_attributes": ok (rc=0)
Oct 14 13:11:31 node1 vmrdra[12549]: INFO:  notify: post for stop - counts: active 0 - starting 0 - stopping 1
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO:  notify: post for stop - counts: active 0 - starting 0 - stopping 1
Oct 14 13:11:31 node1 lrmd: [4031]: info: Managed vmrd-res:1:notify process 12549 exited with return code 0.
Oct 14 13:11:31 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_notify_0 (call=63, rc=0, cib-update=316, confirmed=true) complete ok
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.6 -> 0.70.7 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_post_notify_stop_0 (54) confirmed on node1 (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 53 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 106 (Complete=12, Pending=0, Fired=2, Skipped=8, Incomplete=4, Source=/var/lib/pengine/pe-warn-26647.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: run_graph: ====================================================
Oct 14 13:11:31 node1 crmd: [4034]: notice: run_graph: Transition 106 (Complete=14, Pending=0, Fired=0, Skipped=8, Incomplete=4, Source=/var/lib/pengine/pe-warn-26647.bz2): Stopped
Oct 14 13:11:31 node1 crmd: [4034]: info: te_graph_trigger: Transition 106 is now complete
Oct 14 13:11:31 node1 crmd: [4034]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Oct 14 13:11:31 node1 crmd: [4034]: debug: notify_crmd: Transition 106 status: restart - Resource op removal
Oct 14 13:11:31 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:11:31 node1 crmd: [4034]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:11:31 node1 crmd: [4034]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 14 13:11:31 node1 crmd: [4034]: info: do_pe_invoke: Query 317: Requesting the current CIB: S_POLICY_ENGINE
Oct 14 13:11:31 node1 crmd: [4034]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1255547491-289, seq=91296, quorate=0
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'symmetric-cluster'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'default-resource-stickiness'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'is-managed-default'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'maintenance-mode'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'start-failure-is-fatal'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stonith-enabled'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'reboot' for cluster option 'stonith-action'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'stonith-timeout'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'startup-fencing'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '60s' for cluster option 'cluster-delay'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '30' for cluster option 'batch-limit'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'stop-all-resources'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-resources'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'true' for cluster option 'stop-orphan-actions'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'false' for cluster option 'remove-after-stop'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-error-series-max'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-warn-series-max'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '-1' for cluster option 'pe-input-series-max'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value 'none' for cluster option 'node-health-strategy'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-green'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '0' for cluster option 'node-health-yellow'
Oct 14 13:11:31 node1 pengine: [4033]: debug: cluster_option: Using default value '-INFINITY' for cluster option 'node-health-red'
Oct 14 13:11:31 node1 pengine: [4033]: debug: unpack_config: STONITH timeout: 60000
Oct 14 13:11:31 node1 pengine: [4033]: debug: unpack_config: STONITH of failed nodes is enabled
Oct 14 13:11:31 node1 pengine: [4033]: debug: unpack_config: Stop all active resources: false
Oct 14 13:11:31 node1 pengine: [4033]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Oct 14 13:11:31 node1 pengine: [4033]: debug: unpack_config: Default stickiness: 0
Oct 14 13:11:31 node1 pengine: [4033]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 14 13:11:31 node1 pengine: [4033]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 14 13:11:31 node1 pengine: [4033]: info: determine_online_status: Node node1 is online
Oct 14 13:11:31 node1 pengine: [4033]: info: unpack_rsc_op: vsstvm-res_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Oct 14 13:11:31 node1 pengine: [4033]: notice: unpack_rsc_op: Operation vsstvm-res_monitor_0 found resource vsstvm-res active on node1
Oct 14 13:11:31 node1 pengine: [4033]: debug: determine_online_status_fencing: Node node2 is down: join_state=down, expected=down
Oct 14 13:11:31 node1 pengine: [4033]: notice: clone_print: Clone Set: testdummy-clone
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_active: Resource testdummy-res:1 active on node1
Oct 14 13:11:31 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:11:31 node1 pengine: [4033]: notice: print_list: #011Stopped: [ testdummy-res:0 ]
Oct 14 13:11:31 node1 pengine: [4033]: notice: clone_print: Clone Set: ipmi-stonith-clone
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_active: Resource ipmi-stonith-res:1 active on node1
Oct 14 13:11:31 node1 pengine: [4033]: notice: print_list: #011Started: [ node1 ]
Oct 14 13:11:31 node1 pengine: [4033]: notice: print_list: #011Stopped: [ ipmi-stonith-res:0 ]
Oct 14 13:11:31 node1 pengine: [4033]: notice: clone_print: Master/Slave Set: vmrd-master-res
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_active: Resource vmrd-res:1 active on node1
Oct 14 13:11:31 node1 pengine: [4033]: notice: print_list: #011Slaves: [ node1 ]
Oct 14 13:11:31 node1 pengine: [4033]: notice: print_list: #011Stopped: [ vmrd-res:0 ]
Oct 14 13:11:31 node1 pengine: [4033]: notice: native_print: vsstvm-res#011(ocf::peakpoint:vsstvm):#011Stopped 
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_rsc_location: Constraint (vmrd-master-prefer-location-rule) is not active (role : Master)
Oct 14 13:11:31 node1 pengine:last message repeated 2 times
Oct 14 13:11:31 node1 pengine: [4033]: debug: common_apply_stickiness: Resource testdummy-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:11:31 node1 pengine: [4033]: debug: common_apply_stickiness: Resource ipmi-stonith-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:11:31 node1 pengine: [4033]: info: get_failcount: ipmi-stonith-clone has failed 1 times on node1
Oct 14 13:11:31 node1 pengine: [4033]: notice: common_apply_stickiness: ipmi-stonith-clone can fail 999999 more times on node1 before being forced off
Oct 14 13:11:31 node1 pengine: [4033]: debug: common_apply_stickiness: Resource vmrd-res:1: preferring current location (node=node1, weight=1)
Oct 14 13:11:31 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to testdummy-res:1
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource testdummy-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:11:31 node1 pengine: [4033]: WARN: native_color: Resource testdummy-res:0 cannot run anywhere
Oct 14 13:11:31 node1 pengine: [4033]: debug: clone_color: Allocated 1 testdummy-clone instances of a possible 2
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to ipmi-stonith-res:1
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource ipmi-stonith-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:11:31 node1 pengine: [4033]: WARN: native_color: Resource ipmi-stonith-res:0 cannot run anywhere
Oct 14 13:11:31 node1 pengine: [4033]: debug: clone_color: Allocated 1 ipmi-stonith-clone instances of a possible 2
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vmrd-res:1
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_assign_node: All nodes for resource vmrd-res:0 are unavailable, unclean or shutting down (node2: 0, -1000000)
Oct 14 13:11:31 node1 pengine: [4033]: WARN: native_color: Resource vmrd-res:0 cannot run anywhere
Oct 14 13:11:31 node1 pengine: [4033]: debug: clone_color: Allocated 1 vmrd-master-res instances of a possible 2
Oct 14 13:11:31 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 105
Oct 14 13:11:31 node1 pengine: [4033]: info: master_color: Promoting vmrd-res:1 (Slave node1)
Oct 14 13:11:31 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:11:31 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:11:31 node1 pengine: [4033]: debug: master_color: vmrd-res:1 master score: 205
Oct 14 13:11:31 node1 pengine: [4033]: debug: master_color: vmrd-res:0 master score: 0
Oct 14 13:11:31 node1 pengine: [4033]: info: master_color: vmrd-master-res: Promoted 1 instances of a possible 1 to master
Oct 14 13:11:31 node1 pengine: [4033]: debug: native_assign_node: Assigning node1 to vsstvm-res
Oct 14 13:11:31 node1 pengine: [4033]: debug: master_create_actions: Creating actions for vmrd-master-res
Oct 14 13:11:31 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:31 node1 pengine:last message repeated 3 times
Oct 14 13:11:31 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:11:31 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:31 node1 pengine: [4033]: notice: RecurringOp:  Start recurring monitor (7s) for vmrd-res:1 on node1
Oct 14 13:11:31 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:31 node1 pengine: [4033]: debug: text2task: Unsupported action: stonith_complete
Oct 14 13:11:31 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:31 node1 pengine:last message repeated 3 times
Oct 14 13:11:31 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:0#011(Stopped)
Oct 14 13:11:31 node1 pengine: [4033]: notice: LogActions: Leave resource testdummy-res:1#011(Started node1)
Oct 14 13:11:31 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:0#011(Stopped)
Oct 14 13:11:31 node1 pengine: [4033]: notice: LogActions: Leave resource ipmi-stonith-res:1#011(Started node1)
Oct 14 13:11:31 node1 pengine: [4033]: notice: LogActions: Leave resource vmrd-res:0#011(Stopped)
Oct 14 13:11:31 node1 pengine: [4033]: notice: LogActions: Promote vmrd-res:1#011(Slave -> Master node1)
Oct 14 13:11:31 node1 pengine: [4033]: notice: LogActions: Start vsstvm-res#011(node1)
Oct 14 13:11:31 node1 pengine: [4033]: ERROR: crm_abort: crm_strdup_fn: Triggered assert at utils.c:775 : src != NULL
Oct 14 13:11:31 node1 pengine:last message repeated 7 times
Oct 14 13:11:31 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_LOG   
Oct 14 13:11:31 node1 crmd: [4034]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_TE_INVOKE
Oct 14 13:11:31 node1 crmd: [4034]: info: unpack_graph: Unpacked transition 107: 11 actions in 11 synapses
Oct 14 13:11:31 node1 crmd: [4034]: info: do_te_invoke: Processing graph 107 (ref=pe_calc-dc-1255547491-289) derived from /var/lib/pengine/pe-warn-26648.bz2
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 37 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=0, Pending=0, Fired=1, Skipped=0, Incomplete=10, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: te_rsc_command: Initiating action 63: notify vmrd-res:1_pre_notify_promote_0 on node1 (local)
Oct 14 13:11:31 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=63:107:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_notify_0 )
Oct 14 13:11:31 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation notify[64] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_operation=[promote] CRM_meta_notify_slave_resource=[vmrd-res:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_notify_master_resource=[ ] CRM_meta_timeout=[6000] CRM_met to the operation list.
Oct 14 13:11:31 node1 lrmd: [4031]: info: rsc:vmrd-res:1:64: notify
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=1, Pending=1, Fired=1, Skipped=0, Incomplete=9, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 pengine: [4033]: WARN: process_pe_message: Transition 107: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-26648.bz2
Oct 14 13:11:31 node1 pengine: [4033]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Oct 14 13:11:31 node1 vmrdra[12563]: INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 vmrdra[12563]: INFO:  notify: pre for promote - counts: active 0 - starting 0 - stopping 0
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO:  notify: pre for promote - counts: active 0 - starting 0 - stopping 0
Oct 14 13:11:31 node1 lrmd: [4031]: info: Managed vmrd-res:1:notify process 12563 exited with return code 0.
Oct 14 13:11:31 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_notify_0 (call=64, rc=0, cib-update=318, confirmed=true) complete ok
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.7 -> 0.70.8 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_pre_notify_promote_0 (63) confirmed on node1 (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 38 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=2, Pending=0, Fired=1, Skipped=0, Incomplete=8, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=3, Pending=0, Fired=1, Skipped=0, Incomplete=7, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: te_rsc_command: Initiating action 21: promote vmrd-res:1_promote_0 on node1 (local)
Oct 14 13:11:31 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=21:107:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_promote_0 )
Oct 14 13:11:31 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation promote[65] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_slave_resource=[vmrd-res:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_notify_master_resource=[ ] CRM_meta_timeout=[6000] CRM_meta_clone_max=[2] CRM_meta_notify_dem to the operation list.
Oct 14 13:11:31 node1 lrmd: [4031]: info: rsc:vmrd-res:1:65: promote
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=4, Pending=1, Fired=1, Skipped=0, Incomplete=6, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.8 -> 0.70.9 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_promote_0 (21) confirmed on node1 (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 39 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=5, Pending=0, Fired=2, Skipped=0, Incomplete=4, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: te_rsc_command: Initiating action 64: notify vmrd-res:1_post_notify_promote_0 on node1 (local)
Oct 14 13:11:31 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=64:107:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_notify_0 )
Oct 14 13:11:31 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation notify[66] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_operation=[promote] CRM_meta_notify_slave_resource=[vmrd-res:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_active_uname=[ ] CRM_meta_notify_promote_resource=[vmrd-res:1 ] CRM_meta_notify_stop_uname=[ ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_demote_uname=[ ] CRM_meta_notify_master_resource=[ ] CRM_meta_timeout=[6000] CRM_met to the operation list.
Oct 14 13:11:31 node1 lrmd: [4031]: info: rsc:vmrd-res:1:66: notify
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=7, Pending=1, Fired=1, Skipped=0, Incomplete=3, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 vmrdra[12604]: INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO: action: notify, clone instance vmrd-res:1
Oct 14 13:11:31 node1 vmrdra[12604]: INFO:  notify: post for promote - counts: active 0 - starting 0 - stopping 0
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:notify:stderr) 2009/10/14_13:11:31 INFO:  notify: post for promote - counts: active 0 - starting 0 - stopping 0
Oct 14 13:11:31 node1 lrmd: [4031]: info: Managed vmrd-res:1:notify process 12604 exited with return code 0.
Oct 14 13:11:31 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_notify_0 (call=66, rc=0, cib-update=320, confirmed=true) complete ok
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.9 -> 0.70.10 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_post_notify_promote_0 (64) confirmed on node1 (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: info: te_pseudo_action: Pseudo action 40 fired and confirmed
Oct 14 13:11:31 node1 crmd: [4034]: info: te_rsc_command: Initiating action 47: start vsstvm-res_start_0 on node1 (local)
Oct 14 13:11:31 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=47:107:0:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vsstvm-res_start_0 )
Oct 14 13:11:31 node1 lrmd: [4031]: debug: on_msg_perform_op:2359: copying parameters for rsc vsstvm-res
Oct 14 13:11:31 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation start[67] on ocf::vsstvm::vsstvm-res for client 4034, its parameters: CRM_meta_timeout=[6000] crm_feature_set=[3.0.1]  to the operation list.
Oct 14 13:11:31 node1 lrmd: [4031]: info: rsc:vsstvm-res:67: start
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=8, Pending=1, Fired=2, Skipped=0, Incomplete=1, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 crmd: [4034]: info: te_rsc_command: Initiating action 22: monitor vmrd-res:1_monitor_7000 on node1 (local)
Oct 14 13:11:31 node1 crmd: [4034]: info: do_lrm_rsc_op: Performing key=22:107:8:5796e0cd-bf36-4e41-afc7-335e064a4ec8 op=vmrd-res:1_monitor_7000 )
Oct 14 13:11:31 node1 lrmd: [4031]: debug: on_msg_perform_op: add an operation operation monitor[68] on ocf::vmrdra::vmrd-res:1 for client 4034, its parameters: CRM_meta_interval=[7000] CRM_meta_op_target_rc=[8] CRM_meta_role=[Master] CRM_meta_timeout=[3000] CRM_meta_clone_max=[2] crm_feature_set=[3.0.1] CRM_meta_globally_unique=[false] CRM_meta_name=[monitor] CRM_meta_clone=[1]  to the operation list.
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=9, Pending=2, Fired=1, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vsstvm-res:start:stdout) USER  HOME 
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vsstvm-res:start:stdout) VM_LIST: vmrd#012aes-dom
Oct 14 13:11:31 node1 vmrdra[12619]: INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:11:31 INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:11:31 node1 vmrdra[12619]: INFO: vmrd status: ACTIVE
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:11:31 INFO: vmrd status: ACTIVE
Oct 14 13:11:31 node1 crmd: [4034]: info: process_lrm_event: LRM operation vmrd-res:1_monitor_7000 (call=68, rc=8, cib-update=321, confirmed=false) complete master
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.10 -> 0.70.11 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: info: match_graph_event: Action vmrd-res:1_monitor_7000 (22) confirmed on node1 (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: debug: run_graph: Transition 107 (Complete=10, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-26648.bz2): In-progress
Oct 14 13:11:31 node1 vsstvm[12618]: INFO: Xen domain  already running.
Oct 14 13:11:31 node1 lrmd: [4031]: info: RA output: (vsstvm-res:start:stderr) 2009/10/14_13:11:31 INFO: Xen domain  already running.
Oct 14 13:11:31 node1 lrmd: [4031]: info: Managed vsstvm-res:start process 12618 exited with return code 0.
Oct 14 13:11:31 node1 crmd: [4034]: info: process_lrm_event: LRM operation vsstvm-res_start_0 (call=67, rc=0, cib-update=322, confirmed=true) complete ok
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: update cib finished
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed
Oct 14 13:11:31 node1 mgmtd: [4035]: debug: send evt: evt:cib_changed done
Oct 14 13:11:31 node1 haclient: on_event:evt:cib_changed
Oct 14 13:11:31 node1 crmd: [4034]: debug: te_update_diff: Processing diff (cib_modify): 0.70.11 -> 0.70.12 (S_TRANSITION_ENGINE)
Oct 14 13:11:31 node1 crmd: [4034]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 14 13:11:31 node1 crmd: [4034]: info: match_graph_event: Action vsstvm-res_start_0 (47) confirmed on node1 (rc=0)
Oct 14 13:11:31 node1 crmd: [4034]: info: run_graph: ====================================================
Oct 14 13:11:31 node1 crmd: [4034]: notice: run_graph: Transition 107 (Complete=11, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-26648.bz2): Complete
Oct 14 13:11:31 node1 crmd: [4034]: info: te_graph_trigger: Transition 107 is now complete
Oct 14 13:11:31 node1 crmd: [4034]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE
Oct 14 13:11:31 node1 crmd: [4034]: info: notify_crmd: Transition 107 status: done - <null>
Oct 14 13:11:31 node1 crmd: [4034]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_LOG   
Oct 14 13:11:31 node1 crmd: [4034]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 14 13:11:31 node1 crmd: [4034]: info: do_state_transition: Starting PEngine Recheck Timer
Oct 14 13:11:31 node1 crmd: [4034]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=528
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 14 13:11:31 node1 crmd: [4034]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: cib_query#012cib
Oct 14 13:11:32 node1 mgmtd: [4035]: info: CIB query: cib
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012<cib epoch="70" num_updates="12" admin_epoch="0" validate-with="pacemaker-1.0" crm_feature_set="3.0.1" have-quorum="0" cib-last-written="Tue Oct 13 15:10:01 2009" dc-uuid="node1">#012  <configuration>#012    <crm_config>#012      <cluster_property_set id="cib-bootstrap-options">#012        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7"/>#012        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="openais"/>#012        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>#012        <nvpair id="cib-bootstrap-options-dc-deadtime" name="dc-deadtime" value="6s"/>#012        <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2"/>#012        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1255479423"/>#012        <nvpair id="cib-bootstrap-options-default-action-timeout" name="default-action-timeout" value="6s"/>#012      </cluster_property_set>#012    </crm_config>#012    <nodes>#012      <node id="node2" uname="node2" type="normal"/>#012      <node id="node1" uname="node1" type="normal"/>#012    </nodes>#012    <resources>#012      <clone id="testdummy-clone">#012        <meta_attributes id="testdummy-clone-meta_attributes">#012          <nvpair id="testdummy-clone-meta_attributes-target-role" name="target-role" value="started"/>#012        </meta_attributes>#012        <primitive class="ocf" id="testdummy-res" provider="peakpoint" type="testdummy">#012          <operations id="testdummy-res-operations">#012            <op id="testdummy-res-op-monitor-10" interval="10" name="monitor" start-delay="0" timeout="20"/>#012          </operations>#012          <meta_attributes id="testdummy-res-meta_attributes">#012            <nvpair id="t
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: active_cib
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: all_nodes
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: f
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: crm_nodes
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node2#012node1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: active_nodes
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: cluster_type
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012openais
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: node_config#012node2
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node2#012False#012False#012False#012False#012False#012False#012member#012False#012False
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: node_config#012node1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node1#012True#012False#012False#012False#012True#012True#012member#012False#012False
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: all_rsc
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-clone#012ipmi-stonith-clone#012vmrd-master-res#012vsstvm-res
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-clone
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012testdummy-clone
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012testdummy-res:0#012testdummy-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012testdummy-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012testdummy-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012testdummy-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-clone
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012clone
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012ipmi-stonith-clone
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012ipmi-stonith-res:0#012ipmi-stonith-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012ipmi-stonith-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012ipmi-stonith-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012ipmi-stonith-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-master-res
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012master
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: sub_rsc#012vmrd-master-res
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012vmrd-res:0#012vmrd-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012not running#0121000000
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:0
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vmrd-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vmrd-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012running (Master)#0121000000
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vmrd-res:1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_type#012vsstvm-res
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012native
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_status#012vsstvm-res
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012running#0121000000
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: recv msg: rsc_running_on#012vsstvm-res
Oct 14 13:11:32 node1 mgmtd: [4035]: debug: send msg: o#012node1
Oct 14 13:11:32 node1 lrmd: [12657]: debug: stonithd_signon: creating connection
Oct 14 13:11:32 node1 lrmd: [12657]: debug: sending out the signon msg.
Oct 14 13:11:32 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12657 (pid=12657) succeeded to signon to stonithd.
Oct 14 13:11:32 node1 lrmd: [12657]: debug: signed on to stonithd.
Oct 14 13:11:32 node1 lrmd: [12657]: debug: waiting for the stonithRA reply msg.
Oct 14 13:11:32 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12657 [pid: 12657] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:11:32 node1 stonithd: [12658]: debug: external_status: called.
Oct 14 13:11:32 node1 stonithd: [12658]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:11:32 node1 lrmd: [12657]: debug: a stonith RA operation queue to run, call_id=12658.
Oct 14 13:11:32 node1 lrmd: [12657]: debug: stonithd_receive_ops_result: begin
Oct 14 13:11:36 node1 stonithd: [12658]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:11:36 node1 stonithd: [12658]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:11:36 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12658] exited, its exit code: 0 when signo=0.
Oct 14 13:11:36 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:11:36 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12657 (pid=12657) signed off
Oct 14 13:11:38 node1 vmrdra[12666]: INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:11:38 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:11:38 INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:11:38 node1 vmrdra[12666]: INFO: vmrd status: ACTIVE
Oct 14 13:11:38 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:11:38 INFO: vmrd status: ACTIVE
Oct 14 13:11:40 node1 testdummy[12688]: DEBUG: testdummy-res:1 monitor : 0
Oct 14 13:11:45 node1 vmrdra[12696]: INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:11:45 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:11:45 INFO: action: monitor, clone instance vmrd-res:1
Oct 14 13:11:45 node1 vmrdra[12696]: INFO: vmrd status: ACTIVE
Oct 14 13:11:45 node1 lrmd: [4031]: info: RA output: (vmrd-res:1:monitor:stderr) 2009/10/14_13:11:45 INFO: vmrd status: ACTIVE
Oct 14 13:11:46 node1 lrmd: [12718]: debug: stonithd_signon: creating connection
Oct 14 13:11:46 node1 lrmd: [12718]: debug: sending out the signon msg.
Oct 14 13:11:46 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12718 (pid=12718) succeeded to signon to stonithd.
Oct 14 13:11:46 node1 lrmd: [12718]: debug: signed on to stonithd.
Oct 14 13:11:46 node1 lrmd: [12718]: debug: waiting for the stonithRA reply msg.
Oct 14 13:11:46 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12718 [pid: 12718] requests a resource operation monitor on ipmi-stonith-res:1 (external/ipmi)
Oct 14 13:11:46 node1 stonithd: [12719]: debug: external_status: called.
Oct 14 13:11:46 node1 stonithd: [12719]: debug: external_run_cmd: Calling '/usr/lib/stonith/plugins/external/ipmi status'
Oct 14 13:11:46 node1 lrmd: [12718]: debug: a stonith RA operation queue to run, call_id=12719.
Oct 14 13:11:46 node1 lrmd: [12718]: debug: stonithd_receive_ops_result: begin
Oct 14 13:11:46 node1 stonithd: [12719]: debug: external_run_cmd: '/usr/lib/stonith/plugins/external/ipmi status' output: IPMI plugin: node2 172.16.127.131#012Chassis Power is on
Oct 14 13:11:46 node1 stonithd: [12719]: debug: external_status: running 'ipmi status' returned 0
Oct 14 13:11:46 node1 stonithd: [4029]: debug: Child process external_ipmi-stonith-res:1_monitor [12719] exited, its exit code: 0 when signo=0.
Oct 14 13:11:46 node1 stonithd: [4029]: debug: ipmi-stonith-res:1's (external/ipmi) op monitor finished. op_result=0
Oct 14 13:11:46 node1 stonithd: [4029]: debug: client STONITH_RA_EXEC_12718 (pid=12718) signed off
