[Pacemaker] crmd restart due to internal error - pacemaker 1.1.8

Andrew Beekhof andrew at beekhof.net
Thu May 9 20:51:51 EDT 2013


On 08/05/2013, at 9:16 PM, pavan tc <pavan.tc at gmail.com> wrote:

> Hi,
> 
> I have a two-node cluster with STONITH disabled.

That's not a good idea.
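
If you want to fix that, it would look something like the following (a sketch only; external/ipmi is just one example agent, and the addresses/credentials below are placeholders -- pick an agent that matches your hardware):

  # Illustrative only: one fencing device per node, then turn stonith on.
  crm configure primitive fence-vsan15 stonith:external/ipmi \
      params hostname=vsan15 ipaddr=192.168.1.15 userid=admin passwd=secret \
      op monitor interval=60s
  # Don't let a node run its own fencing device.
  crm configure location l-fence-vsan15 fence-vsan15 -inf: vsan15
  crm configure property stonith-enabled=true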

> I am still running with the pcmk plugin, as opposed to the recommended CMAN-based stack.

On RHEL 6?
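
(You can tell which stack you're on from how pacemaker gets started. With the plugin, /etc/corosync/corosync.conf carries a stanza along these lines -- shown only as a sketch:

  service {
      # corosync loads the pacemaker plugin
      name: pacemaker
      ver:  0    # 0 = the plugin starts pacemaker itself; 1 = pacemakerd is started separately
  }

whereas with CMAN the membership layer is configured in /etc/cluster/cluster.conf and corosync is started by cman.)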

> 
> With 1.1.8, I see some messages (appended to this mail) once in a while. I do not understand some of the keywords here: there is a "Leave" action, and I am not sure what that means.

It means the cluster is not going to change the state of the resource.

> And there is a CIB update failure that leads to a RECOVER action. There is a message that says the RECOVER action is not supported. Finally, this leads to a stop and start of my resource.

Well, and also Pacemaker's crmd process.
My guess: the node is overloaded, which is causing the CIB queries to time out.
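
A quick way to sanity-check that theory (a rough sketch; run on the DC while the messages are appearing):

  uptime                          # is the load average high?
  time cibadmin -Q > /dev/null    # how long does a full CIB read take?

If the query takes a long time while the load is high, that would support the overload theory.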

> I can copy the "crm configure show" output, but nothing special there.
> 
> Thanks much.
> Pavan
> 
> PS: The resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb is stale. The underlying device that represents this resource has been removed. However, the resource is still part of the CIB. All errors related to that resource can be ignored. But can this cause a node to be stopped/fenced?

Not if fencing is disabled.
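
Still, rather than ignoring the errors, it would be cleaner to get rid of the stale resource. Assuming crmsh, something along these lines (using the ms id from your logs):

  crm resource stop ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb
  crm resource cleanup ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb
  crm configure delete ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb

The cleanup also clears the INFINITY failcounts that are filling the pengine logs.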

> 
> ----------------------------------------------------------------------------------------------------------------------------------------
> May 07 05:15:24 [10845] vsan15    pengine:     info: short_print:          Masters: [ vsan15 ]
> May 07 05:15:24 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-090f26ed-5991-4f40-833e-02e76759dd41:1 ]
> May 07 05:15:24 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan15
> May 07 05:15:24 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:15:24 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan15
> May 07 05:15:24 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:15:24 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan16
> May 07 05:15:24 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:15:24 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan16
> May 07 05:15:24 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:15:24 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 cannot run anywhere
> May 07 05:15:24 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 cannot run anywhere
> May 07 05:15:24 [10845] vsan15    pengine:     info: master_color:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb: Promoted 0 instances of a possible 1 to master
> May 07 05:15:24 [10845] vsan15    pengine:     info: native_color:     Resource vha-090f26ed-5991-4f40-833e-02e76759dd41:1 cannot run anywhere
> May 07 05:15:24 [10845] vsan15    pengine:     info: master_color:     Promoting vha-090f26ed-5991-4f40-833e-02e76759dd41:0 (Master vsan15)
> May 07 05:15:24 [10845] vsan15    pengine:     info: master_color:     ms-090f26ed-5991-4f40-833e-02e76759dd41: Promoted 1 instances of a possible 1 to master
> May 07 05:15:24 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (30s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:0 on vsan15
> May 07 05:15:24 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (30s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:0 on vsan15
> May 07 05:15:24 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0    (Stopped)
> May 07 05:15:24 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1    (Stopped)
> May 07 05:15:24 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:0    (Master vsan15)
> May 07 05:15:24 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:1    (Stopped)
> May 07 05:15:24 [10845] vsan15    pengine:   notice: process_pe_message:     Calculated Transition 9: /var/lib/pacemaker/pengine/pe-input-189.bz2
> May 07 05:15:24 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> May 07 05:15:24 [10846] vsan15       crmd:     info: do_te_invoke:     Processing graph 9 (ref=pe_calc-dc-1367928924-43) derived from /var/lib/pacemaker/pengine/pe-input-189.bz2
> May 07 05:15:24 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 16: monitor vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_30000 on vsan15 (local)
> May 07 05:15:24 [10846] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_30000 (call=50, rc=8, cib-update=66, confirmed=false) master
> May 07 05:15:24 [10846] vsan15       crmd:   notice: run_graph:     Transition 9 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-189.bz2): Complete
> May 07 05:15:24 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
> May 07 05:16:25 corosync [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 14136: memb=1, new=0, lost=0
> May 07 05:16:25 corosync [pcmk  ] info: pcmk_peer_update: memb: vsan15 1682182316
> May 07 05:16:25 corosync [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 14136: memb=2, new=1, lost=0
> May 07 05:16:25 corosync [pcmk  ] info: update_member: Node 1698959532/vsan16 is now: member
> May 07 05:16:25 corosync [pcmk  ] info: pcmk_peer_update: NEW:  vsan16 1698959532
> May 07 05:16:25 corosync [pcmk  ] info: pcmk_peer_update: MEMB: vsan15 1682182316
> May 07 05:16:25 [10846] vsan15       crmd:   notice: ais_dispatch_message:     Membership 14136: quorum acquired
> May 07 05:16:25 corosync [pcmk  ] info: pcmk_peer_update: MEMB: vsan16 1698959532
> May 07 05:16:25 [10846] vsan15       crmd:   notice: crm_update_peer_state:     crm_update_ais_node: Node vsan16[1698959532] - state is now member
> May 07 05:16:25 corosync [pcmk  ] info: send_member_notification: Sending membership update 14136 to 2 children
> May 07 05:16:25 [10846] vsan15       crmd:     info: peer_update_callback:     vsan16 is now member (was lost)
> May 07 05:16:25 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
> May 07 05:16:25 [10841] vsan15        cib:   notice: ais_dispatch_message:     Membership 14136: quorum acquired
> May 07 05:16:25 [10841] vsan15        cib:   notice: crm_update_peer_state:     crm_update_ais_node: Node vsan16[1698959532] - state is now member
> May 07 05:16:25 corosync [pcmk  ] info: update_member: 0x15d4730 Node 1698959532 (vsan16) born on: 14136
> May 07 05:16:25 corosync [pcmk  ] info: send_member_notification: Sending membership update 14136 to 2 children
> May 07 05:16:25 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/67, version=0.1344.91): OK (rc=0)
> May 07 05:16:25 [10841] vsan15        cib:     info: ais_dispatch_message:     Membership 14136: quorum retained
> May 07 05:16:25 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section cib (origin=local/crmd/69, version=0.1344.93): OK (rc=0)
> May 07 05:16:25 [10846] vsan15       crmd:     info: crmd_ais_dispatch:     Setting expected votes to 2
> May 07 05:16:25 [10846] vsan15       crmd:     info: ais_dispatch_message:     Membership 14136: quorum retained
> May 07 05:16:25 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/71, version=0.1344.94): OK (rc=0)
> May 07 05:16:25 corosync [CPG   ] chosen downlist: sender r(0) ip(172.16.68.100) ; members(old:1 left:0)
> May 07 05:16:25 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/72, version=0.1344.95): OK (rc=0)
> May 07 05:16:25 corosync [MAIN  ] Completed service synchronization, ready to provide service.
> May 07 05:16:25 [10846] vsan15       crmd:     info: crmd_ais_dispatch:     Setting expected votes to 2
> May 07 05:16:25 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/75, version=0.1344.97): OK (rc=0)
> May 07 05:16:28 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: peer_update_callback:     Client vsan16/peer now has status [offline] (DC=true)
> May 07 05:16:28 [10846] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: peer_update_callback:     Client vsan16/peer now has status [offline] (DC=true)
> May 07 05:16:28 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: peer_update_callback:     Client vsan16/peer now has status [offline] (DC=true)
> May 07 05:16:28 [10846] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: peer_update_callback:     Client vsan16/peer now has status [offline] (DC=true)
> May 07 05:16:28 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: peer_update_callback:     Client vsan16/peer now has status [offline] (DC=true)
> May 07 05:16:28 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: peer_update_callback:     Client vsan16/peer now has status [online] (DC=true)
> May 07 05:16:28 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:  warning: match_down_event:     No match for shutdown action on vsan16
> May 07 05:16:28 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
> May 07 05:16:28 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:16:28 [10846] vsan15       crmd:     info: abort_transition_graph:     do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
> May 07 05:16:28 [10846] vsan15       crmd:     info: join_make_offer:     Making join offers based on membership 14136
> May 07 05:16:28 [10846] vsan15       crmd:     info: do_dc_join_offer_all:     join-2: Waiting on 2 outstanding join acks
> May 07 05:16:28 [10846] vsan15       crmd:     info: update_dc:     Set DC to vsan15 (3.0.7)
> May 07 05:16:29 [10841] vsan15        cib:  warning: cib_process_diff:     Diff 0.1344.0 -> 0.1344.1 from vsan16 not applied to 0.1344.98: current "num_updates" is greater than required
> May 07 05:16:30 [10846] vsan15       crmd:     info: do_dc_join_offer_all:     A new node joined the cluster
> May 07 05:16:30 [10846] vsan15       crmd:     info: do_dc_join_offer_all:     join-3: Waiting on 2 outstanding join acks
> May 07 05:16:30 [10846] vsan15       crmd:     info: update_dc:     Set DC to vsan15 (3.0.7)
> May 07 05:16:31 [10846] vsan15       crmd:     info: crm_update_peer_expected:     do_dc_join_filter_offer: Node vsan16[1698959532] - expected state is now member
> May 07 05:16:31 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 07 05:16:31 [10846] vsan15       crmd:     info: do_dc_join_finalize:     join-3: Syncing the CIB from vsan15 to the rest of the cluster
> May 07 05:16:31 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_sync for section 'all' (origin=local/crmd/79, version=0.1344.98): OK (rc=0)
> May 07 05:16:31 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/80, version=0.1344.99): OK (rc=0)
> May 07 05:16:31 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/81, version=0.1344.100): OK (rc=0)
> May 07 05:16:31 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_delete for section //node_state[@uname='vsan16']/transient_attributes (origin=vsan16/crmd/7, version=0.1344.101): OK (rc=0)
> May 07 05:16:32 [10846] vsan15       crmd:     info: services_os_action_execute:     Managed vgc-cm-agent.ocf_meta-data_0 process 14403 exited with rc=0
> May 07 05:16:32 [10846] vsan15       crmd:     info: do_dc_join_ack:     join-3: Updating node state to member for vsan16
> May 07 05:16:32 [10846] vsan15       crmd:     info: erase_status_tag:     Deleting xpath: //node_state[@uname='vsan16']/lrm
> May 07 05:16:32 [10846] vsan15       crmd:     info: do_dc_join_ack:     join-3: Updating node state to member for vsan15
> May 07 05:16:32 [10846] vsan15       crmd:     info: erase_status_tag:     Deleting xpath: //node_state[@uname='vsan15']/lrm
> May 07 05:16:32 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_delete for section //node_state[@uname='vsan16']/lrm (origin=local/crmd/82, version=0.1344.102): OK (rc=0)
> May 07 05:16:32 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 07 05:16:32 [10846] vsan15       crmd:     info: abort_transition_graph:     do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
> May 07 05:16:32 [10844] vsan15      attrd:   notice: attrd_local_callback:     Sending full refresh (origin=crmd)
> May 07 05:16:32 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: master-vha-090f26ed-5991-4f40-833e-02e76759dd41 (4)
> May 07 05:16:32 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_delete for section //node_state[@uname='vsan15']/lrm (origin=local/crmd/84, version=0.1344.104): OK (rc=0)
> May 07 05:16:32 [10846] vsan15       crmd:     info: abort_transition_graph:     te_update_diff:271 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_last_0, magic=0:0;1:2:0:ec7dd1c6-710a-4cee-b9d3-26e09c6ffb53, cib=0.1344.104) : Resource op removal
> May 07 05:16:32 [10846] vsan15       crmd:     info: abort_transition_graph:     te_update_diff:227 - Triggered transition abort (complete=1) : LRM Refresh
> May 07 05:16:32 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/86, version=0.1344.106): OK (rc=0)
> May 07 05:16:32 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section cib (origin=local/crmd/88, version=0.1344.108): OK (rc=0)
> May 07 05:16:32 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: fail-count-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (INFINITY)
> May 07 05:16:32 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: last-failure-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (1367926970)
> May 07 05:16:32 [10845] vsan15    pengine:     info: unpack_config:     Startup probes: enabled
> May 07 05:16:32 [10845] vsan15    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
> May 07 05:16:32 [10845] vsan15    pengine:     info: unpack_config:     Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> May 07 05:16:32 [10845] vsan15    pengine:     info: unpack_domains:     Unpacking domains
> May 07 05:16:32 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan15 is online
> May 07 05:16:32 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan16 is online
> May 07 05:16:32 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan15 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:16:32 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan15: not running (7)
> May 07 05:16:32 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan15 to vha-090f26ed-5991-4f40-833e-02e76759dd41:0
> May 07 05:16:32 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb [vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb]
> May 07 05:16:32 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 ]
> May 07 05:16:32 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-090f26ed-5991-4f40-833e-02e76759dd41 [vha-090f26ed-5991-4f40-833e-02e76759dd41]
> May 07 05:16:32 [10845] vsan15    pengine:     info: short_print:          Masters: [ vsan15 ]
> May 07 05:16:32 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-090f26ed-5991-4f40-833e-02e76759dd41:1 ]
> May 07 05:16:32 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan15
> May 07 05:16:32 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:16:32 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan15
> May 07 05:16:32 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:16:32 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 cannot run anywhere
> May 07 05:16:32 [10845] vsan15    pengine:     info: master_color:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb: Promoted 0 instances of a possible 1 to master
> May 07 05:16:32 [10845] vsan15    pengine:     info: master_color:     Promoting vha-090f26ed-5991-4f40-833e-02e76759dd41:0 (Master vsan15)
> May 07 05:16:32 [10845] vsan15    pengine:     info: master_color:     ms-090f26ed-5991-4f40-833e-02e76759dd41: Promoted 1 instances of a possible 1 to master
> May 07 05:16:32 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: probe_complete (true)
> May 07 05:16:32 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan16
> May 07 05:16:32 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan16
> May 07 05:16:32 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:16:32 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:16:32 [10845] vsan15    pengine:   notice: LogActions:     Start   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0    (vsan16)
> May 07 05:16:32 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1    (Stopped)
> May 07 05:16:32 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:0    (Master vsan15)
> May 07 05:16:32 [10845] vsan15    pengine:   notice: LogActions:     Start   vha-090f26ed-5991-4f40-833e-02e76759dd41:1    (vsan16)
> May 07 05:16:32 [10845] vsan15    pengine:   notice: process_pe_message:     Calculated Transition 10: /var/lib/pacemaker/pengine/pe-input-190.bz2
> May 07 05:16:32 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> May 07 05:16:32 [10846] vsan15       crmd:     info: do_te_invoke:     Processing graph 10 (ref=pe_calc-dc-1367928992-56) derived from /var/lib/pacemaker/pengine/pe-input-190.bz2
> May 07 05:16:32 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 6: monitor vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_monitor_0 on vsan16
> May 07 05:16:32 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 7: monitor vha-090f26ed-5991-4f40-833e-02e76759dd41:1_monitor_0 on vsan16
> May 07 05:16:33 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 5: probe_complete probe_complete on vsan16 - no waiting
> May 07 05:16:33 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 8: start vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_start_0 on vsan16
> May 07 05:16:33 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 22: start vha-090f26ed-5991-4f40-833e-02e76759dd41:1_start_0 on vsan16
> May 07 05:16:33 [10846] vsan15       crmd:  warning: status_from_rc:     Action 8 (vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_start_0) on vsan16 failed (target: 0 vs. rc: 7): Error
> May 07 05:16:33 [10846] vsan15       crmd:  warning: update_failcount:     Updating failcount for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan16 after failed start: rc=7 (update=INFINITY, time=1367928993)
> May 07 05:16:33 [10846] vsan15       crmd:     info: abort_transition_graph:     match_graph_event:275 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_last_failure_0, magic=0:7;8:10:0:635c50ca-8f47-4a94-a92c-eed5e2566766, cib=0.1344.117) : Event failed
> May 07 05:16:33 [10846] vsan15       crmd:  warning: update_failcount:     Updating failcount for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan16 after failed start: rc=7 (update=INFINITY, time=1367928993)
> May 07 05:16:33 [10846] vsan15       crmd:     info: process_graph_event:     Detected action (10.8) vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_start_0.15=not running: failed
> May 07 05:16:33 [10846] vsan15       crmd:     info: abort_transition_graph:     te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-vsan16-fail-count-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb, name=fail-count-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb, value=INFINITY, magic=NA, cib=0.1344.118) : Transient attribute: update
> May 07 05:16:33 [10846] vsan15       crmd:     info: abort_transition_graph:     te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-vsan16-last-failure-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb, name=last-failure-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb, value=1367928993, magic=NA, cib=0.1344.119) : Transient attribute: update
> May 07 05:16:34 [10846] vsan15       crmd:   notice: run_graph:     Transition 10 (Complete=10, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-190.bz2): Stopped
> May 07 05:16:34 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
> May 07 05:16:34 [10845] vsan15    pengine:     info: unpack_config:     Startup probes: enabled
> May 07 05:16:34 [10845] vsan15    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
> May 07 05:16:34 [10845] vsan15    pengine:     info: unpack_config:     Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> May 07 05:16:34 [10845] vsan15    pengine:     info: unpack_domains:     Unpacking domains
> May 07 05:16:34 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan15 is online
> May 07 05:16:34 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan16 is online
> May 07 05:16:34 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan15 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:16:34 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan15: not running (7)
> May 07 05:16:34 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan15 to vha-090f26ed-5991-4f40-833e-02e76759dd41:0
> May 07 05:16:34 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan16 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:16:34 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan16: not running (7)
> May 07 05:16:34 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan16 to vha-090f26ed-5991-4f40-833e-02e76759dd41:1
> May 07 05:16:34 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb [vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb]
> May 07 05:16:34 [10845] vsan15    pengine:     info: native_print:          vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0    (ocf::heartbeat:vgc-cm-agent.ocf):    Slave vsan16 FAILED
> May 07 05:16:34 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 ]
> May 07 05:16:34 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-090f26ed-5991-4f40-833e-02e76759dd41 [vha-090f26ed-5991-4f40-833e-02e76759dd41]
> May 07 05:16:34 [10845] vsan15    pengine:     info: short_print:          Masters: [ vsan15 ]
> May 07 05:16:34 [10845] vsan15    pengine:     info: short_print:          Slaves: [ vsan16 ]
> May 07 05:16:34 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan15
> May 07 05:16:34 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:16:34 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan15
> May 07 05:16:34 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:16:34 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan16
> May 07 05:16:34 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:16:34 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan16
> May 07 05:16:34 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:16:34 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 cannot run anywhere
> May 07 05:16:34 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 cannot run anywhere
> May 07 05:16:34 [10845] vsan15    pengine:     info: master_color:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb: Promoted 0 instances of a possible 1 to master
> May 07 05:16:34 [10845] vsan15    pengine:     info: master_color:     Promoting vha-090f26ed-5991-4f40-833e-02e76759dd41:0 (Master vsan15)
> May 07 05:16:34 [10845] vsan15    pengine:     info: master_color:     ms-090f26ed-5991-4f40-833e-02e76759dd41: Promoted 1 instances of a possible 1 to master
> May 07 05:16:34 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:16:34 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:16:34 [10845] vsan15    pengine:   notice: LogActions:     Stop    vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0    (vsan16)
> May 07 05:16:34 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1    (Stopped)
> May 07 05:16:34 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:0    (Master vsan15)
> May 07 05:16:34 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:1    (Slave vsan16)
> May 07 05:16:34 [10845] vsan15    pengine:   notice: process_pe_message:     Calculated Transition 11: /var/lib/pacemaker/pengine/pe-input-191.bz2
> May 07 05:16:34 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> May 07 05:16:34 [10846] vsan15       crmd:     info: do_te_invoke:     Processing graph 11 (ref=pe_calc-dc-1367928994-62) derived from /var/lib/pacemaker/pengine/pe-input-191.bz2
> May 07 05:16:34 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 21: monitor vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_31000 on vsan16
> May 07 05:16:34 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 2: stop vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_stop_0 on vsan16
> May 07 05:16:34 [10846] vsan15       crmd:   notice: run_graph:     Transition 11 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-191.bz2): Complete
> May 07 05:16:34 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_replace_notify:     Replaced: 0.1344.128 -> 0.1345.1 from vsan16
> May 07 05:16:36 [10846] vsan15       crmd:     info: abort_transition_graph:     te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.1345.1) : Non-status change
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     Diff: --- 0.1344.128
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     Diff: +++ 0.1345.1 0d520a23bc38850f68e350ce2bd7f47a
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     -- <cib admin_epoch="0" epoch="1344" num_updates="128" />
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     ++       <rsc_location id="ms_stop_res_on_node" rsc="ms-090f26ed-5991-4f40-833e-02e76759dd41" >
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     ++         <rule id="ms_stop_res_on_node-rule" score="-INFINITY" >
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     ++           <expression attribute="#uname" id="ms_stop_res_on_node-expression" operation="eq" value="vsan16" />
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     ++         </rule>
> May 07 05:16:36 [10841] vsan15        cib:   notice: cib:diff:     ++       </rsc_location>
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_replace for section 'all' (origin=vsan16/cibadmin/2, version=0.1345.1): OK (rc=0)
> May 07 05:16:36 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
> May 07 05:16:36 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_dc_takeover:     Taking over DC status for this partition
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/93, version=0.1345.2): OK (rc=0)
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_master for section 'all' (origin=local/crmd/96, version=0.1345.4): OK (rc=0)
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section cib (origin=local/crmd/97, version=0.1345.5): OK (rc=0)
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/99, version=0.1345.6): OK (rc=0)
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_dc_join_offer_all:     join-4: Waiting on 2 outstanding join acks
> May 07 05:16:36 [10846] vsan15       crmd:     info: ais_dispatch_message:     Membership 14136: quorum retained
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/101, version=0.1345.8): OK (rc=0)
> May 07 05:16:36 [10846] vsan15       crmd:     info: crmd_ais_dispatch:     Setting expected votes to 2
> May 07 05:16:36 [10846] vsan15       crmd:     info: update_dc:     Set DC to vsan15 (3.0.7)
> May 07 05:16:36 [10846] vsan15       crmd:     info: ais_dispatch_message:     Membership 14136: quorum retained
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/104, version=0.1345.10): OK (rc=0)
> May 07 05:16:36 [10846] vsan15       crmd:     info: crmd_ais_dispatch:     Setting expected votes to 2
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/107, version=0.1345.11): OK (rc=0)
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_dc_join_finalize:     join-4: Syncing the CIB from vsan16 to the rest of the cluster
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_replace:     Digest matched on replace from vsan16: 2040dc845f64bd8da56d87369ad19883
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_replace:     Replaced 0.1345.13 with 0.1345.13 from vsan16
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_sync for section 'all' (origin=vsan16/vsan16/108, version=0.1345.13): OK (rc=0)
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_dc_join_ack:     join-4: Updating node state to member for vsan15
> May 07 05:16:36 [10846] vsan15       crmd:     info: erase_status_tag:     Deleting xpath: //node_state[@uname='vsan15']/lrm
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_dc_join_ack:     join-4: Updating node state to member for vsan16
> May 07 05:16:36 [10846] vsan15       crmd:     info: erase_status_tag:     Deleting xpath: //node_state[@uname='vsan16']/lrm
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/109, version=0.1345.14): OK (rc=0)
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/110, version=0.1345.15): OK (rc=0)
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_delete for section //node_state[@uname='vsan15']/lrm (origin=local/crmd/111, version=0.1345.16): OK (rc=0)
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_delete for section //node_state[@uname='vsan16']/lrm (origin=local/crmd/113, version=0.1345.21): OK (rc=0)
> May 07 05:16:36 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 07 05:16:36 [10846] vsan15       crmd:     info: abort_transition_graph:     do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/115, version=0.1345.23): OK (rc=0)
> May 07 05:16:36 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section cib (origin=local/crmd/117, version=0.1345.25): OK (rc=0)
> May 07 05:16:36 [10844] vsan15      attrd:   notice: attrd_local_callback:     Sending full refresh (origin=crmd)
> May 07 05:16:36 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: master-vha-090f26ed-5991-4f40-833e-02e76759dd41 (4)
> May 07 05:16:36 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: fail-count-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (INFINITY)
> May 07 05:16:36 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: last-failure-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (1367926970)
> May 07 05:16:36 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: probe_complete (true)
> May 07 05:16:37 [10845] vsan15    pengine:     info: unpack_config:     Startup probes: enabled
> May 07 05:16:37 [10845] vsan15    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
> May 07 05:16:37 [10845] vsan15    pengine:     info: unpack_config:     Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> May 07 05:16:37 [10845] vsan15    pengine:     info: unpack_domains:     Unpacking domains
> May 07 05:16:37 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan15 is online
> May 07 05:16:37 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan16 is online
> May 07 05:16:37 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan15 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:16:37 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan15: not running (7)
> May 07 05:16:37 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan15 to vha-090f26ed-5991-4f40-833e-02e76759dd41:0
> May 07 05:16:37 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan16 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:16:37 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan16: not running (7)
> May 07 05:16:37 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan16 to vha-090f26ed-5991-4f40-833e-02e76759dd41:1
> May 07 05:16:37 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb [vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb]
> May 07 05:16:37 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 ]
> May 07 05:16:37 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-090f26ed-5991-4f40-833e-02e76759dd41 [vha-090f26ed-5991-4f40-833e-02e76759dd41]
> May 07 05:16:37 [10845] vsan15    pengine:     info: short_print:          Masters: [ vsan15 ]
> May 07 05:16:37 [10845] vsan15    pengine:     info: short_print:          Slaves: [ vsan16 ]
> May 07 05:16:37 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan15
> May 07 05:16:37 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:16:37 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan15
> May 07 05:16:37 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:16:37 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan16
> May 07 05:16:37 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:16:37 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan16
> May 07 05:16:37 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:16:37 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 cannot run anywhere
> May 07 05:16:37 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 cannot run anywhere
> May 07 05:16:37 [10845] vsan15    pengine:     info: master_color:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb: Promoted 0 instances of a possible 1 to master
> May 07 05:16:37 [10845] vsan15    pengine:     info: native_color:     Resource vha-090f26ed-5991-4f40-833e-02e76759dd41:1 cannot run anywhere
> May 07 05:16:37 [10845] vsan15    pengine:     info: master_color:     Promoting vha-090f26ed-5991-4f40-833e-02e76759dd41:0 (Master vsan15)
> May 07 05:16:37 [10845] vsan15    pengine:     info: master_color:     ms-090f26ed-5991-4f40-833e-02e76759dd41: Promoted 1 instances of a possible 1 to master
> May 07 05:16:37 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0    (Stopped)
> May 07 05:16:37 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1    (Stopped)
> May 07 05:16:37 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:0    (Master vsan15)
> May 07 05:16:37 [10845] vsan15    pengine:   notice: LogActions:     Stop    vha-090f26ed-5991-4f40-833e-02e76759dd41:1    (vsan16)
> May 07 05:16:37 [10845] vsan15    pengine:   notice: process_pe_message:     Calculated Transition 12: /var/lib/pacemaker/pengine/pe-input-192.bz2
> May 07 05:16:37 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> May 07 05:16:37 [10846] vsan15       crmd:     info: do_te_invoke:     Processing graph 12 (ref=pe_calc-dc-1367928997-72) derived from /var/lib/pacemaker/pengine/pe-input-192.bz2
> May 07 05:16:37 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 19: stop vha-090f26ed-5991-4f40-833e-02e76759dd41_stop_0 on vsan16
> May 07 05:16:38 [10846] vsan15       crmd:   notice: run_graph:     Transition 12 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-192.bz2): Complete
> May 07 05:16:38 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_replace_notify:     Replaced: 0.1345.36 -> 0.1346.1 from vsan16
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     Diff: --- 0.1345.36
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     Diff: +++ 0.1346.1 d5dcc8f04f6661a4137788e300500a61
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     --       <rsc_location id="ms_stop_res_on_node" rsc="ms-090f26ed-5991-4f40-833e-02e76759dd41" >
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     --         <rule id="ms_stop_res_on_node-rule" score="-INFINITY" >
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     --           <expression attribute="#uname" id="ms_stop_res_on_node-expression" operation="eq" value="vsan16" />
> May 07 05:17:39 [10846] vsan15       crmd:     info: abort_transition_graph:     te_update_diff:126 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.1346.1) : Non-status change
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     --         </rule>
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     --       </rsc_location>
> May 07 05:17:39 [10841] vsan15        cib:   notice: cib:diff:     ++ <cib epoch="1346" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" update-origin="vsan16" update-client="cibadmin" cib-last-written="Tue May  7 05:16:36 2013" have-quorum="1" dc-uuid="vsan15" />
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_replace for section 'all' (origin=vsan16/cibadmin/2, version=0.1346.1): OK (rc=0)
> May 07 05:17:39 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
> May 07 05:17:39 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
> May 07 05:17:39 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
> May 07 05:17:39 [10846] vsan15       crmd:     info: do_dc_takeover:     Taking over DC status for this partition
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/120, version=0.1346.2): OK (rc=0)
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_master for section 'all' (origin=local/crmd/123, version=0.1346.4): OK (rc=0)
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section cib (origin=local/crmd/124, version=0.1346.5): OK (rc=0)
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/126, version=0.1346.6): OK (rc=0)
> May 07 05:17:39 [10846] vsan15       crmd:     info: do_dc_join_offer_all:     join-5: Waiting on 2 outstanding join acks
> May 07 05:17:39 [10846] vsan15       crmd:     info: ais_dispatch_message:     Membership 14136: quorum retained
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/128, version=0.1346.8): OK (rc=0)
> May 07 05:17:39 [10846] vsan15       crmd:     info: crmd_ais_dispatch:     Setting expected votes to 2
> May 07 05:17:39 [10846] vsan15       crmd:     info: update_dc:     Set DC to vsan15 (3.0.7)
> May 07 05:17:39 [10846] vsan15       crmd:     info: ais_dispatch_message:     Membership 14136: quorum retained
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/131, version=0.1346.10): OK (rc=0)
> May 07 05:17:39 [10846] vsan15       crmd:     info: crmd_ais_dispatch:     Setting expected votes to 2
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section crm_config (origin=local/crmd/134, version=0.1346.11): OK (rc=0)
> May 07 05:17:39 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 07 05:17:39 [10846] vsan15       crmd:     info: do_dc_join_finalize:     join-5: Syncing the CIB from vsan16 to the rest of the cluster
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_replace:     Digest matched on replace from vsan16: e06eba70e44824f664002c5d2e5a6dbd
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_replace:     Replaced 0.1346.13 with 0.1346.13 from vsan16
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_sync for section 'all' (origin=vsan16/vsan16/135, version=0.1346.13): OK (rc=0)
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/136, version=0.1346.14): OK (rc=0)
> May 07 05:17:39 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/137, version=0.1346.15): OK (rc=0)
> May 07 05:17:40 [10846] vsan15       crmd:     info: services_os_action_execute:     Managed vgc-cm-agent.ocf_meta-data_0 process 14517 exited with rc=0
> May 07 05:17:40 [10846] vsan15       crmd:     info: do_dc_join_ack:     join-5: Updating node state to member for vsan16
> May 07 05:17:40 [10846] vsan15       crmd:     info: erase_status_tag:     Deleting xpath: //node_state[@uname='vsan16']/lrm
> May 07 05:17:40 [10846] vsan15       crmd:     info: do_dc_join_ack:     join-5: Updating node state to member for vsan15
> May 07 05:17:40 [10846] vsan15       crmd:     info: erase_status_tag:     Deleting xpath: //node_state[@uname='vsan15']/lrm
> May 07 05:17:40 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_delete for section //node_state[@uname='vsan16']/lrm (origin=local/crmd/138, version=0.1346.16): OK (rc=0)
> May 07 05:17:40 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_delete for section //node_state[@uname='vsan15']/lrm (origin=local/crmd/140, version=0.1346.21): OK (rc=0)
> May 07 05:17:40 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 07 05:17:40 [10846] vsan15       crmd:     info: abort_transition_graph:     do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
> May 07 05:17:40 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/142, version=0.1346.23): OK (rc=0)
> May 07 05:17:40 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section cib (origin=local/crmd/144, version=0.1346.25): OK (rc=0)
> May 07 05:17:40 [10844] vsan15      attrd:   notice: attrd_local_callback:     Sending full refresh (origin=crmd)
> May 07 05:17:40 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: master-vha-090f26ed-5991-4f40-833e-02e76759dd41 (4)
> May 07 05:17:40 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: fail-count-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (INFINITY)
> May 07 05:17:40 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: last-failure-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (1367926970)
> May 07 05:17:40 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: probe_complete (true)
> May 07 05:17:41 [10845] vsan15    pengine:     info: unpack_config:     Startup probes: enabled
> May 07 05:17:41 [10845] vsan15    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
> May 07 05:17:41 [10845] vsan15    pengine:     info: unpack_config:     Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> May 07 05:17:41 [10845] vsan15    pengine:     info: unpack_domains:     Unpacking domains
> May 07 05:17:41 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan15 is online
> May 07 05:17:41 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan16 is online
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan15 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:17:41 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan15: not running (7)
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan15 to vha-090f26ed-5991-4f40-833e-02e76759dd41:0
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan16 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:17:41 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan16: not running (7)
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan16 to vha-090f26ed-5991-4f40-833e-02e76759dd41:1
> May 07 05:17:41 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb [vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb]
> May 07 05:17:41 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 ]
> May 07 05:17:41 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-090f26ed-5991-4f40-833e-02e76759dd41 [vha-090f26ed-5991-4f40-833e-02e76759dd41]
> May 07 05:17:41 [10845] vsan15    pengine:     info: short_print:          Masters: [ vsan15 ]
> May 07 05:17:41 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-090f26ed-5991-4f40-833e-02e76759dd41:1 ]
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan15
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan15
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 cannot run anywhere
> May 07 05:17:41 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 cannot run anywhere
> May 07 05:17:41 [10845] vsan15    pengine:     info: master_color:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb: Promoted 0 instances of a possible 1 to master
> May 07 05:17:41 [10845] vsan15    pengine:     info: master_color:     Promoting vha-090f26ed-5991-4f40-833e-02e76759dd41:0 (Master vsan15)
> May 07 05:17:41 [10845] vsan15    pengine:     info: master_color:     ms-090f26ed-5991-4f40-833e-02e76759dd41: Promoted 1 instances of a possible 1 to master
> May 07 05:17:41 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0    (Stopped)
> May 07 05:17:41 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1    (Stopped)
> May 07 05:17:41 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:0    (Master vsan15)
> May 07 05:17:41 [10845] vsan15    pengine:   notice: LogActions:     Start   vha-090f26ed-5991-4f40-833e-02e76759dd41:1    (vsan16)
> May 07 05:17:41 [10845] vsan15    pengine:   notice: process_pe_message:     Calculated Transition 13: /var/lib/pacemaker/pengine/pe-input-193.bz2
> May 07 05:17:41 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> May 07 05:17:41 [10846] vsan15       crmd:     info: do_te_invoke:     Processing graph 13 (ref=pe_calc-dc-1367929061-81) derived from /var/lib/pacemaker/pengine/pe-input-193.bz2
> May 07 05:17:41 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 18: start vha-090f26ed-5991-4f40-833e-02e76759dd41_start_0 on vsan16
> May 07 05:17:41 [10846] vsan15       crmd:     info: abort_transition_graph:     te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-vsan16-master-vha-090f26ed-5991-4f40-833e-02e76759dd41, name=master-vha-090f26ed-5991-4f40-833e-02e76759dd41, value=3, magic=NA, cib=0.1346.36) : Transient attribute: update
> May 07 05:17:41 [10846] vsan15       crmd:   notice: run_graph:     Transition 13 (Complete=3, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-193.bz2): Stopped
> May 07 05:17:41 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
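
These pe-input files can be replayed offline to see exactly why the policy
engine settled on each action, which helps when the "Leave" and "Start"
decisions above look surprising. A quick sketch, assuming the pacemaker CLI
tools are installed (file path taken from the log above):

    # Replay the scheduler input for transition 13 and print the
    # resulting actions together with the allocation scores.
    crm_simulate -S -s -x /var/lib/pacemaker/pengine/pe-input-193.bz2
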
> May 07 05:17:41 [10845] vsan15    pengine:     info: unpack_config:     Startup probes: enabled
> May 07 05:17:41 [10845] vsan15    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
> May 07 05:17:41 [10845] vsan15    pengine:     info: unpack_config:     Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> May 07 05:17:41 [10845] vsan15    pengine:     info: unpack_domains:     Unpacking domains
> May 07 05:17:41 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan15 is online
> May 07 05:17:41 [10845] vsan15    pengine:     info: determine_online_status:     Node vsan16 is online
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan15 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:17:41 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan15: not running (7)
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan15 to vha-090f26ed-5991-4f40-833e-02e76759dd41:0
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb on vsan16 to vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0
> May 07 05:17:41 [10845] vsan15    pengine:  warning: unpack_rsc_op:     Processing failed op start for vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 on vsan16: not running (7)
> May 07 05:17:41 [10845] vsan15    pengine:     info: find_anonymous_clone:     Internally renamed vha-090f26ed-5991-4f40-833e-02e76759dd41 on vsan16 to vha-090f26ed-5991-4f40-833e-02e76759dd41:1
> May 07 05:17:41 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb [vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb]
> May 07 05:17:41 [10845] vsan15    pengine:     info: short_print:          Stopped: [ vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 ]
> May 07 05:17:41 [10845] vsan15    pengine:     info: clone_print:      Master/Slave Set: ms-090f26ed-5991-4f40-833e-02e76759dd41 [vha-090f26ed-5991-4f40-833e-02e76759dd41]
> May 07 05:17:41 [10845] vsan15    pengine:     info: short_print:          Masters: [ vsan15 ]
> May 07 05:17:41 [10845] vsan15    pengine:     info: short_print:          Slaves: [ vsan16 ]
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan15
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan15
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan15 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 has failed INFINITY times on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: get_failcount:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb has failed INFINITY times on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:  warning: common_apply_stickiness:     Forcing ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb away from vsan16 after 1000000 failures (max=1000000)
> May 07 05:17:41 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0 cannot run anywhere
> May 07 05:17:41 [10845] vsan15    pengine:     info: native_color:     Resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1 cannot run anywhere
> May 07 05:17:41 [10845] vsan15    pengine:     info: master_color:     ms-bcd94724-3ec0-4a8d-8951-9d27be3a6acb: Promoted 0 instances of a possible 1 to master
> May 07 05:17:41 [10845] vsan15    pengine:     info: master_color:     Promoting vha-090f26ed-5991-4f40-833e-02e76759dd41:0 (Master vsan15)
> May 07 05:17:41 [10845] vsan15    pengine:     info: master_color:     ms-090f26ed-5991-4f40-833e-02e76759dd41: Promoted 1 instances of a possible 1 to master
> May 07 05:17:41 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:     info: RecurringOp:      Start recurring monitor (31s) for vha-090f26ed-5991-4f40-833e-02e76759dd41:1 on vsan16
> May 07 05:17:41 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:0    (Stopped)
> May 07 05:17:41 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb:1    (Stopped)
> May 07 05:17:41 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:0    (Master vsan15)
> May 07 05:17:41 [10845] vsan15    pengine:     info: LogActions:     Leave   vha-090f26ed-5991-4f40-833e-02e76759dd41:1    (Slave vsan16)
> May 07 05:17:41 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> May 07 05:17:41 [10845] vsan15    pengine:   notice: process_pe_message:     Calculated Transition 14: /var/lib/pacemaker/pengine/pe-input-194.bz2
> May 07 05:17:41 [10846] vsan15       crmd:     info: do_te_invoke:     Processing graph 14 (ref=pe_calc-dc-1367929061-83) derived from /var/lib/pacemaker/pengine/pe-input-194.bz2
> May 07 05:17:41 [10846] vsan15       crmd:     info: te_rsc_command:     Initiating action 20: monitor vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_31000 on vsan16
> May 07 05:17:42 [10846] vsan15       crmd:   notice: run_graph:     Transition 14 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-194.bz2): Complete
> May 07 05:17:42 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
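
The cluster is back to idle here, but notice that the stale resource still
carries an INFINITY fail-count on both nodes, so every policy engine run
re-evaluates and re-bans it. Once it has been removed from the configuration,
the leftover history can be cleared; a sketch, assuming the 1.1.x crm_resource
syntax (names taken from the log):

    # Wipe the stale resource's operation history and fail counts so it
    # stops showing up in every policy engine run.
    crm_resource --cleanup --resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb --node vsan15
    crm_resource --cleanup --resource vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb --node vsan16
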
> May 07 05:18:25 [10846] vsan15       crmd:    error: node_list_update_callback:     CIB Update 72 failed: Timer expired
> May 07 05:18:25 [10846] vsan15       crmd:  warning: node_list_update_callback:     update:failed: No data to dump as XML
> May 07 05:18:25 [10846] vsan15       crmd:    error: do_log:     FSA: Input I_ERROR from node_list_update_callback() received in state S_IDLE
> May 07 05:18:25 [10846] vsan15       crmd:   notice: do_state_transition:     State transition S_IDLE -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=node_list_update_callback ]
> May 07 05:18:25 [10846] vsan15       crmd:    error: do_recover:     Action A_RECOVER (0000000001000000) not supported
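
This is the fatal sequence: CIB update 72 expires, the callback injects
I_ERROR into the FSA, and because the crmd has no in-place recovery for that
input (A_RECOVER is not supported), its only option is to exit and let
pacemakerd respawn it. If CIB operations really are timing out on this node,
that is cheap to confirm while it is under load; a quick check, assuming
cibadmin from the same toolset:

    # Time a plain local CIB query; if this takes anywhere near the
    # operation timeout, updates like number 72 above will expire.
    time cibadmin --query --local > /dev/null
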
> May 07 05:18:25 [10846] vsan15       crmd:  warning: do_election_vote:     Not voting in election, we're in state S_RECOVERY
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_dc_release:     DC role released
> May 07 05:18:25 [10846] vsan15       crmd:     info: pe_ipc_destroy:     Connection to the Policy Engine released
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_te_control:     Transitioner is now inactive
> May 07 05:18:25 [10846] vsan15       crmd:    error: do_log:     FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_state_transition:     State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_shutdown:     Disconnecting STONITH...
> May 07 05:18:25 [10846] vsan15       crmd:     info: tengine_stonith_connection_destroy:     Fencing daemon disconnected
> May 07 05:18:25 [10843] vsan15       lrmd:     info: cancel_recurring_action:     Cancelling operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_30000
> May 07 05:18:25 [10846] vsan15       crmd:    error: verify_stopped:     Resource vha-090f26ed-5991-4f40-833e-02e76759dd41 was active at shutdown.  You may ignore this error if it is unmanaged.
> May 07 05:18:25 [10846] vsan15       crmd:     info: lrmd_api_disconnect:     Disconnecting from lrmd service
> May 07 05:18:25 [10843] vsan15       lrmd:     info: lrmd_ipc_destroy:     LRMD client disconnecting 0x19129f0 - name: crmd id: 1a9d196e-7822-4895-b066-7e2702a75f45
> May 07 05:18:25 [10846] vsan15       crmd:     info: lrmd_connection_destroy:     connection destroyed
> May 07 05:18:25 [10846] vsan15       crmd:     info: lrm_connection_destroy:     LRM Connection disconnected
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_lrm_control:     Disconnected from the LRM
> May 07 05:18:25 [10846] vsan15       crmd:     info: crm_cluster_disconnect:     Disconnecting from cluster infrastructure: classic openais (with plugin)
> May 07 05:18:25 [10846] vsan15       crmd:   notice: terminate_cs_connection:     Disconnecting from Corosync
> May 07 05:18:25 [10846] vsan15       crmd:     info: crm_cluster_disconnect:     Disconnected from classic openais (with plugin)
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_ha_control:     Disconnected from the cluster
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_cib_control:     Disconnecting CIB
> May 07 05:18:25 [10841] vsan15        cib:     info: cib_process_readwrite:     We are now in R/O mode
> May 07 05:18:25 [10841] vsan15        cib:  warning: qb_ipcs_event_sendv:     new_event_notification (10841-10846-12): Broken pipe (32)
> May 07 05:18:25 [10841] vsan15        cib:     info: crm_ipcs_send:     Event 722 failed, size=162, to=0x1721890[10846], queue=1, retries=0, rc=-32: <cib-reply t="cib" cib_op="cib_slave" cib_callid="148" cib_clientid="0003ee04-6a88-496a-b0fd-1d0e24c1510b" cib_callopt="
> May 07 05:18:25 [10841] vsan15        cib:  warning: do_local_notify:     A-Sync reply to crmd failed: No message of desired type
> May 07 05:18:25 [10846] vsan15       crmd:     info: crmd_cib_connection_destroy:     Connection to the CIB terminated...
> May 07 05:18:25 [10846] vsan15       crmd:     info: qb_ipcs_us_withdraw:     withdrawing server sockets
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_exit:     Performing A_EXIT_0 - gracefully exiting the CRMd
> May 07 05:18:25 [10846] vsan15       crmd:    error: do_exit:     Could not recover from internal error
> May 07 05:18:25 [10846] vsan15       crmd:     info: do_exit:     [crmd] stopped (2)
> May 07 05:18:25 [10846] vsan15       crmd:     info: crmd_exit:     Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
> May 07 05:18:25 [10846] vsan15       crmd:     info: crmd_exit:     Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
> May 07 05:18:25 [10846] vsan15       crmd:     info: crmd_exit:     Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
> May 07 05:18:25 [10846] vsan15       crmd:     info: lrmd_api_disconnect:     Disconnecting from lrmd service
> May 07 05:18:25 [10846] vsan15       crmd:     info: crm_xml_cleanup:     Cleaning up memory from libxml2
> May 07 05:18:25 corosync [pcmk  ] info: pcmk_ipc_exit: Client crmd (conn=0x15f4560, async-conn=0x15f4560) left
> May 07 05:18:25 [10835] vsan15 pacemakerd:    error: pcmk_child_exit:     Child process crmd exited (pid=10846, rc=2)
> May 07 05:18:25 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan15[1682182316] - unknown is now (null)
> May 07 05:18:25 [10835] vsan15 pacemakerd:   notice: pcmk_process_exit:     Respawning failed child process: crmd
> May 07 05:18:25 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan15[1682182316] - unknown is now (null)
> May 07 05:18:25 [10835] vsan15 pacemakerd:     info: start_child:     Forked child 14640 for process crmd
> May 07 05:18:25 [10842] vsan15 stonith-ng:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan15[1682182316] - unknown is now (null)
> May 07 05:18:25 [10841] vsan15        cib:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan15[1682182316] - unknown is now (null)
> May 07 05:18:25 corosync [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_log_init:     Cannot change active directory to /var/lib/pacemaker/cores/hacluster: Permission denied (13)
> May 07 05:18:25 [14640] vsan15       crmd:   notice: main:     CRM Git Version: 394e906
> May 07 05:18:25 [14640] vsan15       crmd:     info: get_cluster_type:     Cluster type is: 'openais'
> May 07 05:18:25 [14640] vsan15       crmd:     info: do_cib_control:     CIB connection established
> May 07 05:18:25 [14640] vsan15       crmd:   notice: crm_cluster_connect:     Connecting to cluster infrastructure: classic openais (with plugin)
> May 07 05:18:25 [14640] vsan15       crmd:     info: init_cs_connection_classic:     Creating connection to our Corosync plugin
> May 07 05:18:25 [14640] vsan15       crmd:     info: init_cs_connection_classic:     AIS connection established
> May 07 05:18:25 corosync [pcmk  ] info: pcmk_ipc: Recorded connection 0x15f4560 for crmd/0
> May 07 05:18:25 corosync [pcmk  ] info: pcmk_ipc: Sending membership update 14136 to crmd
> May 07 05:18:25 [14640] vsan15       crmd:     info: get_ais_nodeid:     Server details: id=1682182316 uname=vsan15 cname=pcmk
> May 07 05:18:25 [14640] vsan15       crmd:     info: init_cs_connection_once:     Connection to 'classic openais (with plugin)': established
> May 07 05:18:25 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_modify for section nodes (origin=local/crmd/3, version=0.1346.40): OK (rc=0)
> May 07 05:18:25 [14640] vsan15       crmd:     info: do_ha_control:     Connected to the cluster
> May 07 05:18:25 [14640] vsan15       crmd:     info: lrmd_api_connect:     Connecting to lrmd
> May 07 05:18:25 [10843] vsan15       lrmd:     info: lrmd_ipc_accept:     Accepting client connection: 0x191a860 pid=14640 for uid=495 gid=0
> May 07 05:18:25 [14640] vsan15       crmd:     info: do_started:     Delaying start, no membership data (0000000000100000)
> May 07 05:18:25 [14640] vsan15       crmd:     info: do_started:     Delaying start, no membership data (0000000000100000)
> May 07 05:18:25 [14640] vsan15       crmd:   notice: ais_dispatch_message:     Membership 14136: quorum acquired
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_get_peer:     Node vsan15 now has id: 1682182316
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_get_peer:     Node 1682182316 is now known as vsan15
> May 07 05:18:25 [14640] vsan15       crmd:     info: peer_update_callback:     vsan15 is now (null)
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_get_peer:     Node 1682182316 has uuid vsan15
> May 07 05:18:25 [14640] vsan15       crmd:   notice: crm_update_peer_state:     crm_update_ais_node: Node vsan15[1682182316] - state is now member
> May 07 05:18:25 [14640] vsan15       crmd:     info: peer_update_callback:     vsan15 is now member (was (null))
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_update_peer:     crm_update_ais_node: Node vsan15: id=1682182316 state=member addr=r(0) ip(172.16.68.100)  (new) votes=1 (new) born=14128 seen=14136 proc=00000000000000000000000000000000
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_get_peer:     Node vsan16 now has id: 1698959532
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_get_peer:     Node 1698959532 is now known as vsan16
> May 07 05:18:25 [14640] vsan15       crmd:     info: peer_update_callback:     vsan16 is now (null)
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_get_peer:     Node 1698959532 has uuid vsan16
> May 07 05:18:25 [14640] vsan15       crmd:   notice: crm_update_peer_state:     crm_update_ais_node: Node vsan16[1698959532] - state is now member
> May 07 05:18:25 [14640] vsan15       crmd:     info: peer_update_callback:     vsan16 is now member (was (null))
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_update_peer:     crm_update_ais_node: Node vsan16: id=1698959532 state=member addr=r(0) ip(172.16.68.101)  (new) votes=1 (new) born=14136 seen=14136 proc=00000000000000000000000000000000
> May 07 05:18:25 [14640] vsan15       crmd:     info: ais_dispatch_message:     Membership 14136: quorum retained
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan15[1682182316] - unknown is now (null)
> May 07 05:18:25 [14640] vsan15       crmd:     info: peer_update_callback:     Client vsan15/peer now has status [online] (DC=<null>)
> May 07 05:18:25 [14640] vsan15       crmd:     info: crm_update_peer_proc:     pcmk_mcp_dispatch: Node vsan16[1698959532] - unknown is now (null)
> May 07 05:18:25 [14640] vsan15       crmd:     info: peer_update_callback:     Client vsan16/peer now has status [online] (DC=<null>)
> May 07 05:18:25 [14640] vsan15       crmd:     info: qb_ipcs_us_publish:     server name: crmd
> May 07 05:18:25 [14640] vsan15       crmd:   notice: do_started:     The local CRM is operational
> May 07 05:18:25 [14640] vsan15       crmd:     info: do_state_transition:     State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
> May 07 05:18:27 [10842] vsan15 stonith-ng:     info: stonith_command:     Processed register from crmd.14640: OK (0)
> May 07 05:18:27 [10842] vsan15 stonith-ng:     info: stonith_command:     Processed st_notify from crmd.14640: OK (0)
> May 07 05:18:27 [10842] vsan15 stonith-ng:     info: stonith_command:     Processed st_notify from crmd.14640: OK (0)
> May 07 05:18:46 [14640] vsan15       crmd:     info: crm_timer_popped:     Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
> May 07 05:18:46 [14640] vsan15       crmd:  warning: do_log:     FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
> May 07 05:18:46 [14640] vsan15       crmd:     info: do_state_transition:     State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
> May 07 05:18:46 [14640] vsan15       crmd:     info: do_election_count_vote:     Election 3 (owner: vsan16) lost: vote from vsan16 (Uptime)
> May 07 05:18:46 [14640] vsan15       crmd:   notice: do_state_transition:     State transition S_ELECTION -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
> May 07 05:18:46 [14640] vsan15       crmd:     info: do_dc_release:     DC role released
> May 07 05:18:46 [14640] vsan15       crmd:     info: do_te_control:     Transitioner is now inactive
> May 07 05:18:46 [14640] vsan15       crmd:     info: update_dc:     Set DC to vsan16 (3.0.7)
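
The 20-second Election Trigger above corresponds, as far as I know, to the
dc-deadtime cluster property: how long a starting crmd waits to be contacted
by an existing DC before calling an election (which it then loses to vsan16
on uptime, as expected). If rejoins are routinely slow it can be raised; a
sketch, assuming crm_attribute:

    # Show, then raise, how long a starting crmd waits for an existing
    # DC before forcing an election (cluster-wide property).
    crm_attribute --name dc-deadtime --query
    crm_attribute --name dc-deadtime --update 30s
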
> May 07 05:18:46 [10841] vsan15        cib:     info: cib_process_request:     Operation complete: op cib_sync for section 'all' (origin=vsan16/crmd/35, version=0.1346.46): OK (rc=0)
> May 07 05:18:46 [14640] vsan15       crmd:     info: erase_status_tag:     Deleting xpath: //node_state[@uname='vsan15']/transient_attributes
> May 07 05:18:46 [14640] vsan15       crmd:     info: update_attrd:     Connecting to attrd... 5 retries remaining
> May 07 05:18:46 [14640] vsan15       crmd:   notice: do_state_transition:     State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
> May 07 05:18:46 [10844] vsan15      attrd:   notice: attrd_local_callback:     Sending full refresh (origin=crmd)
> May 07 05:18:46 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: master-vha-090f26ed-5991-4f40-833e-02e76759dd41 (4)
> May 07 05:18:46 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: fail-count-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (INFINITY)
> May 07 05:18:46 [10841] vsan15        cib:     info: cib_process_replace:     Digest matched on replace from vsan16: 8eaadcc9996565180563eac16495c423
> May 07 05:18:46 [10841] vsan15        cib:     info: cib_process_replace:     Replaced 0.1346.46 with 0.1346.46 from vsan16
> May 07 05:18:46 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: last-failure-vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb (1367926970)
> May 07 05:18:46 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: probe_complete (true)
> May 07 05:18:46 [10844] vsan15      attrd:  warning: attrd_cib_callback:     Update 144 for master-vha-090f26ed-5991-4f40-833e-02e76759dd41=4 failed: No such device or address
> May 07 05:18:49 [14640] vsan15       crmd:     info: services_os_action_execute:     Managed vgc-cm-agent.ocf_meta-data_0 process 14679 exited with rc=0
> May 07 05:18:49 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-bcd94724-3ec0-4a8d-8951-9d27be3a6acb_monitor_0 (call=55, rc=7, cib-update=10, confirmed=true) not running
> May 07 05:18:49 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_0 (call=57, rc=7, cib-update=11, confirmed=true) not running
> May 07 05:18:50 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_start_0 (call=61, rc=0, cib-update=12, confirmed=true) ok
> May 07 05:18:50 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_31000 (call=64, rc=8, cib-update=13, confirmed=false) master
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_ais_dispatch:     Update relayed from vsan16
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: fail-count-vha-090f26ed-5991-4f40-833e-02e76759dd41 (1)
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_perform_update:     Sent update 165: fail-count-vha-090f26ed-5991-4f40-833e-02e76759dd41=1
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_ais_dispatch:     Update relayed from vsan16
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: last-failure-vha-090f26ed-5991-4f40-833e-02e76759dd41 (1367929130)
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_perform_update:     Sent update 168: last-failure-vha-090f26ed-5991-4f40-833e-02e76759dd41=1367929130
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_ais_dispatch:     Update relayed from vsan16
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: fail-count-vha-090f26ed-5991-4f40-833e-02e76759dd41 (2)
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_perform_update:     Sent update 171: fail-count-vha-090f26ed-5991-4f40-833e-02e76759dd41=2
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_ais_dispatch:     Update relayed from vsan16
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: last-failure-vha-090f26ed-5991-4f40-833e-02e76759dd41 (1367929130)
> May 07 05:18:50 [10844] vsan15      attrd:   notice: attrd_perform_update:     Sent update 174: last-failure-vha-090f26ed-5991-4f40-833e-02e76759dd41=1367929130
> May 07 05:18:50 [10843] vsan15       lrmd:     info: cancel_recurring_action:     Cancelling operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_31000
> May 07 05:18:50 [14640] vsan15       crmd:     info: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_31000 (call=64, status=1, cib-update=0, confirmed=false) Cancelled
> May 07 05:18:50 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_demote_0 (call=68, rc=0, cib-update=14, confirmed=true) ok
> May 07 05:18:51 [10844] vsan15      attrd:   notice: attrd_trigger_update:     Sending flush op to all hosts for: master-vha-090f26ed-5991-4f40-833e-02e76759dd41 (<null>)
> May 07 05:18:51 [10844] vsan15      attrd:   notice: attrd_perform_update:     Sent delete 178: node=vsan15, attr=master-vha-090f26ed-5991-4f40-833e-02e76759dd41, id=<n/a>, set=(null), section=status
> May 07 05:18:51 [10844] vsan15      attrd:   notice: attrd_perform_update:     Sent delete 180: node=vsan15, attr=master-vha-090f26ed-5991-4f40-833e-02e76759dd41, id=<n/a>, set=(null), section=status
> May 07 05:18:51 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_stop_0 (call=72, rc=0, cib-update=15, confirmed=true) ok
> May 07 05:18:52 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_start_0 (call=75, rc=0, cib-update=16, confirmed=true) ok
> May 07 05:18:52 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_31000 (call=78, rc=0, cib-update=17, confirmed=false) ok
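
And this is where the resource restart comes from: after the respawned crmd
rejoined, the first recurring monitor reported rc=8 ("master") when the new
DC expected the instance in the slave role, so the DC recorded it as a
failure (the fail-count updates relayed from vsan16 above) and recovered it
with the demote/stop/start seen here. The accumulated count is easy to
inspect afterwards; one way, assuming the crm_failcount helper from the
1.1.x tools:

    # Query the fail count the DC has accumulated for the recovered
    # resource on this node.
    crm_failcount -G -r vha-090f26ed-5991-4f40-833e-02e76759dd41 -N vsan15
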
> May 07 10:58:21 [14640] vsan15       crmd:   notice: process_lrm_event:     LRM operation vha-090f26ed-5991-4f40-833e-02e76759dd41_monitor_31000 (call=78, rc=7, cib-update=18, confirmed=false) not running
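
That last line is from more than five hours later: the 31-second monitor
reports rc=7 ("not running") again, so whatever is stopping the resource is
still recurring. A one-shot status that includes per-resource fail counts is
a cheap way to keep an eye on it, assuming the standard crm_mon flags:

    # One-shot cluster status, including fail counts and last-failure
    # times for each resource.
    crm_mon -1 -f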