Dec 14 12:34:39 [651] xstorage1 corosync notice [TOTEM ] A processor failed, forming new configuration.
Dec 14 12:34:39 [651] xstorage1 corosync notice [TOTEM ] The network interface is down.
Dec 14 12:34:41 [651] xstorage1 corosync notice [TOTEM ] A new membership (127.0.0.1:352) was formed. Members left: 2
Dec 14 12:34:41 [651] xstorage1 corosync notice [TOTEM ] Failed to receive the leave message. failed: 2
Dec 14 12:34:41 [679] attrd: info: pcmk_cpg_membership: Node 2 left group attrd (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [676] cib: info: pcmk_cpg_membership: Node 2 left group cib (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [679] attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [676] cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [679] attrd: notice: crm_update_peer_state_iter: Node xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 14 12:34:41 [681] crmd: info: pcmk_cpg_membership: Node 2 left group crmd (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [676] cib: notice: crm_update_peer_state_iter: Node xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [679] attrd: notice: attrd_peer_remove: Removing all xstha2 attributes for peer loss
Dec 14 12:34:41 [676] cib: info: crm_reap_dead_member: Removing node with name xstha2 and id 2 from membership cache
Dec 14 12:34:41 [675] pacemakerd: info: pcmk_cpg_membership: Node 2 left group pacemakerd (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [676] cib: notice: reap_crm_member: Purged 1 peers with id=2 and/or uname=xstha2 from the membership cache
Dec 14 12:34:41 [677] stonith-ng: info: pcmk_cpg_membership: Node 2 left group stonith-ng (peer=xstha2, counter=1.0)
Dec 14 12:34:41 [675] pacemakerd: info: crm_update_peer_proc: pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [676] cib: info: pcmk_cpg_membership: Node 1 still member of group cib (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [675] pacemakerd: info: pcmk_cpg_membership: Node 1 still member of group pacemakerd (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [651] xstorage1 corosync notice [QUORUM] Members[1]: 1
Dec 14 12:34:41 [679] attrd: info: crm_reap_dead_member: Removing node with name xstha2 and id 2 from membership cache
Dec 14 12:34:41 [651] xstorage1 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Dec 14 12:34:41 [681] crmd: info: peer_update_callback: Client xstha2/peer now has status [offline] (DC=xstha2, changed=4000000)
Dec 14 12:34:41 [679] attrd: notice: reap_crm_member: Purged 1 peers with id=2 and/or uname=xstha2 from the membership cache
Dec 14 12:34:41 [681] crmd: notice: peer_update_callback: Our peer on the DC (xstha2) is dead
Dec 14 12:34:41 [679] attrd: info: pcmk_cpg_membership: Node 1 still member of group attrd (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [675] pacemakerd: info: pcmk_quorum_notification: Quorum retained | membership=352 members=1
Dec 14 12:34:41 [677] stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node xstha2[2] - corosync-cpg is now offline
Dec 14 12:34:41 [675] pacemakerd: notice: crm_update_peer_state_iter: Node xstha2 state is now lost | nodeid=2 previous=member source=crm_reap_unseen_nodes
Dec 14 12:34:41 [677] stonith-ng: notice: crm_update_peer_state_iter: Node xstha2 state is now lost | nodeid=2 previous=member source=crm_update_peer_proc
Dec 14 12:34:41 [681] crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='xstha2']/transient_attributes
Dec 14 12:34:41 [675] pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
Dec 14 12:34:41 [677] stonith-ng: info: crm_reap_dead_member: Removing node with name xstha2 and id 2 from membership cache
Dec 14 12:34:41 [677] stonith-ng: notice: reap_crm_member: Purged 1 peers with id=2 and/or uname=xstha2 from the membership cache
Dec 14 12:34:41 [677] stonith-ng: info: pcmk_cpg_membership: Node 1 still member of group stonith-ng (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [681] crmd: info: pcmk_cpg_membership: Node 1 still member of group crmd (peer=xstha1, counter=1.0)
Dec 14 12:34:41 [681] crmd: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION | input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback
Dec 14 12:34:41 [681] crmd: info: update_dc: Unset DC. Was xstha2
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='xstha2']/transient_attributes to all (origin=local/crmd/20)
Dec 14 12:34:41 [681] crmd: info: pcmk_quorum_notification: Quorum retained | membership=352 members=1
Dec 14 12:34:41 [681] crmd: notice: crm_update_peer_state_iter: Node xstha2 state is now lost | nodeid=2 previous=member source=crm_reap_unseen_nodes
Dec 14 12:34:41 [681] crmd: info: peer_update_callback: Cluster node xstha2 is now lost (was member)
Dec 14 12:34:41 [681] crmd: info: election_complete: Election election-0 complete
Dec 14 12:34:41 [681] crmd: info: election_timeout_popped: Election failed: Declaring ourselves the winner
Dec 14 12:34:41 [681] crmd: info: do_log: Input I_ELECTION_DC received in state S_ELECTION from election_timeout_popped
Dec 14 12:34:41 [681] crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION | input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped
Dec 14 12:34:41 [681] crmd: info: do_te_control: Registering TE UUID: fa7da62d-2e8d-c08a-aa5f-b51ae18735fb
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: --- 0.43.25 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++ 0.43.26 e2a929bac3293b669a65cb55363ab565
Dec 14 12:34:41 [676] cib: info: cib_perform_op: -- /cib/status/node_state[@id='2']/transient_attributes[@id='2']
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib: @num_updates=26
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='xstha2']/transient_attributes: OK (rc=0, origin=xstha1/crmd/20, version=0.43.26)
Dec 14 12:34:41 [681] crmd: info: set_graph_functions: Setting custom graph functions
Dec 14 12:34:41 [681] crmd: info: do_dc_takeover: Taking over DC status for this partition
Dec 14 12:34:41 [676] cib: info: cib_process_readwrite: We are now in R/W mode
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/21, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section cib to all (origin=local/crmd/22)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=xstha1/crmd/22, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section crm_config to all (origin=local/crmd/24)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=xstha1/crmd/24, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section crm_config to all (origin=local/crmd/26)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=xstha1/crmd/26, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section crm_config to all (origin=local/crmd/28)
Dec 14 12:34:41 [681] crmd: info: corosync_cluster_name: Cannot get totem.cluster_name: Doesn't exist (12)
Dec 14 12:34:41 [681] crmd: info: join_make_offer: Not making an offer to xstha2: not active (lost)
Dec 14 12:34:41 [681] crmd: info: join_make_offer: Making join offers based on membership 352
Dec 14 12:34:41 [681] crmd: info: join_make_offer: join-1: Sending offer to xstha1
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join: join_make_offer: Node xstha1[1] - join-1 phase 0 -> 1
Dec 14 12:34:41 [681] crmd: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Dec 14 12:34:41 [681] crmd: warning: do_log: Input I_ELECTION_DC received in state S_INTEGRATION from do_election_check
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join: initialize_join: Node xstha1[1] - join-2 phase 1 -> 0
Dec 14 12:34:41 [681] crmd: info: join_make_offer: Not making an offer to xstha2: not active (lost)
Dec 14 12:34:41 [681] crmd: info: join_make_offer: join-2: Sending offer to xstha1
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join: join_make_offer: Node xstha1[1] - join-2 phase 0 -> 1
Dec 14 12:34:41 [681] crmd: info: do_dc_join_offer_all: join-2: Waiting on 1 outstanding join acks
Dec 14 12:34:41 [681] crmd: info: update_dc: Set DC to xstha1 (3.0.10)
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_expected: update_dc: Node xstha1[1] - expected state is now member (was (null))
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=xstha1/crmd/28, version=0.43.26)
Dec 14 12:34:41 [681] crmd: warning: throttle_num_cores: Couldn't read /proc/cpuinfo, assuming a single processor: No such file or directory (2)
Dec 14 12:34:41 [681] crmd: info: parse_notifications: No optional alerts section in cib
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join: do_dc_join_filter_offer: Node xstha1[1] - join-2 phase 1 -> 2
Dec 14 12:34:41 [681] crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN | input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state
Dec 14 12:34:41 [681] crmd: info: crmd_join_phase_log: join-2: xstha2=none
Dec 14 12:34:41 [681] crmd: info: crmd_join_phase_log: join-2: xstha1=integrated
Dec 14 12:34:41 [681] crmd: info: do_dc_join_finalize: join-2: Syncing our CIB to the rest of the cluster
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join: finalize_join_for: Node xstha1[1] - join-2 phase 2 -> 3
Dec 14 12:34:41 [676] cib: info: cib_process_replace: Digest matched on replace from xstha1: e2a929bac3293b669a65cb55363ab565
Dec 14 12:34:41 [676] cib: info: cib_process_replace: Replaced 0.43.26 with 0.43.26 from xstha1
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=xstha1/crmd/32, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section nodes to all (origin=local/crmd/33)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=xstha1/crmd/33, version=0.43.26)
Dec 14 12:34:41 [676] cib: info: cib_file_backup: Archived previous version as /sonicle/var/cluster/lib/pacemaker/cib/cib-31.raw
Dec 14 12:34:41 [676] cib: info: cib_file_write_with_digest: Wrote version 0.43.0 of the CIB to disk (digest: 614d7f9bd4a1e1b3134b91b3b996b053)
Dec 14 12:34:41 [676] cib: info: cib_file_write_with_digest: Reading cluster configuration file /sonicle/var/cluster/lib/pacemaker/cib/cib.CJaOre (digest: /sonicle/var/cluster/lib/pacemaker/cib/cib.DJaOre)
Dec 14 12:34:41 [681] crmd: info: action_synced_wait: Managed ZFS_meta-data_0 process 2199 exited with rc=0
Dec 14 12:34:41 [681] crmd: info: action_synced_wait: Managed IPaddr_meta-data_0 process 2202 exited with rc=0
Dec 14 12:34:41 [681] crmd: info: crm_update_peer_join: do_dc_join_ack: Node xstha1[1] - join-2 phase 3 -> 4
Dec 14 12:34:41 [681] crmd: info: do_dc_join_ack: join-2: Updating node state to member for xstha1
Dec 14 12:34:41 [681] crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='xstha1']/lrm
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='xstha1']/lrm to all (origin=local/crmd/34)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/35)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: --- 0.43.26 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++ 0.43.27 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: -- /cib/status/node_state[@id='1']/lrm[@id='1']
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib: @num_updates=27
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='xstha1']/lrm: OK (rc=0, origin=xstha1/crmd/34, version=0.43.27)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: --- 0.43.27 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++ 0.43.28 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib: @num_updates=28
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib/status/node_state[@id='1']: @crm-debug-origin=do_lrm_query_internal
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']:
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=xstha1/crmd/35, version=0.43.28)
Dec 14 12:34:41 [681] crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE | input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state
Dec 14 12:34:41 [681] crmd: info: abort_transition_graph: Transition aborted: Peer Cancelled | source=do_te_invoke:161 complete=true
Dec 14 12:34:41 [679] attrd: info: attrd_client_refresh: Updating all attributes
Dec 14 12:34:41 [679] attrd: info: write_attribute: Sent update 4 with 1 changes for shutdown, id=, set=(null)
Dec 14 12:34:41 [679] attrd: info: write_attribute: Sent update 5 with 1 changes for terminate, id=, set=(null)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section nodes to all (origin=local/crmd/38)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/39)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section cib to all (origin=local/crmd/40)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=xstha1/crmd/38, version=0.43.28)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: --- 0.43.28 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++ 0.43.29 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib: @num_updates=29
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib/status/node_state[@id='2']: @in_ccm=false, @crmd=offline, @crm-debug-origin=do_state_transition, @join=down
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib/status/node_state[@id='1']: @crm-debug-origin=do_state_transition
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=xstha1/crmd/39, version=0.43.29)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: --- 0.43.29 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++ 0.43.30 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib: @num_updates=30, @dc-uuid=1
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=xstha1/crmd/40, version=0.43.30)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/attrd/4)
Dec 14 12:34:41 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/attrd/5)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: --- 0.43.30 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++ 0.43.31 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib: @num_updates=31
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']:
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_perform_op: ++
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=xstha1/attrd/4, version=0.43.31)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 4 for shutdown: OK (0)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 4 for shutdown[xstha1]=0: OK (0)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: --- 0.43.31 2
Dec 14 12:34:41 [676] cib: info: cib_perform_op: Diff: +++ 0.43.32 (null)
Dec 14 12:34:41 [676] cib: info: cib_perform_op: + /cib: @num_updates=32
Dec 14 12:34:41 [676] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=xstha1/attrd/5, version=0.43.32)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 5 for terminate: OK (0)
Dec 14 12:34:41 [679] attrd: info: attrd_cib_callback: Update 5 for terminate[xstha1]=(null): OK (0)
Dec 14 12:34:41 [681] crmd: info: abort_transition_graph: Transition aborted by transient_attributes.1 'create': Transient attribute change | cib=0.43.31 source=abort_unless_down:329 path=/cib/status/node_state[@id='1'] complete=true
Dec 14 12:34:41 [680] pengine: warning: pe_fence_node: Node xstha2 will be fenced because the node is no longer part of the cluster
Dec 14 12:34:41 [680] pengine: warning: determine_online_status: Node xstha2 is unclean
Dec 14 12:34:41 [680] pengine: info: determine_online_status_fencing: Node xstha1 is active
Dec 14 12:34:41 [680] pengine: info: determine_online_status: Node xstha1 is online
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1_san0_IP (ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2_san0_IP (ocf::heartbeat:IPaddr): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: zpool_data (ocf::heartbeat:ZFS): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1-stonith (stonith:external/ipmi): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2-stonith (stonith:external/ipmi): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_color: Resource xstha1-stonith cannot run anywhere
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action xstha2_san0_IP_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: stage6: Scheduling Node xstha2 for STONITH
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints: xstha2_san0_IP_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints: xstha1-stonith_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha1_san0_IP (Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Move xstha2_san0_IP (Started xstha2 -> xstha1)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave zpool_data (Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Stop xstha1-stonith (xstha2)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha2-stonith (Started xstha1)
Dec 14 12:34:41 [681] crmd: info: handle_response: pe_calc calculation pe_calc-dc-1607945681-15 is obsolete
Dec 14 12:34:41 [680] pengine: warning: process_pe_message: Calculated transition 0 (with warnings), saving inputs in /sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-42.bz2
Dec 14 12:34:41 [680] pengine: warning: pe_fence_node: Node xstha2 will be fenced because the node is no longer part of the cluster
Dec 14 12:34:41 [680] pengine: warning: determine_online_status: Node xstha2 is unclean
Dec 14 12:34:41 [680] pengine: info: determine_online_status_fencing: Node xstha1 is active
Dec 14 12:34:41 [680] pengine: info: determine_online_status: Node xstha1 is online
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1_san0_IP (ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2_san0_IP (ocf::heartbeat:IPaddr): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: zpool_data (ocf::heartbeat:ZFS): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_print: xstha1-stonith (stonith:external/ipmi): Started xstha2 (UNCLEAN)
Dec 14 12:34:41 [680] pengine: info: native_print: xstha2-stonith (stonith:external/ipmi): Started xstha1
Dec 14 12:34:41 [680] pengine: info: native_color: Resource xstha1-stonith cannot run anywhere
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action xstha2_san0_IP_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: custom_action: Action xstha1-stonith_stop_0 on xstha2 is unrunnable (offline)
Dec 14 12:34:41 [680] pengine: warning: stage6: Scheduling Node xstha2 for STONITH
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints: xstha2_san0_IP_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: native_stop_constraints: xstha1-stonith_stop_0 is implicit after xstha2 is fenced
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha1_san0_IP (Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Move xstha2_san0_IP (Started xstha2 -> xstha1)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave zpool_data (Started xstha1)
Dec 14 12:34:41 [680] pengine: notice: LogActions: Stop xstha1-stonith (xstha2)
Dec 14 12:34:41 [680] pengine: info: LogActions: Leave xstha2-stonith (Started xstha1)
Dec 14 12:34:41 [681] crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Dec 14 12:34:41 [680] pengine: warning: process_pe_message: Calculated transition 1 (with warnings), saving inputs in /sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-43.bz2
Dec 14 12:34:41 [681] crmd: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1607945681-16) derived from /sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-43.bz2
Dec 14 12:34:41 [681] crmd: notice: te_fence_node: Requesting fencing (poweroff) of node xstha2 | action=13 timeout=60000
Dec 14 12:34:41 [677] stonith-ng: notice: handle_request: Client crmd.681.0e689fbb wants to fence (poweroff) 'xstha2' with device '(any)'
Dec 14 12:34:41 [677] stonith-ng: notice: initiate_remote_stonith_op: Requesting peer fencing (poweroff) of xstha2 | id=ec77b99f-e029-656a-806f-d95e341b33db state=0
Dec 14 12:34:42 [677] stonith-ng: info: process_remote_stonith_query: Query result 1 of 1 from xstha1 for xstha2/poweroff (1 devices) ec77b99f-e029-656a-806f-d95e341b33db
Dec 14 12:34:42 [677] stonith-ng: info: call_remote_stonith: Total timeout set to 60 for peer's fencing of xstha2 for crmd.681|id=ec77b99f-e029-656a-806f-d95e341b33db
Dec 14 12:34:42 [677] stonith-ng: info: call_remote_stonith: Requesting that 'xstha1' perform op 'xstha2 poweroff' for crmd.681 (72s, 0s)
Dec 14 12:34:43 [677] stonith-ng: info: stonith_fence_get_devices_cb: Found 1 matching devices for 'xstha2'
Dec 14 12:34:44 [677] stonith-ng: notice: log_operation: Operation 'poweroff' [2235] (call 2 from crmd.681) for host 'xstha2' with device 'xstha2-stonith' returned: 0 (OK)
Dec 14 12:34:44 [677] stonith-ng: notice: remote_op_done: Operation poweroff of xstha2 by xstha1 for crmd.681@xstha1.ec77b99f: OK
Dec 14 12:34:44 [681] crmd: notice: tengine_stonith_callback: Stonith operation 2/13:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb: OK (0)
Dec 14 12:34:44 [681] crmd: info: crm_update_peer_expected: crmd_peer_down: Node xstha2[2] - expected state is now down (was member)
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='xstha2']/lrm
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='xstha2']/transient_attributes
Dec 14 12:34:44 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/43)
Dec 14 12:34:44 [676] cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='xstha2']/lrm to all (origin=local/crmd/44)
Dec 14 12:34:44 [681] crmd: notice: tengine_stonith_notify: Peer xstha2 was terminated (poweroff) by xstha1 for xstha1: OK (ref=ec77b99f-e029-656a-806f-d95e341b33db) by client crmd.681
Dec 14 12:34:44 [676] cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='xstha2']/transient_attributes to all (origin=local/crmd/45)
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='xstha2']/lrm
Dec 14 12:34:44 [681] crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='xstha2']/transient_attributes
Dec 14 12:34:44 [681] crmd: notice: te_rsc_command: Initiating start operation xstha2_san0_IP_start_0 locally on xstha1 | action 6
Dec 14 12:34:44 [681] crmd: info: do_lrm_rsc_op: Performing key=6:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb op=xstha2_san0_IP_start_0
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: --- 0.43.32 2
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: +++ 0.43.33 (null)
Dec 14 12:34:44 [676] cib: info: cib_perform_op: + /cib: @num_updates=33
Dec 14 12:34:44 [676] cib: info: cib_perform_op: + /cib/status/node_state[@id='2']: @crm-debug-origin=send_stonith_update, @expected=down
Dec 14 12:34:44 [678] lrmd: info: log_execute: executing - rsc:xstha2_san0_IP action:start call_id:26
Dec 14 12:34:44 [676] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=xstha1/crmd/43, version=0.43.33)
Dec 14 12:34:44 [681] crmd: info: cib_fencing_updated: Fencing update 43 for xstha2: complete
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: --- 0.43.33 2
Dec 14 12:34:44 [676] cib: info: cib_perform_op: Diff: +++ 0.43.34 (null)
Dec 14 12:34:44 [676] cib: info: cib_perform_op: -- /cib/status/node_state[@id='2']/lrm[@id='2']
Dec 14 12:34:44 [676] cib: info: cib_perform_op: + /cib: @num_updates=34
Dec 14 12:34:44 [676] cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='xstha2']/lrm: OK (rc=0, origin=xstha1/crmd/44, version=0.43.34)
Dec 14 12:34:44 [681] crmd: warning: match_down_event: No reason to expect node 2 to be down
Dec 14 12:34:44 [681] crmd: notice: abort_transition_graph: Transition aborted by deletion of lrm[@id='2']: Resource state removal | cib=0.43.34 source=abort_unless_down:343 path=/cib/status/node_state[@id='2']/lrm[@id='2'] complete=false
Dec 14 12:34:44 [676] cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='xstha2']/transient_attributes: OK (rc=0, origin=xstha1/crmd/45, version=0.43.34)
Dec 14 12:34:44 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/46)
Dec 14 12:34:44 [676] cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='xstha2']/lrm to all (origin=local/crmd/47)
Dec 14 12:34:44 [676] cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='xstha2']/transient_attributes to all (origin=local/crmd/48)
Dec 14 12:34:44 [676] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=xstha1/crmd/46, version=0.43.34)
Dec 14 12:34:44 [681] crmd: info: cib_fencing_updated: Fencing update 46 for xstha2: complete
Dec 14 12:34:44 [676] cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='xstha2']/lrm: OK (rc=0, origin=xstha1/crmd/47, version=0.43.34)
Dec 14 12:34:44 [676] cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='xstha2']/transient_attributes: OK (rc=0, origin=xstha1/crmd/48, version=0.43.34)
IPaddr(xstha2_san0_IP)[2248]: 2020/12/14_12:34:45 INFO: eval ifconfig san0:10 inet 10.10.10.2 && ifconfig san0:10 netmask 255.255.255.0 && ifconfig san0:10 up
Dec 14 12:34:45 [678] lrmd: notice: operation_finished: xstha2_san0_IP_start_0:2248:stderr [ Converted dotted-quad netmask to CIDR as: 24 ]
Dec 14 12:34:45 [678] lrmd: info: log_finished: finished - rsc:xstha2_san0_IP action:start call_id:26 pid:2248 exit-code:0 exec-time:461ms queue-time:0ms
Dec 14 12:34:45 [681] crmd: info: action_synced_wait: Managed IPaddr_meta-data_0 process 2384 exited with rc=0
Dec 14 12:34:45 [681] crmd: notice: process_lrm_event: Result of start operation for xstha2_san0_IP on xstha1: 0 (ok) | call=26 key=xstha2_san0_IP_start_0 confirmed=true cib-update=49
Dec 14 12:34:45 [676] cib: info: cib_process_request: Forwarding cib_modify operation for section status to all (origin=local/crmd/49)
Dec 14 12:34:45 [676] cib: info: cib_perform_op: Diff: --- 0.43.34 2
Dec 14 12:34:45 [676] cib: info: cib_perform_op: Diff: +++ 0.43.35 (null)
Dec 14 12:34:45 [676] cib: info: cib_perform_op: + /cib: @num_updates=35
Dec 14 12:34:45 [676] cib: info: cib_perform_op: + /cib/status/node_state[@id='1']: @crm-debug-origin=do_update_resource
Dec 14 12:34:45 [676] cib: info: cib_perform_op: + /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources/lrm_resource[@id='xstha2_san0_IP']/lrm_rsc_op[@id='xstha2_san0_IP_last_0']: @operation_key=xstha2_san0_IP_start_0, @operation=start, @crm-debug-origin=do_update_resource, @transition-key=6:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb, @transition-magic=0:0;6:1:0:fa7da62d-2e8d-c08a-aa5f-b51ae18735fb, @call-id=26, @rc-code=0, @last-run=1607945684, @last-rc-change=1607945684, @exec-time=461
Dec 14 12:34:45 [681] crmd: info: match_graph_event: Action xstha2_san0_IP_start_0 (6) confirmed on xstha1 (rc=0)
Dec 14 12:34:45 [676] cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=xstha1/crmd/49, version=0.43.35)
Dec 14 12:34:45 [681] crmd: notice: run_graph: Transition 1 (Complete=6, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/sonicle/var/cluster/lib/pacemaker/pengine/pe-warn-43.bz2): Complete
Dec 14 12:34:45 [681] crmd: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE | input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd
Dec 14 12:34:45 [680] pengine: info: determine_online_status_fencing: Node xstha1 is active
Dec 14 12:34:45 [680] pengine: info: determine_online_status: Node xstha1 is online
Dec 14 12:34:45 [680] pengine: info: native_print: xstha1_san0_IP (ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_print: xstha2_san0_IP (ocf::heartbeat:IPaddr): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_print: zpool_data (ocf::heartbeat:ZFS): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_print: xstha1-stonith (stonith:external/ipmi): Stopped
Dec 14 12:34:45 [680] pengine: info: native_print: xstha2-stonith (stonith:external/ipmi): Started xstha1
Dec 14 12:34:45 [680] pengine: info: native_color: Resource xstha1-stonith cannot run anywhere
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha1_san0_IP (Started xstha1)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha2_san0_IP (Started xstha1)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave zpool_data (Started xstha1)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha1-stonith (Stopped)
Dec 14 12:34:45 [680] pengine: info: LogActions: Leave xstha2-stonith (Started xstha1)
Dec 14 12:34:45 [680] pengine: notice: process_pe_message: Calculated transition 2, saving inputs in /sonicle/var/cluster/lib/pacemaker/pengine/pe-input-125.bz2
Dec 14 12:34:45 [681] crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE | input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response
Dec 14 12:34:45 [681] crmd: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1607945685-18) derived from /sonicle/var/cluster/lib/pacemaker/pengine/pe-input-125.bz2
Dec 14 12:34:45 [681] crmd: notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/sonicle/var/cluster/lib/pacemaker/pengine/pe-input-125.bz2): Complete
Dec 14 12:34:45 [681] crmd: info: do_log: Input I_TE_SUCCESS received in state S_TRANSITION_ENGINE from notify_crmd
Dec 14 12:34:45 [681] crmd: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE | input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd
Dec 14 12:34:50 [676] cib: info: cib_process_ping: Reporting our current digest to xstha1: d3e769f75eaf1fd102b3e5ffd4269975 for 0.43.35 (8518f10 0)