Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] Initializing transport (UDP/IP Unicast).
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] Initializing transport (UDP/IP Unicast).
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] The network interface [192.168.101.12] is now up.
Aug 2 14:49:45 bl460g6d corosync[26156]: [SERV ] Service engine loaded: corosync configuration map access [0]
Aug 2 14:49:45 bl460g6d corosync[26156]: [QB ] server name: cmap
Aug 2 14:49:45 bl460g6d corosync[26156]: [SERV ] Service engine loaded: corosync configuration service [1]
Aug 2 14:49:45 bl460g6d corosync[26156]: [QB ] server name: cfg
Aug 2 14:49:45 bl460g6d corosync[26156]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Aug 2 14:49:45 bl460g6d corosync[26156]: [QB ] server name: cpg
Aug 2 14:49:45 bl460g6d corosync[26156]: [SERV ] Service engine loaded: corosync profile loading service [4]
Aug 2 14:49:45 bl460g6d corosync[26156]: [QUORUM] Using quorum provider corosync_votequorum
Aug 2 14:49:45 bl460g6d corosync[26156]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Aug 2 14:49:45 bl460g6d corosync[26156]: [QB ] server name: votequorum
Aug 2 14:49:45 bl460g6d corosync[26156]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Aug 2 14:49:45 bl460g6d corosync[26156]: [QB ] server name: quorum
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] adding new UDPU member {192.168.101.11}
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] adding new UDPU member {192.168.101.12}
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] The network interface [192.168.102.12] is now up.
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] adding new UDPU member {192.168.102.12}
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] adding new UDPU member {192.168.102.12}
Aug 2 14:49:45 bl460g6d corosync[26156]: [QUORUM] Members[1]: 2
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] A processor joined or left the membership and a new membership (192.168.101.12:2136) was formed.
Aug 2 14:49:45 bl460g6d corosync[26156]: [MAIN ] Completed service synchronization, ready to provide service.
Aug 2 14:49:45 bl460g6d corosync[26156]: [QUORUM] Members[2]: 1 2
Aug 2 14:49:45 bl460g6d corosync[26156]: [TOTEM ] A processor joined or left the membership and a new membership (192.168.101.11:2144) was formed.
Aug 2 14:49:45 bl460g6d corosync[26156]: [QUORUM] This node is within the primary component and will provide service.
Aug 2 14:49:45 bl460g6d corosync[26156]: [QUORUM] Members[2]: 1 2
Aug 2 14:49:45 bl460g6d corosync[26156]: [MAIN ] Completed service synchronization, ready to provide service.
Aug 2 14:49:45 bl460g6d pacemakerd[26174]: notice: crm_log_args: crm_log_args: Invoked: pacemakerd
Aug 2 14:49:45 bl460g6d pacemakerd[26174]: info: crm_update_callsites: Enabling callsites based on priority=6, files=(null), functions=(null), formats=(null), tags=(null)
Aug 2 14:49:45 bl460g6d pacemakerd[26174]: notice: main: Starting Pacemaker 1.1.7 (Build: e986274): agent-manpages ncurses libqb-logging libqb-ipc lha-fencing heartbeat corosync-native
Aug 2 14:49:45 bl460g6d pacemakerd[26174]: notice: update_node_processes: 0x2481820 Node 2 now known as bl460g6d, was:
Aug 2 14:49:45 bl460g6d pacemakerd[26174]: notice: update_node_processes: 0x2482710 Node 1 now known as bl460g6c, was:
Aug 2 14:49:45 bl460g6d lrmd[26178]: notice: crm_log_args: crm_log_args: Invoked: /usr/libexec/pacemaker/lrmd
Aug 2 14:49:45 bl460g6d cib[26176]: notice: crm_log_args: crm_log_args: Invoked: /usr/libexec/pacemaker/cib
Aug 2 14:49:45 bl460g6d lrmd[26178]: info: crm_update_callsites: Enabling callsites based on priority=6, files=(null), functions=(null), formats=(null), tags=(null)
Aug 2 14:49:45 bl460g6d cib[26176]: info: crm_update_callsites: Enabling callsites based on priority=6, files=(null), functions=(null), formats=(null), tags=(null)
Aug 2 14:49:45 bl460g6d stonith-ng[26177]: notice: crm_log_args: crm_log_args: Invoked: /usr/libexec/pacemaker/stonithd
Aug 2 14:49:45 bl460g6d stonith-ng[26177]: info: crm_update_callsites: Enabling callsites based on priority=6, files=(null), functions=(null), formats=(null), tags=(null)
Aug 2 14:49:45 bl460g6d cib[26176]: notice: main: Using new config location: /var/lib/pacemaker/cib
Aug 2 14:49:45 bl460g6d stonith-ng[26177]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Aug 2 14:49:45 bl460g6d attrd[26179]: notice: crm_log_args: crm_log_args: Invoked: /usr/libexec/pacemaker/attrd
Aug 2 14:49:45 bl460g6d cib[26176]: warning: retrieveCib: Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Aug 2 14:49:45 bl460g6d cib[26176]: warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
Aug 2 14:49:45 bl460g6d cib[26176]: warning: readCibXmlFile: Continuing with an empty configuration.
Aug 2 14:49:45 bl460g6d pengine[26180]: notice: crm_log_args: crm_log_args: Invoked: /usr/libexec/pacemaker/pengine
Aug 2 14:49:45 bl460g6d attrd[26179]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Aug 2 14:49:45 bl460g6d crmd[26181]: notice: crm_log_args: crm_log_args: Invoked: /usr/libexec/pacemaker/crmd
Aug 2 14:49:45 bl460g6d crmd[26181]: info: crm_update_callsites: Enabling callsites based on priority=6, files=(null), functions=(null), formats=(null), tags=(null)
Aug 2 14:49:45 bl460g6d crmd[26181]: notice: main: CRM Git Version: e986274
Aug 2 14:49:45 bl460g6d attrd[26179]: notice: main: Starting mainloop...
Aug 2 14:49:45 bl460g6d cib[26176]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Aug 2 14:49:46 bl460g6d crmd[26181]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Aug 2 14:49:46 bl460g6d stonith-ng[26177]: notice: setup_cib: Watching for stonith topology changes
Aug 2 14:49:46 bl460g6d crmd[26181]: notice: init_quorum_connection: Quorum acquired
Aug 2 14:49:46 bl460g6d crmd[26181]: error: corosync_node_name: Unable to get node name for nodeid 1
Aug 2 14:49:46 bl460g6d crmd[26181]: error: corosync_node_name: Unable to get node name for nodeid 2
Aug 2 14:49:46 bl460g6d crmd[26181]: error: corosync_node_name: Unable to get node name for nodeid 1
Aug 2 14:49:46 bl460g6d crmd[26181]: error: corosync_node_name: Unable to get node name for nodeid 2
Aug 2 14:49:47 bl460g6d crmd[26181]: error: corosync_node_name: Unable to get node name for nodeid 1
Aug 2 14:49:47 bl460g6d crmd[26181]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1] - state is now member
Aug 2 14:49:47 bl460g6d crmd[26181]: notice: crm_update_peer_state: pcmk_quorum_notification: Node bl460g6d[2] - state is now member
Aug 2 14:49:47 bl460g6d crmd[26181]: notice: do_started: The local CRM is operational
Aug 2 14:50:05 bl460g6d crmd[26181]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Aug 2 14:50:05 bl460g6d cib[26176]: notice: log_cib_diff: cib:diff: Diff: --- 0.0.3
Aug 2 14:50:05 bl460g6d cib[26176]: notice: log_cib_diff: cib:diff: Diff: +++ 0.1.1
Aug 2 14:50:05 bl460g6d cib[26176]: notice: cib:diff: --
Aug 2 14:50:05 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:05 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:05 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:05 bl460g6d cib[26176]: notice: log_cib_diff: cib:diff: Local-only Change: 0.2.1
Aug 2 14:50:05 bl460g6d cib[26176]: notice: cib:diff: --
Aug 2 14:50:05 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:06 bl460g6d cib[26176]: notice: log_cib_diff: cib:diff: Local-only Change: 0.3.1
Aug 2 14:50:06 bl460g6d cib[26176]: notice: cib:diff: --
Aug 2 14:50:06 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:06 bl460g6d cib[26176]: notice: log_cib_diff: cib:diff: Local-only Change: 0.4.1
Aug 2 14:50:06 bl460g6d cib[26176]: notice: cib:diff: --
Aug 2 14:50:06 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:06 bl460g6d attrd[26179]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Aug 2 14:50:07 bl460g6d pengine[26180]: error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Aug 2 14:50:07 bl460g6d pengine[26180]: error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Aug 2 14:50:07 bl460g6d pengine[26180]: error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Aug 2 14:50:07 bl460g6d pengine[26180]: notice: stage6: Delaying fencing operations until there are resources to manage
Aug 2 14:50:07 bl460g6d pengine[26180]: notice: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pacemaker/pengine/pe-input-0.bz2
Aug 2 14:50:07 bl460g6d pengine[26180]: notice: process_pe_message: Configuration ERRORs found during PE processing. Please run "crm_verify -L" to identify issues.
Aug 2 14:50:07 bl460g6d crmd[26181]: notice: run_graph: Transition 0 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Aug 2 14:50:07 bl460g6d crmd[26181]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug 2 14:50:07 bl460g6d attrd[26179]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Aug 2 14:50:07 bl460g6d attrd[26179]: notice: attrd_perform_update: Sent update 4: probe_complete=true
Aug 2 14:50:07 bl460g6d attrd[26179]: notice: attrd_perform_update: Sent update 6: probe_complete=true
Aug 2 14:50:27 bl460g6d cib[26176]: notice: log_cib_diff: cib:diff: Diff: --- 0.4.14
Aug 2 14:50:27 bl460g6d cib[26176]: notice: log_cib_diff: cib:diff: Diff: +++ 0.5.1
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: --
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d cib[26176]: notice: cib:diff: ++
Aug 2 14:50:27 bl460g6d attrd[26179]: warning: attrd_cib_callback: Update shutdown=(null) failed: No such device or address
Aug 2 14:50:27 bl460g6d attrd[26179]: warning: attrd_cib_callback: Update terminate=(null) failed: No such device or address
Aug 2 14:50:27 bl460g6d crmd[26181]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug 2 14:50:27 bl460g6d crmd[26181]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Aug 2 14:50:27 bl460g6d attrd[26179]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Aug 2 14:50:27 bl460g6d attrd[26179]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Aug 2 14:50:29 bl460g6d pengine[26180]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 2 14:50:29 bl460g6d pengine[26180]: warning: unpack_nodes: Blind faith: not fencing unseen nodes
Aug 2 14:50:29 bl460g6d pengine[26180]: notice: LogActions: Start stonith-1 (bl460g6d)
Aug 2 14:50:29 bl460g6d pengine[26180]: notice: LogActions: Start stonith-2 (bl460g6c)
Aug 2 14:50:29 bl460g6d pengine[26180]: notice: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pacemaker/pengine/pe-input-1.bz2
Aug 2 14:50:29 bl460g6d crmd[26181]: warning: get_rsc_metadata: No metadata found for fence_kdump::stonith:heartbeat
Aug 2 14:50:29 bl460g6d crmd[26181]: error: string2xml: Can't parse NULL input
Aug 2 14:50:29 bl460g6d crmd[26181]: error: get_rsc_restart_list: Metadata for (null)::stonith:fence_kdump is not valid XML
Aug 2 14:50:30 bl460g6d stonith-ng[26177]: notice: stonith_device_register: Added 'stonith-1' to the device list (1 active devices)
Aug 2 14:50:40 bl460g6d stonith-ng[26177]: notice: log_operation: Operation 'monitor' [26201] for device 'stonith-1' returned: -1001
Aug 2 14:50:40 bl460g6d stonith-ng[26177]: warning: log_operation: stonith-1: [debug]: waiting for message from '192.168.133.11'
Aug 2 14:50:40 bl460g6d stonith-ng[26177]: warning: log_operation: stonith-1: [debug]: timeout after 10 seconds
Aug 2 14:50:40 bl460g6d crmd[26181]: warning: get_rsc_metadata: No metadata found for fence_kdump::stonith:heartbeat
Aug 2 14:50:40 bl460g6d crmd[26181]: error: string2xml: Can't parse NULL input
Aug 2 14:50:40 bl460g6d crmd[26181]: error: get_rsc_restart_list: Metadata for (null)::stonith:fence_kdump is not valid XML
Aug 2 14:50:40 bl460g6d crmd[26181]: error: process_lrm_event: LRM operation stonith-1_start_0 (call=12, status=4, cib-update=55, confirmed=true) Error
Aug 2 14:50:40 bl460g6d crmd[26181]: warning: status_from_rc: Action 8 (stonith-1_start_0) on bl460g6d failed (target: 0 vs. rc: 1): Error
Aug 2 14:50:40 bl460g6d crmd[26181]: warning: update_failcount: Updating failcount for stonith-1 on bl460g6d after failed start: rc=1 (update=INFINITY, time=1343886640)
Aug 2 14:50:40 bl460g6d attrd[26179]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-stonith-1 (INFINITY)
Aug 2 14:50:40 bl460g6d crmd[26181]: warning: status_from_rc: Action 9 (stonith-2_start_0) on bl460g6c failed (target: 0 vs. rc: 1): Error
Aug 2 14:50:40 bl460g6d crmd[26181]: warning: update_failcount: Updating failcount for stonith-2 on bl460g6c after failed start: rc=1 (update=INFINITY, time=1343886640)
Aug 2 14:50:40 bl460g6d attrd[26179]: notice: attrd_perform_update: Sent update 17: fail-count-stonith-1=INFINITY
Aug 2 14:50:40 bl460g6d attrd[26179]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-stonith-1 (1343886640)
Aug 2 14:50:40 bl460g6d attrd[26179]: notice: attrd_perform_update: Sent update 20: last-failure-stonith-1=1343886640
Aug 2 14:50:40 bl460g6d crmd[26181]: notice: run_graph: Transition 1 (Complete=9, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
Aug 2 14:50:40 bl460g6d attrd[26179]: warning: attrd_cib_callback: Update fail-count-stonith-2=(null) failed: No such device or address
Aug 2 14:50:40 bl460g6d attrd[26179]: warning: attrd_cib_callback: Update last-failure-stonith-2=(null) failed: No such device or address
Aug 2 14:50:42 bl460g6d pengine[26180]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 2 14:50:42 bl460g6d pengine[26180]: warning: unpack_nodes: Blind faith: not fencing unseen nodes
Aug 2 14:50:42 bl460g6d pengine[26180]: warning: unpack_rsc_op: Processing failed op start for stonith-2 on bl460g6c: unknown error (1)
Aug 2 14:50:42 bl460g6d pengine[26180]: warning: unpack_rsc_op: Processing failed op start for stonith-1 on bl460g6d: unknown error (1)
Aug 2 14:50:42 bl460g6d pengine[26180]: warning: common_apply_stickiness: Forcing stonith-2 away from bl460g6c after 1000000 failures (max=1)
Aug 2 14:50:42 bl460g6d pengine[26180]: warning: common_apply_stickiness: Forcing stonith-1 away from bl460g6d after 1000000 failures (max=1)
Aug 2 14:50:42 bl460g6d pengine[26180]: notice: LogActions: Stop stonith-1 (bl460g6d)
Aug 2 14:50:42 bl460g6d pengine[26180]: notice: LogActions: Stop stonith-2 (bl460g6c)
Aug 2 14:50:42 bl460g6d pengine[26180]: notice: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pacemaker/pengine/pe-input-2.bz2
Aug 2 14:50:42 bl460g6d crmd[26181]: notice: process_lrm_event: LRM operation stonith-1_stop_0 (call=15, rc=0, cib-update=57, confirmed=true) ok
Aug 2 14:50:42 bl460g6d crmd[26181]: notice: run_graph: Transition 2 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2.bz2): Complete
Aug 2 14:50:42 bl460g6d crmd[26181]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug 2 15:05:42 bl460g6d crmd[26181]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Aug 2 15:05:42 bl460g6d pengine[26180]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 2 15:05:42 bl460g6d pengine[26180]: warning: unpack_nodes: Blind faith: not fencing unseen nodes
Aug 2 15:05:42 bl460g6d pengine[26180]: warning: unpack_rsc_op: Processing failed op start for stonith-2 on bl460g6c: unknown error (1)
Aug 2 15:05:42 bl460g6d pengine[26180]: warning: unpack_rsc_op: Processing failed op start for stonith-1 on bl460g6d: unknown error (1)
Aug 2 15:05:42 bl460g6d pengine[26180]: warning: common_apply_stickiness: Forcing stonith-2 away from bl460g6c after 1000000 failures (max=1)
Aug 2 15:05:42 bl460g6d pengine[26180]: warning: common_apply_stickiness: Forcing stonith-1 away from bl460g6d after 1000000 failures (max=1)
Aug 2 15:05:42 bl460g6d pengine[26180]: notice: process_pe_message: Transition 3: PEngine Input stored in: /var/lib/pacemaker/pengine/pe-input-3.bz2
Aug 2 15:05:42 bl460g6d crmd[26181]: notice: run_graph: Transition 3 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-3.bz2): Complete
Aug 2 15:05:42 bl460g6d crmd[26181]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]