Jan 15 15:38:05 bl460g1n7 corosync[29904]:   [MAIN  ] main.c:main:1176 Corosync Cluster Engine ('2.3.3'): started and ready to provide service.
Jan 15 15:38:05 bl460g1n7 corosync[29904]:   [MAIN  ] main.c:main:1177 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.217] is now up.
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cmap
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cfg
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cpg
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [WD    ] wd.c:setup_watchdog:651 Watchdog is now been tickled by corosync.
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: votequorum
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: quorum
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.217] is now up.
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.217:4) was formed. Members joined: -1062705703
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705703
Jan 15 15:38:05 bl460g1n7 corosync[29905]:   [MAIN  ] main.c:corosync_sync_completed:279 Completed service synchronization, ready to provide service.
Jan 15 15:38:06 bl460g1n7 corosync[29905]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.216:8) was formed. Members joined: -1062705704
Jan 15 15:38:06 bl460g1n7 corosync[29905]:   [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Jan 15 15:38:06 bl460g1n7 corosync[29905]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[2]: -1062705704 -1062705703
Jan 15 15:38:06 bl460g1n7 corosync[29905]:   [MAIN  ] main.c:corosync_sync_completed:279 Completed service synchronization, ready to provide service.
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: main: Starting Pacemaker 1.1.11-0.27.b48276b.git.el6 (Build: b48276b):  generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: main: Maximum core file size is: 18446744073709551615
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: qb_ipcs_us_publish: server name: pacemakerd
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: crm_get_peer: Created entry 184560ec-60db-4113-8f0d-5fff78262667/0x1387160 for node (null)/3232261593 (1 total)
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: cluster_connect_quorum: Quorum acquired
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Using uid=189 and group=189 for process cib
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Forked child 29918 for process cib
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Forked child 29919 for process stonith-ng
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Forked child 29920 for process lrmd
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Using uid=189 and group=189 for process attrd
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Forked child 29921 for process attrd
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Using uid=189 and group=189 for process pengine
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Forked child 29922 for process pengine
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Using uid=189 and group=189 for process crmd
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: start_child: Forked child 29923 for process crmd
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: main: Starting mainloop
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: pcmk_quorum_notification: Membership 8: quorum retained (2)
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: crm_get_peer: Created entry 5c887858-a95c-45d1-88f5-2358ed890a15/0x1389790 for node (null)/3232261592 (2 total)
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n7 cib[29918]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: main: Using new config location: /var/lib/pacemaker/cib
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n7 lrmd[29920]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Jan 15 15:38:08 bl460g1n7 lrmd[29920]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n7 cib[29918]:  warning: retrieveCib: Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Jan 15 15:38:08 bl460g1n7 cib[29918]:  warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Jan 15 15:38:08 bl460g1n7 cib[29918]:  warning: readCibXmlFile: Continuing with an empty configuration.
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: validate_with_relaxng: Creating RNG parser context
Jan 15 15:38:08 bl460g1n7 lrmd[29920]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n7 lrmd[29920]:     info: qb_ipcs_us_publish: server name: lrmd
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n7 lrmd[29920]:     info: main: Starting
Jan 15 15:38:08 bl460g1n7 attrd[29921]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n7 attrd[29921]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n7 pengine[29922]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n7 pengine[29922]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: main: Starting up
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n7 attrd[29921]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:08 bl460g1n7 pengine[29922]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n7 pengine[29922]:     info: qb_ipcs_us_publish: server name: pengine
Jan 15 15:38:08 bl460g1n7 pengine[29922]:     info: main: Starting pengine
Jan 15 15:38:08 bl460g1n7 crmd[29923]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n7 crmd[29923]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n7 crmd[29923]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n7 crmd[29923]:   notice: main: CRM Git Version: b48276b
Jan 15 15:38:08 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jan 15 15:38:08 bl460g1n7 crmd[29923]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n7 crmd[29923]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n7 crmd[29923]:     info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261592
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: startCib: CIB Initialization completed successfully
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: crm_get_peer: Created entry 48d6e0a3-590e-458f-a03a-71e981cd5dc2/0x671200 for node (null)/3232261593 (1 total)
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_get_peer: Created entry a0d6559f-66a0-48d0-8f09-12396e4a27ed/0x15c0730 for node (null)/3232261593 (1 total)
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_get_peer: Created entry 9e2a9086-2399-4011-ad3c-ab94730d4f90/0x1592fa0 for node (null)/3232261593 (1 total)
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261592] - state is now member (was (null))
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node bl460g1n7[3232261593] - state is now member (was (null))
Jan 15 15:38:08 bl460g1n7 pacemakerd[29914]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:08 bl460g1n7 attrd[29921]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 attrd[29921]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n7 attrd[29921]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261593] - state is now member (was (null))
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:08 bl460g1n7 attrd[29921]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 attrd[29921]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: main: Cluster connection active
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: qb_ipcs_us_publish: server name: attrd
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: main: Accepting attribute updates
Jan 15 15:38:08 bl460g1n7 attrd[29921]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: qb_ipcs_us_publish: server name: cib_ro
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: qb_ipcs_us_publish: server name: cib_rw
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: qb_ipcs_us_publish: server name: cib_shm
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: cib_init: Starting cib mainloop
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: pcmk_cpg_membership: Joined[0.0] cib.3232261593 
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: pcmk_cpg_membership: Member[0.0] cib.3232261593 
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: pcmk_cpg_membership: Joined[1.0] cib.3232261592 
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_get_peer: Created entry 00162ae5-9156-49f6-bf63-c684e3a1100d/0x1595970 for node (null)/3232261592 (2 total)
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 cib[29918]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: pcmk_cpg_membership: Member[1.0] cib.3232261592 
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: pcmk_cpg_membership: Member[1.1] cib.3232261593 
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x15959e0 for uid=0 gid=0 pid=29919 id=4618e973-0e71-459c-be79-2a08f6a0e22c
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: setup_cib: Watching for stonith topology changes
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: qb_ipcs_us_publish: server name: stonith-ng
Jan 15 15:38:08 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: main: Starting stonith-ng mainloop
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: init_cib_cache_cb: Updating device list from the cib: init
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: unpack_nodes: Creating a fake local node
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: pcmk_cpg_membership: Joined[0.0] stonith-ng.3232261593 
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: pcmk_cpg_membership: Member[0.0] stonith-ng.3232261593 
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: pcmk_cpg_membership: Joined[1.0] stonith-ng.3232261592 
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_get_peer: Created entry 64b7914d-531b-4f07-8ff6-53c0a888ed86/0x16c9060 for node (null)/3232261592 (2 total)
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: pcmk_cpg_membership: Member[1.0] stonith-ng.3232261592 
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n7 stonith-ng[29919]:     info: pcmk_cpg_membership: Member[1.1] stonith-ng.3232261593 
Jan 15 15:38:08 bl460g1n7 cib[29924]:     info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: d3813d3f6bc333e7748d9257dda8345d)
Jan 15 15:38:08 bl460g1n7 cib[29924]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.BiuCie (digest: /var/lib/pacemaker/cib/cib.HpZNbY)
Jan 15 15:38:09 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x1619ca0 for uid=189 gid=189 pid=29923 id=131e5040-b419-4c92-9766-5a80708c69b8
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: do_cib_control: CIB connection established
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:09 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: crm_get_peer: Created entry 3af1bd09-8c08-4d88-9b41-f1d13721d8b8/0x11e3ce0 for node (null)/3232261593 (1 total)
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: peer_update_callback: bl460g1n7 is now (null)
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: cluster_connect_quorum: Quorum acquired
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: do_ha_control: Connected to the cluster
Jan 15 15:38:09 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: lrmd_ipc_connect: Connecting to lrmd
Jan 15 15:38:09 bl460g1n7 lrmd[29920]:     info: crm_client_new: Connecting 0x2481cb0 for uid=189 gid=189 pid=29923 id=a9a84b5f-e401-4e90-8c4d-c4f19dcb8653
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: do_lrm_control: LRM connection established
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jan 15 15:38:09 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: pcmk_quorum_notification: Membership 8: quorum retained (2)
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: crm_get_peer: Created entry 1b81a8b6-d409-41a1-85b2-7d89c8ee62dc/0x132b790 for node (null)/3232261592 (2 total)
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261592
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x13e4c90 for uid=189 gid=189 pid=29921 id=66514ae8-419a-4d6f-a927-6a4e54e4ca48
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: attrd_cib_connect: Connected to the CIB after 2 attempts
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: main: CIB connection active
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: pcmk_cpg_membership: Joined[0.0] attrd.3232261593 
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: pcmk_cpg_membership: Member[0.0] attrd.3232261593 
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: pcmk_cpg_membership: Joined[1.0] attrd.3232261592 
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: crm_get_peer: Created entry c528522d-e377-4980-9e07-e71f4c2e77a7/0x677150 for node (null)/3232261592 (2 total)
Jan 15 15:38:09 bl460g1n7 attrd[29921]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n7 attrd[29921]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: pcmk_cpg_membership: Member[1.0] attrd.3232261592 
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:09 bl460g1n7 attrd[29921]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261592] - state is now member (was (null))
Jan 15 15:38:09 bl460g1n7 attrd[29921]:     info: pcmk_cpg_membership: Member[1.1] attrd.3232261593 
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261592] - state is now member (was (null))
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node bl460g1n7[3232261593] - state is now member (was (null))
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: peer_update_callback: bl460g1n7 is now member (was (null))
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: qb_ipcs_us_publish: server name: crmd
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: do_started: The local CRM is operational
Jan 15 15:38:09 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jan 15 15:38:09 bl460g1n7 crmd[29923]:   notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jan 15 15:38:09 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Jan 15 15:38:09 bl460g1n7 stonith-ng[29919]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: pcmk_cpg_membership: Joined[0.0] crmd.3232261593 
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: pcmk_cpg_membership: Member[0.0] crmd.3232261593 
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: pcmk_cpg_membership: Joined[1.0] crmd.3232261592 
Jan 15 15:38:10 bl460g1n7 crmd[29923]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:10 bl460g1n7 crmd[29923]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: pcmk_cpg_membership: Member[1.0] crmd.3232261592 
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: pcmk_cpg_membership: Member[1.1] crmd.3232261593 
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:10 bl460g1n7 crmd[29923]:     info: peer_update_callback: bl460g1n6 is now member
Jan 15 15:38:11 bl460g1n7 stonith-ng[29919]:     info: crm_client_new: Connecting 0x16c5380 for uid=189 gid=189 pid=29923 id=18bd78f9-e6ab-4189-be84-06f2b7a04179
Jan 15 15:38:11 bl460g1n7 stonith-ng[29919]:     info: stonith_command: Processed register from crmd.29923: OK (0)
Jan 15 15:38:11 bl460g1n7 stonith-ng[29919]:     info: stonith_command: Processed st_notify from crmd.29923: OK (0)
Jan 15 15:38:11 bl460g1n7 stonith-ng[29919]:     info: stonith_command: Processed st_notify from crmd.29923: OK (0)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: election_count_vote: Election 1 (owner: 3232261592) lost: vote from bl460g1n6 (Uptime)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:   notice: do_state_transition: State transition S_ELECTION -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: do_dc_release: DC role released
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.0.0)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: do_te_control: Transitioner is now inactive
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_RELEASE_SUCCESS from do_dc_release() received in state S_PENDING
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: update_dc: Set DC to bl460g1n6 (3.0.8)
Jan 15 15:38:30 bl460g1n7 cib[29918]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:30 bl460g1n7 cib[29918]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section cib: OK (rc=0, origin=bl460g1n6/crmd/7, version=0.0.1)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: election_count_vote: Election 2 (owner: 3232261592) lost: vote from bl460g1n6 (Uptime)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=bl460g1n6/crmd/9, version=0.1.1)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: update_dc: Unset DC. Was bl460g1n6
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/7, version=0.1.1)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:  warning: join_query_callback: No DC for join-1
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: update_dc: Set DC to bl460g1n6 (3.0.8)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/8, version=0.1.1)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=bl460g1n6/crmd/11, version=0.2.1)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_replace: Digest matched on replace from bl460g1n6: b854d3dfc3ceff652257ff88e226d3ee
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_replace: Replaced 0.2.1 with 0.2.1 from bl460g1n6
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=bl460g1n6/crmd/17, version=0.2.1)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=bl460g1n6/crmd/18, version=0.3.1)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=bl460g1n6/crmd/19, version=0.4.1)
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='bl460g1n7']/transient_attributes
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: update_attrd_helper: Connecting to attrd... 5 retries remaining
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='bl460g1n7']/transient_attributes to master (origin=local/crmd/9)
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x674540 for uid=189 gid=189 pid=29923 id=e83c1a38-b19d-45d7-a676-9875e2d932d2
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: attrd_client_message: Starting an election to determine the writer
Jan 15 15:38:30 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Jan 15 15:38:30 bl460g1n7 crmd[29923]:   notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Jan 15 15:38:30 bl460g1n7 cib[29939]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Jan 15 15:38:30 bl460g1n7 attrd[29921]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:30 bl460g1n7 attrd[29921]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting terminate[bl460g1n7] = (null)
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting shutdown[bl460g1n7] = 0
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: election_count_vote: Election 1 (owner: 3232261592) lost: vote from bl460g1n6 (Uptime)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/22, version=0.4.2)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/24, version=0.4.3)
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: election_count_vote: Election 2 (owner: 3232261592) lost: vote from bl460g1n6 (Uptime)
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting probe_complete[bl460g1n7] = true
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section cib: OK (rc=0, origin=bl460g1n6/crmd/27, version=0.4.4)
Jan 15 15:38:30 bl460g1n7 cib[29939]:     info: write_cib_contents: Wrote version 0.1.0 of the CIB to disk (digest: 95284c32320f2298eff00c4881b9db37)
Jan 15 15:38:30 bl460g1n7 attrd[29921]:   notice: attrd_peer_message: Processing sync-response from bl460g1n6
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/attrd/2, version=0.4.5)
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/attrd/4, version=0.4.6)
Jan 15 15:38:30 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting probe_complete[bl460g1n7] = true
Jan 15 15:38:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/attrd/5, version=0.4.7)
Jan 15 15:38:30 bl460g1n7 cib[29939]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.ccxrhh (digest: /var/lib/pacemaker/cib/cib.Rj7qoZ)
Jan 15 15:38:30 bl460g1n7 cib[29940]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Jan 15 15:38:30 bl460g1n7 cib[29940]:     info: write_cib_contents: Wrote version 0.4.0 of the CIB to disk (digest: 034d1fed1360797812b0c9fb59cc7300)
Jan 15 15:38:30 bl460g1n7 cib[29940]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.HVacCq (digest: /var/lib/pacemaker/cib/cib.ZhjS38)
Jan 15 15:38:39 bl460g1n7 crmd[29923]:     info: throttle_send_command: Updated throttle state to 0000
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: election_count_vote: Election 3 (owner: 3232261592) lost: vote from bl460g1n6 (Uptime)
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: update_dc: Unset DC. Was bl460g1n6
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_NOT_DC
Jan 15 15:38:46 bl460g1n7 crmd[29923]:   notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/10, version=0.4.7)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section 'all': OK (rc=0, origin=bl460g1n6/cibadmin/2, version=0.5.1)
Jan 15 15:38:46 bl460g1n7 stonith-ng[29919]:     info: update_cib_stonith_devices: Updating device list from the cib: new location constraint
Jan 15 15:38:46 bl460g1n7 stonith-ng[29919]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 15 15:38:46 bl460g1n7 stonith-ng[29919]:  warning: handle_startup_fencing: Blind faith: not fencing unseen nodes
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: update_dc: Set DC to bl460g1n6 (3.0.8)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=bl460g1n6/crmd/37, version=0.6.1)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/11, version=0.6.1)
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: election_count_vote: Election 4 (owner: 3232261592) lost: vote from bl460g1n6 (Uptime)
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: update_dc: Unset DC. Was bl460g1n6
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: update_dc: Set DC to bl460g1n6 (3.0.8)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/12, version=0.6.1)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section crm_config: OK (rc=0, origin=bl460g1n6/crmd/39, version=0.7.1)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_replace: Digest matched on replace from bl460g1n6: 202d79c3f98d593bb780f3e8c5773f31
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_replace: Replaced 0.7.1 with 0.7.1 from bl460g1n6
Jan 15 15:38:46 bl460g1n7 crmd[29923]:     info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
Jan 15 15:38:46 bl460g1n7 crmd[29923]:   notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=bl460g1n6/crmd/45, version=0.7.1)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='bl460g1n6']/lrm: OK (rc=0, origin=bl460g1n6/crmd/48, version=0.7.2)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/49, version=0.7.3)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section //node_state[@uname='bl460g1n7']/lrm: OK (rc=0, origin=bl460g1n6/crmd/50, version=0.7.4)
Jan 15 15:38:46 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/51, version=0.7.5)
Jan 15 15:38:46 bl460g1n7 cib[29947]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Jan 15 15:38:47 bl460g1n7 cib[29947]:     info: write_cib_contents: Wrote version 0.5.0 of the CIB to disk (digest: 261105df11f196285308572a835e985e)
Jan 15 15:38:47 bl460g1n7 cib[29947]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.zfhlxI (digest: /var/lib/pacemaker/cib/cib.lRqN29)
Jan 15 15:38:47 bl460g1n7 cib[29948]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-3.raw
Jan 15 15:38:47 bl460g1n7 cib[29948]:     info: write_cib_contents: Wrote version 0.7.0 of the CIB to disk (digest: 5a361672811793a573b8a0a00219f249)
Jan 15 15:38:47 bl460g1n7 cib[29948]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.nfLdwK (digest: /var/lib/pacemaker/cib/cib.KRpcnc)
Jan 15 15:38:48 bl460g1n7 lrmd[29920]:     info: process_lrmd_get_rsc_info: Resource 'prmVM2' not found (0 active resources)
Jan 15 15:38:48 bl460g1n7 lrmd[29920]:     info: process_lrmd_rsc_register: Added 'prmVM2' to the rsc list (1 active resources)
Jan 15 15:38:48 bl460g1n7 crmd[29923]:     info: do_lrm_rsc_op: Performing key=7:2:7:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_monitor_0
Jan 15 15:38:48 bl460g1n7 lrmd[29920]:     info: process_lrmd_get_rsc_info: Resource 'prmPing' not found (1 active resources)
Jan 15 15:38:48 bl460g1n7 lrmd[29920]:     info: process_lrmd_get_rsc_info: Resource 'prmPing:1' not found (1 active resources)
Jan 15 15:38:48 bl460g1n7 lrmd[29920]:     info: process_lrmd_rsc_register: Added 'prmPing' to the rsc list (2 active resources)
Jan 15 15:38:48 bl460g1n7 crmd[29923]:     info: do_lrm_rsc_op: Performing key=8:2:7:be72ea63-75a9-4de4-a591-e716f960743b op=prmPing_monitor_0
Jan 15 15:38:49 bl460g1n7 crmd[29923]:     info: services_os_action_execute: Managed ping_meta-data_0 process 29961 exited with rc=0
Jan 15 15:38:49 bl460g1n7 crmd[29923]:   notice: process_lrm_event: LRM operation prmPing_monitor_0 (call=10, rc=7, cib-update=13, confirmed=true) not running
Jan 15 15:38:49 bl460g1n7 cib[29918]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/13)
Jan 15 15:38:49 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/13, version=0.7.6)
Jan 15 15:38:49 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/56, version=0.7.7)
Jan 15 15:38:49 bl460g1n7 VirtualDomain(prmVM2)[29949]: DEBUG: Virtual domain vm2 is currently error: failed to get domain 'vm2' error: Domain not found: no domain with matching name 'vm2'.
Jan 15 15:38:49 bl460g1n7 lrmd[29920]:   notice: operation_finished: prmVM2_monitor_0:29949:stderr [ error: failed to get domain 'vm2' ]
Jan 15 15:38:49 bl460g1n7 lrmd[29920]:   notice: operation_finished: prmVM2_monitor_0:29949:stderr [ error: Domain not found: no domain with matching name 'vm2' ]
Jan 15 15:38:49 bl460g1n7 lrmd[29920]:   notice: operation_finished: prmVM2_monitor_0:29949:stderr [ error: failed to get domain 'vm2' ]
Jan 15 15:38:49 bl460g1n7 lrmd[29920]:   notice: operation_finished: prmVM2_monitor_0:29949:stderr [ error: Domain not found: no domain with matching name 'vm2' ]
Jan 15 15:38:49 bl460g1n7 crmd[29923]:     info: services_os_action_execute: Managed VirtualDomain_meta-data_0 process 29998 exited with rc=0
Jan 15 15:38:49 bl460g1n7 crmd[29923]:   notice: process_lrm_event: LRM operation prmVM2_monitor_0 (call=5, rc=7, cib-update=14, confirmed=true) not running
Jan 15 15:38:49 bl460g1n7 cib[29918]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/14)
Jan 15 15:38:49 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting probe_complete[bl460g1n7] = true
Jan 15 15:38:49 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/14, version=0.7.8)
Jan 15 15:38:49 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/57, version=0.7.9)
Jan 15 15:38:49 bl460g1n7 crmd[29923]:     info: do_lrm_rsc_op: Performing key=11:2:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmPing_start_0
Jan 15 15:38:49 bl460g1n7 lrmd[29920]:     info: log_execute: executing - rsc:prmPing action:start call_id:11
Jan 15 15:38:50 bl460g1n7 attrd_updater[30021]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:50 bl460g1n7 attrd_updater[30021]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:50 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x6971d0 for uid=0 gid=0 pid=30021 id=b18bbef6-a30d-4663-a91d-f3c1ae9c678f
Jan 15 15:38:50 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n7] = 100
Jan 15 15:38:50 bl460g1n7 attrd[29921]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:50 bl460g1n7 lrmd[29920]:     info: log_finished: finished - rsc:prmPing action:start call_id:11 pid:30004 exit-code:0 exec-time:1038ms queue-time:0ms
Jan 15 15:38:50 bl460g1n7 crmd[29923]:   notice: process_lrm_event: LRM operation prmPing_start_0 (call=11, rc=0, cib-update=15, confirmed=true) ok
Jan 15 15:38:50 bl460g1n7 cib[29918]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/15)
Jan 15 15:38:50 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/attrd/9, version=0.7.10)
Jan 15 15:38:50 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/15, version=0.7.11)
Jan 15 15:38:50 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/attrd/10, version=0.7.12)
Jan 15 15:38:50 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/58, version=0.7.13)
Jan 15 15:38:52 bl460g1n7 crmd[29923]:     info: do_lrm_rsc_op: Performing key=12:3:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmPing_monitor_10000
Jan 15 15:38:53 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section resources: OK (rc=0, origin=bl460g1n6/crm_resource/5, version=0.8.1)
Jan 15 15:38:53 bl460g1n7 cib[30037]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-4.raw
Jan 15 15:38:53 bl460g1n7 cib[30037]:     info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: caa5d622b396eb492d1acba1234836ed)
Jan 15 15:38:53 bl460g1n7 cib[30037]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.SBhrr2 (digest: /var/lib/pacemaker/cib/cib.71KNOK)
Jan 15 15:38:53 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section resources: OK (rc=0, origin=bl460g1n6/crm_resource/5, version=0.9.1)
Jan 15 15:38:53 bl460g1n7 attrd_updater[30040]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:53 bl460g1n7 attrd_updater[30040]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:38:53 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x697d00 for uid=0 gid=0 pid=30040 id=15b1c188-32bf-400d-9b65-5b04e398170b
Jan 15 15:38:53 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n7] = 100
Jan 15 15:38:53 bl460g1n7 attrd[29921]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:53 bl460g1n7 crmd[29923]:   notice: process_lrm_event: LRM operation prmPing_monitor_10000 (call=12, rc=0, cib-update=16, confirmed=false) ok
Jan 15 15:38:53 bl460g1n7 cib[29918]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/16)
Jan 15 15:38:53 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/60, version=0.9.2)
Jan 15 15:38:53 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/16, version=0.9.3)
Jan 15 15:38:53 bl460g1n7 cib[30041]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-5.raw
Jan 15 15:38:53 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/61, version=0.9.4)
Jan 15 15:38:53 bl460g1n7 cib[30041]:     info: write_cib_contents: Wrote version 0.9.0 of the CIB to disk (digest: aca48e4d4ed7a4f0d998bcd77e84d6e9)
Jan 15 15:38:53 bl460g1n7 cib[30041]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.Msfw0b (digest: /var/lib/pacemaker/cib/cib.ZvauIU)
Jan 15 15:38:55 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/63, version=0.9.5)
Jan 15 15:39:04 bl460g1n7 attrd_updater[30063]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:04 bl460g1n7 attrd_updater[30063]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:04 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x697d00 for uid=0 gid=0 pid=30063 id=4e9e7209-5dbf-4994-aa2c-c95fbd712728
Jan 15 15:39:04 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n7] = 100
Jan 15 15:39:04 bl460g1n7 attrd[29921]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:15 bl460g1n7 attrd_updater[30083]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:15 bl460g1n7 attrd_updater[30083]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:15 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x697d00 for uid=0 gid=0 pid=30083 id=ae014ab9-bfd7-44dc-ae25-0acad7f0addd
Jan 15 15:39:15 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n7] = 100
Jan 15 15:39:15 bl460g1n7 attrd[29921]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:20 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section nodes: OK (rc=0, origin=bl460g1n6/crm_attribute/4, version=0.10.1)
Jan 15 15:39:20 bl460g1n7 cib[30087]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-6.raw
Jan 15 15:39:20 bl460g1n7 cib[30087]:     info: write_cib_contents: Wrote version 0.10.0 of the CIB to disk (digest: c3a684b15ebe3cc70d3e0b780cde1564)
Jan 15 15:39:20 bl460g1n7 cib[30087]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.CDwrtF (digest: /var/lib/pacemaker/cib/cib.oq2csA)
Jan 15 15:39:22 bl460g1n7 attrd[29921]:     info: attrd_peer_update: Setting default_ping_set[bl460g1n6]: 100 -> (null) from bl460g1n6
Jan 15 15:39:22 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/65, version=0.10.2)
Jan 15 15:39:26 bl460g1n7 attrd_updater[30232]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:26 bl460g1n7 attrd_updater[30232]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:26 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x697d00 for uid=0 gid=0 pid=30232 id=e84ad079-f027-4ff6-99fe-6e6793047297
Jan 15 15:39:26 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n7] = 100
Jan 15 15:39:26 bl460g1n7 attrd[29921]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:27 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/attrd/11, version=0.10.3)
Jan 15 15:39:28 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/66, version=0.10.4)
Jan 15 15:39:30 bl460g1n7 crmd[29923]:     info: do_lrm_rsc_op: Performing key=8:6:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_start_0
Jan 15 15:39:30 bl460g1n7 lrmd[29920]:     info: log_execute: executing - rsc:prmVM2 action:start call_id:13
Jan 15 15:39:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/68, version=0.10.5)
Jan 15 15:39:30 bl460g1n7 VirtualDomain(prmVM2)[30252]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:39:30 bl460g1n7 VirtualDomain(prmVM2)[30252]: INFO: Virtual domain vm2 already running.
Jan 15 15:39:30 bl460g1n7 lrmd[29920]:     info: log_finished: finished - rsc:prmVM2 action:start call_id:13 pid:30252 exit-code:0 exec-time:73ms queue-time:0ms
Jan 15 15:39:30 bl460g1n7 crmd[29923]:     info: services_os_action_execute: Managed VirtualDomain_meta-data_0 process 30284 exited with rc=0
Jan 15 15:39:30 bl460g1n7 crmd[29923]:   notice: process_lrm_event: LRM operation prmVM2_start_0 (call=13, rc=0, cib-update=17, confirmed=true) ok
Jan 15 15:39:30 bl460g1n7 cib[29918]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/17)
Jan 15 15:39:30 bl460g1n7 crmd[29923]:     info: do_lrm_rsc_op: Performing key=9:6:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_monitor_10000
Jan 15 15:39:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/17, version=0.10.6)
Jan 15 15:39:30 bl460g1n7 VirtualDomain(prmVM2)[30290]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:39:30 bl460g1n7 crm_resource[30323]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:30 bl460g1n7 crm_resource[30323]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:30 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x1743bb0 for uid=0 gid=0 pid=30323 id=b0f3c9d1-969f-45ad-8d54-a548a4e1d47b
Jan 15 15:39:30 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.10.6)
Jan 15 15:39:30 bl460g1n7 cib[29918]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:31 bl460g1n7 crm_resource[30329]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:31 bl460g1n7 crm_resource[30329]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:31 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x1743bb0 for uid=0 gid=0 pid=30329 id=db062f73-f796-4693-a5f5-fcf432d9aa18
Jan 15 15:39:31 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.10.6)
Jan 15 15:39:31 bl460g1n7 cib[29918]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:31 bl460g1n7 crmd[29923]:   notice: process_lrm_event: LRM operation prmVM2_monitor_10000 (call=14, rc=0, cib-update=18, confirmed=false) ok
Jan 15 15:39:31 bl460g1n7 cib[29918]:     info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/18)
Jan 15 15:39:31 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_apply_diff operation for section status: OK (rc=0, origin=bl460g1n6/crmd/18, version=0.10.7)
Jan 15 15:39:37 bl460g1n7 attrd_updater[30368]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:37 bl460g1n7 attrd_updater[30368]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:37 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x697d00 for uid=0 gid=0 pid=30368 id=89222a10-cbbc-4581-a3f7-edafdb60f542
Jan 15 15:39:37 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n7] = 100
Jan 15 15:39:37 bl460g1n7 attrd[29921]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:41 bl460g1n7 VirtualDomain(prmVM2)[30373]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:39:41 bl460g1n7 crm_resource[30406]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:41 bl460g1n7 crm_resource[30406]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:41 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x1743bb0 for uid=0 gid=0 pid=30406 id=4fb8537d-0eb7-4b37-a069-29c123febfcf
Jan 15 15:39:41 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.10.7)
Jan 15 15:39:41 bl460g1n7 cib[29918]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:41 bl460g1n7 crm_resource[30412]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:41 bl460g1n7 crm_resource[30412]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:41 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x1743bb0 for uid=0 gid=0 pid=30412 id=a613960e-d012-41fa-acc8-93d73ded79f6
Jan 15 15:39:41 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.10.7)
Jan 15 15:39:41 bl460g1n7 cib[29918]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:48 bl460g1n7 attrd_updater[30429]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:48 bl460g1n7 attrd_updater[30429]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:48 bl460g1n7 attrd[29921]:     info: crm_client_new: Connecting 0x697d00 for uid=0 gid=0 pid=30429 id=ec478d3a-da30-42f8-8757-263c2212b761
Jan 15 15:39:48 bl460g1n7 attrd[29921]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n7] = 100
Jan 15 15:39:48 bl460g1n7 attrd[29921]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:51 bl460g1n7 VirtualDomain(prmVM2)[30430]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:39:51 bl460g1n7 crm_resource[30463]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:51 bl460g1n7 crm_resource[30463]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:51 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x1743bb0 for uid=0 gid=0 pid=30463 id=a434b4c6-5c10-471d-b072-060f3da8f750
Jan 15 15:39:51 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.10.7)
Jan 15 15:39:51 bl460g1n7 cib[29918]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:51 bl460g1n7 crm_resource[30469]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:51 bl460g1n7 crm_resource[30469]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jan 15 15:39:51 bl460g1n7 cib[29918]:     info: crm_client_new: Connecting 0x1743bb0 for uid=0 gid=0 pid=30469 id=9c9e09dc-77d8-4670-b85b-bea9ff191243
Jan 15 15:39:51 bl460g1n7 cib[29918]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.10.7)
Jan 15 15:39:51 bl460g1n7 cib[29918]:     info: crm_client_destroy: Destroying 0 events
