Jan 15 15:38:05 bl460g1n6 corosync[30773]:   [MAIN  ] main.c:main:1176 Corosync Cluster Engine ('2.3.3'): started and ready to provide service.
Jan 15 15:38:05 bl460g1n6 corosync[30773]:   [MAIN  ] main.c:main:1177 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.216] is now up.
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cmap
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cfg
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: cpg
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [WD    ] wd.c:setup_watchdog:651 Watchdog is now been tickled by corosync.
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: votequorum
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [QB    ] ipc_setup.c:qb_ipcs_us_publish:377 server name: quorum
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.216] is now up.
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.216:4) was formed. Members joined: -1062705704
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705704
Jan 15 15:38:05 bl460g1n6 corosync[30775]:   [MAIN  ] main.c:corosync_sync_completed:279 Completed service synchronization, ready to provide service.
Jan 15 15:38:06 bl460g1n6 corosync[30775]:   [TOTEM ] totemsrp.c:memb_state_operational_enter:2016 A new membership (192.168.101.216:8) was formed. Members joined: -1062705703
Jan 15 15:38:06 bl460g1n6 corosync[30775]:   [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Jan 15 15:38:06 bl460g1n6 corosync[30775]:   [QUORUM] vsf_quorum.c:log_view_list:132 Members[2]: -1062705704 -1062705703
Jan 15 15:38:06 bl460g1n6 corosync[30775]:   [MAIN  ] main.c:corosync_sync_completed:279 Completed service synchronization, ready to provide service.
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: main: Starting Pacemaker 1.1.11-0.27.b48276b.git.el6 (Build: b48276b):  generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: main: Maximum core file size is: 18446744073709551615
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: qb_ipcs_us_publish: server name: pacemakerd
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: crm_get_peer: Created entry 1e0d5776-8f4a-4d29-841a-8c424c088ba9/0xf94360 for node (null)/3232261592 (1 total)
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: cluster_connect_quorum: Quorum acquired
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Using uid=189 and group=189 for process cib
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Forked child 30790 for process cib
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Forked child 30791 for process stonith-ng
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Forked child 30792 for process lrmd
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Using uid=189 and group=189 for process attrd
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Forked child 30793 for process attrd
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Using uid=189 and group=189 for process pengine
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Forked child 30794 for process pengine
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Using uid=189 and group=189 for process crmd
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: start_child: Forked child 30795 for process crmd
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: main: Starting mainloop
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: pcmk_quorum_notification: Membership 8: quorum retained (2)
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node bl460g1n6[3232261592] - state is now member (was (null))
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: crm_get_peer: Created entry b67d9d0c-4440-404b-9215-9f56ff57d0cd/0xf95c70 for node (null)/3232261593 (2 total)
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n6 cib[30790]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: main: Using new config location: /var/lib/pacemaker/cib
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Jan 15 15:38:08 bl460g1n6 cib[30790]:  warning: retrieveCib: Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Jan 15 15:38:08 bl460g1n6 cib[30790]:  warning: readCibXmlFile: Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Jan 15 15:38:08 bl460g1n6 cib[30790]:  warning: readCibXmlFile: Continuing with an empty configuration.
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: validate_with_relaxng: Creating RNG parser context
Jan 15 15:38:08 bl460g1n6 attrd[30793]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n6 lrmd[30792]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n6 lrmd[30792]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n6 attrd[30793]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n6 lrmd[30792]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:08 bl460g1n6 lrmd[30792]:     info: qb_ipcs_us_publish: server name: lrmd
Jan 15 15:38:08 bl460g1n6 lrmd[30792]:     info: main: Starting
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: main: Starting up
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n6 attrd[30793]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:08 bl460g1n6 pengine[30794]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n6 pengine[30794]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n6 pengine[30794]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n6 pengine[30794]:     info: qb_ipcs_us_publish: server name: pengine
Jan 15 15:38:08 bl460g1n6 pengine[30794]:     info: main: Starting pengine
Jan 15 15:38:08 bl460g1n6 crmd[30795]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:08 bl460g1n6 crmd[30795]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:08 bl460g1n6 crmd[30795]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 15 15:38:08 bl460g1n6 crmd[30795]:   notice: main: CRM Git Version: b48276b
Jan 15 15:38:08 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jan 15 15:38:08 bl460g1n6 crmd[30795]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jan 15 15:38:08 bl460g1n6 crmd[30795]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jan 15 15:38:08 bl460g1n6 crmd[30795]:     info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261593
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: startCib: CIB Initialization completed successfully
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: crm_get_peer: Created entry dca981cd-90df-407b-b087-0acd5655a372/0x172b3b0 for node (null)/3232261592 (1 total)
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: crm_get_peer: Created entry 21034731-3fef-40be-a112-0958fa8b199d/0xb078a0 for node (null)/3232261592 (1 total)
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_get_peer: Created entry 0b61719b-98ca-4080-a5d7-12327d3ac138/0x1772180 for node (null)/3232261592 (1 total)
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 attrd[30793]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 attrd[30793]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n6 attrd[30793]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261592] - state is now member (was (null))
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261593] - state is now member (was (null))
Jan 15 15:38:08 bl460g1n6 pacemakerd[30786]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:08 bl460g1n6 stonith-ng[30791]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Jan 15 15:38:08 bl460g1n6 attrd[30793]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 attrd[30793]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: main: Cluster connection active
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: qb_ipcs_us_publish: server name: attrd
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: main: Accepting attribute updates
Jan 15 15:38:08 bl460g1n6 attrd[30793]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: qb_ipcs_us_publish: server name: cib_ro
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: qb_ipcs_us_publish: server name: cib_rw
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: qb_ipcs_us_publish: server name: cib_shm
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: cib_init: Starting cib mainloop
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: pcmk_cpg_membership: Joined[0.0] cib.3232261592 
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: pcmk_cpg_membership: Member[0.0] cib.3232261592 
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_get_peer: Created entry 4e1ac5a1-b161-474b-bc50-8efe070b04cc/0x1774b50 for node (null)/3232261593 (2 total)
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:08 bl460g1n6 cib[30790]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: pcmk_cpg_membership: Member[0.1] cib.3232261593 
Jan 15 15:38:08 bl460g1n6 cib[30790]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:08 bl460g1n6 cib[30796]:     info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: d3813d3f6bc333e7748d9257dda8345d)
Jan 15 15:38:08 bl460g1n6 cib[30796]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.ffKpx3 (digest: /var/lib/pacemaker/cib/cib.P53SeC)
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x1775230 for uid=189 gid=189 pid=30795 id=cc068cb9-d0e9-4926-8c5d-e7263aa1c9fe
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: do_cib_control: CIB connection established
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: crm_get_peer: Created entry 86f26d75-ef6c-41c1-8dfd-018826c1911a/0x18a2e80 for node (null)/3232261592 (1 total)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261592
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: crm_get_peer: Node 3232261592 has uuid 3232261592
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: crm_update_peer_proc: cluster_connect_cpg: Node (null)[3232261592] - corosync-cpg is now online
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: init_cs_connection_once: Connection to 'corosync': established
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: crm_get_peer: Node 3232261592 is now known as bl460g1n6
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: peer_update_callback: bl460g1n6 is now (null)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: cluster_connect_quorum: Quorum acquired
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: do_ha_control: Connected to the cluster
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: lrmd_ipc_connect: Connecting to lrmd
Jan 15 15:38:09 bl460g1n6 lrmd[30792]:     info: crm_client_new: Connecting 0x1c40df0 for uid=189 gid=189 pid=30795 id=ea951299-8a4d-4fd6-8900-d6588e07ac38
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: do_lrm_control: LRM connection established
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: pcmk_quorum_notification: Membership 8: quorum retained (2)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node bl460g1n6[3232261592] - state is now member (was (null))
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: peer_update_callback: bl460g1n6 is now member (was (null))
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: crm_get_peer: Created entry 42d1c91f-ad53-45b5-8167-d2405c4fe866/0x19ea180 for node (null)/3232261593 (2 total)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: pcmk_quorum_notification: Obtaining name for new node 3232261593
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x15c3e70 for uid=0 gid=0 pid=30791 id=9371ae0a-f42a-4124-b3e5-4b0c45c649ba
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x17f62e0 for uid=189 gid=189 pid=30793 id=f1049688-9613-4dc5-b406-51d46c6ad9c5
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[3232261593] - state is now member (was (null))
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: attrd_cib_connect: Connected to the CIB after 2 attempts
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: main: CIB connection active
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:   notice: setup_cib: Watching for stonith topology changes
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: pcmk_cpg_membership: Joined[0.0] attrd.3232261592 
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: qb_ipcs_us_publish: server name: stonith-ng
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: pcmk_cpg_membership: Member[0.0] attrd.3232261592 
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: main: Starting stonith-ng mainloop
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: crm_get_peer: Created entry fcb032c1-ac8f-4064-bc44-9eabc58057ed/0x1731300 for node (null)/3232261593 (2 total)
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: pcmk_cpg_membership: Joined[0.0] stonith-ng.3232261592 
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: pcmk_cpg_membership: Member[0.0] stonith-ng.3232261592 
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: crm_get_peer: Created entry 1118be01-a601-485d-aee5-1422541a4372/0xb0b940 for node (null)/3232261593 (2 total)
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: qb_ipcs_us_publish: server name: crmd
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: do_started: The local CRM is operational
Jan 15 15:38:09 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jan 15 15:38:09 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jan 15 15:38:09 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Jan 15 15:38:09 bl460g1n6 attrd[30793]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 attrd[30793]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: pcmk_cpg_membership: Member[0.1] attrd.3232261593 
Jan 15 15:38:09 bl460g1n6 attrd[30793]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:09 bl460g1n6 attrd[30793]:   notice: crm_update_peer_state: attrd_peer_change_cb: Node (null)[3232261593] - state is now member (was (null))
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: crm_get_peer: Node 3232261593 has uuid 3232261593
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: pcmk_cpg_membership: Member[0.1] stonith-ng.3232261593 
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: init_cib_cache_cb: Updating device list from the cib: init
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: unpack_nodes: Creating a fake local node
Jan 15 15:38:09 bl460g1n6 stonith-ng[30791]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:10 bl460g1n6 crmd[30795]:     info: pcmk_cpg_membership: Joined[0.0] crmd.3232261592 
Jan 15 15:38:10 bl460g1n6 crmd[30795]:     info: pcmk_cpg_membership: Member[0.0] crmd.3232261592 
Jan 15 15:38:10 bl460g1n6 crmd[30795]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261593
Jan 15 15:38:10 bl460g1n6 crmd[30795]:   notice: get_node_name: Could not obtain a node name for corosync nodeid 3232261593
Jan 15 15:38:10 bl460g1n6 crmd[30795]:     info: pcmk_cpg_membership: Member[0.1] crmd.3232261593 
Jan 15 15:38:10 bl460g1n6 crmd[30795]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[3232261593] - corosync-cpg is now online
Jan 15 15:38:10 bl460g1n6 crmd[30795]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:10 bl460g1n6 crmd[30795]:     info: peer_update_callback: bl460g1n7 is now member
Jan 15 15:38:10 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x17f6840 for uid=0 gid=0 pid=29122 id=875af07b-f85b-4f25-9e41-a417703178c0
Jan 15 15:38:10 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/3, version=0.0.0)
Jan 15 15:38:11 bl460g1n6 stonith-ng[30791]:     info: crm_client_new: Connecting 0xb0f310 for uid=189 gid=189 pid=30795 id=ba6342f3-f8b0-4b20-917d-e976d74e8389
Jan 15 15:38:11 bl460g1n6 stonith-ng[30791]:     info: stonith_command: Processed register from crmd.30795: OK (0)
Jan 15 15:38:11 bl460g1n6 stonith-ng[30791]:     info: stonith_command: Processed st_notify from crmd.30795: OK (0)
Jan 15 15:38:11 bl460g1n6 stonith-ng[30791]:     info: stonith_command: Processed st_notify from crmd.30795: OK (0)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: election_count_vote: Election 1 (owner: 3232261593) pass: vote from bl460g1n7 (Uptime)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: election_complete: Election election-0 complete
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: election_timeout_popped: Election failed: Declaring ourselves the winner
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_ELECTION_DC from election_timeout_popped() received in state S_ELECTION
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_te_control: Registering TE UUID: be72ea63-75a9-4de4-a591-e716f960743b
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: set_graph_functions: Setting custom graph functions
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: crm_client_new: Connecting 0x1fde950 for uid=189 gid=189 pid=30795 id=264993ce-afe0-4fb9-89eb-67e7abc7232f
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_dc_takeover: Taking over DC status for this partition
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_readwrite: We are now in R/W mode
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.0.0)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.0.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: No such device or address (rc=-6, origin=local/crmd/8, version=0.0.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: log_cib_diff: cib:diff: Local-only Change: 0.1.1
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="0" num_updates="1"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <cluster_property_set id="cib-bootstrap-options">
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.27.b48276b.git.el6-b48276b"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: ++       </cluster_property_set>
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.1.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: No such device or address (rc=-6, origin=local/crmd/10, version=0.1.1)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: join_make_offer: Making join offers based on membership 8
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: join_make_offer: join-1: Sending offer to bl460g1n7
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n7[3232261593] - join-1 phase 0 -> 1
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: join_make_offer: join-1: Sending offer to bl460g1n6
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n6[3232261592] - join-1 phase 0 -> 1
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Jan 15 15:38:30 bl460g1n6 crmd[30795]:  warning: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: initialize_join: Node bl460g1n7[3232261593] - join-2 phase 1 -> 0
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: initialize_join: Node bl460g1n6[3232261592] - join-2 phase 1 -> 0
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: join_make_offer: join-2: Sending offer to bl460g1n7
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n7[3232261593] - join-2 phase 0 -> 1
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: join_make_offer: join-2: Sending offer to bl460g1n6
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n6[3232261592] - join-2 phase 0 -> 1
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: update_dc: Set DC to bl460g1n6 (3.0.8)
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: log_cib_diff: cib:diff: Local-only Change: 0.2.1
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="1" num_updates="1"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.2.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.2.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/13, version=0.2.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.2.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/15, version=0.2.1)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/16, version=0.2.1)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node bl460g1n7[3232261593] - join-2 phase 1 -> 2
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node bl460g1n7[3232261593] - expected state is now member (was (null))
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node bl460g1n6[3232261592] - join-2 phase 1 -> 2
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node bl460g1n6[3232261592] - expected state is now member (was (null))
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crmd_join_phase_log: join-2: bl460g1n7=integrated
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crmd_join_phase_log: join-2: bl460g1n6=integrated
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_dc_join_finalize: join-2: Syncing our CIB to the rest of the cluster
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/17, version=0.2.1)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: finalize_join_for: Node bl460g1n7[3232261593] - join-2 phase 2 -> 3
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: finalize_join_for: Node bl460g1n6[3232261592] - join-2 phase 2 -> 3
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: log_cib_diff: cib:diff: Local-only Change: 0.3.1
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="2" num_updates="1"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <node id="3232261593" uname="bl460g1n7"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/18, version=0.3.1)
Jan 15 15:38:30 bl460g1n6 cib[30804]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: log_cib_diff: cib:diff: Local-only Change: 0.4.1
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="3" num_updates="1"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <node id="3232261592" uname="bl460g1n6"/>
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.4.1)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='bl460g1n6']/transient_attributes
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: update_attrd_helper: Connecting to attrd... 5 retries remaining
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='bl460g1n6']/transient_attributes: OK (rc=0, origin=local/crmd/20, version=0.4.1)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: crm_client_new: Connecting 0x172e6f0 for uid=189 gid=189 pid=30795 id=ae87db0f-35c1-4af2-b486-2948b0b726b4
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_client_message: Starting an election to determine the writer
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_ack: Node bl460g1n6[3232261592] - join-2 phase 3 -> 4
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_dc_join_ack: join-2: Updating node state to member for bl460g1n6
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='bl460g1n6']/lrm
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='bl460g1n6']/lrm: OK (rc=0, origin=local/crmd/21, version=0.4.1)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: corosync_node_name: Unable to get node name for nodeid 3232261592
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: get_node_name: Defaulting to uname -n for the local corosync node name
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting terminate[bl460g1n6] = (null)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/22, version=0.4.2)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting shutdown[bl460g1n6] = 0
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='bl460g1n7']/transient_attributes: OK (rc=0, origin=bl460g1n7/crmd/9, version=0.4.2)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: crm_get_peer: Node 3232261593 is now known as bl460g1n7
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: election_count_vote: Election 1 (owner: 3232261593) pass: vote from bl460g1n7 (Uptime)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_ack: Node bl460g1n7[3232261593] - join-2 phase 3 -> 4
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_dc_join_ack: join-2: Updating node state to member for bl460g1n7
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='bl460g1n7']/lrm
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='bl460g1n7']/lrm: OK (rc=0, origin=local/crmd/23, version=0.4.2)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/24, version=0.4.3)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/25, version=0.4.3)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: abort_transition_graph: do_te_invoke:151 - Triggered transition abort (complete=1) : Peer Cancelled
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/26, version=0.4.3)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/27, version=0.4.4)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/28, version=0.4.4)
Jan 15 15:38:30 bl460g1n6 pengine[30794]:    error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Jan 15 15:38:30 bl460g1n6 pengine[30794]:    error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Jan 15 15:38:30 bl460g1n6 pengine[30794]:    error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status_fencing: Node bl460g1n6 is active
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n6 is online
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status_fencing: Node bl460g1n7 is active
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n7 is online
Jan 15 15:38:30 bl460g1n6 pengine[30794]:   notice: stage6: Delaying fencing operations until there are resources to manage
Jan 15 15:38:30 bl460g1n6 pengine[30794]:   notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-0.bz2
Jan 15 15:38:30 bl460g1n6 cib[30804]:     info: write_cib_contents: Wrote version 0.1.0 of the CIB to disk (digest: 95284c32320f2298eff00c4881b9db37)
Jan 15 15:38:30 bl460g1n6 pengine[30794]:   notice: process_pe_message: Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1389767910-13) derived from /var/lib/pacemaker/pengine/pe-input-0.bz2
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on bl460g1n7 - no waiting
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: te_rsc_command: Action 3 confirmed - no wait
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 2: probe_complete probe_complete on bl460g1n6 (local) - no waiting
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: te_rsc_command: Action 2 confirmed - no wait
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting probe_complete[bl460g1n6] = true
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: run_graph: Transition 0 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: election_complete: Election election-attrd complete
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 2 with 2 changes for shutdown, id=<n/a>, set=(null)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 3 with 2 changes for terminate, id=<n/a>, set=(null)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 4 with 1 changes for probe_complete, id=<n/a>, set=(null)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: write_attribute: Write out of 'probe_complete' delayed: update 4 in progress
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/2, version=0.4.5)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=1, node=bl460g1n6, tag=nvpair, id=status-3232261592-shutdown, name=shutdown, value=0, magic=NA, cib=0.4.5) : Transient attribute: update
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/3, version=0.4.5)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 3 for terminate: OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 3 for terminate[bl460g1n6]=(null): OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 3 for terminate[bl460g1n7]=(null): OK (0)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/4, version=0.4.6)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 2 for shutdown: OK (0)
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/29, version=0.4.6)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 2 for shutdown[bl460g1n6]=0: OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 2 for shutdown[bl460g1n7]=0: OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 4 for probe_complete: OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 4 for probe_complete[bl460g1n6]=true: OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 4 for probe_complete[bl460g1n7]=(null): OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 5 with 2 changes for probe_complete, id=<n/a>, set=(null)
Jan 15 15:38:30 bl460g1n6 pengine[30794]:    error: unpack_resources: Resource start-up disabled since no STONITH resources have been defined
Jan 15 15:38:30 bl460g1n6 pengine[30794]:    error: unpack_resources: Either configure some or disable STONITH with the stonith-enabled option
Jan 15 15:38:30 bl460g1n6 pengine[30794]:    error: unpack_resources: NOTE: Clusters with shared data need STONITH to ensure data integrity
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status_fencing: Node bl460g1n6 is active
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n6 is online
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status_fencing: Node bl460g1n7 is active
Jan 15 15:38:30 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n7 is online
Jan 15 15:38:30 bl460g1n6 pengine[30794]:   notice: stage6: Delaying fencing operations until there are resources to manage
Jan 15 15:38:30 bl460g1n6 pengine[30794]:   notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-1.bz2
Jan 15 15:38:30 bl460g1n6 pengine[30794]:   notice: process_pe_message: Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1389767910-16) derived from /var/lib/pacemaker/pengine/pe-input-1.bz2
Jan 15 15:38:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/5, version=0.4.7)
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on bl460g1n7 - no waiting
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: te_rsc_command: Action 3 confirmed - no wait
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: run_graph: Transition 1 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
Jan 15 15:38:30 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Jan 15 15:38:30 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 15 15:38:30 bl460g1n6 cib[30804]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.am6WU2 (digest: /var/lib/pacemaker/cib/cib.lVkKJz)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 5 for probe_complete: OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 5 for probe_complete[bl460g1n6]=true: OK (0)
Jan 15 15:38:30 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 5 for probe_complete[bl460g1n7]=true: OK (0)
Jan 15 15:38:30 bl460g1n6 cib[30805]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Jan 15 15:38:30 bl460g1n6 cib[30805]:     info: write_cib_contents: Wrote version 0.4.0 of the CIB to disk (digest: 034d1fed1360797812b0c9fb59cc7300)
Jan 15 15:38:30 bl460g1n6 cib[30805]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.8yLEVf (digest: /var/lib/pacemaker/cib/cib.cPS3aN)
Jan 15 15:38:39 bl460g1n6 crmd[30795]:     info: throttle_send_command: Updated throttle state to 0000
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=30814 id=6c9c9595-0b4f-4ff9-955a-89dcd57d2667
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.4.7)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=30815 id=8f5f8321-a9c8-4cf1-b9e5-8d53897971b7
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.4.7)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=30836 id=cd4959a7-2817-46b9-ab1b-f20e26df76a7
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, node=, tag=diff, id=(null), magic=NA, cib=0.5.1) : Non-status change
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_replace_notify: Replaced: 0.4.7 -> 0.5.1 from bl460g1n6
Jan 15 15:38:46 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: attrd_cib_replaced_cb: Updating all attributes after cib_refresh_notify event
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 6 with 2 changes for shutdown, id=<n/a>, set=(null)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 7 with 2 changes for terminate, id=<n/a>, set=(null)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 8 with 2 changes for probe_complete, id=<n/a>, set=(null)
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: Diff: --- 0.4.7
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: Diff: +++ 0.5.1 2b4b612c2449664636a2d704509d07d1
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: --         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.27.b48276b.git.el6-b48276b"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: --         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: update_dc: Unset DC. Was bl460g1n6
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair name="no-quorum-policy" value="ignore" id="cib-bootstrap-options-no-quorum-policy"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair name="stonith-enabled" value="false" id="cib-bootstrap-options-stonith-enabled"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay"/>
Jan 15 15:38:46 bl460g1n6 stonith-ng[30791]:     info: update_cib_stonith_devices: Updating device list from the cib: new location constraint
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <primitive id="prmVM2" class="ocf" provider="heartbeat" type="VirtualDomain">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <meta_attributes id="prmVM2-meta_attributes">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <nvpair name="allow-migrate" value="true" id="prmVM2-meta_attributes-allow-migrate"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </meta_attributes>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <instance_attributes id="prmVM2-instance_attributes">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <nvpair name="config" value="/migrate_test/config/vm2.xml" id="prmVM2-instance_attributes-config"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <nvpair name="hypervisor" value="qemu:///system" id="prmVM2-instance_attributes-hypervisor"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <nvpair name="migration_transport" value="ssh" id="prmVM2-instance_attributes-migration_transport"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </instance_attributes>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <operations>
Jan 15 15:38:46 bl460g1n6 stonith-ng[30791]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <op name="start" interval="0s" timeout="120s" on-fail="restart" id="prmVM2-start-0s"/>
Jan 15 15:38:46 bl460g1n6 stonith-ng[30791]:  warning: handle_startup_fencing: Blind faith: not fencing unseen nodes
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <op name="monitor" interval="10s" timeout="30s" on-fail="restart" id="prmVM2-monitor-10s"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <op name="stop" interval="0s" timeout="120s" on-fail="block" id="prmVM2-stop-0s"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <op name="migrate_to" interval="0s" timeout="120s" on-fail="restart" id="prmVM2-migrate_to-0s"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <op name="migrate_from" interval="0s" timeout="120s" on-fail="restart" id="prmVM2-migrate_from-0s"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </operations>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       </primitive>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <clone id="clnPing">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <primitive id="prmPing" class="ocf" provider="pacemaker" type="ping">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <instance_attributes id="prmPing-instance_attributes">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <nvpair name="name" value="default_ping_set" id="prmPing-instance_attributes-name"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <nvpair name="host_list" value="192.168.201.254" id="prmPing-instance_attributes-host_list"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <nvpair name="multiplier" value="100" id="prmPing-instance_attributes-multiplier"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <nvpair name="attempts" value="2" id="prmPing-instance_attributes-attempts"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <nvpair name="timeout" value="2" id="prmPing-instance_attributes-timeout"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           </instance_attributes>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <operations>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <op name="start" interval="0s" timeout="60s" on-fail="restart" id="prmPing-start-0s"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <op name="monitor" interval="10s" timeout="60s" on-fail="restart" id="prmPing-monitor-10s"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++             <op name="stop" interval="0s" timeout="60s" on-fail="ignore" id="prmPing-stop-0s"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           </operations>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </primitive>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       </clone>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <rsc_location id="l2" rsc="prmVM2">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <rule score="200" id="l2-rule">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <expression attribute="#uname" operation="eq" value="bl460g1n6" id="l2-expression"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </rule>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <rule score="100" id="l2-rule-0">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <expression attribute="#uname" operation="eq" value="bl460g1n7" id="l2-expression-0"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </rule>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <rule score="-INFINITY" boolean-op="or" id="l2-rule-1">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <expression operation="not_defined" attribute="default_ping_set" id="l2-expression-1"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <expression attribute="default_ping_set" operation="lt" value="100" id="l2-expression-2"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </rule>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       </rsc_location>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <rsc_colocation id="c4" score="INFINITY" rsc="prmVM2" with-rsc="clnPing"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <rsc_order id="o4" score="0" first="clnPing" then="prmVM2" symmetrical="false"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++     <rsc_defaults>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       <meta_attributes id="rsc-options">
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++       </meta_attributes>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++     </rsc_defaults>
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: election_complete: Election election-0 complete
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: election_timeout_popped: Election failed: Declaring ourselves the winner
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_ELECTION_DC from election_timeout_popped() received in state S_ELECTION
Jan 15 15:38:46 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_dc_takeover: Taking over DC status for this partition
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/30, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/31, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/32, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/33, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/6, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 6 for shutdown: OK (0)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 6 for shutdown[bl460g1n6]=0: OK (0)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 6 for shutdown[bl460g1n7]=0: OK (0)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/7, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 7 for terminate: OK (0)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 7 for terminate[bl460g1n6]=(null): OK (0)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 7 for terminate[bl460g1n7]=(null): OK (0)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/8, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 8 for probe_complete: OK (0)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 8 for probe_complete[bl460g1n6]=true: OK (0)
Jan 15 15:38:46 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 8 for probe_complete[bl460g1n7]=true: OK (0)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/34, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/35, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: No such device or address (rc=-6, origin=local/crmd/36, version=0.5.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: log_cib_diff: cib:diff: Local-only Change: 0.6.1
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="5" num_updates="1"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.27.b48276b.git.el6-b48276b"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/37, version=0.6.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: No such device or address (rc=-6, origin=local/crmd/38, version=0.6.1)
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: initialize_join: Node bl460g1n7[3232261593] - join-3 phase 4 -> 0
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: initialize_join: Node bl460g1n6[3232261592] - join-3 phase 4 -> 0
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: join_make_offer: join-3: Sending offer to bl460g1n7
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n7[3232261593] - join-3 phase 0 -> 1
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: join_make_offer: join-3: Sending offer to bl460g1n6
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n6[3232261592] - join-3 phase 0 -> 1
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Jan 15 15:38:46 bl460g1n6 crmd[30795]:  warning: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: initialize_join: Node bl460g1n7[3232261593] - join-4 phase 1 -> 0
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: initialize_join: Node bl460g1n6[3232261592] - join-4 phase 1 -> 0
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: join_make_offer: join-4: Sending offer to bl460g1n7
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n7[3232261593] - join-4 phase 0 -> 1
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: join_make_offer: join-4: Sending offer to bl460g1n6
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: join_make_offer: Node bl460g1n6[3232261592] - join-4 phase 0 -> 1
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: update_dc: Set DC to bl460g1n6 (3.0.8)
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node bl460g1n7[3232261593] - join-4 phase 1 -> 2
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: log_cib_diff: cib:diff: Local-only Change: 0.7.1
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="6" num_updates="1"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/39, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/40, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/41, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/42, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/43, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/44, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node bl460g1n6[3232261592] - join-4 phase 1 -> 2
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crmd_join_phase_log: join-4: bl460g1n7=integrated
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crmd_join_phase_log: join-4: bl460g1n6=integrated
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_dc_join_finalize: join-4: Syncing our CIB to the rest of the cluster
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/45, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: finalize_join_for: Node bl460g1n7[3232261593] - join-4 phase 2 -> 3
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: finalize_join_for: Node bl460g1n6[3232261592] - join-4 phase 2 -> 3
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/46, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/47, version=0.7.1)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=30837 id=655c30c1-9783-498b-bb07-b47b8bb605de
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_ack: Node bl460g1n6[3232261592] - join-4 phase 3 -> 4
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_dc_join_ack: join-4: Updating node state to member for bl460g1n6
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='bl460g1n6']/lrm
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='bl460g1n6']/lrm: OK (rc=0, origin=local/crmd/48, version=0.7.2)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/49, version=0.7.3)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.7.3)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: crm_update_peer_join: do_dc_join_ack: Node bl460g1n7[3232261593] - join-4 phase 3 -> 4
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_dc_join_ack: join-4: Updating node state to member for bl460g1n7
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='bl460g1n7']/lrm
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='bl460g1n7']/lrm: OK (rc=0, origin=local/crmd/50, version=0.7.4)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/51, version=0.7.5)
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 15 15:38:46 bl460g1n6 crmd[30795]:     info: abort_transition_graph: do_te_invoke:151 - Triggered transition abort (complete=1) : Peer Cancelled
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/52, version=0.7.5)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/53, version=0.7.5)
Jan 15 15:38:46 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/54, version=0.7.5)
Jan 15 15:38:46 bl460g1n6 cib[30838]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Jan 15 15:38:47 bl460g1n6 cib[30838]:     info: write_cib_contents: Wrote version 0.6.0 of the CIB to disk (digest: 56cebba7f790f44312b7b5bc366984aa)
Jan 15 15:38:47 bl460g1n6 cib[30838]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.etB44w (digest: /var/lib/pacemaker/cib/cib.YvRWmN)
Jan 15 15:38:47 bl460g1n6 cib[30839]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-3.raw
Jan 15 15:38:47 bl460g1n6 cib[30839]:     info: write_cib_contents: Wrote version 0.7.0 of the CIB to disk (digest: 5a361672811793a573b8a0a00219f249)
Jan 15 15:38:47 bl460g1n6 cib[30839]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.csxy1x (digest: /var/lib/pacemaker/cib/cib.KXaXFO)
Jan 15 15:38:48 bl460g1n6 crmd[30795]:     info: crm_timer_popped: New Transition Timer (I_PE_CALC) just popped (2000ms)
Jan 15 15:38:48 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/55, version=0.7.5)
Jan 15 15:38:48 bl460g1n6 pengine[30794]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n6 is online
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n7 is online
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: native_print: prmVM2	(ocf::heartbeat:VirtualDomain):	Stopped 
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: short_print:      Stopped: [ bl460g1n6 bl460g1n7 ]
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: native_color: Resource prmVM2 cannot run anywhere
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on bl460g1n6
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:1 on bl460g1n7
Jan 15 15:38:48 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmVM2	(Stopped)
Jan 15 15:38:48 bl460g1n6 pengine[30794]:   notice: LogActions: Start   prmPing:0	(bl460g1n6)
Jan 15 15:38:48 bl460g1n6 pengine[30794]:   notice: LogActions: Start   prmPing:1	(bl460g1n7)
Jan 15 15:38:48 bl460g1n6 pengine[30794]:   notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-2.bz2
Jan 15 15:38:48 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 15 15:38:48 bl460g1n6 crmd[30795]:     info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1389767928-29) derived from /var/lib/pacemaker/pengine/pe-input-2.bz2
Jan 15 15:38:48 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 7: monitor prmVM2_monitor_0 on bl460g1n7
Jan 15 15:38:48 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 4: monitor prmVM2_monitor_0 on bl460g1n6 (local)
Jan 15 15:38:48 bl460g1n6 lrmd[30792]:     info: process_lrmd_get_rsc_info: Resource 'prmVM2' not found (0 active resources)
Jan 15 15:38:48 bl460g1n6 lrmd[30792]:     info: process_lrmd_rsc_register: Added 'prmVM2' to the rsc list (1 active resources)
Jan 15 15:38:48 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=4:2:7:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_monitor_0
Jan 15 15:38:48 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 5: monitor prmPing:0_monitor_0 on bl460g1n6 (local)
Jan 15 15:38:48 bl460g1n6 lrmd[30792]:     info: process_lrmd_get_rsc_info: Resource 'prmPing' not found (1 active resources)
Jan 15 15:38:48 bl460g1n6 lrmd[30792]:     info: process_lrmd_get_rsc_info: Resource 'prmPing:0' not found (1 active resources)
Jan 15 15:38:48 bl460g1n6 lrmd[30792]:     info: process_lrmd_rsc_register: Added 'prmPing' to the rsc list (2 active resources)
Jan 15 15:38:48 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=5:2:7:be72ea63-75a9-4de4-a591-e716f960743b op=prmPing_monitor_0
Jan 15 15:38:48 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 8: monitor prmPing:1_monitor_0 on bl460g1n7
Jan 15 15:38:49 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=bl460g1n7/crmd/13, version=0.7.6)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: services_os_action_execute: Managed ping_meta-data_0 process 30852 exited with rc=0
Jan 15 15:38:49 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmPing_monitor_0 (call=10, rc=7, cib-update=56, confirmed=true) not running
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmPing_monitor_0 (8) confirmed on bl460g1n7 (rc=0)
Jan 15 15:38:49 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/56, version=0.7.7)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmPing_monitor_0 (5) confirmed on bl460g1n6 (rc=0)
Jan 15 15:38:49 bl460g1n6 VirtualDomain(prmVM2)[30840]: DEBUG: Virtual domain vm2 is currently error: failed to get domain 'vm2' error: domain not found: no domain with matching name 'vm2'.
Jan 15 15:38:49 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=bl460g1n7/crmd/14, version=0.7.8)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_monitor_0 (7) confirmed on bl460g1n7 (rc=0)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 6: probe_complete probe_complete on bl460g1n7 - no waiting
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: te_rsc_command: Action 6 confirmed - no wait
Jan 15 15:38:49 bl460g1n6 lrmd[30792]:   notice: operation_finished: prmVM2_monitor_0:30840:stderr [ error: failed to get domain 'vm2' ]
Jan 15 15:38:49 bl460g1n6 lrmd[30792]:   notice: operation_finished: prmVM2_monitor_0:30840:stderr [ error: Domain not found: no domain with matching name 'vm2' ]
Jan 15 15:38:49 bl460g1n6 lrmd[30792]:   notice: operation_finished: prmVM2_monitor_0:30840:stderr [ error: failed to get domain 'vm2' ]
Jan 15 15:38:49 bl460g1n6 lrmd[30792]:   notice: operation_finished: prmVM2_monitor_0:30840:stderr [ error: Domain not found: no domain with matching name 'vm2' ]
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: services_os_action_execute: Managed VirtualDomain_meta-data_0 process 30891 exited with rc=0
Jan 15 15:38:49 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmVM2_monitor_0 (call=5, rc=7, cib-update=57, confirmed=true) not running
Jan 15 15:38:49 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/57, version=0.7.9)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_monitor_0 (4) confirmed on bl460g1n6 (rc=0)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on bl460g1n6 (local) - no waiting
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: te_rsc_command: Action 3 confirmed - no wait
Jan 15 15:38:49 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting probe_complete[bl460g1n6] = true (writer)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 9: start prmPing:0_start_0 on bl460g1n6 (local)
Jan 15 15:38:49 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=9:2:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmPing_start_0
Jan 15 15:38:49 bl460g1n6 lrmd[30792]:     info: log_execute: executing - rsc:prmPing action:start call_id:11
Jan 15 15:38:49 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 11: start prmPing:1_start_0 on bl460g1n7
Jan 15 15:38:50 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 9 with 1 changes for default_ping_set, id=<n/a>, set=(null)
Jan 15 15:38:50 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/9, version=0.7.10)
Jan 15 15:38:50 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=0, node=bl460g1n7, tag=nvpair, id=status-3232261593-default_ping_set, name=default_ping_set, value=100, magic=NA, cib=0.7.10) : Transient attribute: update
Jan 15 15:38:50 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=bl460g1n7/crmd/15, version=0.7.11)
Jan 15 15:38:50 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 9 for default_ping_set: OK (0)
Jan 15 15:38:50 bl460g1n6 attrd_updater[30914]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:50 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 9 for default_ping_set[bl460g1n7]=100: OK (0)
Jan 15 15:38:50 bl460g1n6 attrd_updater[30914]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:50 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmPing_start_0 (11) confirmed on bl460g1n7 (rc=0)
Jan 15 15:38:50 bl460g1n6 attrd[30793]:     info: crm_client_new: Connecting 0x1754c30 for uid=0 gid=0 pid=30914 id=fafc8c1d-9410-4d8d-bfe2-3ed1dd64d2b1
Jan 15 15:38:50 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n6] = 100 (writer)
Jan 15 15:38:50 bl460g1n6 attrd[30793]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:50 bl460g1n6 lrmd[30792]:     info: log_finished: finished - rsc:prmPing action:start call_id:11 pid:30897 exit-code:0 exec-time:1052ms queue-time:0ms
Jan 15 15:38:50 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 10 with 2 changes for default_ping_set, id=<n/a>, set=(null)
Jan 15 15:38:50 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmPing_start_0 (call=11, rc=0, cib-update=58, confirmed=true) ok
Jan 15 15:38:50 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/10, version=0.7.12)
Jan 15 15:38:50 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=0, node=bl460g1n6, tag=nvpair, id=status-3232261592-default_ping_set, name=default_ping_set, value=100, magic=NA, cib=0.7.12) : Transient attribute: update
Jan 15 15:38:50 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmPing_start_0 (9) confirmed on bl460g1n6 (rc=0)
Jan 15 15:38:50 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/58, version=0.7.13)
Jan 15 15:38:50 bl460g1n6 crmd[30795]:   notice: run_graph: Transition 2 (Complete=11, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2.bz2): Stopped
Jan 15 15:38:50 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 10 for default_ping_set: OK (0)
Jan 15 15:38:50 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 10 for default_ping_set[bl460g1n6]=100: OK (0)
Jan 15 15:38:50 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 10 for default_ping_set[bl460g1n7]=100: OK (0)
Jan 15 15:38:52 bl460g1n6 crmd[30795]:     info: crm_timer_popped: New Transition Timer (I_PE_CALC) just popped (2000ms)
Jan 15 15:38:52 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 15 15:38:52 bl460g1n6 crmd[30795]:     info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Jan 15 15:38:52 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/59, version=0.7.13)
Jan 15 15:38:52 bl460g1n6 pengine[30794]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n6 is online
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n7 is online
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: native_print: prmVM2	(ocf::heartbeat:VirtualDomain):	Stopped 
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: short_print:      Started: [ bl460g1n6 bl460g1n7 ]
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmVM2 on bl460g1n6
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on bl460g1n6
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:1 on bl460g1n7
Jan 15 15:38:52 bl460g1n6 pengine[30794]:   notice: LogActions: Start   prmVM2	(bl460g1n6)
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmPing:0	(Started bl460g1n6)
Jan 15 15:38:52 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmPing:1	(Started bl460g1n7)
Jan 15 15:38:52 bl460g1n6 pengine[30794]:   notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-3.bz2
Jan 15 15:38:52 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 15 15:38:52 bl460g1n6 crmd[30795]:     info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1389767932-38) derived from /var/lib/pacemaker/pengine/pe-input-3.bz2
Jan 15 15:38:52 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 5: start prmVM2_start_0 on bl460g1n6 (local)
Jan 15 15:38:52 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=5:3:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_start_0
Jan 15 15:38:52 bl460g1n6 lrmd[30792]:     info: log_execute: executing - rsc:prmVM2 action:start call_id:12
Jan 15 15:38:52 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 9: monitor prmPing_monitor_10000 on bl460g1n6 (local)
Jan 15 15:38:52 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=9:3:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmPing_monitor_10000
Jan 15 15:38:52 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 12: monitor prmPing_monitor_10000 on bl460g1n7
Jan 15 15:38:52 bl460g1n6 VirtualDomain(prmVM2)[30915]: DEBUG: Virtual domain vm2 is currently error: failed to get domain 'vm2' error: domain not found: no domain with matching name 'vm2'.
Jan 15 15:38:53 bl460g1n6 VirtualDomain(prmVM2)[30915]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:38:53 bl460g1n6 crm_resource[31074]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:53 bl460g1n6 crm_resource[31074]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x1a3ddf0 for uid=0 gid=0 pid=31074 id=93d5c794-ee67-457d-af2f-b69a6c57c5ba
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.7.13)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:53 bl460g1n6 crm_resource[31076]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:53 bl460g1n6 crm_resource[31076]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x1a3ddf0 for uid=0 gid=0 pid=31076 id=7c36c693-1648-466a-b5fa-14acdb1b3450
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.7.13)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/resources//*[@id="prmVM2"]/utilization//nvpair[@name="cpu"]: No such device or address (rc=-6, origin=local/crm_resource/3, version=0.7.13)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/crm_resource/4, version=0.7.13)
Jan 15 15:38:53 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=0, node=, tag=diff, id=(null), magic=NA, cib=0.8.1) : Non-status change
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: Diff: --- 0.7.13
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: Diff: +++ 0.8.1 e48690591e91d21f853e91166a9db940
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="7" num_updates="13"/>
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <utilization id="prmVM2-utilization">
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <nvpair id="prmVM2-utilization-cpu" name="cpu" value="1"/>
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </utilization>
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section resources: OK (rc=0, origin=local/crm_resource/5, version=0.8.1)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:53 bl460g1n6 cib[31077]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-4.raw
Jan 15 15:38:53 bl460g1n6 crm_resource[31083]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:53 bl460g1n6 crm_resource[31083]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31083 id=b4c9bb55-fa30-481f-8902-e87e0aae1b9c
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.8.1)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:53 bl460g1n6 cib[31077]:     info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: caa5d622b396eb492d1acba1234836ed)
Jan 15 15:38:53 bl460g1n6 crm_resource[31085]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:53 bl460g1n6 crm_resource[31085]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31085 id=545fd781-bbb9-4b4e-887e-b016a2cd6c74
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.8.1)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/resources//*[@id="prmVM2"]/utilization//nvpair[@name="hv_memory"]: No such device or address (rc=-6, origin=local/crm_resource/3, version=0.8.1)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/crm_resource/4, version=0.8.1)
Jan 15 15:38:53 bl460g1n6 cib[31077]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.56U7aR (digest: /var/lib/pacemaker/cib/cib.xY9hko)
Jan 15 15:38:53 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=0, node=, tag=diff, id=(null), magic=NA, cib=0.9.1) : Non-status change
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: log_cib_diff: cib:diff: Local-only Change: 0.9.1
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="8" num_updates="1"/>
Jan 15 15:38:53 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <nvpair id="prmVM2-utilization-hv_memory" name="hv_memory" value="2048"/>
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section resources: OK (rc=0, origin=local/crm_resource/5, version=0.9.1)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:53 bl460g1n6 lrmd[30792]:     info: log_finished: finished - rsc:prmVM2 action:start call_id:12 pid:30915 exit-code:0 exec-time:1030ms queue-time:0ms
Jan 15 15:38:53 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmVM2_start_0 (call=12, rc=0, cib-update=60, confirmed=true) ok
Jan 15 15:38:53 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_start_0 (5) confirmed on bl460g1n6 (rc=0)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/60, version=0.9.2)
Jan 15 15:38:53 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmPing_monitor_10000 (12) confirmed on bl460g1n7 (rc=0)
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=bl460g1n7/crmd/16, version=0.9.3)
Jan 15 15:38:53 bl460g1n6 attrd_updater[31088]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:53 bl460g1n6 attrd_updater[31088]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:53 bl460g1n6 attrd[30793]:     info: crm_client_new: Connecting 0x1754d60 for uid=0 gid=0 pid=31088 id=452d916e-1d5e-476c-8c74-5efe538560a7
Jan 15 15:38:53 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n6] = 100 (writer)
Jan 15 15:38:53 bl460g1n6 attrd[30793]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:53 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmPing_monitor_10000 (call=13, rc=0, cib-update=61, confirmed=false) ok
Jan 15 15:38:53 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmPing_monitor_10000 (9) confirmed on bl460g1n6 (rc=0)
Jan 15 15:38:53 bl460g1n6 crmd[30795]:   notice: run_graph: Transition 3 (Complete=3, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-3.bz2): Stopped
Jan 15 15:38:53 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/61, version=0.9.4)
Jan 15 15:38:53 bl460g1n6 cib[31089]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-5.raw
Jan 15 15:38:53 bl460g1n6 cib[31089]:     info: write_cib_contents: Wrote version 0.9.0 of the CIB to disk (digest: aca48e4d4ed7a4f0d998bcd77e84d6e9)
Jan 15 15:38:53 bl460g1n6 cib[31089]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.PQhCb1 (digest: /var/lib/pacemaker/cib/cib.hNGoGy)
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: crm_timer_popped: New Transition Timer (I_PE_CALC) just popped (2000ms)
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/62, version=0.9.4)
Jan 15 15:38:55 bl460g1n6 pengine[30794]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n6 is online
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n7 is online
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: native_print: prmVM2	(ocf::heartbeat:VirtualDomain):	Started bl460g1n6 
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: short_print:      Started: [ bl460g1n6 bl460g1n7 ]
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmVM2 on bl460g1n6
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmVM2	(Started bl460g1n6)
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmPing:0	(Started bl460g1n6)
Jan 15 15:38:55 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmPing:1	(Started bl460g1n7)
Jan 15 15:38:55 bl460g1n6 pengine[30794]:   notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-4.bz2
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1389767935-42) derived from /var/lib/pacemaker/pengine/pe-input-4.bz2
Jan 15 15:38:55 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 9: monitor prmVM2_monitor_10000 on bl460g1n6 (local)
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=9:4:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_monitor_10000
Jan 15 15:38:55 bl460g1n6 VirtualDomain(prmVM2)[31118]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:38:55 bl460g1n6 crm_resource[31151]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:55 bl460g1n6 crm_resource[31151]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19820d0 for uid=0 gid=0 pid=31151 id=628e786e-208a-4c39-bf4b-746314fde2bf
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.9.4)
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:55 bl460g1n6 crm_resource[31157]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:38:55 bl460g1n6 crm_resource[31157]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19820d0 for uid=0 gid=0 pid=31157 id=4d34e30c-838b-4fd1-b838-44e010549376
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.9.4)
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:38:55 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmVM2_monitor_10000 (call=14, rc=0, cib-update=63, confirmed=false) ok
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_monitor_10000 (9) confirmed on bl460g1n6 (rc=0)
Jan 15 15:38:55 bl460g1n6 crmd[30795]:   notice: run_graph: Transition 4 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-4.bz2): Complete
Jan 15 15:38:55 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Jan 15 15:38:55 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 15 15:38:55 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/63, version=0.9.5)
Jan 15 15:39:04 bl460g1n6 attrd_updater[31185]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:04 bl460g1n6 attrd_updater[31185]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:39:04 bl460g1n6 attrd[30793]:     info: crm_client_new: Connecting 0x1754d60 for uid=0 gid=0 pid=31185 id=022cc79f-7da2-4ebd-a6c8-88cd7f05de8d
Jan 15 15:39:04 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n6] = 100 (writer)
Jan 15 15:39:04 bl460g1n6 attrd[30793]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:05 bl460g1n6 VirtualDomain(prmVM2)[31187]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:39:05 bl460g1n6 crm_resource[31220]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:05 bl460g1n6 crm_resource[31220]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:39:05 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31220 id=15b6d7a0-a588-4a39-b2d9-199fce76df86
Jan 15 15:39:05 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.9.5)
Jan 15 15:39:05 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:05 bl460g1n6 crm_resource[31226]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:05 bl460g1n6 crm_resource[31226]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:39:05 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31226 id=e06bf24e-2bdd-42c0-863c-b4cc09fda7b1
Jan 15 15:39:05 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.9.5)
Jan 15 15:39:05 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:15 bl460g1n6 attrd_updater[31245]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:15 bl460g1n6 attrd_updater[31245]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:39:15 bl460g1n6 attrd[30793]:     info: crm_client_new: Connecting 0x1754d60 for uid=0 gid=0 pid=31245 id=bdfb287f-b935-40e7-a009-9d8a68363dbb
Jan 15 15:39:15 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n6] = 100 (writer)
Jan 15 15:39:15 bl460g1n6 attrd[30793]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:15 bl460g1n6 VirtualDomain(prmVM2)[31246]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:39:15 bl460g1n6 crm_resource[31279]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:15 bl460g1n6 crm_resource[31279]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:39:15 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31279 id=4ddf7869-0769-4285-ba4e-bbf9070ea0e7
Jan 15 15:39:15 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.9.5)
Jan 15 15:39:15 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:15 bl460g1n6 crm_resource[31285]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:15 bl460g1n6 crm_resource[31285]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:39:15 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31285 id=43add664-b741-4bf6-880a-4d52b4d263be
Jan 15 15:39:15 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_resource/2, version=0.9.5)
Jan 15 15:39:15 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31291 id=80ed5c61-5602-49d1-9840-0af08b3b3ed5
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section nodes: OK (rc=0, origin=local/cibadmin/2, version=0.9.5)
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19816b0 for uid=0 gid=0 pid=31292 id=8511aab6-7471-48ed-a5dd-f9e9b9338051
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section nodes: OK (rc=0, origin=local/crm_attribute/2, version=0.9.5)
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/nodes//node[@id='3232261592']//instance_attributes//nvpair[@name='standby']: No such device or address (rc=-6, origin=local/crm_attribute/3, version=0.9.5)
Jan 15 15:39:20 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:126 - Triggered transition abort (complete=1, node=, tag=diff, id=(null), magic=NA, cib=0.10.1) : Non-status change
Jan 15 15:39:20 bl460g1n6 cib[30790]:   notice: cib:diff: Diff: --- 0.9.5
Jan 15 15:39:20 bl460g1n6 cib[30790]:   notice: cib:diff: Diff: +++ 0.10.1 c07edf894987b1e8aae55344b6a7804a
Jan 15 15:39:20 bl460g1n6 cib[30790]:   notice: cib:diff: -- <cib admin_epoch="0" epoch="9" num_updates="5"/>
Jan 15 15:39:20 bl460g1n6 cib[30790]:   notice: cib:diff: ++         <instance_attributes id="nodes-3232261592">
Jan 15 15:39:20 bl460g1n6 cib[30790]:   notice: cib:diff: ++           <nvpair id="nodes-3232261592-standby" name="standby" value="on"/>
Jan 15 15:39:20 bl460g1n6 cib[30790]:   notice: cib:diff: ++         </instance_attributes>
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crm_attribute/4, version=0.10.1)
Jan 15 15:39:20 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:20 bl460g1n6 cib[31293]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-6.raw
Jan 15 15:39:20 bl460g1n6 cib[31293]:     info: write_cib_contents: Wrote version 0.10.0 of the CIB to disk (digest: c3a684b15ebe3cc70d3e0b780cde1564)
Jan 15 15:39:20 bl460g1n6 cib[31293]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.5ZU1nr (digest: /var/lib/pacemaker/cib/cib.1qM46a)
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: crm_timer_popped: New Transition Timer (I_PE_CALC) just popped (2000ms)
Jan 15 15:39:22 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Jan 15 15:39:22 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/64, version=0.10.1)
Jan 15 15:39:22 bl460g1n6 pengine[30794]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: unpack_status: Node bl460g1n6 is in standby-mode
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n6 is standby
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n7 is online
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: native_print: prmVM2	(ocf::heartbeat:VirtualDomain):	Started bl460g1n6 
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: short_print:      Started: [ bl460g1n6 bl460g1n7 ]
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: native_color: Resource prmPing:0 cannot run anywhere
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmVM2 on bl460g1n7
Jan 15 15:39:22 bl460g1n6 pengine[30794]:   notice: LogActions: Migrate prmVM2	(Started bl460g1n6 -> bl460g1n7)
Jan 15 15:39:22 bl460g1n6 pengine[30794]:   notice: LogActions: Stop    prmPing:0	(bl460g1n6)
Jan 15 15:39:22 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmPing:1	(Started bl460g1n7)
Jan 15 15:39:22 bl460g1n6 pengine[30794]:   notice: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-5.bz2
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1389767962-44) derived from /var/lib/pacemaker/pengine/pe-input-5.bz2
Jan 15 15:39:22 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 11: migrate_to prmVM2_migrate_to_0 on bl460g1n6 (local)
Jan 15 15:39:22 bl460g1n6 lrmd[30792]:     info: cancel_recurring_action: Cancelling operation prmVM2_monitor_10000
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=11:5:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_migrate_to_0
Jan 15 15:39:22 bl460g1n6 lrmd[30792]:     info: log_execute: executing - rsc:prmVM2 action:migrate_to call_id:16
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: process_lrm_event: LRM operation prmVM2_monitor_10000 (call=14, status=1, cib-update=0, confirmed=true) Cancelled
Jan 15 15:39:22 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 13: stop prmPing_stop_0 on bl460g1n6 (local)
Jan 15 15:39:22 bl460g1n6 lrmd[30792]:     info: cancel_recurring_action: Cancelling operation prmPing_monitor_10000
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=13:5:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmPing_stop_0
Jan 15 15:39:22 bl460g1n6 lrmd[30792]:     info: log_execute: executing - rsc:prmPing action:stop call_id:18
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: process_lrm_event: LRM operation prmPing_monitor_10000 (call=13, status=1, cib-update=0, confirmed=true) Cancelled
Jan 15 15:39:22 bl460g1n6 attrd_updater[31308]:   notice: crm_add_logfile: Additional logging available in /var/log/ha-debug
Jan 15 15:39:22 bl460g1n6 attrd_updater[31308]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=unpack_rsc_migration,unpack_rsc_migration_failure,unpack_rsc_op, formats=(null), tags=(null)
Jan 15 15:39:22 bl460g1n6 attrd[30793]:     info: crm_client_new: Connecting 0x1754d60 for uid=0 gid=0 pid=31308 id=bbb10658-f6e9-49e6-ba24-438891fb2f7e
Jan 15 15:39:22 bl460g1n6 attrd[30793]:     info: attrd_client_message: Broadcasting default_ping_set[bl460g1n6] = (null) (writer)
Jan 15 15:39:22 bl460g1n6 attrd[30793]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:22 bl460g1n6 lrmd[30792]:     info: log_finished: finished - rsc:prmPing action:stop call_id:18 pid:31296 exit-code:0 exec-time:34ms queue-time:0ms
Jan 15 15:39:22 bl460g1n6 attrd[30793]:     info: attrd_peer_update: Setting default_ping_set[bl460g1n6]: 100 -> (null) from bl460g1n6
Jan 15 15:39:22 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmPing_stop_0 (call=18, rc=0, cib-update=65, confirmed=true) ok
Jan 15 15:39:22 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/65, version=0.10.2)
Jan 15 15:39:22 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmPing_stop_0 (13) confirmed on bl460g1n6 (rc=0)
Jan 15 15:39:22 bl460g1n6 VirtualDomain(prmVM2)[31295]: DEBUG: Virtual domain vm2 is currently running.
Jan 15 15:39:22 bl460g1n6 VirtualDomain(prmVM2)[31295]: INFO: vm2: Starting live migration to bl460g1n7 (using remote hypervisor URI qemu+ssh://bl460g1n7/system ).
Jan 15 15:39:27 bl460g1n6 attrd[30793]:   notice: write_attribute: Sent update 11 with 2 changes for default_ping_set, id=<n/a>, set=(null)
Jan 15 15:39:27 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/11, version=0.10.3)
Jan 15 15:39:27 bl460g1n6 crmd[30795]:     info: abort_transition_graph: te_update_diff:188 - Triggered transition abort (complete=0, node=bl460g1n6, tag=transient_attributes, id=3232261592, magic=NA, cib=0.10.3) : Transient attribute: removal
Jan 15 15:39:27 bl460g1n6 attrd[30793]:     info: attrd_cib_callback: Update 11 for default_ping_set: OK (0)
Jan 15 15:39:27 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 11 for default_ping_set[bl460g1n6]=(null): OK (0)
Jan 15 15:39:27 bl460g1n6 attrd[30793]:   notice: attrd_cib_callback: Update 11 for default_ping_set[bl460g1n7]=100: OK (0)
Jan 15 15:39:28 bl460g1n6 VirtualDomain(prmVM2)[31295]: INFO: vm2: live migration to bl460g1n7 succeeded.
Jan 15 15:39:28 bl460g1n6 lrmd[30792]:     info: log_finished: finished - rsc:prmVM2 action:migrate_to call_id:16 pid:31295 exit-code:0 exec-time:6208ms queue-time:0ms
Jan 15 15:39:28 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmVM2_migrate_to_0 (call=16, rc=0, cib-update=66, confirmed=true) ok
Jan 15 15:39:28 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/66, version=0.10.4)
Jan 15 15:39:28 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_migrate_to_0 (11) confirmed on bl460g1n6 (rc=0)
Jan 15 15:39:28 bl460g1n6 crmd[30795]:   notice: run_graph: Transition 5 (Complete=4, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-5.bz2): Stopped
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: crm_timer_popped: New Transition Timer (I_PE_CALC) just popped (2000ms)
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Jan 15 15:39:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/67, version=0.10.4)
Jan 15 15:39:30 bl460g1n6 pengine[30794]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: unpack_status: Node bl460g1n6 is in standby-mode
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n6 is standby
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: determine_online_status: Node bl460g1n7 is online
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: native_print: prmVM2	(ocf::heartbeat:VirtualDomain):	FAILED bl460g1n6 
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: short_print:      Started: [ bl460g1n7 ]
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: short_print:      Stopped: [ bl460g1n6 ]
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: RecurringOp:  Start recurring monitor (10s) for prmVM2 on bl460g1n7
Jan 15 15:39:30 bl460g1n6 pengine[30794]:   notice: LogActions: Recover prmVM2	(Started bl460g1n6 -> bl460g1n7)
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmPing:0	(Started bl460g1n7)
Jan 15 15:39:30 bl460g1n6 pengine[30794]:     info: LogActions: Leave   prmPing:1	(Stopped)
Jan 15 15:39:30 bl460g1n6 pengine[30794]:   notice: process_pe_message: Calculated Transition 6: /var/lib/pacemaker/pengine/pe-input-6.bz2
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1389767970-47) derived from /var/lib/pacemaker/pengine/pe-input-6.bz2
Jan 15 15:39:30 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 7: stop prmVM2_stop_0 on bl460g1n6 (local)
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: do_lrm_rsc_op: Performing key=7:6:0:be72ea63-75a9-4de4-a591-e716f960743b op=prmVM2_stop_0
Jan 15 15:39:30 bl460g1n6 lrmd[30792]:     info: log_execute: executing - rsc:prmVM2 action:stop call_id:19
Jan 15 15:39:30 bl460g1n6 VirtualDomain(prmVM2)[31422]: DEBUG: Virtual domain vm2 is currently error: failed to get domain 'vm2'
error: domain not found: no domain with matching name 'vm2'.
Jan 15 15:39:30 bl460g1n6 VirtualDomain(prmVM2)[31422]: INFO: Domain vm2 already stopped.
Jan 15 15:39:30 bl460g1n6 lrmd[30792]:     info: log_finished: finished - rsc:prmVM2 action:stop call_id:19 pid:31422 exit-code:0 exec-time:89ms queue-time:0ms
Jan 15 15:39:30 bl460g1n6 crmd[30795]:   notice: process_lrm_event: LRM operation prmVM2_stop_0 (call=19, rc=0, cib-update=68, confirmed=true) ok
Jan 15 15:39:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/68, version=0.10.5)
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_stop_0 (7) confirmed on bl460g1n6 (rc=0)
Jan 15 15:39:30 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 8: start prmVM2_start_0 on bl460g1n7
Jan 15 15:39:30 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=bl460g1n7/crmd/17, version=0.10.6)
Jan 15 15:39:30 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_start_0 (8) confirmed on bl460g1n7 (rc=0)
Jan 15 15:39:30 bl460g1n6 crmd[30795]:   notice: te_rsc_command: Initiating action 9: monitor prmVM2_monitor_10000 on bl460g1n7
Jan 15 15:39:31 bl460g1n6 crmd[30795]:     info: match_graph_event: Action prmVM2_monitor_10000 (9) confirmed on bl460g1n7 (rc=0)
Jan 15 15:39:31 bl460g1n6 crmd[30795]:   notice: run_graph: Transition 6 (Complete=4, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-6.bz2): Complete
Jan 15 15:39:31 bl460g1n6 crmd[30795]:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Jan 15 15:39:31 bl460g1n6 crmd[30795]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 15 15:39:31 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=bl460g1n7/crmd/18, version=0.10.7)
Jan 15 15:39:43 bl460g1n6 root: Mark:pcmk:1389767983
Jan 15 15:39:49 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19820d0 for uid=0 gid=0 pid=32761 id=fc4fbdb1-6e26-416b-9964-489c62187164
Jan 15 15:39:49 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/2, version=0.10.7)
Jan 15 15:39:49 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:49 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19820d0 for uid=0 gid=0 pid=32763 id=d398498e-0ee1-49c6-8dd5-1c078eed314b
Jan 15 15:39:49 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.10.7)
Jan 15 15:39:49 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
Jan 15 15:39:50 bl460g1n6 cib[30790]:     info: crm_client_new: Connecting 0x19820d0 for uid=0 gid=0 pid=380 id=0473a140-22de-49b6-b2c2-7e7c16fff018
Jan 15 15:39:50 bl460g1n6 cib[30790]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/2, version=0.10.7)
Jan 15 15:39:50 bl460g1n6 cib[30790]:     info: crm_client_destroy: Destroying 0 events
