Oct 24 08:07:33 soalaba63 abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-auxiliary
Oct 24 08:07:33 soalaba63 abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta-2
Oct 24 08:07:33 soalaba63 abrtd: Can't load public GPG key /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-legacy-beta
Oct 24 08:07:33 soalaba63 abrtd: Init complete, entering main loop
Oct 24 08:07:33 soalaba63 corosync[3158]: [MAIN ] Corosync Cluster Engine ('1.2.3'): started and ready to provide service.
Oct 24 08:07:33 soalaba63 corosync[3158]: [MAIN ] Corosync built-in features: nss rdma
Oct 24 08:07:33 soalaba63 corosync[3158]: [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Oct 24 08:07:33 soalaba63 corosync[3158]: [TOTEM ] Initializing transport (UDP/IP).
Oct 24 08:07:33 soalaba63 corosync[3158]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Oct 24 08:07:33 soalaba63 corosync[3158]: [TOTEM ] The network interface [10.10.10.2] is now up.
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: process_ais_conf: Reading configure
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: config_find_init: Local handle: 9213452461992312834 for logging
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: config_find_next: Processing additional logging options...
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Found 'off' for option: debug
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Found 'yes' for option: to_logfile
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: No default for option: logfile
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] ERROR: process_ais_conf: Logging to a file requested but no log file specified
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Found 'yes' for option: to_syslog
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Defaulting to 'daemon' for option: syslog_facility
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: config_find_init: Local handle: 2013064636357672963 for quorum
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: config_find_next: No additional configuration supplied for: quorum
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: No default for option: provider
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: config_find_init: Local handle: 4730966301143465988 for service
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: config_find_next: Processing additional service options...
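
The abrtd GPG-key complaints are unrelated Red Hat packaging noise and can be ignored. The one real problem in this block is the pcmk ERROR: corosync.conf sets to_logfile: yes but never names a logfile, so file logging is silently dropped. A minimal logging stanza that would satisfy it, sketched with an assumed path (any directory writable by corosync works):

    logging {
            to_logfile: yes
            logfile: /var/log/cluster/corosync.log
            to_syslog: yes
            syslog_facility: daemon
            debug: off
    }
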
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Found '0' for option: ver
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Defaulting to 'no' for option: use_logd
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_startup: CRM: Initialized
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] Logging: Initialized pcmk_startup
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_startup: Service: 10
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_startup: Local hostname: soalaba63
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_update_nodeid: Local node id: 34212362
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: update_member: Creating entry for node 34212362 born on 0
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: update_member: 0x21a11a0 Node 34212362 now known as soalaba63 (was: (null))
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: update_member: Node soalaba63 now has 1 quorum votes (was 0)
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: update_member: Node 34212362/soalaba63 is now: member
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: spawn_child: Forked child 3170 for process stonith-ng
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: spawn_child: Forked child 3171 for process cib
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: spawn_child: Forked child 3172 for process lrmd
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: spawn_child: Forked child 3173 for process attrd
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: spawn_child: Forked child 3174 for process pengine
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: spawn_child: Forked child 3175 for process crmd
Oct 24 08:07:33 soalaba63 corosync[3158]: [SERV ] Service engine loaded: Pacemaker Cluster Manager 1.1.5
Oct 24 08:07:33 soalaba63 corosync[3158]: [SERV ] Service engine loaded: corosync extended virtual synchrony service
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: Invoked: /usr/lib64/heartbeat/stonithd
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: Invoked: /usr/lib64/heartbeat/attrd
Oct 24 08:07:33 soalaba63 corosync[3158]: [SERV ] Service engine loaded: corosync configuration service
Oct 24 08:07:33 soalaba63 corosync[3158]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
Oct 24 08:07:33 soalaba63 corosync[3158]: [SERV ] Service engine loaded: corosync cluster config database access v1.01
Oct 24 08:07:33 soalaba63 corosync[3158]: [SERV ] Service engine loaded: corosync profile loading service
Oct 24 08:07:33 soalaba63 crmd: [3175]: info: Invoked: /usr/lib64/heartbeat/crmd
Oct 24 08:07:33 soalaba63 pengine: [3174]: info: Invoked: /usr/lib64/heartbeat/pengine
Oct 24 08:07:33 soalaba63 corosync[3158]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Oct 24 08:07:33 soalaba63 corosync[3158]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
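
The plugin finds ver '0' for the pacemaker service, which is why corosync itself forks the six Pacemaker daemons (the spawn_child lines above). The corresponding corosync.conf stanza presumably looks like the classic plugin form:

    service {
            # Load the Pacemaker Cluster Resource Manager
            name: pacemaker
            ver: 0
    }

With ver: 1, corosync would only load the plugin and the daemons would instead be started separately by the pacemaker init script.
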
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 844: memb=0, new=0, lost=0
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 844: memb=1, new=1, lost=0
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_peer_update: NEW: soalaba63 34212362
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_peer_update: MEMB: soalaba63 34212362
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: update_member: Node soalaba63 now has process list: 00000000000000000000000000111312 (1118994)
Oct 24 08:07:33 soalaba63 corosync[3158]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Oct 24 08:07:33 soalaba63 corosync[3158]: [CPG ] downlist received left_list: 0
Oct 24 08:07:33 soalaba63 corosync[3158]: [CPG ] chosen downlist from node r(0) ip(10.10.10.2)
Oct 24 08:07:33 soalaba63 corosync[3158]: [MAIN ] Completed service synchronization, ready to provide service.
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 24 08:07:33 soalaba63 crmd: [3175]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Oct 24 08:07:33 soalaba63 crmd: [3175]: info: main: CRM Hg Version: 01e86afaaa6d4a8c4836f68df80ababd6ca3902f
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: main: Starting up
Oct 24 08:07:33 soalaba63 cib: [3171]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: get_cluster_type: Cluster type is: 'openais'.
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: get_cluster_type: Cluster type is: 'openais'.
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Oct 24 08:07:33 soalaba63 cib: [3171]: info: G_main_add_TriggerHandler: Added signal manual handler
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Oct 24 08:07:33 soalaba63 cib: [3171]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Oct 24 08:07:33 soalaba63 crmd: [3175]: info: crmd_init: Starting crmd
Oct 24 08:07:33 soalaba63 crmd: [3175]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: init_ais_connection_classic: AIS connection established
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: init_ais_connection_classic: AIS connection established
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_ipc: Recorded connection 0x21ab090 for attrd/3173
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: get_ais_nodeid: Server details: id=34212362 uname=soalaba63 cname=pcmk
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: crm_new_peer: Node soalaba63 now has id: 34212362
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: crm_new_peer: Node 34212362 is now known as soalaba63
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: main: Cluster connection active
Oct 24 08:07:33 soalaba63 cib: [3171]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_ipc: Recorded connection 0x21af3f0 for stonith-ng/3170
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: get_ais_nodeid: Server details: id=34212362 uname=soalaba63 cname=pcmk
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: crm_new_peer: Node soalaba63 now has id: 34212362
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: crm_new_peer: Node 34212362 is now known as soalaba63
Oct 24 08:07:33 soalaba63 cib: [3171]: info: validate_with_relaxng: Creating RNG parser context
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: main: Accepting attribute updates
Oct 24 08:07:33 soalaba63 attrd: [3173]: info: main: Starting mainloop...
Oct 24 08:07:33 soalaba63 stonith-ng: [3170]: info: main: Starting stonith-ng mainloop
Oct 24 08:07:33 soalaba63 lrmd: [3172]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Oct 24 08:07:33 soalaba63 lrmd: [3172]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Oct 24 08:07:33 soalaba63 lrmd: [3172]: info: enabling coredumps
Oct 24 08:07:33 soalaba63 lrmd: [3172]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Oct 24 08:07:33 soalaba63 lrmd: [3172]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Oct 24 08:07:33 soalaba63 lrmd: [3172]: info: Started.
Oct 24 08:07:33 soalaba63 cib: [3171]: info: startCib: CIB Initialization completed successfully
Oct 24 08:07:33 soalaba63 cib: [3171]: info: get_cluster_type: Cluster type is: 'openais'.
Oct 24 08:07:33 soalaba63 cib: [3171]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Oct 24 08:07:33 soalaba63 cib: [3171]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Oct 24 08:07:33 soalaba63 cib: [3171]: info: init_ais_connection_classic: AIS connection established
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_ipc: Recorded connection 0x21b3bc0 for cib/3171
Oct 24 08:07:33 soalaba63 corosync[3158]: [pcmk ] info: pcmk_ipc: Sending membership update 844 to cib
Oct 24 08:07:33 soalaba63 cib: [3171]: info: get_ais_nodeid: Server details: id=34212362 uname=soalaba63 cname=pcmk
Oct 24 08:07:33 soalaba63 cib: [3171]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Oct 24 08:07:33 soalaba63 cib: [3171]: info: crm_new_peer: Node soalaba63 now has id: 34212362
Oct 24 08:07:33 soalaba63 cib: [3171]: info: crm_new_peer: Node 34212362 is now known as soalaba63
Oct 24 08:07:33 soalaba63 cib: [3171]: info: cib_init: Starting cib mainloop
Oct 24 08:07:33 soalaba63 cib: [3171]: info: ais_dispatch_message: Membership 844: quorum still lost
Oct 24 08:07:33 soalaba63 cib: [3171]: info: crm_update_peer: Node soalaba63: id=34212362 state=member (new) addr=r(0) ip(10.10.10.2) (new) votes=1 (new) born=0 seen=844 proc=00000000000000000000000000111312 (new)
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: do_cib_control: CIB connection established
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: get_cluster_type: Cluster type is: 'openais'.
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: init_ais_connection_classic: AIS connection established
Oct 24 08:07:34 soalaba63 corosync[3158]: [pcmk ] info: pcmk_ipc: Recorded connection 0x21b86c0 for crmd/3175
Oct 24 08:07:34 soalaba63 corosync[3158]: [pcmk ] info: pcmk_ipc: Sending membership update 844 to crmd
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: get_ais_nodeid: Server details: id=34212362 uname=soalaba63 cname=pcmk
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: crm_new_peer: Node soalaba63 now has id: 34212362
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: crm_new_peer: Node 34212362 is now known as soalaba63
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: ais_status_callback: status: soalaba63 is now unknown
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: do_ha_control: Connected to the cluster
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: do_started: Delaying start, no membership data (0000000000100000)
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: crmd_init: Starting crmd's mainloop
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: config_query_callback: Checking for expired actions every 900000ms
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: config_query_callback: Sending expected-votes=2 to corosync
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: ais_dispatch_message: Membership 844: quorum still lost
Oct 24 08:07:34 soalaba63 crmd: [3175]: notice: crmd_peer_update: Status update: Client soalaba63/crmd now has status [online] (DC=)
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: ais_status_callback: status: soalaba63 is now member (was unknown)
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: crm_update_peer: Node soalaba63: id=34212362 state=member (new) addr=r(0) ip(10.10.10.2) (new) votes=1 (new) born=0 seen=844 proc=00000000000000000000000000111312 (new)
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: do_started: The local CRM is operational
Oct 24 08:07:34 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Oct 24 08:07:35 soalaba63 crmd: [3175]: info: ais_dispatch_message: Membership 844: quorum still lost
Oct 24 08:07:35 soalaba63 crmd: [3175]: info: te_connect_stonith: Attempting connection to fencing daemon...
Oct 24 08:07:36 soalaba63 crmd: [3175]: info: te_connect_stonith: Connected
Oct 24 08:07:38 soalaba63 attrd: [3173]: info: cib_connect: Connected to the CIB after 1 signon attempts
Oct 24 08:07:38 soalaba63 attrd: [3173]: info: cib_connect: Sending full refresh
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 852: memb=1, new=0, lost=0
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: pcmk_peer_update: memb: soalaba63 34212362
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 852: memb=2, new=1, lost=0
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: update_member: Creating entry for node 17435146 born on 852
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: update_member: Node 17435146/unknown is now: member
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: pcmk_peer_update: NEW: .pending. 17435146
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: pcmk_peer_update: MEMB: .pending. 17435146
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: pcmk_peer_update: MEMB: soalaba63 34212362
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: send_member_notification: Sending membership update 852 to 2 children
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: update_member: 0x21a11a0 Node 34212362 ((null)) born on: 844
Oct 24 08:07:59 soalaba63 crmd: [3175]: info: ais_dispatch_message: Membership 852: quorum still lost
Oct 24 08:07:59 soalaba63 cib: [3171]: info: ais_dispatch_message: Membership 852: quorum still lost
Oct 24 08:07:59 soalaba63 crmd: [3175]: info: crm_new_peer: Node now has id: 17435146
Oct 24 08:07:59 soalaba63 cib: [3171]: info: crm_new_peer: Node now has id: 17435146
Oct 24 08:07:59 soalaba63 cib: [3171]: info: crm_update_peer: Node (null): id=17435146 state=member (new) addr=r(0) ip(10.10.10.1) votes=0 born=0 seen=852 proc=00000000000000000000000000000000
Oct 24 08:07:59 soalaba63 crmd: [3175]: info: crm_update_peer: Node (null): id=17435146 state=member (new) addr=r(0) ip(10.10.10.1) votes=0 born=0 seen=852 proc=00000000000000000000000000000000
Oct 24 08:07:59 soalaba63 corosync[3158]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
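
At this point the cluster has two expected votes but only one member, so quorum is reported lost; the second node (id 17435146, shown as '.pending.' until its name is learned) has just joined on ring 852. A two-node cluster can never retain quorum after losing a node, which is why the configuration here (confirmed by the "On loss of CCM Quorum: Ignore" pengine lines further down) presumably carries the usual two-node setting, sketched in crm shell syntax:

    crm configure property no-quorum-policy="ignore"
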
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: update_member: 0x21bd290 Node 17435146 (soalaba56) born on: 852
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: update_member: 0x21bd290 Node 17435146 now known as soalaba56 (was: (null))
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: update_member: Node soalaba56 now has process list: 00000000000000000000000000111312 (1118994)
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: update_member: Node soalaba56 now has 1 quorum votes (was 0)
Oct 24 08:07:59 soalaba63 corosync[3158]: [pcmk ] info: send_member_notification: Sending membership update 852 to 2 children
Oct 24 08:07:59 soalaba63 cib: [3171]: notice: ais_dispatch_message: Membership 852: quorum acquired
Oct 24 08:07:59 soalaba63 crmd: [3175]: notice: ais_dispatch_message: Membership 852: quorum acquired
Oct 24 08:07:59 soalaba63 cib: [3171]: info: crm_get_peer: Node 17435146 is now known as soalaba56
Oct 24 08:07:59 soalaba63 crmd: [3175]: info: crm_get_peer: Node 17435146 is now known as soalaba56
Oct 24 08:07:59 soalaba63 cib: [3171]: info: crm_update_peer: Node soalaba56: id=17435146 state=member addr=r(0) ip(10.10.10.1) votes=1 (new) born=852 seen=852 proc=00000000000000000000000000111312 (new)
Oct 24 08:07:59 soalaba63 crmd: [3175]: info: ais_status_callback: status: soalaba56 is now member
Oct 24 08:07:59 soalaba63 crmd: [3175]: notice: crmd_peer_update: Status update: Client soalaba56/crmd now has status [online] (DC=)
Oct 24 08:07:59 soalaba63 crmd: [3175]: info: crm_update_peer: Node soalaba56: id=17435146 state=member addr=r(0) ip(10.10.10.1) votes=1 (new) born=852 seen=852 proc=00000000000000000000000000111312 (new)
Oct 24 08:07:59 soalaba63 corosync[3158]: [CPG ] downlist received left_list: 0
Oct 24 08:07:59 soalaba63 corosync[3158]: [CPG ] downlist received left_list: 0
Oct 24 08:07:59 soalaba63 corosync[3158]: [CPG ] chosen downlist from node r(0) ip(10.10.10.1)
Oct 24 08:07:59 soalaba63 corosync[3158]: [MAIN ] Completed service synchronization, ready to provide service.
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped! (60000ms)
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_te_control: Registering TE UUID: 0fbc0bc0-017f-4734-b555-eb0d6dd8568d
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: cib_client_add_notify_callback: Callback already present
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: set_graph_functions: Setting custom graph functions
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_dc_takeover: Taking over DC status for this partition
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_readwrite: We are now in R/W mode
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/5, version=0.20.1): ok (rc=0)
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/6, version=0.20.2): ok (rc=0)
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/8, version=0.20.3): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: join_make_offer: Making join offers based on membership 852
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.20.4): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: ais_dispatch_message: Membership 852: quorum retained
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: crmd_ais_dispatch: Setting expected votes to 2
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: config_query_callback: Checking for expired actions every 900000ms
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: config_query_callback: Sending expected-votes=2 to corosync
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/13, version=0.20.5): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_dc: Set DC to soalaba63 (3.0.5)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: ais_dispatch_message: Membership 852: quorum retained
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: crmd_ais_dispatch: Setting expected votes to 2
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/16, version=0.20.6): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
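
The Election Trigger firing after 60000ms is normal on a fresh start: no existing DC announced itself within the timeout (presumably the dc-deadtime cluster property), so soalaba63 held an election, won it, and took over as DC. Which node currently holds the role can be checked with a one-shot status run:

    crm_mon -1 | grep "Current DC"
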
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_dc_join_finalize: join-1: Syncing the CIB from soalaba63 to the rest of the cluster
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/17, version=0.20.6): ok (rc=0)
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/18, version=0.20.7): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_attrd: Connecting to attrd...
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: find_hash_entry: Creating hash entry for terminate
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: find_hash_entry: Creating hash entry for shutdown
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/19, version=0.20.8): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_dc_join_ack: join-1: Updating node state to member for soalaba63
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_dc_join_ack: join-1: Updating node state to member for soalaba56
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='soalaba63']/transient_attributes (origin=local/crmd/20, version=0.20.9): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: erase_xpath_callback: Deletion of "//node_state[@uname='soalaba63']/transient_attributes": ok (rc=0)
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='soalaba63']/lrm (origin=local/crmd/21, version=0.20.10): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: erase_xpath_callback: Deletion of "//node_state[@uname='soalaba63']/lrm": ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: crm_update_quorum: Updating quorum status to true (call=27)
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown ()
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: do_te_invoke:173 - Triggered transition abort (complete=1) : Peer Cancelled
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_pe_invoke: Query 28: Requesting the current CIB: S_POLICY_ENGINE
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='soalaba56']/lrm (origin=local/crmd/23, version=0.20.12): ok (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: erase_xpath_callback: Deletion of "//node_state[@uname='soalaba56']/lrm": ok (rc=0)
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/25, version=0.20.14): ok (rc=0)
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/27, version=0.20.16): ok (rc=0)
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate ()
Oct 24 08:08:35 soalaba63 cib: [3171]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='soalaba56']/transient_attributes (origin=soalaba56/crmd/7, version=0.20.17): ok (rc=0)
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: crm_new_peer: Node soalaba56 now has id: 17435146
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: crm_new_peer: Node 17435146 is now known as soalaba56
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_pe_invoke_callback: Invoking the PE: query=28, ref=pe_calc-dc-1319458115-11, seq=852, quorate=1
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: group_print: Resource Group: HAService
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: FloatingIP#011(ocf::heartbeat:IPaddr2):#011Stopped
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: acestatus#011(lsb:acestatus):#011Stopped
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: clone_print: Clone Set: pingdclone [pingd]
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: short_print: Stopped: [ pingd:0 pingd:1 ]
Oct 24 08:08:35 soalaba63 pengine: [3174]: ERROR: RecurringOp: Invalid recurring action acestatus-start-30 wth name: 'start'
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: RecurringOp: Start recurring monitor (15s) for pingd:0 on soalaba63
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: RecurringOp: Start recurring monitor (15s) for pingd:1 on soalaba56
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Start FloatingIP#011(soalaba56)
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Start acestatus#011(soalaba56)
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Start pingd:0#011(soalaba63)
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Start pingd:1#011(soalaba56)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: unpack_graph: Unpacked transition 0: 19 actions in 19 synapses
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1319458115-11) derived from /var/lib/pengine/pe-input-207.bz2
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 8: monitor FloatingIP_monitor_0 on soalaba63 (local)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_lrm_rsc_op: Performing key=8:0:7:0fbc0bc0-017f-4734-b555-eb0d6dd8568d op=FloatingIP_monitor_0 )
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: rsc:FloatingIP:2: probe
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 4: monitor FloatingIP_monitor_0 on soalaba56
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 9: monitor acestatus_monitor_0 on soalaba63 (local)
Oct 24 08:08:35 soalaba63 lrmd: [3172]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_lrm_rsc_op: Performing key=9:0:7:0fbc0bc0-017f-4734-b555-eb0d6dd8568d op=acestatus_monitor_0 )
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: rsc:acestatus:3: probe
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 5: monitor acestatus_monitor_0 on soalaba56
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 10: monitor pingd:0_monitor_0 on soalaba63 (local)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_lrm_rsc_op: Performing key=10:0:7:0fbc0bc0-017f-4734-b555-eb0d6dd8568d op=pingd:0_monitor_0 )
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: rsc:pingd:0:4: probe
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 6: monitor pingd:1_monitor_0 on soalaba56
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_pseudo_action: Pseudo action 21 fired and confirmed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: process_lrm_event: LRM operation pingd:0_monitor_0 (call=4, rc=7, cib-update=29, confirmed=true) not running
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action pingd:0_monitor_0 (10) confirmed on soalaba63 (rc=0)
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: RA output: (FloatingIP:probe:stderr) eth0:0: warning: name may be invalid
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action pingd:1_monitor_0 (6) confirmed on soalaba56 (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: process_lrm_event: LRM operation FloatingIP_monitor_0 (call=2, rc=0, cib-update=30, confirmed=true) ok
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: status_from_rc: Action 8 (FloatingIP_monitor_0) on soalaba63 failed (target: 7 vs. rc: 0): Error
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: match_graph_event:265 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=FloatingIP_monitor_0, magic=0:0;8:0:7:0fbc0bc0-017f-4734-b555-eb0d6dd8568d, cib=0.20.20) : Event failed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_abort_priority: Abort action done superceeded by restart
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action FloatingIP_monitor_0 (8) confirmed on soalaba63 (rc=4)
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: status_from_rc: Action 4 (FloatingIP_monitor_0) on soalaba56 failed (target: 7 vs. rc: 0): Error
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: match_graph_event:265 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=FloatingIP_monitor_0, magic=0:0;4:0:7:0fbc0bc0-017f-4734-b555-eb0d6dd8568d, cib=0.20.21) : Event failed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action FloatingIP_monitor_0 (4) confirmed on soalaba56 (rc=4)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: process_lrm_event: LRM operation acestatus_monitor_0 (call=3, rc=0, cib-update=31, confirmed=true) ok
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: status_from_rc: Action 9 (acestatus_monitor_0) on soalaba63 failed (target: 7 vs. rc: 0): Error
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: match_graph_event:265 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=acestatus_monitor_0, magic=0:0;9:0:7:0fbc0bc0-017f-4734-b555-eb0d6dd8568d, cib=0.20.22) : Event failed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action acestatus_monitor_0 (9) confirmed on soalaba63 (rc=4)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 7: probe_complete probe_complete on soalaba63 (local) - no waiting
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: find_hash_entry: Creating hash entry for probe_complete
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_perform_update: Sent update 10: probe_complete=true
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: status_from_rc: Action 5 (acestatus_monitor_0) on soalaba56 failed (target: 7 vs. rc: 0): Error
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: match_graph_event:265 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=acestatus_monitor_0, magic=0:0;5:0:7:0fbc0bc0-017f-4734-b555-eb0d6dd8568d, cib=0.20.24) : Event failed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action acestatus_monitor_0 (5) confirmed on soalaba56 (rc=4)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on soalaba56 - no waiting
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: run_graph: ====================================================
Oct 24 08:08:35 soalaba63 crmd: [3175]: notice: run_graph: Transition 0 (Complete=10, Pending=0, Fired=0, Skipped=8, Incomplete=1, Source=/var/lib/pengine/pe-input-207.bz2): Stopped
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_graph_trigger: Transition 0 is now complete
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
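
Transition 0 exposes two distinct problems. First, the pengine ERROR "Invalid recurring action acestatus-start-30 wth name: 'start'" (the "wth" typo is the daemon's own) means the acestatus primitive defines a start operation with a 30s interval; only monitor operations may recur. Assuming the crm shell, the fix is along these lines (timeout values are placeholders):

    # inspect the current operation definitions
    crm configure show acestatus
    # then edit the primitive: change  op start interval="30"
    # into a non-recurring start plus a recurring monitor, e.g.
    #   op start interval="0" timeout="60" \
    #   op monitor interval="30" timeout="30"
    crm configure edit acestatus

Second, every probe (the *_monitor_0 actions) for FloatingIP and acestatus came back rc 0 ("running") where the cluster expected rc 7 ("not running"), on both nodes at once; that is what aborts the transition and produces the "active on 2 nodes" errors that follow.
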
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_pe_invoke: Query 32: Requesting the current CIB: S_POLICY_ENGINE
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_pe_invoke_callback: Invoking the PE: query=32, ref=pe_calc-dc-1319458115-20, seq=852, quorate=1
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation FloatingIP_monitor_0 found resource FloatingIP active on soalaba63
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation acestatus_monitor_0 found resource acestatus active on soalaba63
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation FloatingIP_monitor_0 found resource FloatingIP active on soalaba56
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation acestatus_monitor_0 found resource acestatus active on soalaba56
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: group_print: Resource Group: HAService
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: FloatingIP#011(ocf::heartbeat:IPaddr2) Started
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: #0111 : soalaba63
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: #0112 : soalaba56
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: acestatus#011(lsb:acestatus) Started
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: #0111 : soalaba63
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: native_print: #0112 : soalaba56
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: clone_print: Clone Set: pingdclone [pingd]
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: short_print: Stopped: [ pingd:0 pingd:1 ]
Oct 24 08:08:35 soalaba63 pengine: [3174]: ERROR: native_create_actions: Resource FloatingIP (ocf::IPaddr2) is active on 2 nodes attempting recovery
Oct 24 08:08:35 soalaba63 pengine: [3174]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Oct 24 08:08:35 soalaba63 pengine: [3174]: ERROR: native_create_actions: Resource acestatus (lsb::acestatus) is active on 2 nodes attempting recovery
Oct 24 08:08:35 soalaba63 pengine: [3174]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
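
"Resource ... is active on 2 nodes" is the direct consequence of those probe results. For FloatingIP it suggests 135.20.245.155 was already configured on both nodes outside cluster control; for the LSB resource the usual culprit is an init script whose status action is not LSB-compliant and returns 0 even when the service is stopped. The latter is easy to verify by hand, since a compliant script must return 3 for a stopped service:

    /etc/init.d/acestatus stop
    /etc/init.d/acestatus status; echo "rc=$?"    # expect 3, not 0
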
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: RecurringOp: Start recurring monitor (15s) for pingd:0 on soalaba56
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: RecurringOp: Start recurring monitor (15s) for pingd:1 on soalaba63
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Restart FloatingIP#011(Started soalaba63)
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Restart acestatus#011(Started soalaba63)
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Start pingd:0#011(soalaba56)
Oct 24 08:08:35 soalaba63 pengine: [3174]: notice: LogActions: Start pingd:1#011(soalaba63)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: unpack_graph: Unpacked transition 1: 18 actions in 18 synapses
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1319458115-20) derived from /var/lib/pengine/pe-error-39.bz2
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 9: stop acestatus_stop_0 on soalaba56
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 8: stop acestatus_stop_0 on soalaba63 (local)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_lrm_rsc_op: Performing key=8:1:0:0fbc0bc0-017f-4734-b555-eb0d6dd8568d op=acestatus_stop_0 )
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: rsc:acestatus:5: stop
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_pseudo_action: Pseudo action 19 fired and confirmed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on soalaba56 - no waiting
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 15: start pingd:0_start_0 on soalaba56
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 17: start pingd:1_start_0 on soalaba63 (local)
Oct 24 08:08:35 soalaba63 lrmd: [4845]: WARN: For LSB init script, no additional parameters are needed.
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_lrm_rsc_op: Performing key=17:1:0:0fbc0bc0-017f-4734-b555-eb0d6dd8568d op=pingd:1_start_0 )
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: rsc:pingd:1:6: start
Oct 24 08:08:35 soalaba63 pengine: [3174]: ERROR: process_pe_message: Transition 1: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-39.bz2
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: RA output: (acestatus:stop:stdout) Already stopped...
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: process_lrm_event: LRM operation acestatus_stop_0 (call=5, rc=0, cib-update=33, confirmed=true) ok
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action acestatus_stop_0 (8) confirmed on soalaba63 (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action acestatus_stop_0 (9) confirmed on soalaba56 (rc=0)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 6: stop FloatingIP_stop_0 on soalaba56
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 5: stop FloatingIP_stop_0 on soalaba63 (local)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: do_lrm_rsc_op: Performing key=5:1:0:0fbc0bc0-017f-4734-b555-eb0d6dd8568d op=FloatingIP_stop_0 )
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: rsc:FloatingIP:7: stop
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: RA output: (FloatingIP:stop:stderr) eth0:0: warning: name may be invalid
Oct 24 08:08:35 soalaba63 IPaddr2[4865]: INFO: IP status = ok, IP_CIP=
Oct 24 08:08:35 soalaba63 IPaddr2[4865]: INFO: ip -f inet addr delete 135.20.245.155/32 dev eth0
Oct 24 08:08:35 soalaba63 lrmd: [3172]: info: RA output: (FloatingIP:stop:stderr) RTNETLINK answers: Cannot assign requested address
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: status_from_rc: Action 6 (FloatingIP_stop_0) on soalaba56 failed (target: 0 vs. rc: 1): Error
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: update_failcount: Updating failcount for FloatingIP on soalaba56 after failed stop: rc=1 (update=INFINITY, time=1319458115)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: match_graph_event:265 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=FloatingIP_stop_0, magic=0:1;6:1:0:0fbc0bc0-017f-4734-b555-eb0d6dd8568d, cib=0.20.29) : Event failed
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_abort_priority: Abort action done superceeded by restart
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action FloatingIP_stop_0 (6) confirmed on soalaba56 (rc=4)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: process_lrm_event: LRM operation FloatingIP_stop_0 (call=7, rc=1, cib-update=34, confirmed=true) unknown error
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: status_from_rc: Action 5 (FloatingIP_stop_0) on soalaba63 failed (target: 0 vs. rc: 1): Error
Oct 24 08:08:35 soalaba63 crmd: [3175]: WARN: update_failcount: Updating failcount for FloatingIP on soalaba63 after failed stop: rc=1 (update=INFINITY, time=1319458115)
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: find_hash_entry: Creating hash entry for fail-count-FloatingIP
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: match_graph_event:265 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=FloatingIP_stop_0, magic=0:1;5:1:0:0fbc0bc0-017f-4734-b555-eb0d6dd8568d, cib=0.20.30) : Event failed
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-FloatingIP (INFINITY)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: match_graph_event: Action FloatingIP_stop_0 (5) confirmed on soalaba63 (rc=4)
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_perform_update: Sent update 15: fail-count-FloatingIP=INFINITY
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: find_hash_entry: Creating hash entry for last-failure-FloatingIP
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-FloatingIP (1319458115)
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=0, tag=nvpair, id=status-soalaba63-fail-count-FloatingIP, magic=NA, cib=0.20.31) : Transient attribute: update
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_abort_priority: Abort priority upgraded from 1 to 1000000
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: update_abort_priority: 'Event failed' abort superceeded
Oct 24 08:08:35 soalaba63 attrd: [3173]: info: attrd_perform_update: Sent update 18: last-failure-FloatingIP=1319458115
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=0, tag=nvpair, id=status-soalaba63-last-failure-FloatingIP, magic=NA, cib=0.20.32) : Transient attribute: update
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=0, tag=nvpair, id=status-soalaba56-fail-count-FloatingIP, magic=NA, cib=0.20.33) : Transient attribute: update
Oct 24 08:08:35 soalaba63 crmd: [3175]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=0, tag=nvpair, id=status-soalaba56-last-failure-FloatingIP, magic=NA, cib=0.20.34) : Transient attribute: update
Oct 24 08:08:39 soalaba63 attrd: [3173]: info: find_hash_entry: Creating hash entry for pingd
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: process_lrm_event: LRM operation pingd:1_start_0 (call=6, rc=0, cib-update=35, confirmed=true) ok
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: match_graph_event: Action pingd:1_start_0 (17) confirmed on soalaba63 (rc=0)
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: match_graph_event: Action pingd:0_start_0 (15) confirmed on soalaba56 (rc=0)
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: te_pseudo_action: Pseudo action 20 fired and confirmed
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: run_graph: ====================================================
Oct 24 08:08:39 soalaba63 crmd: [3175]: notice: run_graph: Transition 1 (Complete=10, Pending=0, Fired=0, Skipped=8, Incomplete=0, Source=/var/lib/pengine/pe-error-39.bz2): Stopped
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: te_graph_trigger: Transition 1 is now complete
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: do_pe_invoke: Query 36: Requesting the current CIB: S_POLICY_ENGINE
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: do_pe_invoke_callback: Invoking the PE: query=36, ref=pe_calc-dc-1319458119-28, seq=852, quorate=1
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation FloatingIP_monitor_0 found resource FloatingIP active on soalaba63
Oct 24 08:08:39 soalaba63 pengine: [3174]: WARN: unpack_rsc_op: Processing failed op FloatingIP_stop_0 on soalaba63: unknown error (1)
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation acestatus_monitor_0 found resource acestatus active on soalaba63
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation FloatingIP_monitor_0 found resource FloatingIP active on soalaba56
Oct 24 08:08:39 soalaba63 pengine: [3174]: WARN: unpack_rsc_op: Processing failed op FloatingIP_stop_0 on soalaba56: unknown error (1)
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation acestatus_monitor_0 found resource acestatus active on soalaba56
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: group_print: Resource Group: HAService
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: native_print: FloatingIP#011(ocf::heartbeat:IPaddr2) Started (unmanaged) FAILED
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: native_print: #0111 : soalaba63
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: native_print: #0112 : soalaba56
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: native_print: acestatus#011(lsb:acestatus):#011Stopped
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: clone_print: Clone Set: pingdclone [pingd]
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: short_print: Started: [ soalaba56 soalaba63 ]
Oct 24 08:08:39 soalaba63 pengine: [3174]: WARN: common_apply_stickiness: Forcing FloatingIP away from soalaba56 after 1000000 failures (max=1000000)
Oct 24 08:08:39 soalaba63 pengine: [3174]: WARN: common_apply_stickiness: Forcing FloatingIP away from soalaba63 after 1000000 failures (max=1000000)
Oct 24 08:08:39 soalaba63 pengine: [3174]: ERROR: native_create_actions: Resource FloatingIP (ocf::IPaddr2) is active on 2 nodes attempting recovery
Oct 24 08:08:39 soalaba63 pengine: [3174]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
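
The failed stops escalate quickly: each one pushes fail-count-FloatingIP to INFINITY, and because a resource that cannot be stopped cannot safely be started elsewhere (and nothing in this log shows a fencing escalation), Pacemaker flags FloatingIP as unmanaged and forces it away from both nodes. After removing the stray address by hand (ip addr del on whichever node still carries it), the failure history would presumably be cleared with:

    crm resource cleanup FloatingIP
    crm resource cleanup acestatus

cleanup removes the fail counts and failed operations from the status section and triggers fresh probes.
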
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: RecurringOp: Start recurring monitor (15s) for pingd:0 on soalaba56
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: RecurringOp: Start recurring monitor (15s) for pingd:1 on soalaba63
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: LogActions: Leave FloatingIP#011(Started unmanaged)
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: LogActions: Leave acestatus#011(Stopped)
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: LogActions: Leave pingd:0#011(Started soalaba56)
Oct 24 08:08:39 soalaba63 pengine: [3174]: notice: LogActions: Leave pingd:1#011(Started soalaba63)
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: unpack_graph: Unpacked transition 2: 5 actions in 5 synapses
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1319458119-28) derived from /var/lib/pengine/pe-error-40.bz2
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: te_pseudo_action: Pseudo action 10 fired and confirmed
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 14: monitor pingd:0_monitor_15000 on soalaba56
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: te_rsc_command: Initiating action 17: monitor pingd:1_monitor_15000 on soalaba63 (local)
Oct 24 08:08:39 soalaba63 crmd: [3175]: info: do_lrm_rsc_op: Performing key=17:2:0:0fbc0bc0-017f-4734-b555-eb0d6dd8568d op=pingd:1_monitor_15000 )
Oct 24 08:08:39 soalaba63 lrmd: [3172]: info: rsc:pingd:1:8: monitor
Oct 24 08:08:39 soalaba63 pengine: [3174]: ERROR: process_pe_message: Transition 2: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-40.bz2
Oct 24 08:08:43 soalaba63 crmd: [3175]: info: process_lrm_event: LRM operation pingd:1_monitor_15000 (call=8, rc=0, cib-update=37, confirmed=false) ok
Oct 24 08:08:43 soalaba63 crmd: [3175]: info: match_graph_event: Action pingd:1_monitor_15000 (17) confirmed on soalaba63 (rc=0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: info: match_graph_event: Action pingd:0_monitor_15000 (14) confirmed on soalaba56 (rc=0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: notice: run_graph: ====================================================
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: run_graph: Transition 2 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-error-40.bz2): Terminated
Oct 24 08:08:43 soalaba63 crmd: [3175]: ERROR: te_graph_trigger: Transition failed: terminated
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_graph: Graph 2 (5 actions in 5 synapses): batch-limit=30 jobs, network-delay=60000ms
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 0 is pending (priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_elem: [Action 11]: Pending (id: HAService_stopped_0, type: pseduo, priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 5]: Pending (id: FloatingIP_stop_0, loc: soalaba63, priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 6]: Pending (id: FloatingIP_stop_0, loc: soalaba56, priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 10]: Completed (id: HAService_stop_0, type: pseduo, priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 1 was confirmed (priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 2 is pending (priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_elem: [Action 8]: Pending (id: HAService_start_0, type: pseduo, priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 11]: Pending (id: HAService_stopped_0, type: pseduo, priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 3 was confirmed (priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 4 was confirmed (priority: 0)
Oct 24 08:08:43 soalaba63 crmd: [3175]: info: te_graph_trigger: Transition 2 is now complete
Oct 24 08:08:43 soalaba63 crmd: [3175]: info: notify_crmd: Transition 2 status: done -
Oct 24 08:08:43 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 24 08:08:43 soalaba63 crmd: [3175]: info: do_state_transition: Starting PEngine Recheck Timer
Oct 24 08:08:48 soalaba63 attrd: [3173]: info: attrd_trigger_update: Sending flush op to all hosts for: pingd (100)
Oct 24 08:08:48 soalaba63 attrd: [3173]: info: attrd_perform_update: Sent update 25: pingd=100
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=1, tag=nvpair, id=status-soalaba63-pingd, magic=NA, cib=0.20.43) : Transient attribute: update
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
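
Every failed transition leaves its complete input behind (pe-error-39.bz2 through pe-error-41.bz2 above). These files can be replayed offline to see exactly what the policy engine decided and why, for example with crm_simulate (flag spelling varies slightly between Pacemaker versions; older builds ship ptest instead):

    crm_simulate -x /var/lib/pengine/pe-error-40.bz2 -S
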
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_pe_invoke: Query 38: Requesting the current CIB: S_POLICY_ENGINE
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=1, tag=nvpair, id=status-soalaba56-pingd, magic=NA, cib=0.20.44) : Transient attribute: update
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_pe_invoke: Query 39: Requesting the current CIB: S_POLICY_ENGINE
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_pe_invoke_callback: Invoking the PE: query=39, ref=pe_calc-dc-1319458128-31, seq=852, quorate=1
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation FloatingIP_monitor_0 found resource FloatingIP active on soalaba63
Oct 24 08:08:48 soalaba63 pengine: [3174]: WARN: unpack_rsc_op: Processing failed op FloatingIP_stop_0 on soalaba63: unknown error (1)
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation acestatus_monitor_0 found resource acestatus active on soalaba63
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation FloatingIP_monitor_0 found resource FloatingIP active on soalaba56
Oct 24 08:08:48 soalaba63 pengine: [3174]: WARN: unpack_rsc_op: Processing failed op FloatingIP_stop_0 on soalaba56: unknown error (1)
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: unpack_rsc_op: Operation acestatus_monitor_0 found resource acestatus active on soalaba56
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: group_print: Resource Group: HAService
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: native_print: FloatingIP#011(ocf::heartbeat:IPaddr2) Started (unmanaged) FAILED
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: native_print: #0111 : soalaba63
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: native_print: #0112 : soalaba56
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: native_print: acestatus#011(lsb:acestatus):#011Stopped
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: clone_print: Clone Set: pingdclone [pingd]
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: short_print: Started: [ soalaba56 soalaba63 ]
Oct 24 08:08:48 soalaba63 pengine: [3174]: WARN: common_apply_stickiness: Forcing FloatingIP away from soalaba56 after 1000000 failures (max=1000000)
Oct 24 08:08:48 soalaba63 pengine: [3174]: WARN: common_apply_stickiness: Forcing FloatingIP away from soalaba63 after 1000000 failures (max=1000000)
Oct 24 08:08:48 soalaba63 pengine: [3174]: ERROR: native_create_actions: Resource FloatingIP (ocf::IPaddr2) is active on 2 nodes attempting recovery
Oct 24 08:08:48 soalaba63 pengine: [3174]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: LogActions: Leave FloatingIP#011(Started unmanaged)
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: LogActions: Leave acestatus#011(Stopped)
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: LogActions: Leave pingd:0#011(Started soalaba56)
Oct 24 08:08:48 soalaba63 pengine: [3174]: notice: LogActions: Leave pingd:1#011(Started soalaba63)
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: unpack_graph: Unpacked transition 3: 3 actions in 3 synapses
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1319458128-31) derived from /var/lib/pengine/pe-error-41.bz2
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: te_pseudo_action: Pseudo action 12 fired and confirmed
Oct 24 08:08:48 soalaba63 crmd: [3175]: notice: run_graph: ====================================================
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: run_graph: Transition 3 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=2, Source=/var/lib/pengine/pe-error-41.bz2): Terminated
Oct 24 08:08:48 soalaba63 crmd: [3175]: ERROR: te_graph_trigger: Transition failed: terminated
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_graph: Graph 3 (3 actions in 3 synapses): batch-limit=30 jobs, network-delay=60000ms
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 0 is pending (priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_elem: [Action 13]: Pending (id: HAService_stopped_0, type: pseduo, priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 7]: Pending (id: FloatingIP_stop_0, loc: soalaba63, priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 8]: Pending (id: FloatingIP_stop_0, loc: soalaba56, priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 12]: Completed (id: HAService_stop_0, type: pseduo, priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 1 was confirmed (priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_graph: Synapse 2 is pending (priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_elem: [Action 10]: Pending (id: HAService_start_0, type: pseduo, priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: WARN: print_elem: * [Input 13]: Pending (id: HAService_stopped_0, type: pseduo, priority: 0)
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: te_graph_trigger: Transition 3 is now complete
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: notify_crmd: Transition 3 status: done -
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 24 08:08:48 soalaba63 crmd: [3175]: info: do_state_transition: Starting PEngine Recheck Timer
Oct 24 08:08:48 soalaba63 pengine: [3174]: ERROR: process_pe_message: Transition 3: ERRORs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-error-41.bz2