Feb 13 10:31:32 corosync [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Feb 13 10:31:32 corosync [MAIN ] Corosync built-in features: nss dbus rdma snmp
Feb 13 10:31:32 corosync [MAIN ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Feb 13 10:31:32 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).
Feb 13 10:31:32 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb 13 10:31:32 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).
Feb 13 10:31:32 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb 13 10:31:32 corosync [TOTEM ] The network interface [192.168.2.2] is now up.
Set r/w permissions for uid=0, gid=0 on /var/log/cluster/corosync.log
Feb 13 10:31:32 corosync [pcmk ] info: process_ais_conf: Reading configure
Feb 13 10:31:32 corosync [pcmk ] info: config_find_init: Local handle: 4835695805891346436 for logging
Feb 13 10:31:32 corosync [pcmk ] info: config_find_next: Processing additional logging options...
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Found 'off' for option: debug
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Found 'yes' for option: to_logfile
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Found '/var/log/cluster/corosync.log' for option: logfile
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Found 'yes' for option: to_syslog
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Defaulting to 'daemon' for option: syslog_facility
Feb 13 10:31:32 corosync [pcmk ] info: config_find_init: Local handle: 4552499517957603333 for quorum
Feb 13 10:31:32 corosync [pcmk ] info: config_find_next: No additional configuration supplied for: quorum
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: No default for option: provider
Feb 13 10:31:32 corosync [pcmk ] info: config_find_init: Local handle: 8972265949260414982 for service
Feb 13 10:31:32 corosync [pcmk ] info: config_find_next: Processing additional service options...
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Found '1' for option: ver
Feb 13 10:31:32 corosync [pcmk ] info: process_ais_conf: Enabling MCP mode: Use the Pacemaker init script to complete Pacemaker startup
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Defaulting to 'no' for option: use_logd
Feb 13 10:31:32 corosync [pcmk ] info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
Feb 13 10:31:32 corosync [pcmk ] info: pcmk_startup: CRM: Initialized
Feb 13 10:31:32 corosync [pcmk ] Logging: Initialized pcmk_startup
Feb 13 10:31:32 corosync [pcmk ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Feb 13 10:31:32 corosync [pcmk ] info: pcmk_startup: Service: 10
Feb 13 10:31:32 corosync [pcmk ] info: pcmk_startup: Local hostname: nodeb
Feb 13 10:31:32 corosync [pcmk ] info: pcmk_update_nodeid: Local node id: 731255468
Feb 13 10:31:32 corosync [pcmk ] info: update_member: Creating entry for node 731255468 born on 0
Feb 13 10:31:32 corosync [pcmk ] info: update_member: 0xf5f0f0 Node 731255468 now known as nodeb (was: (null))
Feb 13 10:31:32 corosync [pcmk ] info: update_member: Node nodeb now has 1 quorum votes (was 0)
Feb 13 10:31:32 corosync [pcmk ] info: update_member: Node 731255468/nodeb is now: member
Feb 13 10:31:32 corosync [SERV ] Service engine loaded: Pacemaker Cluster Manager 1.1.6
Feb 13 10:31:32 corosync [SERV ] Service engine loaded: corosync extended virtual synchrony service
Feb 13 10:31:32 corosync [SERV ] Service engine loaded: corosync configuration service
Feb 13 10:31:32 corosync [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
Feb 13 10:31:32 corosync [SERV ] Service engine loaded: corosync cluster config database access v1.01
Feb 13 10:31:32 corosync [SERV ] Service engine loaded: corosync profile loading service
Feb 13 10:31:32 corosync [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Feb 13 10:31:32 corosync [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
Feb 13 10:31:32 corosync [TOTEM ] The network interface [192.168.1.2] is now up.
Feb 13 10:31:32 corosync [TOTEM ] Incrementing problem counter for seqid 1 iface 192.168.1.2 to [1 of 10]
Feb 13 10:31:32 corosync [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 260: memb=0, new=0, lost=0
Feb 13 10:31:32 corosync [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 260: memb=1, new=1, lost=0
Feb 13 10:31:32 corosync [pcmk ] info: pcmk_peer_update: NEW: nodeb 731255468
Feb 13 10:31:32 corosync [pcmk ] info: pcmk_peer_update: MEMB: nodeb 731255468
Feb 13 10:31:32 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Feb 13 10:31:32 corosync [CPG ] chosen downlist: sender r(0) ip(192.168.2.2) r(1) ip(192.168.1.2) ; members(old:0 left:0)
Feb 13 10:31:32 corosync [MAIN ] Completed service synchronization, ready to provide service.
Feb 13 10:31:34 corosync [TOTEM ] ring 1 active with no faults
Set r/w permissions for uid=498, gid=0 on /var/log/cluster/corosync.log
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: main: Starting Pacemaker 1.1.6-3.el6 (Build: a02c0f19a00c1eb2527ad38f146ebc0834814558): generated-manpages agent-manpages ascii-docs publican-docs ncurses trace-logging cman corosync-quorum corosync
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: main: Maximum core file size is: 18446744073709551615
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: 0x1b0a770 Node 731255468 now known as nodeb (was: (null))
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: Node nodeb now has process list: 00000000000000000000000000000002 (was 00000000000000000000000000000000)
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: start_child: Forked child 22991 for process stonith-ng
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: Node nodeb now has process list: 00000000000000000000000000100002 (was 00000000000000000000000000000002)
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: start_child: Forked child 22992 for process cib
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: Node nodeb now has process list: 00000000000000000000000000100102 (was 00000000000000000000000000100002)
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: start_child: Forked child 22993 for process lrmd
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: Node nodeb now has process list: 00000000000000000000000000100112 (was 00000000000000000000000000100102)
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: start_child: Forked child 22994 for process attrd
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: Node nodeb now has process list: 00000000000000000000000000101112 (was 00000000000000000000000000100112)
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: start_child: Forked child 22995 for process pengine
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: Node nodeb now has process list: 00000000000000000000000000111112 (was 00000000000000000000000000101112)
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: start_child: Forked child 22996 for process crmd
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: update_node_processes: Node nodeb now has process list: 00000000000000000000000000111312 (was 00000000000000000000000000111112)
Feb 13 10:31:44 nodeb lrmd: [22993]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Feb 13 10:31:44 nodeb pacemakerd: [22987]: info: main: Starting mainloop
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: Invoked: /usr/lib64/heartbeat/stonithd
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: get_cluster_type: Cluster type is: 'openais'
Feb 13 10:31:44 nodeb stonith-ng: [22991]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Feb 13 10:31:44 nodeb cib: [22992]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Feb 13 10:31:44 nodeb cib: [22992]: info: G_main_add_TriggerHandler: Added signal manual handler
Feb 13 10:31:44 nodeb cib: [22992]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 13 10:31:44 nodeb cib: [22992]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Feb 13 10:31:44 nodeb lrmd: [22993]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 13 10:31:44 nodeb lrmd: [22993]: info: enabling coredumps
Feb 13 10:31:44 nodeb lrmd: [22993]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Feb 13 10:31:44 nodeb lrmd: [22993]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Feb 13 10:31:44 nodeb lrmd: [22993]: info: Started.
Feb 13 10:31:44 nodeb cib: [22992]: info: validate_with_relaxng: Creating RNG parser context
Feb 13 10:31:44 nodeb pengine: [22995]: info: Invoked: /usr/lib64/heartbeat/pengine
Feb 13 10:31:44 nodeb crmd: [22996]: info: Invoked: /usr/lib64/heartbeat/crmd
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: init_ais_connection_classic: AIS connection established
Feb 13 10:31:44 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0xf748c0 for stonith-ng/0
Feb 13 10:31:44 nodeb crmd: [22996]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: get_ais_nodeid: Server details: id=731255468 uname=nodeb cname=pcmk
Feb 13 10:31:44 nodeb crmd: [22996]: info: main: CRM Hg Version: a02c0f19a00c1eb2527ad38f146ebc0834814558
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: crm_new_peer: Node nodeb now has id: 731255468
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: crm_new_peer: Node 731255468 is now known as nodeb
Feb 13 10:31:44 nodeb crmd: [22996]: info: crmd_init: Starting crmd
Feb 13 10:31:44 nodeb crmd: [22996]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: main: Starting stonith-ng mainloop
Feb 13 10:31:44 nodeb stonith-ng: [22991]: info: crm_update_peer: Node nodeb: id=731255468 state=unknown addr=(null) votes=0 born=0 seen=0 proc=00000000000000000000000000111312 (new)
Feb 13 10:31:44 nodeb attrd: [22994]: info: Invoked: /usr/lib64/heartbeat/attrd
Feb 13 10:31:44 nodeb attrd: [22994]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Feb 13 10:31:44 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0xf78c20 for attrd/0
Feb 13 10:31:44 nodeb attrd: [22994]: notice: main: Starting mainloop...
Feb 13 10:31:44 nodeb cib: [22992]: info: startCib: CIB Initialization completed successfully
Feb 13 10:31:44 nodeb cib: [22992]: info: get_cluster_type: Cluster type is: 'openais'
Feb 13 10:31:44 nodeb cib: [22992]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Feb 13 10:31:44 nodeb cib: [22992]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Feb 13 10:31:44 nodeb cib: [22992]: info: init_ais_connection_classic: AIS connection established
Feb 13 10:31:44 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0xf7cf80 for cib/0
Feb 13 10:31:44 corosync [pcmk ] info: pcmk_ipc: Sending membership update 260 to cib
Feb 13 10:31:44 nodeb cib: [22992]: info: get_ais_nodeid: Server details: id=731255468 uname=nodeb cname=pcmk
Feb 13 10:31:44 nodeb cib: [22992]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Feb 13 10:31:44 nodeb cib: [22992]: info: crm_new_peer: Node nodeb now has id: 731255468
Feb 13 10:31:44 nodeb cib: [22992]: info: crm_new_peer: Node 731255468 is now known as nodeb
Feb 13 10:31:44 nodeb cib: [22992]: info: cib_init: Starting cib mainloop
Feb 13 10:31:44 nodeb cib: [22992]: info: ais_dispatch_message: Membership 260: quorum still lost
Feb 13 10:31:44 nodeb cib: [22992]: info: crm_update_peer: Node nodeb: id=731255468 state=member (new) addr=r(0) ip(192.168.2.2) r(1) ip(192.168.1.2) (new) votes=1 (new) born=0 seen=260 proc=00000000000000000000000000000000
Feb 13 10:31:44 nodeb cib: [22992]: info: crm_update_peer: Node nodeb: id=731255468 state=member addr=r(0) ip(192.168.2.2) r(1) ip(192.168.1.2) votes=1 born=0 seen=260 proc=00000000000000000000000000111312 (new)
Feb 13 10:31:45 nodeb crmd: [22996]: info: do_cib_control: CIB connection established
Feb 13 10:31:45 nodeb crmd: [22996]: info: get_cluster_type: Cluster type is: 'openais'
Feb 13 10:31:45 nodeb crmd: [22996]: notice: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Feb 13 10:31:45 nodeb crmd: [22996]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Feb 13 10:31:45 nodeb crmd: [22996]: info: init_ais_connection_classic: AIS connection established
Feb 13 10:31:45 corosync [pcmk ] info: pcmk_ipc: Recorded connection 0xf81b30 for crmd/0
Feb 13 10:31:45 corosync [pcmk ] info: pcmk_ipc: Sending membership update 260 to crmd
Feb 13 10:31:45 nodeb crmd: [22996]: info: get_ais_nodeid: Server details: id=731255468 uname=nodeb cname=pcmk
Feb 13 10:31:45 nodeb crmd: [22996]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Feb 13 10:31:45 nodeb crmd: [22996]: info: crm_new_peer: Node nodeb now has id: 731255468
Feb 13 10:31:45 nodeb crmd: [22996]: info: crm_new_peer: Node 731255468 is now known as nodeb
Feb 13 10:31:45 nodeb crmd: [22996]: info: ais_status_callback: status: nodeb is now unknown
Feb 13 10:31:45 nodeb crmd: [22996]: info: do_ha_control: Connected to the cluster
Feb 13 10:31:45 nodeb crmd: [22996]: info: do_started: Delaying start, no membership data (0000000000100000)
Feb 13 10:31:45 nodeb crmd: [22996]: info: crmd_init: Starting crmd's mainloop
Feb 13 10:31:45 nodeb crmd: [22996]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Feb 13 10:31:45 nodeb crmd: [22996]: info: config_query_callback: Checking for expired actions every 900000ms
Feb 13 10:31:45 nodeb crmd: [22996]: info: config_query_callback: Sending expected-votes=2 to corosync
Feb 13 10:31:45 nodeb crmd: [22996]: info: ais_dispatch_message: Membership 260: quorum still lost
Feb 13 10:31:45 nodeb crmd: [22996]: info: ais_status_callback: status: nodeb is now member (was unknown)
Feb 13 10:31:45 nodeb crmd: [22996]: info: crm_update_peer: Node nodeb: id=731255468 state=member (new) addr=r(0) ip(192.168.2.2) r(1) ip(192.168.1.2) (new) votes=1 (new) born=0 seen=260 proc=00000000000000000000000000000000
Feb 13 10:31:45 nodeb crmd: [22996]: info: ais_dispatch_message: Membership 260: quorum still lost
Feb 13 10:31:45 nodeb crmd: [22996]: notice: crmd_peer_update: Status update: Client nodeb/crmd now has status [online] (DC=<null>)
Feb 13 10:31:45 nodeb crmd: [22996]: info: crm_update_peer: Node nodeb: id=731255468 state=member addr=r(0) ip(192.168.2.2) r(1) ip(192.168.1.2) votes=1 born=0 seen=260 proc=00000000000000000000000000111312 (new)
Feb 13 10:31:45 nodeb crmd: [22996]: info: do_started: The local CRM is operational
Feb 13 10:31:45 nodeb crmd: [22996]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Feb 13 10:31:46 nodeb crmd: [22996]: info: te_connect_stonith: Attempting connection to fencing daemon...
Feb 13 10:31:47 nodeb crmd: [22996]: info: te_connect_stonith: Connected
Feb 13 10:32:06 nodeb crmd: [22996]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Feb 13 10:32:06 nodeb crmd: [22996]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_te_control: Registering TE UUID: 9b886b12-0a99-4f13-bc38-54585dbea0bc
Feb 13 10:32:06 nodeb crmd: [22996]: info: set_graph_functions: Setting custom graph functions
Feb 13 10:32:06 nodeb crmd: [22996]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_dc_takeover: Taking over DC status for this partition
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_readwrite: We are now in R/W mode
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/5, version=0.67.1): ok (rc=0)
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/6, version=0.67.2): ok (rc=0)
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/8, version=0.67.3): ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: join_make_offer: Making join offers based on membership 260
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.67.4): ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Feb 13 10:32:06 nodeb crmd: [22996]: info: config_query_callback: Checking for expired actions every 900000ms
Feb 13 10:32:06 nodeb crmd: [22996]: info: config_query_callback: Sending expected-votes=2 to corosync
Feb 13 10:32:06 nodeb crmd: [22996]: info: ais_dispatch_message: Membership 260: quorum still lost
Feb 13 10:32:06 nodeb crmd: [22996]: info: crmd_ais_dispatch: Setting expected votes to 2
Feb 13 10:32:06 nodeb crmd: [22996]: info: ais_dispatch_message: Membership 260: quorum still lost
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/13, version=0.67.5): ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: crmd_ais_dispatch: Setting expected votes to 2
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/15, version=0.67.6): ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: update_dc: Set DC to nodeb (3.0.5)
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_state_transition: All 1 cluster nodes responded to the join offer.
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_dc_join_finalize: join-1: Syncing the CIB from nodeb to the rest of the cluster
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/17, version=0.67.6): ok (rc=0)
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/18, version=0.67.7): ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: update_attrd: Connecting to attrd...
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_dc_join_ack: join-1: Updating node state to member for nodeb
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='nodeb']/transient_attributes (origin=local/crmd/19, version=0.67.8): ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: erase_xpath_callback: Deletion of "//node_state[@uname='nodeb']/transient_attributes": ok (rc=0)
Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='nodeb']/lrm (origin=local/crmd/20, version=0.67.9): ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: erase_xpath_callback: Deletion of "//node_state[@uname='nodeb']/lrm": ok (rc=0)
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Feb 13 10:32:06 nodeb crmd: [22996]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date Feb 13 10:32:06 nodeb crmd: [22996]: info: crm_update_quorum: Updating quorum status to false (call=24) Feb 13 10:32:06 nodeb attrd: [22994]: notice: attrd_local_callback: Sending full refresh (origin=crmd) Feb 13 10:32:06 nodeb crmd: [22996]: info: abort_transition_graph: do_te_invoke:162 - Triggered transition abort (complete=1) : Peer Cancelled Feb 13 10:32:06 nodeb crmd: [22996]: info: do_pe_invoke: Query 25: Requesting the current CIB: S_POLICY_ENGINE Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/22, version=0.67.11): ok (rc=0) Feb 13 10:32:06 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/24, version=0.67.13): ok (rc=0) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_pe_invoke_callback: Invoking the PE: query=25, ref=pe_calc-dc-1329147126-7, seq=260, quorate=0 Feb 13 10:32:06 nodeb pengine: [22995]: notice: unpack_config: On loss of CCM Quorum: Ignore Feb 13 10:32:06 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd0:0 on nodeb Feb 13 10:32:06 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd0:0 on nodeb Feb 13 10:32:06 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd1:0 on nodeb Feb 13 10:32:06 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd1:0 on nodeb Feb 13 10:32:06 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (60s) for fence-fcmb on nodeb Feb 13 10:32:06 nodeb pengine: [22995]: WARN: stage6: Scheduling Node nodea for STONITH Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Start drbd0:0 (nodeb) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Leave drbd0:1 (Stopped) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Start drbd1:0 (nodeb) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Leave drbd1:1 (Stopped) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Leave datafs (Stopped) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Leave patchfs (Stopped) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Leave ClusterIP (Stopped) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Leave httpd (Stopped) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Leave fence-fcma (Stopped) Feb 13 10:32:06 nodeb pengine: [22995]: notice: LogActions: Start fence-fcmb (nodeb) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ] Feb 13 10:32:06 nodeb crmd: [22996]: info: unpack_graph: Unpacked transition 0: 34 actions in 34 synapses Feb 13 10:32:06 nodeb crmd: [22996]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1329147126-7) derived from /var/lib/pengine/pe-warn-22.bz2 Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 4: monitor drbd0:0_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=4:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_monitor_0 ) Feb 13 10:32:06 nodeb lrmd: [22993]: info: rsc:drbd0:0:2: probe Feb 13 10:32:06 nodeb pengine: [22995]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. 
PEngine Input stored in: /var/lib/pengine/pe-warn-22.bz2 Feb 13 10:32:06 nodeb pengine: [22995]: notice: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues. Feb 13 10:32:06 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 16 fired and confirmed Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 5: monitor drbd1:0_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=5:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_monitor_0 ) Feb 13 10:32:06 nodeb lrmd: [22993]: info: rsc:drbd1:0:3: probe Feb 13 10:32:06 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 42 fired and confirmed Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 6: monitor datafs_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=6:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=datafs_monitor_0 ) Feb 13 10:32:06 nodeb lrmd: [22993]: info: rsc:datafs:4: probe Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 7: monitor patchfs_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=7:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=patchfs_monitor_0 ) Feb 13 10:32:06 nodeb lrmd: [22993]: info: rsc:patchfs:5: probe Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 8: monitor ClusterIP_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=8:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=ClusterIP_monitor_0 ) Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 9: monitor httpd_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=9:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=httpd_monitor_0 ) Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 10: monitor fence-fcma_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb lrmd: [22993]: notice: lrmd_rsc_new(): No lrm_rprovider field in message Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=10:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=fence-fcma_monitor_0 ) Feb 13 10:32:06 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 11: monitor fence-fcmb_monitor_0 on nodeb (local) Feb 13 10:32:06 nodeb lrmd: [22993]: notice: lrmd_rsc_new(): No lrm_rprovider field in message Feb 13 10:32:06 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=11:0:7:9b886b12-0a99-4f13-bc38-54585dbea0bc op=fence-fcmb_monitor_0 ) Feb 13 10:32:06 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 17 fired and confirmed Feb 13 10:32:06 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 14 fired and confirmed Feb 13 10:32:06 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 43 fired and confirmed Feb 13 10:32:06 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 40 fired and confirmed Feb 13 10:32:06 nodeb attrd: [22994]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd0:0 (5) Feb 13 10:32:06 nodeb attrd: [22994]: notice: attrd_perform_update: Sent update 4: master-drbd0:0=5 Feb 13 10:32:06 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_monitor_0 (call=2, rc=7, cib-update=26, confirmed=true) not running Feb 13 10:32:06 nodeb crmd: [22996]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, 
id=status-nodeb-master-drbd0.0, name=master-drbd0:0, value=5, magic=NA, cib=0.67.14) : Transient attribute: update Feb 13 10:32:06 nodeb crmd: [22996]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000 Feb 13 10:32:06 nodeb crmd: [22996]: info: update_abort_priority: Abort action done superceeded by restart Feb 13 10:32:06 nodeb attrd: [22994]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-drbd1:0 (5) Feb 13 10:32:06 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_monitor_0 (4) confirmed on nodeb (rc=0) Feb 13 10:32:06 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_monitor_0 (call=3, rc=7, cib-update=27, confirmed=true) not running Feb 13 10:32:06 nodeb attrd: [22994]: notice: attrd_perform_update: Sent update 7: master-drbd1:0=5 Feb 13 10:32:06 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_monitor_0 (5) confirmed on nodeb (rc=0) Feb 13 10:32:06 nodeb crmd: [22996]: info: process_lrm_event: LRM operation datafs_monitor_0 (call=4, rc=7, cib-update=28, confirmed=true) not running Feb 13 10:32:06 nodeb crmd: [22996]: info: process_lrm_event: LRM operation patchfs_monitor_0 (call=5, rc=7, cib-update=29, confirmed=true) not running Feb 13 10:32:06 nodeb crmd: [22996]: info: abort_transition_graph: te_update_diff:176 - Triggered transition abort (complete=0, tag=nvpair, id=status-nodeb-master-drbd1.0, name=master-drbd1:0, value=5, magic=NA, cib=0.67.17) : Transient attribute: update Feb 13 10:32:06 nodeb crmd: [22996]: info: match_graph_event: Action datafs_monitor_0 (6) confirmed on nodeb (rc=0) Feb 13 10:32:06 nodeb crmd: [22996]: info: match_graph_event: Action patchfs_monitor_0 (7) confirmed on nodeb (rc=0) Feb 13 10:32:07 nodeb lrmd: [22993]: info: rsc:ClusterIP:6: probe Feb 13 10:32:07 nodeb lrmd: [22993]: info: rsc:httpd:7: probe Feb 13 10:32:07 nodeb lrmd: [22993]: info: rsc:fence-fcma:8: probe Feb 13 10:32:07 nodeb lrmd: [22993]: info: rsc:fence-fcmb:9: probe Feb 13 10:32:07 nodeb stonith-ng: [22991]: notice: stonith_device_action: Device fence-fcma not found Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-12 Feb 13 10:32:07 nodeb stonith-ng: [22991]: notice: stonith_device_action: Device fence-fcmb not found Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-12 Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation fence-fcma_monitor_0 (call=8, rc=7, cib-update=30, confirmed=true) not running Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation fence-fcmb_monitor_0 (call=9, rc=7, cib-update=31, confirmed=true) not running Feb 13 10:32:07 nodeb crmd: [22996]: info: match_graph_event: Action fence-fcma_monitor_0 (10) confirmed on nodeb (rc=0) Feb 13 10:32:07 nodeb crmd: [22996]: info: match_graph_event: Action fence-fcmb_monitor_0 (11) confirmed on nodeb (rc=0) Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation ClusterIP_monitor_0 (call=6, rc=7, cib-update=32, confirmed=true) not running Feb 13 10:32:07 nodeb crmd: [22996]: info: match_graph_event: Action ClusterIP_monitor_0 (8) confirmed on nodeb (rc=0) Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation httpd_monitor_0 (call=7, rc=7, cib-update=33, confirmed=true) not running Feb 13 10:32:07 nodeb crmd: [22996]: info: match_graph_event: Action httpd_monitor_0 (9) confirmed on nodeb (rc=0) Feb 13 10:32:07 nodeb crmd: [22996]: info: te_rsc_command: 
Initiating action 3: probe_complete probe_complete on nodeb (local) - no waiting Feb 13 10:32:07 nodeb crmd: [22996]: info: run_graph: ==================================================== Feb 13 10:32:07 nodeb crmd: [22996]: notice: run_graph: Transition 0 (Complete=15, Pending=0, Fired=0, Skipped=11, Incomplete=8, Source=/var/lib/pengine/pe-warn-22.bz2): Stopped Feb 13 10:32:07 nodeb attrd: [22994]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true) Feb 13 10:32:07 nodeb crmd: [22996]: info: te_graph_trigger: Transition 0 is now complete Feb 13 10:32:07 nodeb crmd: [22996]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ] Feb 13 10:32:07 nodeb crmd: [22996]: info: do_state_transition: All 1 cluster nodes are eligible to run resources. Feb 13 10:32:07 nodeb crmd: [22996]: info: do_pe_invoke: Query 34: Requesting the current CIB: S_POLICY_ENGINE Feb 13 10:32:07 nodeb attrd: [22994]: notice: attrd_perform_update: Sent update 10: probe_complete=true Feb 13 10:32:07 nodeb crmd: [22996]: info: do_pe_invoke_callback: Invoking the PE: query=34, ref=pe_calc-dc-1329147127-17, seq=260, quorate=0 Feb 13 10:32:07 nodeb pengine: [22995]: notice: unpack_config: On loss of CCM Quorum: Ignore Feb 13 10:32:07 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd0:0 on nodeb Feb 13 10:32:07 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd0:0 on nodeb Feb 13 10:32:07 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd1:0 on nodeb Feb 13 10:32:07 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (31s) for drbd1:0 on nodeb Feb 13 10:32:07 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (60s) for fence-fcmb on nodeb Feb 13 10:32:07 nodeb pengine: [22995]: WARN: stage6: Scheduling Node nodea for STONITH Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Start drbd0:0 (nodeb) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Leave drbd0:1 (Stopped) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Start drbd1:0 (nodeb) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Leave drbd1:1 (Stopped) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Leave datafs (Stopped) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Leave patchfs (Stopped) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Leave ClusterIP (Stopped) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Leave httpd (Stopped) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Leave fence-fcma (Stopped) Feb 13 10:32:07 nodeb pengine: [22995]: notice: LogActions: Start fence-fcmb (nodeb) Feb 13 10:32:07 nodeb crmd: [22996]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ] Feb 13 10:32:07 nodeb crmd: [22996]: info: unpack_graph: Unpacked transition 1: 25 actions in 25 synapses Feb 13 10:32:07 nodeb crmd: [22996]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1329147127-17) derived from /var/lib/pengine/pe-warn-23.bz2 Feb 13 10:32:07 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 8 fired and confirmed Feb 13 10:32:07 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 34 fired and confirmed Feb 13 10:32:07 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 60: start 
fence-fcmb_start_0 on nodeb (local) Feb 13 10:32:07 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=60:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=fence-fcmb_start_0 ) Feb 13 10:32:07 nodeb lrmd: [22993]: info: rsc:fence-fcmb:10: start Feb 13 10:32:07 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on nodeb (local) - no waiting Feb 13 10:32:07 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 9 fired and confirmed Feb 13 10:32:07 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 6 fired and confirmed Feb 13 10:32:07 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 35 fired and confirmed Feb 13 10:32:07 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 32 fired and confirmed Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_device_register: Added 'fence-fcmb' to the device list (1 active devices) Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_device_register from lrmd: rc=0 Feb 13 10:32:07 nodeb pengine: [22995]: WARN: process_pe_message: Transition 1: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-23.bz2 Feb 13 10:32:07 nodeb pengine: [22995]: notice: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues. Feb 13 10:32:07 nodeb lrmd: [22993]: info: stonith_api_device_metadata: looking up fence_ipmilan/redhat metadata Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation fence-fcmb_start_0 (call=10, rc=0, cib-update=35, confirmed=true) ok Feb 13 10:32:07 nodeb crmd: [22996]: info: match_graph_event: Action fence-fcmb_start_0 (60) confirmed on nodeb (rc=0) Feb 13 10:32:07 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 61: monitor fence-fcmb_monitor_60000 on nodeb (local) Feb 13 10:32:07 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=61:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=fence-fcmb_monitor_60000 ) Feb 13 10:32:07 nodeb lrmd: [22993]: info: rsc:fence-fcmb:11: monitor Feb 13 10:32:07 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 62 fired and confirmed Feb 13 10:32:07 nodeb crmd: [22996]: info: te_fence_node: Executing reboot fencing operation (64) on nodea (timeout=60000) Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: initiate_remote_stonith_op: Initiating remote operation reboot for nodea: 20f70779-f4da-4287-880b-b49ff0628cf1 Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: can_fence_host_with_device: fence-fcmb can fence nodea: static-list Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: call_remote_stonith: Requesting that nodeb perform op reboot nodea Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_fence: Exec Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: can_fence_host_with_device: fence-fcmb can fence nodea: static-list Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_fence: Found 1 matching devices for 'nodea' Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_fence from nodeb: rc=-1 Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: make_args: reboot-ing node 'nodea' as 'port=nodea' Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation fence-fcmb_monitor_60000 (call=11, rc=0, cib-update=36, 
confirmed=false) ok Feb 13 10:32:07 nodeb crmd: [22996]: info: match_graph_event: Action fence-fcmb_monitor_60000 (61) confirmed on nodeb (rc=0) Feb 13 10:32:16 nodeb stonith-ng: [22991]: info: log_operation: Operation 'reboot' [23215] (call 0 from (null)) for host 'nodea' with device 'fence-fcmb' returned: 0 Feb 13 10:32:16 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Rebooting machine @ IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:32:16 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:32:16 nodeb stonith-ng: [22991]: info: process_remote_stonith_exec: ExecResult Feb 13 10:32:16 nodeb stonith-ng: [22991]: info: remote_op_done: Notifing clients of 20f70779-f4da-4287-880b-b49ff0628cf1 (reboot of nodea from d120a100-c321-4b1c-ae7c-28da1051b191 by nodeb): 2, rc=0 Feb 13 10:32:16 nodeb stonith-ng: [22991]: info: stonith_notify_client: Sending st_fence-notification to client 22996/8e946dd6-657d-4a79-922f-651792bc71c4 Feb 13 10:32:16 nodeb crmd: [22996]: info: tengine_stonith_callback: StonithOp Feb 13 10:32:16 nodeb crmd: [22996]: info: tengine_stonith_callback: Stonith operation 2/64:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: OK (0) Feb 13 10:32:16 nodeb crmd: [22996]: info: tengine_stonith_callback: Stonith of nodea passed Feb 13 10:32:16 nodeb crmd: [22996]: info: send_stonith_update: Sending fencing update 37 for nodea Feb 13 10:32:16 nodeb crmd: [22996]: info: crm_new_peer: Node 0 is now known as nodea Feb 13 10:32:16 nodeb crmd: [22996]: info: ais_status_callback: status: nodea is now unknown Feb 13 10:32:16 nodeb crmd: [22996]: info: ais_status_callback: status: nodea is now lost (was unknown) Feb 13 10:32:16 nodeb crmd: [22996]: info: crm_update_peer: Node nodea: id=0 state=lost (new) addr=(null) votes=-1 born=0 seen=0 proc=00000000000000000000000000000001 Feb 13 10:32:16 nodeb crmd: [22996]: info: tengine_stonith_notify: Peer nodea was terminated (reboot) by nodeb for nodeb (ref=20f70779-f4da-4287-880b-b49ff0628cf1): OK Feb 13 10:32:16 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 63 fired and confirmed Feb 13 10:32:16 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 1 fired and confirmed Feb 13 10:32:16 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 4: start drbd0:0_start_0 on nodeb (local) Feb 13 10:32:16 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=4:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_start_0 ) Feb 13 10:32:16 nodeb lrmd: [22993]: info: rsc:drbd0:0:12: start Feb 13 10:32:16 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 30: start drbd1:0_start_0 on nodeb (local) Feb 13 10:32:16 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=30:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_start_0 ) Feb 13 10:32:16 nodeb lrmd: [22993]: info: rsc:drbd1:0:13: start Feb 13 10:32:16 nodeb crmd: [22996]: info: cib_fencing_updated: Fencing update 37 for nodea: complete Feb 13 10:32:16 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='nodea']/lrm (origin=local/crmd/38, version=0.67.28): ok (rc=0) Feb 13 10:32:16 nodeb crmd: [22996]: info: erase_xpath_callback: Deletion of "//node_state[@uname='nodea']/lrm": ok (rc=0) Feb 13 10:32:16 nodeb cib: [22992]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='nodea']/transient_attributes (origin=local/crmd/39, version=0.67.29): ok (rc=0) Feb 13 10:32:16 nodeb crmd: [22996]: info: erase_xpath_callback: Deletion 
of "//node_state[@uname='nodea']/transient_attributes": ok (rc=0) Feb 13 10:32:16 nodeb lrmd: [22993]: info: RA output: (drbd0:0:start:stdout) Feb 13 10:32:16 nodeb lrmd: [22993]: info: RA output: (drbd1:0:start:stdout) Feb 13 10:32:16 nodeb lrmd: [22993]: info: RA output: (drbd0:0:start:stdout) Feb 13 10:32:16 nodeb lrmd: [22993]: info: RA output: (drbd1:0:start:stdout) Feb 13 10:32:16 nodeb lrmd: [22993]: info: RA output: (drbd0:0:start:stdout) Feb 13 10:32:16 nodeb lrmd: [22993]: info: RA output: (drbd1:0:start:stdout) Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd0:0:start:stdout) Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd1:0:start:stdout) Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd0:0:start:stdout) Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd1:0:start:stdout) Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd0:0:start:stdout) Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd1:0:start:stdout) Feb 13 10:32:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_start_0 (call=12, rc=0, cib-update=40, confirmed=true) ok Feb 13 10:32:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_start_0 (call=13, rc=0, cib-update=41, confirmed=true) ok Feb 13 10:32:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_start_0 (4) confirmed on nodeb (rc=0) Feb 13 10:32:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 7 fired and confirmed Feb 13 10:32:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 10 fired and confirmed Feb 13 10:32:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 80: notify drbd0:0_post_notify_start_0 on nodeb (local) Feb 13 10:32:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=80:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_notify_0 ) Feb 13 10:32:17 nodeb lrmd: [22993]: info: rsc:drbd0:0:14: notify Feb 13 10:32:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_start_0 (30) confirmed on nodeb (rc=0) Feb 13 10:32:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 33 fired and confirmed Feb 13 10:32:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 36 fired and confirmed Feb 13 10:32:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 81: notify drbd1:0_post_notify_start_0 on nodeb (local) Feb 13 10:32:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=81:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_notify_0 ) Feb 13 10:32:17 nodeb lrmd: [22993]: info: rsc:drbd1:0:15: notify Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd0:0:notify:stdout) Feb 13 10:32:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd0:0_notify_0 from 80:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329147137-25 Feb 13 10:32:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1329147137-25 from nodeb Feb 13 10:32:17 nodeb lrmd: [22993]: info: RA output: (drbd1:0:notify:stdout) Feb 13 10:32:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_notify_0 (80) confirmed on nodeb (rc=0) Feb 13 10:32:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=14, rc=0, cib-update=0, confirmed=true) ok Feb 13 10:32:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 11 fired and confirmed Feb 13 10:32:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 5: monitor drbd0:0_monitor_31000 on nodeb (local) Feb 13 10:32:17 nodeb crmd: [22996]: info: 
do_lrm_rsc_op: Performing key=5:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_monitor_31000 ) Feb 13 10:32:17 nodeb lrmd: [22993]: info: rsc:drbd0:0:16: monitor Feb 13 10:32:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd1:0_notify_0 from 81:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329147137-27 Feb 13 10:32:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1329147137-27 from nodeb Feb 13 10:32:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_notify_0 (81) confirmed on nodeb (rc=0) Feb 13 10:32:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_notify_0 (call=15, rc=0, cib-update=0, confirmed=true) ok Feb 13 10:32:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 37 fired and confirmed Feb 13 10:32:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 31: monitor drbd1:0_monitor_31000 on nodeb (local) Feb 13 10:32:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=31:1:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_monitor_31000 ) Feb 13 10:32:17 nodeb lrmd: [22993]: info: rsc:drbd1:0:17: monitor Feb 13 10:32:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_monitor_31000 (call=16, rc=0, cib-update=42, confirmed=false) ok Feb 13 10:32:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_monitor_31000 (5) confirmed on nodeb (rc=0) Feb 13 10:32:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_monitor_31000 (call=17, rc=0, cib-update=43, confirmed=false) ok Feb 13 10:32:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_monitor_31000 (31) confirmed on nodeb (rc=0) Feb 13 10:32:17 nodeb crmd: [22996]: info: run_graph: ==================================================== Feb 13 10:32:17 nodeb crmd: [22996]: notice: run_graph: Transition 1 (Complete=25, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-23.bz2): Complete Feb 13 10:32:17 nodeb crmd: [22996]: info: te_graph_trigger: Transition 1 is now complete Feb 13 10:32:17 nodeb crmd: [22996]: info: notify_crmd: Transition 1 status: done - Feb 13 10:32:17 nodeb crmd: [22996]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ] Feb 13 10:32:17 nodeb crmd: [22996]: info: do_state_transition: Starting PEngine Recheck Timer Feb 13 10:33:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:33:07 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:33:07 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:34:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:34:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:34:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:35:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:35:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:35:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:36:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:36:08 nodeb stonith-ng: 
[22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:36:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:37:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:37:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:37:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:38:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:38:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:38:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:39:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:39:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:39:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:40:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:40:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:40:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:41:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:41:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:41:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:41:44 nodeb cib: [22992]: info: cib_stats: Processed 95 operations (1052.00us average, 0% utilization) in the last 10min Feb 13 10:42:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:42:08 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:42:08 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:43:08 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:43:09 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:43:09 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:44:09 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:44:09 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:44:09 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:45:09 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:45:09 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:45:09 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:46:09 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:46:09 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:46:09 nodeb stonith-ng: [22991]: info: 
stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:47:09 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-1 Feb 13 10:47:09 nodeb stonith-ng: [22991]: info: log_operation: fence-fcmb: Getting status of IPMI:xxx.xxx.xxx.xxx...Done Feb 13 10:47:09 nodeb stonith-ng: [22991]: info: stonith_device_execute: Nothing to do for fence-fcmb Feb 13 10:47:17 nodeb crmd: [22996]: info: crm_timer_popped: PEngine Recheck Timer (I_PE_CALC) just popped (900000ms) Feb 13 10:47:17 nodeb crmd: [22996]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ] Feb 13 10:47:17 nodeb crmd: [22996]: info: do_state_transition: Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED Feb 13 10:47:17 nodeb crmd: [22996]: info: do_state_transition: All 1 cluster nodes are eligible to run resources. Feb 13 10:47:17 nodeb crmd: [22996]: info: do_pe_invoke: Query 44: Requesting the current CIB: S_POLICY_ENGINE Feb 13 10:47:17 nodeb crmd: [22996]: info: do_pe_invoke_callback: Invoking the PE: query=44, ref=pe_calc-dc-1329148037-29, seq=260, quorate=0 Feb 13 10:47:17 nodeb pengine: [22995]: notice: unpack_config: On loss of CCM Quorum: Ignore Feb 13 10:47:17 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (30s) for drbd0:0 on nodeb Feb 13 10:47:17 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (30s) for drbd0:0 on nodeb Feb 13 10:47:17 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (30s) for drbd1:0 on nodeb Feb 13 10:47:17 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (30s) for drbd1:0 on nodeb Feb 13 10:47:17 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (30s) for ClusterIP on nodeb Feb 13 10:47:17 nodeb pengine: [22995]: notice: RecurringOp: Start recurring monitor (60s) for httpd on nodeb Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Promote drbd0:0 (Slave -> Master nodeb) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Leave drbd0:1 (Stopped) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Promote drbd1:0 (Slave -> Master nodeb) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Leave drbd1:1 (Stopped) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Start datafs (nodeb) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Start patchfs (nodeb) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Start ClusterIP (nodeb) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Start httpd (nodeb) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Leave fence-fcma (Stopped) Feb 13 10:47:17 nodeb pengine: [22995]: notice: LogActions: Leave fence-fcmb (Started nodeb) Feb 13 10:47:17 nodeb crmd: [22996]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ] Feb 13 10:47:17 nodeb crmd: [22996]: info: unpack_graph: Unpacked transition 2: 30 actions in 30 synapses Feb 13 10:47:17 nodeb crmd: [22996]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1329148037-29) derived from /var/lib/pengine/pe-input-153.bz2 Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 1: cancel drbd0:0_monitor_31000 on nodeb (local) Feb 13 10:47:17 nodeb lrmd: [22993]: info: cancel_op: operation monitor[16] on ocf::drbd::drbd0:0 for client 22996, its parameters: CRM_meta_clone=[0] CRM_meta_role=[Slave] 
CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[drbd0] CRM_meta_notify_start_resource=[drbd0:0 ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_de cancelled Feb 13 10:47:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd0:0_monitor_31000 from 1:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329148037-31 Feb 13 10:47:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1329148037-31 from nodeb Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_monitor_31000 (1) confirmed on nodeb (rc=0) Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 26 fired and confirmed Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 2: cancel drbd1:0_monitor_31000 on nodeb (local) Feb 13 10:47:17 nodeb lrmd: [22993]: info: cancel_op: operation monitor[17] on ocf::drbd::drbd1:0 for client 22996, its parameters: CRM_meta_clone=[0] CRM_meta_role=[Slave] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[drbd1] CRM_meta_notify_start_resource=[drbd1:0 ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_notify=[true] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_de cancelled Feb 13 10:47:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd1:0_monitor_31000 from 2:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329148037-33 Feb 13 10:47:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1329148037-33 from nodeb Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_monitor_31000 (2) confirmed on nodeb (rc=0) Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 55 fired and confirmed Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_monitor_31000 (call=16, status=1, cib-update=0, confirmed=true) Cancelled Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_monitor_31000 (call=17, status=1, cib-update=0, confirmed=true) Cancelled Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 89: notify drbd0:0_pre_notify_promote_0 on nodeb (local) Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=89:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_notify_0 ) Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd0:0:18: notify Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 97: notify drbd1:0_pre_notify_promote_0 on nodeb (local) Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=97:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_notify_0 ) Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd1:0:19: notify Feb 13 10:47:17 nodeb pengine: [22995]: notice: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-153.bz2 Feb 13 10:47:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd0:0_notify_0 from 89:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329148037-36 Feb 13 10:47:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK 
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_notify_0 (89) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=18, rc=0, cib-update=0, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 27 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 24 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd1:0_notify_0 from 97:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329148037-37
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1329148037-37 from nodeb
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_notify_0 (97) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_notify_0 (call=19, rc=0, cib-update=0, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 10: promote drbd0:0_promote_0 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=10:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_promote_0 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd0:0:20: promote
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 56 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 53 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 39: promote drbd1:0_promote_0 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=39:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_promote_0 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd1:0:21: promote
Feb 13 10:47:17 nodeb lrmd: [22993]: info: RA output: (drbd0:0:promote:stdout)
Feb 13 10:47:17 nodeb lrmd: [22993]: info: RA output: (drbd1:0:promote:stdout)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_promote_0 (call=20, rc=0, cib-update=47, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_promote_0 (call=21, rc=0, cib-update=48, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_promote_0 (10) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 90: notify drbd0:0_post_notify_promote_0 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=90:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_notify_0 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd0:0:22: notify
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_promote_0 (39) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 54 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 57 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 98: notify drbd1:0_post_notify_promote_0 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=98:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_notify_0 )
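
Both promote operations complete with rc=0, so nodeb now holds the Primary role for drbd0 and drbd1. The promotion can be cross-checked outside Pacemaker with drbdadm, assuming the drbd.conf resource names match the cluster resource ids; with the peer node still offline (drbd0:1 and drbd1:1 are Stopped) the remote side reports Unknown:

    drbdadm role drbd0    # expected: Primary/Unknown while the peer is down
    drbdadm role drbd1
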
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd1:0:23: notify
Feb 13 10:47:17 nodeb lrmd: [22993]: info: RA output: (drbd0:0:notify:stdout)
Feb 13 10:47:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd0:0_notify_0 from 90:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329148037-42
Feb 13 10:47:17 nodeb lrmd: [22993]: info: RA output: (drbd1:0:notify:stdout)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1329148037-42 from nodeb
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_notify_0 (90) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_notify_0 (call=22, rc=0, cib-update=0, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 11: monitor drbd0:0_monitor_30000 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=11:2:8:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd0:0_monitor_30000 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd0:0:24: monitor
Feb 13 10:47:17 nodeb crmd: [22996]: info: send_direct_ack: ACK'ing resource op drbd1:0_notify_0 from 98:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc: lrm_invoke-lrmd-1329148037-44
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1329148037-44 from nodeb
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_notify_0 (98) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_notify_0 (call=23, rc=0, cib-update=0, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 58 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 71 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 65: start datafs_start_0 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=65:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=datafs_start_0 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:datafs:25: start
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 40: monitor drbd1:0_monitor_30000 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=40:2:8:9b886b12-0a99-4f13-bc38-54585dbea0bc op=drbd1:0_monitor_30000 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:drbd1:0:26: monitor
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd0:0_monitor_30000 (call=24, rc=8, cib-update=49, confirmed=false) master
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation drbd1:0_monitor_30000 (call=26, rc=8, cib-update=50, confirmed=false) master
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd0:0_monitor_30000 (11) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action drbd1:0_monitor_30000 (40) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation datafs_start_0 (call=25, rc=0, cib-update=51, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action datafs_start_0 (65) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 66: start patchfs_start_0 on nodeb (local)
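
The rc=8 reported by the two _monitor_30000 operations above is not a failure: for stateful agents it is OCF_RUNNING_MASTER, the required monitor result for a healthy promoted instance, which is why the transition engine still confirms both actions against their expected result. A one-shot status check should now show the sets promoted on nodeb (output shape approximate):

    # 0 = OCF_SUCCESS (running, as Slave for stateful agents),
    # 7 = OCF_NOT_RUNNING, 8 = OCF_RUNNING_MASTER, 9 = OCF_FAILED_MASTER
    crm_mon -1    # expect "Masters: [ nodeb ]" for both DRBD sets
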
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=66:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=patchfs_start_0 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:patchfs:27: start
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation patchfs_start_0 (call=27, rc=0, cib-update=52, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action patchfs_start_0 (66) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 67: start ClusterIP_start_0 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=67:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=ClusterIP_start_0 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:ClusterIP:28: start
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation ClusterIP_start_0 (call=28, rc=0, cib-update=53, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action ClusterIP_start_0 (67) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 68: monitor ClusterIP_monitor_30000 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=68:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=ClusterIP_monitor_30000 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:ClusterIP:29: monitor
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 69: start httpd_start_0 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=69:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=httpd_start_0 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:httpd:30: start
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation ClusterIP_monitor_30000 (call=29, rc=0, cib-update=54, confirmed=false) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action ClusterIP_monitor_30000 (68) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation httpd_start_0 (call=30, rc=0, cib-update=55, confirmed=true) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action httpd_start_0 (69) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_pseudo_action: Pseudo action 72 fired and confirmed
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_rsc_command: Initiating action 70: monitor httpd_monitor_60000 on nodeb (local)
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_lrm_rsc_op: Performing key=70:2:0:9b886b12-0a99-4f13-bc38-54585dbea0bc op=httpd_monitor_60000 )
Feb 13 10:47:17 nodeb lrmd: [22993]: info: rsc:httpd:31: monitor
Feb 13 10:47:17 nodeb crmd: [22996]: info: process_lrm_event: LRM operation httpd_monitor_60000 (call=31, rc=0, cib-update=56, confirmed=false) ok
Feb 13 10:47:17 nodeb crmd: [22996]: info: match_graph_event: Action httpd_monitor_60000 (70) confirmed on nodeb (rc=0)
Feb 13 10:47:17 nodeb crmd: [22996]: info: run_graph: ====================================================
Feb 13 10:47:17 nodeb crmd: [22996]: notice: run_graph: Transition 2 (Complete=30, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-153.bz2): Complete
Feb 13 10:47:17 nodeb crmd: [22996]: info: te_graph_trigger: Transition 2 is now complete
Feb 13 10:47:17 nodeb crmd: [22996]: info: notify_crmd: Transition 2 status: done -
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
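
The strict sequence in this transition (promote DRBD, then mount datafs and patchfs, then bring up ClusterIP, then httpd) is the kind of ordering that colocation and order constraints produce. A hypothetical fragment in crm shell syntax, with every constraint id and the ms id invented for illustration:

    crm configure colocation datafs-on-master inf: datafs ms-drbd0:Master
    crm configure order datafs-after-promote inf: ms-drbd0:promote datafs:start
    crm configure order httpd-after-ip inf: ClusterIP httpd
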
Feb 13 10:47:17 nodeb crmd: [22996]: info: do_state_transition: Starting PEngine Recheck Timer
Feb 13 10:47:21 nodeb lrmd: [22993]: info: RA output: (ClusterIP:start:stderr) ARPING 192.168.1.3 from 192.168.1.3 eth0 Sent 5 probes (5 broadcast(s)) Received 0 response(s)
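
The trailing stderr capture is the IPaddr2 agent announcing the new address with gratuitous ARP; "Received 0 response(s)" is normal, since gratuitous ARP announces rather than asks. If the decisions above need to be re-examined later, the stored PE input named in the log can be replayed offline; the option spelling below assumes Pacemaker 1.1's crm_simulate (ptest -x works similarly on older builds):

    crm_simulate -S -x /var/lib/pengine/pe-input-153.bz2
    ip addr show eth0 | grep 192.168.1.3    # the VIP should now be bound on eth0
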