Aug 18 13:11:38 nodo1 cluster-dlm: add_change: add_change cg 2 joined nodeid 1812048064
Aug 18 13:11:38 nodo1 cluster-dlm: add_change: add_change cg 2 counts member 2 joined 1 remove 0 failed 0
Aug 18 13:11:38 nodo1 cluster-dlm: stop_kernel: stop_kernel cg 2
Aug 18 13:11:38 nodo1 cluster-dlm: do_sysfs: write "0" to "/sys/kernel/dlm/0BB443F896254AD3BA8FB960C425B666/control"
Aug 18 13:11:38 nodo1 cluster-dlm: check_fencing_done: check_fencing done
Aug 18 13:11:38 nodo1 cluster-dlm: check_quorum_done: check_quorum disabled
Aug 18 13:11:38 nodo1 cluster-dlm: check_fs_done: check_fs done
Aug 18 13:11:38 nodo1 cluster-dlm: send_info: send_start cg 2 flags 2 counts 1 2 1 0 0
Aug 18 13:11:38 nodo1 cluster-dlm: receive_start: receive_start 1812048064:1 len 80
Aug 18 13:11:38 nodo1 cluster-dlm: match_change: match_change 1812048064:1 matches cg 2
Aug 18 13:11:38 nodo1 cluster-dlm: wait_messages_done: wait_messages cg 2 need 1 of 2
Aug 18 13:11:38 nodo1 cluster-dlm: receive_start: receive_start 1778493632:2 len 80
Aug 18 13:11:38 nodo1 cluster-dlm: match_change: match_change 1778493632:2 matches cg 2
Aug 18 13:11:38 nodo1 cluster-dlm: wait_messages_done: wait_messages cg 2 got all 2
Aug 18 13:11:38 nodo1 cluster-dlm: start_kernel: start_kernel cg 2 member_count 2
Aug 18 13:11:38 nodo1 cluster-dlm: update_dir_members: dir_member 1778493632
Aug 18 13:11:38 nodo1 cluster-dlm: set_configfs_members: set_members mkdir "/sys/kernel/config/dlm/cluster/spaces/0BB443F896254AD3BA8FB960C425B666/nodes/1812048064"
Aug 18 13:11:38 nodo1 cluster-dlm: do_sysfs: write "1" to "/sys/kernel/dlm/0BB443F896254AD3BA8FB960C425B666/control"
Aug 18 13:11:38 nodo1 cluster-dlm: set_plock_ckpt_node: set_plock_ckpt_node from 1778493632 to 1778493632
Aug 18 13:11:38 nodo1 cluster-dlm: _unlink_checkpoint: unlink ckpt 0
Aug 18 13:11:38 nodo1 cluster-dlm: _unlink_checkpoint: unlink ckpt error 12 0BB443F896254AD3BA8FB960C425B666
Aug 18 13:11:38 nodo1 cluster-dlm: _unlink_checkpoint: unlink ckpt status error 9 0BB443F896254AD3BA8FB960C425B666
Aug 18 13:11:38 nodo1 cluster-dlm: store_plocks: store_plocks: r_count 0, lock_count 0, pp 40 bytes
Aug 18 13:11:38 nodo1 cluster-dlm: store_plocks: store_plocks: total 0 bytes, max_section 0 bytes
Aug 18 13:11:38 nodo1 cluster-dlm: store_plocks: store_plocks: open ckpt handle 4db127f800000000
Aug 18 13:11:38 nodo1 cluster-dlm: send_info: send_plocks_stored cg 2 flags 2 counts 1 2 1 0 0
Aug 18 13:11:38 nodo1 cluster-dlm: receive_plocks_stored: receive_plocks_stored 1778493632:2 need_plocks 0
Aug 18 13:11:38 nodo1 kernel: [ 4154.272025] ------------[ cut here ]------------
Aug 18 13:11:38 nodo1 kernel: [ 4154.272036] kernel BUG at /usr/src/packages/BUILD/kernel-xen-2.6.31.12/linux-2.6.31/fs/inode.c:1323!
Aug 18 13:11:38 nodo1 kernel: [ 4154.272042] invalid opcode: 0000 [#1] SMP
Aug 18 13:11:38 nodo1 kernel: [ 4154.272046] last sysfs file: /sys/kernel/dlm/0BB443F896254AD3BA8FB960C425B666/control
Aug 18 13:11:38 nodo1 kernel: [ 4154.272050] CPU 1
Aug 18 13:11:38 nodo1 kernel: [ 4154.272053] Modules linked in: nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack xt_physdev iptable_filter ip_tables x_tables ocfs2 ocfs2_nodemanager quota_tree ocfs2_stack_user ocfs2_stackglue dlm configfs netbk coretemp blkbk blkback_pagemap blktap xenbus_be ipmi_si edd dm_round_robin scsi_dh_rdac dm_multipath scsi_dh bridge stp llc bonding ipv6 fuse ext4 jbd2 crc16 loop dm_mod sr_mod ide_pci_generic ide_core iTCO_wdt ata_generic ibmpex i5k_amb ibmaem iTCO_vendor_support ipmi_msghandler bnx2 i5000_edac 8250_pnp shpchp ata_piix pcspkr ics932s401 joydev edac_core i2c_i801 ses pci_hotplug 8250 i2c_core serio_raw enclosure serial_core button sg reiserfs usbhid hid uhci_hcd ehci_hcd xenblk cdrom xennet fan processor pata_acpi lpfc thermal thermal_sys hwmon aacraid [last unloaded: ocfs2_stackglue]
Aug 18 13:11:38 nodo1 kernel: [ 4154.272111] Pid: 8889, comm: dlm_send Not tainted 2.6.31.12-0.2-xen #1 IBM System x3650 -[7979AC1]-
Aug 18 13:11:38 nodo1 kernel: [ 4154.272113] RIP: e030:[] [] iput+0x82/0x90
Aug 18 13:11:38 nodo1 kernel: [ 4154.272121] RSP: e02b:ffff88014ec03c30 EFLAGS: 00010246
Aug 18 13:11:38 nodo1 kernel: [ 4154.272122] RAX: 0000000000000000 RBX: ffff880148a703c8 RCX: 0000000000000000
Aug 18 13:11:38 nodo1 kernel: [ 4154.272123] RDX: ffffc90000010000 RSI: ffff880148a70380 RDI: ffff880148a703c8
Aug 18 13:11:38 nodo1 kernel: [ 4154.272125] RBP: ffff88014ec03c50 R08: b038000000000000 R09: fe99594c51a57607
Aug 18 13:11:38 nodo1 kernel: [ 4154.272126] R10: ffff880040410270 R11: 0000000000000000 R12: ffff8801713e6e08
Aug 18 13:11:38 nodo1 kernel: [ 4154.272128] R13: ffff88014ec03d20 R14: 0000000000000000 R15: ffffc9000331d108
Aug 18 13:11:38 nodo1 kernel: [ 4154.272133] FS: 00007ff4cb11a730(0000) GS:ffffc90000010000(0000) knlGS:0000000000000000
Aug 18 13:11:38 nodo1 kernel: [ 4154.272135] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
Aug 18 13:11:38 nodo1 kernel: [ 4154.272136] CR2: 00007ff4c5c45000 CR3: 0000000135b2a000 CR4: 0000000000002660
Aug 18 13:11:38 nodo1 kernel: [ 4154.272138] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Aug 18 13:11:38 nodo1 kernel: [ 4154.272140] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Aug 18 13:11:38 nodo1 kernel: [ 4154.272142] Process dlm_send (pid: 8889, threadinfo ffff88014ec02000, task ffff8801381e45c0)
Aug 18 13:11:38 nodo1 kernel: [ 4154.272143] Stack:
Aug 18 13:11:38 nodo1 kernel: [ 4154.272144] 0000000000000000 00000000072f0874 ffff880148a70380 ffff880148a70380
Aug 18 13:11:38 nodo1 kernel: [ 4154.272146] <0> ffff88014ec03c80 ffffffff803add09 ffff88014ec03c80 00000000072f0874
Aug 18 13:11:38 nodo1 kernel: [ 4154.272147] <0> ffff8801713e6df8 ffff8801713e6e08 ffff88014ec03de0 ffffffffa05661e1
Aug 18 13:11:38 nodo1 kernel: [ 4154.272150] Call Trace:
Aug 18 13:11:38 nodo1 kernel: [ 4154.272164] [] sock_release+0x89/0xa0
Aug 18 13:11:38 nodo1 kernel: [ 4154.272177] [] tcp_connect_to_sock+0x161/0x2b0 [dlm]
Aug 18 13:11:38 nodo1 kernel: [ 4154.272206] [] process_send_sockets+0x34/0x60 [dlm]
Aug 18 13:11:38 nodo1 kernel: [ 4154.272222] [] run_workqueue+0x83/0x230
Aug 18 13:11:38 nodo1 kernel: [ 4154.272227] [] worker_thread+0xb4/0x140
Aug 18 13:11:38 nodo1 kernel: [ 4154.272231] [] kthread+0xb6/0xc0
Aug 18 13:11:38 nodo1 kernel: [ 4154.272236] [] child_rip+0xa/0x20
Aug 18 13:11:38 nodo1 kernel: [ 4154.272240] Code: 42 20 48 c7 c2 b0 4c 13 80 48 85 c0 48 0f 44 c2 48 89 df ff d0 48 8b 45 e8 65 48 33 04 25 28 00 00 00 75 0b 48 83 c4 18 5b c9 c3 <0f> 0b eb fe e8 35 c6 f1 ff 0f 1f 44 00 00 55 48 8d 97 10 02 00
Aug 18 13:11:38 nodo1 kernel: [ 4154.272256] RIP [] iput+0x82/0x90
Aug 18 13:11:38 nodo1 kernel: [ 4154.272259] RSP
Aug 18 13:11:38 nodo1 kernel: [ 4154.272264] ---[ end trace 7707d0d92a7f5415 ]---
Aug 18 13:11:38 nodo1 kernel: [ 4154.272495] dlm: connect from non cluster node
Aug 18 13:11:38 nodo1 mgmtd: [8480]: info: CIB query: cib
Aug 18 13:11:52 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:11:52 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section 'all' (origin=nodo3/crmd/26): ok (rc=0)
Aug 18 13:11:52 nodo1 crmd: [8479]: info: match_graph_event: Action rsa1-fencing_monitor_15000 (58) confirmed on nodo3 (rc=0)
Aug 18 13:11:53 nodo1 mgmtd: [8480]: info: CIB query: cib
Aug 18 13:11:58 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:11:58 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section 'all' (origin=nodo3/crmd/27): ok (rc=0)
Aug 18 13:11:58 nodo1 crmd: [8479]: WARN: status_from_rc: Action 43 (XencfgFS:1_start_0) on nodo3 failed (target: 0 vs. rc: -2): Error
Aug 18 13:11:58 nodo1 crmd: [8479]: WARN: update_failcount: Updating failcount for XencfgFS:1 on nodo3 after failed start: rc=-2 (update=INFINITY, time=1282129918)
Aug 18 13:11:58 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section 'all' (origin=nodo3/crmd/28): ok (rc=0)
Aug 18 13:11:58 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:11:58 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for /cib (/cib)
Aug 18 13:11:58 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for /cib (/cib)
Aug 18 13:11:58 nodo1 crmd: [8479]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=XencfgFS:1_start_0, magic=2:-2;43:19:0:6cb80fad-9035-478a-b4b2-e58245c05eb5) : Event failed
Aug 18 13:11:58 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for /cib (/cib)
Aug 18 13:11:58 nodo1 crmd: [8479]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Aug 18 13:11:58 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for /cib (/cib)
Aug 18 13:11:58 nodo1 crmd: [8479]: info: update_abort_priority: Abort action done superceeded by restart
Aug 18 13:11:58 nodo1 crmd: [8479]: info: match_graph_event: Action XencfgFS:1_start_0 (43) confirmed on nodo3 (rc=4)
Aug 18 13:11:58 nodo1 crmd: [8479]: WARN: status_from_rc: Action 51 (XenimageFS:1_start_0) on nodo3 failed (target: 0 vs. rc: -2): Error
Aug 18 13:11:58 nodo1 crmd: [8479]: WARN: update_failcount: Updating failcount for XenimageFS:1 on nodo3 after failed start: rc=-2 (update=INFINITY, time=1282129918)
Aug 18 13:11:58 nodo1 crmd: [8479]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=XenimageFS:1_start_0, magic=2:-2;51:19:0:6cb80fad-9035-478a-b4b2-e58245c05eb5) : Event failed
Aug 18 13:11:58 nodo1 crmd: [8479]: info: match_graph_event: Action XenimageFS:1_start_0 (51) confirmed on nodo3 (rc=4)
Aug 18 13:11:58 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
Aug 18 13:11:58 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 54 fired and confirmed
Aug 18 13:11:58 nodo1 crmd: [8479]: info: run_graph: ====================================================
Aug 18 13:11:58 nodo1 crmd: [8479]: notice: run_graph: Transition 19 (Complete=29, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pengine/pe-warn-2302.bz2): Stopped
Aug 18 13:11:58 nodo1 crmd: [8479]: info: te_graph_trigger: Transition 19 is now complete
Aug 18 13:11:58 nodo1 crmd: [8479]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug 18 13:11:58 nodo1 crmd: [8479]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Aug 18 13:11:58 nodo1 crmd: [8479]: info: do_pe_invoke: Query 264: Requesting the current CIB: S_POLICY_ENGINE
Aug 18 13:11:58 nodo1 crmd: [8479]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1282129918-137, seq=235504, quorate=1
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 18 13:11:58 nodo1 pengine: [8478]: info: determine_online_status: Node nodo1 is online
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op mailsrv-rm_start_0 on nodo1: unknown exec error
Aug 18 13:11:58 nodo1 pengine: [8478]: info: determine_online_status: Node nodo3 is online
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op XenimageFS:1_start_0 on nodo3: unknown exec error
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op XencfgFS:1_start_0 on nodo3: unknown exec error
Aug 18 13:11:58 nodo1 pengine: [8478]: info: unpack_status: Node nodo4 is in standby-mode
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: clone_print: Clone Set: dlm-clone
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: dlm:0#011(ocf::pacemaker:controld):#011Started nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: dlm:1#011(ocf::pacemaker:controld):#011Started nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: dlm:2#011(ocf::pacemaker:controld):#011Stopped
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: clone_print: Clone Set: o2cb-clone
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: o2cb:0#011(ocf::ocfs2:o2cb):#011Started nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: o2cb:1#011(ocf::ocfs2:o2cb):#011Started nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: o2cb:2#011(ocf::ocfs2:o2cb):#011Stopped
Aug 18 13:11:58 nodo1 crmd: [8479]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: clone_print: Clone Set: XencfgFS-Clone
Aug 18 13:11:58 nodo1 crmd: [8479]: info: unpack_graph: Unpacked transition 20: 7 actions in 7 synapses
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: XencfgFS:0#011(ocf::heartbeat:Filesystem):#011Started nodo1
Aug 18 13:11:58 nodo1 crmd: [8479]: info: do_te_invoke: Processing graph 20 (ref=pe_calc-dc-1282129918-137) derived from /var/lib/pengine/pe-warn-2303.bz2
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: XencfgFS:1#011(ocf::heartbeat:Filesystem):#011Started nodo3 FAILED
Aug 18 13:11:58 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 39 fired and confirmed
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: XencfgFS:2#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:11:58 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: clone_print: Clone Set: XenimageFS-Clone
Aug 18 13:11:58 nodo1 crmd: [8479]: info: te_rsc_command: Initiating action 13: stop XencfgFS:1_stop_0 on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: XenimageFS:0#011(ocf::heartbeat:Filesystem):#011Started nodo1
Aug 18 13:11:58 nodo1 crmd: [8479]: info: te_rsc_command: Initiating action 12: stop XenimageFS:1_stop_0 on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: XenimageFS:1#011(ocf::heartbeat:Filesystem):#011Started nodo3 FAILED
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: XenimageFS:2#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: rsa1-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: rsa2-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: rsa3-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: rsa4-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: mailsrv-rm#011(ocf::heartbeat:Xen):#011Stopped
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: dbsrv-rm#011(ocf::heartbeat:Xen):#011Started nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: native_print: websrv-rm#011(ocf::heartbeat:Xen):#011Started nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: info: get_failcount: mailsrv-rm has failed 1000000 times on nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: common_apply_stickiness: Forcing mailsrv-rm away from nodo1 after 1000000 failures (max=1000000)
Aug 18 13:11:58 nodo1 pengine: [8478]: info: get_failcount: XencfgFS-Clone has failed 1000000 times on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: common_apply_stickiness: Forcing XencfgFS-Clone away from nodo3 after 1000000 failures (max=1000000)
Aug 18 13:11:58 nodo1 pengine: [8478]: info: get_failcount: XenimageFS-Clone has failed 1000000 times on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: common_apply_stickiness: Forcing XenimageFS-Clone away from nodo3 after 1000000 failures (max=1000000)
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: native_color: Resource dlm:2 cannot run anywhere
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: native_color: Resource o2cb:2 cannot run anywhere
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: native_color: Resource XencfgFS:2 cannot run anywhere
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: native_color: Resource XencfgFS:1 cannot run anywhere
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: native_color: Resource XenimageFS:2 cannot run anywhere
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: native_color: Resource XenimageFS:1 cannot run anywhere
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: native_color: Resource mailsrv-rm cannot run anywhere
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with dlm:0 on nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:1 with dlm:1 on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating dlm:0 with o2cb:0 on nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating dlm:1 with o2cb:1 on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating XencfgFS:0 with o2cb:0 on nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with XencfgFS:0 on nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:1 with XencfgFS:1 on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating XenimageFS:0 with o2cb:0 on nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with XenimageFS:0 on nodo1
Aug 18 13:11:58 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:1 with XenimageFS:1 on nodo3
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:0#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:1#011(Started nodo3)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:2#011(Stopped)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:0#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:1#011(Started nodo3)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:2#011(Stopped)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource XencfgFS:0#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Stop resource XencfgFS:1#011(nodo3)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource XencfgFS:2#011(Stopped)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource XenimageFS:0#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Stop resource XenimageFS:1#011(nodo3)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource XenimageFS:2#011(Stopped)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa1-fencing#011(Started nodo3)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa2-fencing#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa3-fencing#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa4-fencing#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource mailsrv-rm#011(Stopped)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource dbsrv-rm#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: notice: LogActions: Leave resource websrv-rm#011(Started nodo1)
Aug 18 13:11:58 nodo1 pengine: [8478]: WARN: process_pe_message: Transition 20: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-2303.bz2
Aug 18 13:11:58 nodo1 pengine: [8478]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Aug 18 13:11:58 nodo1 mgmtd: [8480]: info: CIB query: cib
Aug 18 13:12:18 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:12:18 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section 'all' (origin=nodo3/crmd/29): ok (rc=0)
Aug 18 13:12:18 nodo1 crmd: [8479]: WARN: status_from_rc: Action 13 (XencfgFS:1_stop_0) on nodo3 failed (target: 0 vs. rc: -2): Error
Aug 18 13:12:18 nodo1 crmd: [8479]: WARN: update_failcount: Updating failcount for XencfgFS:1 on nodo3 after failed stop: rc=-2 (update=INFINITY, time=1282129938)
Aug 18 13:12:18 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section 'all' (origin=nodo3/crmd/30): ok (rc=0)
Aug 18 13:12:18 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:12:18 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='nodo3']//nvpair[@name='fail-count-XencfgFS:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2])
Aug 18 13:12:18 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='nodo3']//nvpair[@name='last-failure-XencfgFS:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[3])
Aug 18 13:12:18 nodo1 crmd: [8479]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=XencfgFS:1_stop_0, magic=2:-2;13:20:0:6cb80fad-9035-478a-b4b2-e58245c05eb5) : Event failed
Aug 18 13:12:18 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='nodo3']//nvpair[@name='fail-count-XenimageFS:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[4])
Aug 18 13:12:18 nodo1 crmd: [8479]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Aug 18 13:12:18 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='nodo3']//nvpair[@name='last-failure-XenimageFS:1'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[5])
Aug 18 13:12:18 nodo1 crmd: [8479]: info: update_abort_priority: Abort action done superceeded by restart
Aug 18 13:12:18 nodo1 crmd: [8479]: info: match_graph_event: Action XencfgFS:1_stop_0 (13) confirmed on nodo3 (rc=4)
Aug 18 13:12:18 nodo1 crmd: [8479]: WARN: status_from_rc: Action 12 (XenimageFS:1_stop_0) on nodo3 failed (target: 0 vs. rc: -2): Error
Aug 18 13:12:18 nodo1 crmd: [8479]: WARN: update_failcount: Updating failcount for XenimageFS:1 on nodo3 after failed stop: rc=-2 (update=INFINITY, time=1282129938)
Aug 18 13:12:18 nodo1 crmd: [8479]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=XenimageFS:1_stop_0, magic=2:-2;12:20:0:6cb80fad-9035-478a-b4b2-e58245c05eb5) : Event failed
Aug 18 13:12:18 nodo1 crmd: [8479]: info: match_graph_event: Action XenimageFS:1_stop_0 (12) confirmed on nodo3 (rc=4)
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 40 fired and confirmed
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
Aug 18 13:12:18 nodo1 crmd: [8479]: info: run_graph: ====================================================
Aug 18 13:12:18 nodo1 crmd: [8479]: notice: run_graph: Transition 20 (Complete=6, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-2303.bz2): Stopped
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_graph_trigger: Transition 20 is now complete
Aug 18 13:12:18 nodo1 crmd: [8479]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug 18 13:12:18 nodo1 crmd: [8479]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Aug 18 13:12:18 nodo1 crmd: [8479]: info: do_pe_invoke: Query 273: Requesting the current CIB: S_POLICY_ENGINE
Aug 18 13:12:18 nodo1 crmd: [8479]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1282129938-140, seq=235504, quorate=1
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 18 13:12:18 nodo1 pengine: [8478]: info: determine_online_status: Node nodo1 is online
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op mailsrv-rm_start_0 on nodo1: unknown exec error
Aug 18 13:12:18 nodo1 pengine: [8478]: info: determine_online_status: Node nodo3 is online
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op XenimageFS:1_start_0 on nodo3: unknown exec error
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op XenimageFS:1_stop_0 on nodo3: unknown exec error
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op XencfgFS:1_start_0 on nodo3: unknown exec error
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op XencfgFS:1_stop_0 on nodo3: unknown exec error
Aug 18 13:12:18 nodo1 pengine: [8478]: info: unpack_status: Node nodo4 is in standby-mode
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: clone_print: Clone Set: dlm-clone
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: dlm:0#011(ocf::pacemaker:controld):#011Started nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: dlm:1#011(ocf::pacemaker:controld):#011Started nodo3
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: dlm:2#011(ocf::pacemaker:controld):#011Stopped
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: clone_print: Clone Set: o2cb-clone
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: o2cb:0#011(ocf::ocfs2:o2cb):#011Started nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: o2cb:1#011(ocf::ocfs2:o2cb):#011Started nodo3
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: o2cb:2#011(ocf::ocfs2:o2cb):#011Stopped
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: clone_print: Clone Set: XencfgFS-Clone
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: XencfgFS:0#011(ocf::heartbeat:Filesystem):#011Started nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: XencfgFS:1#011(ocf::heartbeat:Filesystem):#011Started nodo3
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: XencfgFS:2#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: clone_print: Clone Set: XenimageFS-Clone
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: XenimageFS:0#011(ocf::heartbeat:Filesystem):#011Started nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: XenimageFS:1#011(ocf::heartbeat:Filesystem):#011Started nodo3 FAILED
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: XenimageFS:2#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: rsa1-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo3
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: rsa2-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: rsa3-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: rsa4-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:12:18 nodo1 crmd: [8479]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: mailsrv-rm#011(ocf::heartbeat:Xen):#011Stopped
Aug 18 13:12:18 nodo1 crmd: [8479]: info: unpack_graph: Unpacked transition 21: 16 actions in 16 synapses
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: dbsrv-rm#011(ocf::heartbeat:Xen):#011Started nodo1
Aug 18 13:12:18 nodo1 crmd: [8479]: info: do_te_invoke: Processing graph 21 (ref=pe_calc-dc-1282129938-140) derived from /var/lib/pengine/pe-warn-2304.bz2
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: native_print: websrv-rm#011(ocf::heartbeat:Xen):#011Started nodo1
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 21 fired and confirmed
Aug 18 13:12:18 nodo1 pengine: [8478]: info: get_failcount: mailsrv-rm has failed 1000000 times on nodo1
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: common_apply_stickiness: Forcing mailsrv-rm away from nodo1 after 1000000 failures (max=1000000)
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 42 fired and confirmed
Aug 18 13:12:18 nodo1 pengine: [8478]: info: get_failcount: XencfgFS-Clone has failed 1000000 times on nodo3
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: common_apply_stickiness: Forcing XencfgFS-Clone away from nodo3 after 1000000 failures (max=1000000)
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 55 fired and confirmed
Aug 18 13:12:18 nodo1 pengine: [8478]: info: get_failcount: XenimageFS-Clone has failed 1000000 times on nodo3
Aug 18 13:12:18 nodo1 crmd: [8479]: info: te_fence_node: Executing poweroff fencing operation (56) on nodo3 (timeout=60000)
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: common_apply_stickiness: Forcing XenimageFS-Clone away from nodo3 after 1000000 failures (max=1000000)
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource dlm:1 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource dlm:2 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource o2cb:2 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource XencfgFS:2 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource XencfgFS:1 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource XenimageFS:2 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource XenimageFS:1 cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource rsa1-fencing cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_color: Resource mailsrv-rm cannot run anywhere
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: stage6: Scheduling Node nodo3 for STONITH
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_start_constraints: Ordering dlm:0_start_0 after nodo3 recovery
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_stop_constraints: dlm:1_stop_0 is implicit after nodo3 is fenced
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_start_constraints: Ordering o2cb:0_start_0 after nodo3 recovery
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_stop_constraints: o2cb:1_stop_0 is implicit after nodo3 is fenced
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_start_constraints: Ordering XencfgFS:0_start_0 after nodo3 recovery
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_stop_constraints: XencfgFS:1_stop_0 is implicit after nodo3 is fenced
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_start_constraints: Ordering XenimageFS:0_start_0 after nodo3 recovery
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: native_stop_constraints: Stop of failed resource XenimageFS:1 is implicit after nodo3 is fenced
Aug 18 13:12:18 nodo1 pengine: [8478]: info: native_stop_constraints: rsa1-fencing_stop_0 is implicit after nodo3 is fenced
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with dlm:0 on nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating dlm:0 with o2cb:0 on nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating dlm:1 with o2cb:1 on nodo3
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating XencfgFS:0 with o2cb:0 on nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with XencfgFS:0 on nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:1 with XencfgFS:1 on nodo3
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating XenimageFS:0 with o2cb:0 on nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with XenimageFS:0 on nodo1
Aug 18 13:12:18 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:1 with XenimageFS:1 on nodo3
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:0#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Stop resource dlm:1#011(nodo3)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:2#011(Stopped)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:0#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Stop resource o2cb:1#011(nodo3)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:2#011(Stopped)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource XencfgFS:0#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Stop resource XencfgFS:1#011(nodo3)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource XencfgFS:2#011(Stopped)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource XenimageFS:0#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Stop resource XenimageFS:1#011(nodo3)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource XenimageFS:2#011(Stopped)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Stop resource rsa1-fencing#011(nodo3)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa2-fencing#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa3-fencing#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa4-fencing#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource mailsrv-rm#011(Stopped)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource dbsrv-rm#011(Started nodo1)
Aug 18 13:12:18 nodo1 pengine: [8478]: notice: LogActions: Leave resource websrv-rm#011(Started nodo1)
Aug 18 13:12:18 nodo1 stonithd: [8474]: info: client tengine [pid: 8479] requests a STONITH operation POWEROFF on node nodo3
Aug 18 13:12:18 nodo1 stonithd: [8474]: info: stonith_operate_locally::2678: sending fencing op POWEROFF for nodo3 to rsa3-fencing (external/ibmrsa-telnet) (pid=18612)
Aug 18 13:12:18 nodo1 pengine: [8478]: WARN: process_pe_message: Transition 21: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-2304.bz2
Aug 18 13:12:18 nodo1 pengine: [8478]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Aug 18 13:12:18 nodo1 mgmtd: [8480]: info: CIB query: cib
Aug 18 13:12:20 nodo1 stonithd: [8474]: info: Succeeded to STONITH the node nodo3: optype=POWEROFF. whodoit: nodo1
Aug 18 13:12:20 nodo1 crmd: [8479]: info: tengine_stonith_callback: call=18612, optype=3, node_name=nodo3, result=0, node_list=nodo1, action=56:21:0:6cb80fad-9035-478a-b4b2-e58245c05eb5
Aug 18 13:12:20 nodo1 crmd: [8479]: info: erase_status_tag: Erasing //node_state[@uname='nodo3']/lrm
Aug 18 13:12:20 nodo1 crmd: [8479]: info: erase_status_tag: Erasing //node_state[@uname='nodo3']/transient_attributes
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 32 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 39 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 43 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 25 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 18 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 22 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
Aug 18 13:12:20 nodo1 crmd: [8479]: info: run_graph: ====================================================
Aug 18 13:12:20 nodo1 crmd: [8479]: notice: run_graph: Transition 21 (Complete=16, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-2304.bz2): Complete
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_graph_trigger: Transition 21 is now complete
Aug 18 13:12:20 nodo1 crmd: [8479]: info: notify_crmd: Transition 21 status: done -
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_state_transition: Starting PEngine Recheck Timer
Aug 18 13:12:20 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:12:20 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_delete op for //node_state[@uname='nodo3']/lrm (/cib/status/node_state[2]/lrm)
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_update_diff: No match for deleted action //diff-added//cib//lrm_rsc_op[@id='o2cb:1_monitor_0'] (o2cb:1_monitor_0 on nodo3)
Aug 18 13:12:20 nodo1 crmd: [8479]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=o2cb:1_monitor_0, magic=0:7;15:19:7:6cb80fad-9035-478a-b4b2-e58245c05eb5) : Resource op removal
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_pe_invoke: Query 277: Requesting the current CIB: S_POLICY_ENGINE
Aug 18 13:12:20 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='nodo3']/lrm (origin=local/crmd/275): ok (rc=0)
Aug 18 13:12:20 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:12:20 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_delete op for //node_state[@uname='nodo3']/transient_attributes (/cib/status/node_state[2]/transient_attributes)
Aug 18 13:12:20 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:12:20 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='nodo3']/transient_attributes (origin=local/crmd/276): ok (rc=0)
Aug 18 13:12:20 nodo1 crmd: [8479]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=nodo3, magic=NA) : Transient attribute: removal
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_pe_invoke: Query 278: Requesting the current CIB: S_POLICY_ENGINE
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1282129940-141, seq=235504, quorate=1
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: unpack_config: On loss of CCM Quorum: Ignore
Aug 18 13:12:20 nodo1 pengine: [8478]: info: determine_online_status: Node nodo1 is online
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: unpack_rsc_op: Processing failed op mailsrv-rm_start_0 on nodo1: unknown exec error
Aug 18 13:12:20 nodo1 pengine: [8478]: info: unpack_status: Node nodo4 is in standby-mode
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: clone_print: Clone Set: dlm-clone
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: dlm:0#011(ocf::pacemaker:controld):#011Started nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: dlm:1#011(ocf::pacemaker:controld):#011Stopped
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: dlm:2#011(ocf::pacemaker:controld):#011Stopped
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: clone_print: Clone Set: o2cb-clone
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: o2cb:0#011(ocf::ocfs2:o2cb):#011Started nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: o2cb:1#011(ocf::ocfs2:o2cb):#011Stopped
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: o2cb:2#011(ocf::ocfs2:o2cb):#011Stopped
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: clone_print: Clone Set: XencfgFS-Clone
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: XencfgFS:0#011(ocf::heartbeat:Filesystem):#011Started nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: XencfgFS:1#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: XencfgFS:2#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: clone_print: Clone Set: XenimageFS-Clone
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: XenimageFS:0#011(ocf::heartbeat:Filesystem):#011Started nodo1
Aug 18 13:12:20 nodo1 crmd: [8479]: info: unpack_graph: Unpacked transition 22: 0 actions in 0 synapses
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: XenimageFS:1#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_te_invoke: Processing graph 22 (ref=pe_calc-dc-1282129940-141) derived from /var/lib/pengine/pe-warn-2305.bz2
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: XenimageFS:2#011(ocf::heartbeat:Filesystem):#011Stopped
Aug 18 13:12:20 nodo1 crmd: [8479]: info: run_graph: ====================================================
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: rsa1-fencing#011(stonith:external/ibmrsa-telnet):#011Stopped
Aug 18 13:12:20 nodo1 crmd: [8479]: notice: run_graph: Transition 22 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-2305.bz2): Complete
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: rsa2-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:12:20 nodo1 crmd: [8479]: info: te_graph_trigger: Transition 22 is now complete
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: rsa3-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:12:20 nodo1 crmd: [8479]: info: notify_crmd: Transition 22 status: done -
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: rsa4-fencing#011(stonith:external/ibmrsa-telnet):#011Started nodo1
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: mailsrv-rm#011(ocf::heartbeat:Xen):#011Stopped
Aug 18 13:12:20 nodo1 crmd: [8479]: info: do_state_transition: Starting PEngine Recheck Timer
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: dbsrv-rm#011(ocf::heartbeat:Xen):#011Started nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: native_print: websrv-rm#011(ocf::heartbeat:Xen):#011Started nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: info: get_failcount: mailsrv-rm has failed 1000000 times on nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: common_apply_stickiness: Forcing mailsrv-rm away from nodo1 after 1000000 failures (max=1000000)
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource dlm:1 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource dlm:2 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource o2cb:1 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource o2cb:2 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource XencfgFS:1 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource XencfgFS:2 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource XenimageFS:1 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource XenimageFS:2 cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource rsa1-fencing cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: native_color: Resource mailsrv-rm cannot run anywhere
Aug 18 13:12:20 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with dlm:0 on nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: info: find_compatible_child: Colocating dlm:0 with o2cb:0 on nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: info: find_compatible_child: Colocating XencfgFS:0 with o2cb:0 on nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with XencfgFS:0 on nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: info: find_compatible_child: Colocating XenimageFS:0 with o2cb:0 on nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: info: find_compatible_child: Colocating o2cb:0 with XenimageFS:0 on nodo1
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:0#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:1#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource dlm:2#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:0#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:1#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource o2cb:2#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource XencfgFS:0#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource XencfgFS:1#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource XencfgFS:2#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource XenimageFS:0#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource XenimageFS:1#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource XenimageFS:2#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa1-fencing#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa2-fencing#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa3-fencing#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource rsa4-fencing#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource mailsrv-rm#011(Stopped)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource dbsrv-rm#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: notice: LogActions: Leave resource websrv-rm#011(Started nodo1)
Aug 18 13:12:20 nodo1 pengine: [8478]: WARN: process_pe_message: Transition 22: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-2305.bz2
Aug 18 13:12:20 nodo1 pengine: [8478]: info: process_pe_message: Configuration WARNINGs found during PE processing. Please run "crm_verify -L" to identify issues.
Aug 18 13:12:20 nodo1 mgmtd: [8480]: info: CIB query: cib
Aug 18 13:12:28 nodo1 openais[8462]: [TOTEM] The token was lost in the OPERATIONAL state.
Aug 18 13:12:28 nodo1 openais[8462]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes).
Aug 18 13:12:28 nodo1 openais[8462]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
Aug 18 13:12:28 nodo1 openais[8462]: [TOTEM] entering GATHER state from 2.
Aug 18 13:12:31 nodo1 cluster-dlm: update_cluster: Processing membership 235508
Aug 18 13:12:31 nodo1 cluster-dlm: dlm_process_node: Skipped active node 1778493632 'nodo1': born-on=235504, last-seen=235508, this-event=235508, last-event=235504
Aug 18 13:12:31 nodo1 cluster-dlm: dlm_process_node: Skipped inactive node 1812048064 'nodo3': born-on=235504, last-seen=235504, this-event=235508, last-event=235504
Aug 18 13:12:31 nodo1 cluster-dlm: add_change: add_change cg 3 remove nodeid 1812048064 reason 3
Aug 18 13:12:31 nodo1 cluster-dlm: add_change: add_change cg 3 counts member 1 joined 0 remove 1 failed 1
Aug 18 13:12:31 nodo1 cluster-dlm: stop_kernel: stop_kernel cg 3
Aug 18 13:12:31 nodo1 cluster-dlm: do_sysfs: write "0" to "/sys/kernel/dlm/0BB443F896254AD3BA8FB960C425B666/control"
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] entering GATHER state from 0.
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] Creating commit token because I am the rep.
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] Saving state aru e1 high seq received e1
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] Storing new sequence id for ring 397f4
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] entering COMMIT state.
Aug 18 13:12:31 nodo1 cib: [8475]: notice: ais_dispatch: Membership 235508: quorum lost
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] entering RECOVERY state.
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] position [0] member 192.168.1.106:
Aug 18 13:12:31 nodo1 crmd: [8479]: notice: ais_dispatch: Membership 235508: quorum lost
Aug 18 13:12:31 nodo1 cluster-dlm: purge_plocks: purged 0 plocks for 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: [8721]: notice: ais_dispatch: Membership 235508: quorum lost
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] previous ring seq 235504 rep 192.168.1.106
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] aru e1 high delivered e1 received flag 1
Aug 18 13:12:31 nodo1 ocfs2_controld: [8786]: notice: ais_dispatch: Membership 235508: quorum lost
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] Did not need to originate any messages in recovery.
Aug 18 13:12:31 nodo1 cib: [8475]: info: crm_update_peer: Node nodo3: id=1812048064 state=lost (new) addr=r(0) ip(192.168.1.108) votes=1 born=235504 seen=235504 proc=00000000000000000000000000053312
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] Sending initial ORF token
Aug 18 13:12:31 nodo1 crmd: [8479]: info: ais_status_callback: status: nodo3 is now lost (was member)
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] CLM CONFIGURATION CHANGE
Aug 18 13:12:31 nodo1 cluster-dlm: [8721]: info: crm_update_peer: Node nodo3: id=1812048064 state=lost (new) addr=r(0) ip(192.168.1.108) votes=1 born=235504 seen=235504 proc=00000000000000000000000000053312
Aug 18 13:12:31 nodo1 ocfs2_controld: [8786]: info: crm_update_peer: Node nodo3: id=1812048064 state=lost (new) addr=r(0) ip(192.168.1.108) votes=1 born=235504 seen=235504 proc=00000000000000000000000000053312
Aug 18 13:12:31 nodo1 cib: [8475]: info: ais_dispatch: Membership 235508: quorum still lost
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] New Configuration:
Aug 18 13:12:31 nodo1 crmd: [8479]: info: crm_update_peer: Node nodo3: id=1812048064 state=lost (new) addr=r(0) ip(192.168.1.108) votes=1 born=235504 seen=235504 proc=00000000000000000000000000053312
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] #011r(0) ip(192.168.1.106)
Aug 18 13:12:31 nodo1 cluster-dlm: [8721]: info: ais_dispatch: Membership 235508: quorum still lost
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] Members Left:
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] #011r(0) ip(192.168.1.108)
Aug 18 13:12:31 nodo1 crmd: [8479]: info: erase_node_from_join: Removed node nodo3 from join calculations: welcomed=0 itegrated=0 finalized=0 confirmed=1
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] Members Joined:
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] notice: pcmk_peer_update: Transitional membership event on ring 235508: memb=1, new=0, lost=1
Aug 18 13:12:31 nodo1 crmd: [8479]: info: crm_update_quorum: Updating quorum status to false (call=281)
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] info: pcmk_peer_update: memb: nodo1 1778493632
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] info: pcmk_peer_update: lost: nodo3 1812048064
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] CLM CONFIGURATION CHANGE
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] New Configuration:
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] #011r(0) ip(192.168.1.106)
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] Members Left:
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] Members Joined:
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] notice: pcmk_peer_update: Stable membership event on ring 235508: memb=1, new=0, lost=0
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] info: pcmk_peer_update: MEMB: nodo1 1778493632
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] info: ais_mark_unseen_peer_dead: Node nodo3 was not seen in the previous transition
Aug 18 13:12:31 nodo1 openais[8462]: [MAIN ] info: update_member: Node 1812048064/nodo3 is now: lost
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] info: send_member_notification: Sending membership update 235508 to 4 children
Aug 18 13:12:31 nodo1 openais[8462]: [MAIN ] info: update_member: 0x7f12080009a0 Node 1778493632 ((null)) born on: 235504
Aug 18 13:12:31 nodo1 openais[8462]: [SYNC ] This node is within the primary component and will provide service.
Aug 18 13:12:31 nodo1 openais[8462]: [TOTEM] entering OPERATIONAL state.
Aug 18 13:12:31 nodo1 openais[8462]: [MAIN ] info: update_member: 0x7f12080009a0 Node 1778493632 (nodo1) born on: 235504
Aug 18 13:12:31 nodo1 openais[8462]: [crm ] info: send_member_notification: Sending membership update 235508 to 4 children
Aug 18 13:12:31 nodo1 openais[8462]: [CLM ] got nodejoin message 192.168.1.106
Aug 18 13:12:31 nodo1 openais[8462]: [CPG ] got joinlist message from node 1778493632
Aug 18 13:12:31 nodo1 ocfs2_controld: [8786]: info: ais_dispatch: Membership 235508: quorum still lost
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/279): ok (rc=0)
Aug 18 13:12:31 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_config_changed: Attr changes
Aug 18 13:12:31 nodo1 haclient: on_event:evt:cib_changed
Aug 18 13:12:31 nodo1 cib: [8475]: info: log_data_element: cib:diff: -
Aug 18 13:12:31 nodo1 cib: [8475]: info: log_data_element: cib:diff: +
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/281): ok (rc=0)
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='expected-quorum-votes'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Aug 18 13:12:31 nodo1 crmd: [8479]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Aug 18 13:12:31 nodo1 crmd: [8479]: info: need_abort: Aborting on change to have-quorum
Aug 18 13:12:31 nodo1 crmd: [8479]: info: ais_dispatch: Membership 235508: quorum still lost
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/283): ok (rc=0)
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/284): ok (rc=0)
Aug 18 13:12:31 nodo1 cluster-dlm: fence_node_time: Node 1812048064/nodo3 was last shot 'now'
Aug 18 13:12:31 nodo1 cluster-dlm: fence_node_time: It does not appear node 1812048064/nodo3 has been shot
Aug 18 13:12:31 nodo1 cluster-dlm: check_fencing_done: check_fencing 1812048064 1282129898 fenced at 1282129951
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//nvpair[@name='expected-quorum-votes'] (/cib/configuration/crm_config/cluster_property_set/nvpair[2])
Aug 18 13:12:31 nodo1 crmd: [8479]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Aug 18 13:12:31 nodo1 crmd: [8479]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Aug 18 13:12:31 nodo1 crmd: [8479]: info: do_pe_invoke: Query 288: Requesting the current CIB: S_POLICY_ENGINE
Aug 18 13:12:31 nodo1 cib: [8475]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/287): ok (rc=0)
Aug 18 13:12:31 nodo1 cluster-dlm: check_fencing_done: check_fencing done
Aug 18 13:12:31 nodo1 cluster-dlm: check_quorum_done: check_quorum disabled
Aug 18 13:12:31 nodo1 cluster-dlm: check_fs_done: check_fs nodeid 1812048064 needs fs notify
Aug 18 13:12:31 nodo1 cluster-dlm: check_fencing_done: check_fencing done
Aug 18 13:12:31 nodo1 cluster-dlm: check_quorum_done: check_quorum disabled
Aug 18 13:12:31 nodo1 cluster-dlm: check_fs_done: check_fs nodeid 1812048064 needs fs notify
Aug 18 13:12:31 nodo1 cluster-dlm: check_fencing_done: check_fencing done
Aug 18 13:12:31 nodo1 cluster-dlm: check_quorum_done: check_quorum disabled
Aug 18 13:12:31 nodo1 cluster-dlm: check_fs_done: check_fs done
Aug 18 13:12:31 nodo1 cluster-dlm: send_info: send_start cg 3 flags 2 counts 2 1 0 1 1
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: receive_start: receive_start 1778493632:3 len 76
Aug 18 13:12:31 nodo1 cluster-dlm: match_change: match_change 1778493632:3 matches cg 3
Aug 18 13:12:31 nodo1 cluster-dlm: wait_messages_done: wait_messages cg 3 got all 1
Aug 18 13:12:31 nodo1 cluster-dlm: start_kernel: start_kernel cg 3 member_count 1
Aug 18 13:12:31 nodo1 cluster-dlm: update_dir_members: dir_member 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: update_dir_members: dir_member 1778493632
Aug 18 13:12:31 nodo1 cluster-dlm: set_configfs_members: set_members rmdir "/sys/kernel/config/dlm/cluster/spaces/0BB443F896254AD3BA8FB960C425B666/nodes/1812048064"
Aug 18 13:12:31 nodo1 cluster-dlm: do_sysfs: write "1" to "/sys/kernel/dlm/0BB443F896254AD3BA8FB960C425B666/control"
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064
Aug 18 13:12:31 nodo1 cluster-dlm: set_fs_notified: set_fs_notified no nodeid 1812048064