nebula1 is DC
=============

Start corosync
--------------

Nov 25 10:48:43 nebula2 corosync[5083]: [MAIN ] Corosync Cluster Engine ('2.3.3'): started and ready to provide service.
Nov 25 10:48:43 nebula2 corosync[5083]: [MAIN ] Corosync built-in features: dbus testagents rdma watchdog augeas pie relro bindnow
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] Initializing transport (UDP/IP Unicast).
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha256
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] The network interface [192.168.231.71] is now up.
Nov 25 10:48:43 nebula2 corosync[5084]: [SERV ] Service engine loaded: corosync configuration map access [0]
Nov 25 10:48:43 nebula2 corosync[5084]: [QB ] server name: cmap
Nov 25 10:48:43 nebula2 corosync[5084]: [SERV ] Service engine loaded: corosync configuration service [1]
Nov 25 10:48:43 nebula2 corosync[5084]: [QB ] server name: cfg
Nov 25 10:48:43 nebula2 corosync[5084]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 25 10:48:43 nebula2 corosync[5084]: [QB ] server name: cpg
Nov 25 10:48:43 nebula2 corosync[5084]: [SERV ] Service engine loaded: corosync profile loading service [4]
Nov 25 10:48:43 nebula2 corosync[5084]: [WD ] No Watchdog, try modprobe
Nov 25 10:48:43 nebula2 corosync[5084]: [WD ] no resources configured.
Nov 25 10:48:43 nebula2 corosync[5084]: [SERV ] Service engine loaded: corosync watchdog service [7]
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Using quorum provider corosync_votequorum
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 10:48:43 nebula2 corosync[5084]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 25 10:48:43 nebula2 corosync[5084]: [QB ] server name: votequorum
Nov 25 10:48:43 nebula2 corosync[5084]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 25 10:48:43 nebula2 corosync[5084]: [QB ] server name: quorum
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] adding new UDPU member {192.168.231.70}
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] adding new UDPU member {192.168.231.71}
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] adding new UDPU member {192.168.231.72}
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] adding new UDPU member {192.168.231.110}
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] adding new UDPU member {192.168.231.111}
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] A new membership (192.168.231.71:81384) was formed. Members joined: 1084811079
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 10:48:43 nebula2 corosync[5084]: message repeated 2 times: [ [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3]
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Members[1]: 1084811079
Nov 25 10:48:43 nebula2 corosync[5084]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] A new membership (192.168.231.70:81388) was formed. Members joined: 1084811078
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Waiting for all cluster members. Current votes: 2 expected_votes: 3
Nov 25 10:48:43 nebula2 corosync[5084]: message repeated 2 times: [ [QUORUM] Waiting for all cluster members. Current votes: 2 expected_votes: 3]
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Members[2]: 1084811078 1084811079
Nov 25 10:48:43 nebula2 corosync[5084]: [MAIN ] Completed service synchronization, ready to provide service.
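A side note on the nodeids in these membership lines: they correspond directly to the UDPU member addresses. In this capture each auto-assigned nodeid is the 32-bit IPv4 address with the top bit cleared (192.168.231.71 -> 1084811079, which clvmd later prints in hex as node 40a8e747). A quick sketch verifying the correspondence; the masking rule is inferred from the values in this log rather than quoted from corosync's totemip code:

```python
import ipaddress

def nodeid_from_ip(ip: str) -> int:
    """Nodeid as observed in this log: the IPv4 address with the high bit cleared."""
    return int(ipaddress.IPv4Address(ip)) & 0x7FFFFFFF

for ip in ("192.168.231.70", "192.168.231.71", "192.168.231.72"):
    nodeid = nodeid_from_ip(ip)
    print(ip, nodeid, format(nodeid, "x"))   # e.g. 192.168.231.71 1084811079 40a8e747
```

This makes it easy to map the decimal nodeids in the corosync/dlm lines onto the hex node ids in the clvmd trace further down.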
Nov 25 10:48:43 nebula2 corosync[5084]: [TOTEM ] A new membership (192.168.231.70:81392) was formed. Members joined: 1084811080
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Waiting for all cluster members. Current votes: 2 expected_votes: 3
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] This node is within the primary component and will provide service.
Nov 25 10:48:43 nebula2 corosync[5084]: [QUORUM] Members[3]: 1084811078 1084811079 1084811080
Nov 25 10:48:43 nebula2 corosync[5084]: [MAIN ] Completed service synchronization, ready to provide service.

Start pacemaker
---------------

Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: mcp_read_config: Configured corosync to accept connections from group 113: OK (1)
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: main: Starting Pacemaker 1.1.10 (Build: 42f2063): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart nagios heartbeat corosync-native snmp libesmtp
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811078] - state is now member (was (null))
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now member (was (null))
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:26 nebula2 pacemakerd[5093]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811080] - state is now member (was (null))
Nov 25 10:49:26 nebula2 attrd[5098]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:26 nebula2 crmd[5100]: notice: main: CRM Git Version: 42f2063
Nov 25 10:49:26 nebula2 stonith-ng[5096]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:26 nebula2 attrd[5098]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:26 nebula2 attrd[5098]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:26 nebula2 attrd[5098]: notice: main: Starting mainloop...
Nov 25 10:49:26 nebula2 stonith-ng[5096]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:26 nebula2 stonith-ng[5096]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:26 nebula2 cib[5095]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:26 nebula2 cib[5095]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:26 nebula2 cib[5095]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:27 nebula2 crmd[5100]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:27 nebula2 crmd[5100]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:27 nebula2 crmd[5100]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:27 nebula2 crmd[5100]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 10:49:27 nebula2 stonith-ng[5096]: notice: setup_cib: Watching for stonith topology changes
Nov 25 10:49:27 nebula2 stonith-ng[5096]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:27 nebula2 stonith-ng[5096]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:27 nebula2 crmd[5100]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:27 nebula2 crmd[5100]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811078] - state is now member (was (null))
Nov 25 10:49:27 nebula2 crmd[5100]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now member (was (null))
Nov 25 10:49:27 nebula2 crmd[5100]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:27 nebula2 crmd[5100]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811080] - state is now member (was (null))
Nov 25 10:49:27 nebula2 crmd[5100]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:27 nebula2 crmd[5100]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:27 nebula2 crmd[5100]: notice: do_started: The local CRM is operational
Nov 25 10:49:27 nebula2 crmd[5100]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 25 10:49:28 nebula2 stonith-ng[5096]: notice: stonith_device_register: Added 'Stonith-nebula1-IPMILAN' to the device list (1 active devices)
Nov 25 10:49:29 nebula2 stonith-ng[5096]: notice: stonith_device_register: Added 'Stonith-nebula3-IPMILAN' to the device list (2 active devices)
Nov 25 10:49:30 nebula2 stonith-ng[5096]: notice: stonith_device_register: Added 'Stonith-ONE-Frontend' to the device list (3 active devices)
Nov 25 10:49:31 nebula2 stonith-ng[5096]: notice: stonith_device_register: Added 'Stonith-Quorum-Node' to the device list (4 active devices)
Nov 25 10:49:43 nebula2 cib[5095]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:43 nebula2 cib[5095]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:43 nebula2 crmd[5100]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 25 10:49:43 nebula2 attrd[5098]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Nov 25 10:49:43 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-nebula1-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can fence one-frontend: static-list
Nov 25 10:49:43 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-Quorum-Node can not fence one-frontend: static-list
Nov 25 10:49:43 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-nebula3-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation dlm_monitor_0 (call=36, rc=7, cib-update=13, confirmed=true) not running
Nov 25 10:49:43 nebula2 Filesystem(ONE-Datastores)[5146]: WARNING: Couldn't find device [/dev/one-fs/datastores]. Expected /dev/??? to exist
Nov 25 10:49:43 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation clvm_monitor_0 (call=41, rc=7, cib-update=14, confirmed=true) not running
Nov 25 10:49:43 nebula2 LVM(ONE-vg)[5125]: WARNING: LVM Volume one-fs is not available (stopped)
Nov 25 10:49:43 nebula2 LVM(ONE-vg)[5125]: INFO: LVM Volume one-fs is offline
Nov 25 10:49:43 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation ONE-vg_monitor_0 (call=46, rc=7, cib-update=15, confirmed=true) not running
Nov 25 10:49:43 nebula2 attrd[5098]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:43 nebula2 attrd[5098]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:43 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation ONE-Datastores_monitor_0 (call=54, rc=7, cib-update=16, confirmed=true) not running
Nov 25 10:49:43 nebula2 VirtualDomain(ONE-Frontend-VM)[5121]: INFO: Configuration file /var/lib/one/datastores/one/one.xml not readable during probe.
Nov 25 10:49:43 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation ONE-Frontend-VM_monitor_0 (call=23, rc=7, cib-update=17, confirmed=true) not running
Nov 25 10:49:43 nebula2 VirtualDomain(Quorum-Node-VM)[5122]: INFO: Domain name "quorum" saved to /var/run/resource-agents/VirtualDomain-Quorum-Node-VM.state.
Nov 25 10:49:43 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation Quorum-Node-VM_monitor_0 (call=31, rc=7, cib-update=18, confirmed=true) not running
Nov 25 10:49:43 nebula2 attrd[5098]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Nov 25 10:49:43 nebula2 attrd[5098]: notice: attrd_perform_update: Sent update 6: probe_complete=true
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: remote_op_done: Operation reboot of one-frontend by nebula1 for crmd.5038@nebula1.98290389: OK
Nov 25 10:49:46 nebula2 crmd[5100]: notice: tengine_stonith_notify: Peer one-frontend was terminated (reboot) by nebula1 for nebula1: OK (ref=98290389-2232-4ba1-b4dd-ed4a4d7f46a1) by client crmd.5038
Nov 25 10:49:46 nebula2 crmd[5100]: notice: crm_update_peer_state: tengine_stonith_notify: Node one-frontend[0] - state is now lost (was (null))
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-nebula1-IPMILAN can not fence quorum: static-list
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can not fence quorum: static-list
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-Quorum-Node can fence quorum: static-list
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-nebula3-IPMILAN can not fence quorum: static-list
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-nebula1-IPMILAN can not fence quorum: static-list
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can not fence quorum: static-list
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-Quorum-Node can fence quorum: static-list
Nov 25 10:49:46 nebula2 stonith-ng[5096]: notice: can_fence_host_with_device: Stonith-nebula3-IPMILAN can not fence quorum: static-list
Nov 25 10:49:46 nebula2 external/libvirt[5284]: notice: Domain quorum is already stopped
Nov 25 10:49:49 nebula2 kernel: [ 212.548164] type=1400 audit(1416908989.278:26): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-77a9d2b2-6655-42f9-a4df-996aadf0eeff" pid=5303 comm="apparmor_parser"
Nov 25 10:49:49 nebula2 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port quorum-admin -- add-port rectorat quorum-admin tag=702 -- set Interface quorum-admin "external-ids:attached-mac=\"52:54:00:c2:7f:53\"" -- set Interface quorum-admin "external-ids:iface-id=\"1bac8d1a-29b7-478c-a19c-9a0ddc819e28\"" -- set Interface quorum-admin "external-ids:vm-id=\"77a9d2b2-6655-42f9-a4df-996aadf0eeff\"" -- set Interface quorum-admin external-ids:iface-status=active
Nov 25 10:49:49 nebula2 kernel: [ 212.638004] device quorum-admin entered promiscuous mode
Nov 25 10:49:49 nebula2 kernel: [ 212.701177] cgroup: libvirtd (4231) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
Nov 25 10:49:49 nebula2 kernel: [ 212.701181] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
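The rc=7 results in the probe lines above are not failures: for a `*_monitor_0` probe, OCF return code 7 (OCF_NOT_RUNNING) just confirms the resource is stopped on this node before the cluster starts it. A small illustrative parser for these process_lrm_event lines (a hypothetical helper, not part of Pacemaker):

```python
import re

# Matches crmd's "process_lrm_event: LRM operation ..." lines as seen in the log.
LRM_RE = re.compile(
    r"LRM operation (?P<op>\S+) \(call=(?P<call>\d+), rc=(?P<rc>\d+), "
    r"cib-update=\d+, confirmed=(?P<confirmed>\w+)\) (?P<status>.+)$"
)

def parse_lrm(line):
    """Extract operation name, call id, OCF rc and status text, or None."""
    m = LRM_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d["call"], d["rc"] = int(d["call"]), int(d["rc"])
    return d

probe = ("Nov 25 10:49:43 nebula2 crmd[5100]: notice: process_lrm_event: "
         "LRM operation dlm_monitor_0 (call=36, rc=7, cib-update=13, "
         "confirmed=true) not running")
info = parse_lrm(probe)
print(info["op"], info["rc"], info["status"])   # dlm_monitor_0 7 not running
```

Filtering a full log this way makes it easy to separate the expected probe results (rc=7 on monitor_0) from genuine start/monitor failures.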
Nov 25 10:49:49 nebula2 external/libvirt[5284]: notice: Domain quorum was started
Nov 25 10:49:50 nebula2 kernel: [ 213.537042] kvm: zapping shadow pages for mmio generation wraparound
Nov 25 10:49:50 nebula2 stonith-ng[5096]: notice: log_operation: Operation 'reboot' [5276] (call 3 from crmd.5038) for host 'quorum' with device 'Stonith-Quorum-Node' returned: 0 (OK)
Nov 25 10:49:50 nebula2 stonith-ng[5096]: error: crm_abort: crm_glib_handler: Forked child 5390 to record non-fatal assert at logging.c:63 : Source ID 15 was not found when attempting to remove it
Nov 25 10:49:50 nebula2 stonith-ng[5096]: error: crm_abort: crm_glib_handler: Forked child 5391 to record non-fatal assert at logging.c:63 : Source ID 16 was not found when attempting to remove it
Nov 25 10:49:50 nebula2 stonith-ng[5096]: notice: remote_op_done: Operation reboot of quorum by nebula1 for crmd.5038@nebula1.281581d8: OK
Nov 25 10:49:50 nebula2 crmd[5100]: notice: tengine_stonith_notify: Peer quorum was terminated (reboot) by nebula1 for nebula1: OK (ref=281581d8-fecf-4588-808f-1ead04b7cdd6) by client crmd.5038
Nov 25 10:49:50 nebula2 crmd[5100]: notice: crm_update_peer_state: tengine_stonith_notify: Node quorum[0] - state is now lost (was (null))
Nov 25 10:49:50 nebula2 stonith-ng[5096]: notice: stonith_device_register: Device 'Stonith-nebula1-IPMILAN' already existed in device list (4 active devices)
Nov 25 10:49:52 nebula2 stonith-ng[5096]: error: crm_abort: crm_glib_handler: Forked child 5407 to record non-fatal assert at logging.c:63 : Source ID 18 was not found when attempting to remove it
Nov 25 10:49:52 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation Stonith-nebula1-IPMILAN_start_0 (call=62, rc=0, cib-update=19, confirmed=true) ok
Nov 25 10:49:52 nebula2 stonith-ng[5096]: error: crm_abort: crm_glib_handler: Forked child 5408 to record non-fatal assert at logging.c:63 : Source ID 19 was not found when attempting to remove it
Nov 25 10:49:52 nebula2 ntpd[5075]: Listen normally on 9 quorum-admin fe80::fc54:ff:fec2:7f53 UDP 123
Nov 25 10:49:52 nebula2 ntpd[5075]: peers refreshed
Nov 25 10:49:52 nebula2 ntpd[5075]: new interface(s) found: waking up resolver
Nov 25 10:49:53 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation Stonith-nebula1-IPMILAN_monitor_1800000 (call=65, rc=0, cib-update=20, confirmed=false) ok
Nov 25 10:49:53 nebula2 stonith-ng[5096]: error: crm_abort: crm_glib_handler: Forked child 5423 to record non-fatal assert at logging.c:63 : Source ID 20 was not found when attempting to remove it
Nov 25 10:49:53 nebula2 stonith-ng[5096]: error: crm_abort: crm_glib_handler: Forked child 5424 to record non-fatal assert at logging.c:63 : Source ID 21 was not found when attempting to remove it

Start resource ONE-Storage-Clone
--------------------------------

Nov 25 10:50:47 nebula2 kernel: [ 271.044567] sctp: Hash tables configured (established 65536 bind 65536)
Nov 25 10:50:47 nebula2 kernel: [ 271.081866] DLM installed
Nov 25 10:50:47 nebula2 dlm_controld[5455]: 271 dlm_controld 4.0.1 started
Nov 25 10:50:48 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation dlm_start_0 (call=68, rc=0, cib-update=21, confirmed=true) ok
Nov 25 10:50:48 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation dlm_monitor_60000 (call=71, rc=0, cib-update=22, confirmed=false) ok
Nov 25 10:50:48 nebula2 clvmd(clvm)[5464]: INFO: Starting clvm
Nov 25 10:50:48 nebula2 clvmd[5480]: CLVMD started
Nov 25 10:50:48 nebula2 kernel: [ 272.211592] dlm: Using TCP for communications
Nov 25 10:50:48 nebula2 kernel: [ 272.221786] dlm: connecting to 1084811080
Nov 25 10:50:48 nebula2 kernel: [ 272.221892] dlm: connecting to 1084811078
Nov 25 10:50:48 nebula2 kernel: [ 272.222020] dlm: got connection from 1084811078
Nov 25 10:50:48 nebula2 kernel: [ 272.224620] dlm: got connection from 1084811080
Nov 25 10:50:49 nebula2 clvmd[5480]: Created DLM lockspace for CLVMD.
Nov 25 10:50:49 nebula2 clvmd[5480]: DLM initialisation complete
Nov 25 10:50:49 nebula2 clvmd[5480]: Our local node id is 1084811079
Nov 25 10:50:49 nebula2 clvmd[5480]: Connected to Corosync
Nov 25 10:50:49 nebula2 clvmd[5480]: Cluster LVM daemon started - connected to Corosync
Nov 25 10:50:49 nebula2 clvmd[5480]: Cluster ready, doing some more initialisation
Nov 25 10:50:49 nebula2 clvmd[5480]: starting LVM thread
Nov 25 10:50:49 nebula2 clvmd[5480]: LVM thread function started
Nov 25 10:50:50 nebula2 lvm[5480]: clvmd ready for work
Nov 25 10:50:50 nebula2 lvm[5480]: Sub thread ready for work.
Nov 25 10:50:50 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:50 nebula2 lvm[5480]: Using timeout of 60 seconds
Nov 25 10:50:50 nebula2 lvm[5480]: confchg callback. 1 joined, 0 left, 3 members
Nov 25 10:50:51 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 0. len 29
Nov 25 10:50:51 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fca60. client=0x6a1d60, msg=0x7fffbdee4d8c, len=29, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:51 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:51 nebula2 lvm[5480]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 0 on node 40a8e746
Nov 25 10:50:51 nebula2 lvm[5480]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:51 nebula2 lvm[5480]: Refreshing context
Nov 25 10:50:51 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:51 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fca60. client=0x6a1d60, msg=0x7fffbdee4d8c, len=31, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 2 on node 40a8e746
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fca60. client=0x6a1d60, msg=0x7fffbdee4d8c, len=31, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 5 on node 40a8e746
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 0. len 29
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fca60. client=0x6a1d60, msg=0x7fffbdee4d8c, len=29, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 7 on node 40a8e746
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Refreshing context
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lrmd[5097]: notice: operation_finished: clvm_start_0:5464:stderr [ local socket: connect failed: No such file or directory ]
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 0. len 84
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fca60. client=0x6a1d60, msg=0x7fffbdee4d8c, len=84, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command LOCK_QUERY (0x34) for clientid 0x5000000 XID 9 on node 40a8e746
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode -1 (?)
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 0. len 29
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fca60. client=0x6a1d60, msg=0x7fffbdee4d8c, len=29, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command LOCK_VG (0x33) for clientid 0xc000000 XID 0 on node 40a8e748
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Refreshing context
Nov 25 10:50:52 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation clvm_start_0 (call=73, rc=0, cib-update=23, confirmed=true) ok
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcad0. client=0x6a1d60, msg=0x7fffbdee4d8c, len=31, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 11 on node 40a8e746
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula2 LVM(ONE-vg)[5501]: INFO: Activating volume group one-fs
Nov 25 10:50:52 nebula2 lvm[5480]: Got new connection on fd 5
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 29
Nov 25 10:50:52 nebula2 lvm[5480]: check_all_clvmds_running
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: creating pipe, [12, 13]
Nov 25 10:50:52 nebula2 lvm[5480]: Creating pre&post thread
Nov 25 10:50:52 nebula2 lvm[5480]: Created pre&post thread, state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: in sub thread: client = 0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'P_#global' at 4 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource 'P_#global', flags=0, mode=4
Nov 25 10:50:52 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation clvm_monitor_60000 (call=77, rc=0, cib-update=24, confirmed=false) ok
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x6a1d60, msg=0x7fffbdee4d8c, len=31, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command SYNC_NAMES (0x2d) for clientid 0xc000000 XID 2 on node 40a8e748
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x6a1d60, msg=0x7fffbdee4d8c, len=31, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command SYNC_NAMES (0x2d) for clientid 0xc000000 XID 5 on node 40a8e748
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource returning 0, lock_id=1
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 0, flags=0x0 ()
Nov 25 10:50:52 nebula2 lvm[5480]: num_nodes = 3
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fd040. client=0x21fca60, msg=0x21fcb70, len=29, csid=(nil), xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fcde0, msglen =29, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Refreshing context
Nov 25 10:50:52 nebula2 lvm[5480]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 0. len 29
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fce10. client=0x6a1d60, msg=0x7fffbdee4d8c, len=29, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 0. len 29
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command LOCK_VG (0x33) for clientid 0xc000000 XID 7 on node 40a8e748
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Refreshing context
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 32
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'V_nebula2-vg' at 1 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource 'V_nebula2-vg', flags=0, mode=3
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 1, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x21fca60, msg=0x21fcb70, len=32, csid=(nil), xid=1
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fce20, msglen =32, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'V_nebula2-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Invalidating cached metadata for VG nebula2-vg
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 31
Nov 25 10:50:52 nebula2 lvm[5480]: check_all_clvmds_running
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 2, flags=0x0 ()
Nov 25 10:50:52 nebula2 lvm[5480]: num_nodes = 3
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fd040. client=0x21fca60, msg=0x21fcb70, len=31, csid=(nil), xid=2
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: SYNC_NAMES (0x2d) msg=0x21fcde0, msglen =31, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 32
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'V_nebula2-vg' at 6 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: unlock_resource: V_nebula2-vg lockid: 2
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 3, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x21fca60, msg=0x21fcb70, len=32, csid=(nil), xid=3
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fce20, msglen =32, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'V_nebula2-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Invalidating cached metadata for VG nebula2-vg
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 4, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x21fca60, msg=0x21fcb70, len=28, csid=(nil), xid=4
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fce20, msglen =28, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 31
Nov 25 10:50:52 nebula2 lvm[5480]: check_all_clvmds_running
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 5, flags=0x0 ()
Nov 25 10:50:52 nebula2 lvm[5480]: num_nodes = 3
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fd040. client=0x21fca60, msg=0x21fcb70, len=31, csid=(nil), xid=5
Nov 25 10:50:52 nebula2 lvm[5480]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: SYNC_NAMES (0x2d) msg=0x21fcde0, msglen =31, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: unlock_resource: V_one-fs lockid: 2
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 0. len 84
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x6a1d60, msg=0x7fffbdee4d8c, len=84, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command LOCK_QUERY (0x34) for clientid 0xc000000 XID 9 on node 40a8e748
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode -1 (?)
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 6, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x21fca60, msg=0x21fcb70, len=28, csid=(nil), xid=6
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fce20, msglen =28, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 29
Nov 25 10:50:52 nebula2 lvm[5480]: check_all_clvmds_running
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: unlock_resource: P_#global lockid: 1
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811080. len 21
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 7, flags=0x0 ()
Nov 25 10:50:52 nebula2 lvm[5480]: num_nodes = 3
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fd040. client=0x21fca60, msg=0x21fcb70, len=29, csid=(nil), xid=7
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fcde0, msglen =29, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Refreshing context
Nov 25 10:50:52 nebula2 lvm[5480]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 0. len 29
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fce10. client=0x6a1d60, msg=0x7fffbdee4d8c, len=31, csid=0x7fffbdee375c, xid=0
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: remote
Nov 25 10:50:52 nebula2 lvm[5480]: process_remote_command SYNC_NAMES (0x2d) for clientid 0xc000000 XID 11 on node 40a8e748
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 0
Nov 25 10:50:52 nebula2 lvm[5480]: EOF on local socket: inprogress=0
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for child thread
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Subthread finished
Nov 25 10:50:52 nebula2 lvm[5480]: Joined child thread
Nov 25 10:50:52 nebula2 lvm[5480]: ret == 0, errno = 0. removing client
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcb70. client=0x21fca60, msg=(nil), len=0, csid=(nil), xid=7
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: free fd -1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 LVM(ONE-vg)[5501]: INFO: Reading all physical volumes. This may take a while... Found volume group "nebula2-vg" using metadata type lvm2 Found volume group "one-fs" using metadata type lvm2
Nov 25 10:50:52 nebula2 lvm[5480]: Got new connection on fd 5
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula2 lvm[5480]: creating pipe, [12, 13]
Nov 25 10:50:52 nebula2 lvm[5480]: Creating pre&post thread
Nov 25 10:50:52 nebula2 lvm[5480]: Created pre&post thread, state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: in sub thread: client = 0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource returning 0, lock_id=1
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 8, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x21fca60, msg=0x21fcb70, len=28, csid=(nil), xid=8
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fce20, msglen =28, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 84
Nov 25 10:50:52 nebula2 lvm[5480]: check_all_clvmds_running
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 9, flags=0x0 ()
Nov 25 10:50:52 nebula2 lvm[5480]: num_nodes = 3
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fd0a0. client=0x21fca60, msg=0x21fcde0, len=84, csid=(nil), xid=9
Nov 25 10:50:52 nebula2 lvm[5480]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_QUERY (0x34) msg=0x21fce40, msglen =84, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode -1 (?)
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 0. len 84
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 21
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e746: 3 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 21
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e748: 3 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 84
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 10, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fce40. client=0x21fca60, msg=0x21fcde0, len=84, csid=(nil), xid=10
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_LV (0x32) msg=0x21fd0a0, msglen =84, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_lv: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', cmd = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', flags=1, mode=1
Nov 25 10:50:52 nebula2 lvm[5480]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula2 lvm[5480]: Command return is 0, critical_section is 0
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 31
Nov 25 10:50:52 nebula2 lvm[5480]: check_all_clvmds_running
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 11, flags=0x0 ()
Nov 25 10:50:52 nebula2 lvm[5480]: num_nodes = 3
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fd040. client=0x21fca60, msg=0x21fcb70, len=31, csid=(nil), xid=11
Nov 25 10:50:52 nebula2 lvm[5480]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: SYNC_NAMES (0x2d) msg=0x21fcde0, msglen =31, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Syncing device names
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x21fca60)
Nov 25 10:50:52 nebula2 lvm[5480]: unlock_resource: V_one-fs lockid: 1
Nov 25 10:50:52 nebula2 lvm[5480]: Writing status 0 down pipe 13
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: distribute command: XID = 12, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcde0. client=0x21fca60, msg=0x21fcb70, len=28, csid=(nil), xid=12
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: local
Nov 25 10:50:52 nebula2 lvm[5480]: process_local_command: LOCK_VG (0x33) msg=0x21fce20, msglen =28, client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula2 lvm[5480]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula2 lvm[5480]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula2 lvm[5480]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 lvm[5480]: Got post command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for next pre command
Nov 25 10:50:52 nebula2 lvm[5480]: read on PIPE 12: 4 bytes: status: 0
Nov 25 10:50:52 nebula2 lvm[5480]: background routine status was 0, sock_client=0x21fca60
Nov 25 10:50:52 nebula2 lvm[5480]: Send local reply
Nov 25 10:50:52 nebula2 lvm[5480]: Read on local socket 5, len = 0
Nov 25 10:50:52 nebula2 lvm[5480]: EOF on local socket: inprogress=0
Nov 25 10:50:52 nebula2 lvm[5480]: Waiting for child thread
Nov 25 10:50:52 nebula2 lvm[5480]: Got pre command condition...
Nov 25 10:50:52 nebula2 lvm[5480]: Subthread finished
Nov 25 10:50:52 nebula2 lvm[5480]: Joined child thread
Nov 25 10:50:52 nebula2 lvm[5480]: ret == 0, errno = 0. removing client
Nov 25 10:50:52 nebula2 lvm[5480]: add_to_lvmqueue: cmd=0x21fcb70. client=0x21fca60, msg=(nil), len=0, csid=(nil), xid=12
Nov 25 10:50:52 nebula2 lvm[5480]: process_work_item: free fd -1
Nov 25 10:50:52 nebula2 lvm[5480]: LVM thread waiting for work
Nov 25 10:50:52 nebula2 LVM(ONE-vg)[5501]: INFO: 1 logical volume(s) in volume group "one-fs" now active
Nov 25 10:50:52 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation ONE-vg_start_0 (call=79, rc=0, cib-update=25, confirmed=true) ok
Nov 25 10:50:52 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation ONE-vg_monitor_60000 (call=83, rc=0, cib-update=26, confirmed=false) ok
Nov 25 10:50:52 nebula2 Filesystem(ONE-Datastores)[5558]: INFO: Running start for /dev/one-fs/datastores on /var/lib/one/datastores
Nov 25 10:50:52 nebula2 kernel: [ 275.767572] GFS2 installed
Nov 25 10:50:52 nebula2 kernel: [ 275.844125] GFS2: fsid=one:datastores: Trying to join cluster "lock_dlm", "one:datastores"
Nov 25 10:50:53 nebula2 kernel: [ 276.972738] GFS2: fsid=one:datastores: Joined cluster. Now mounting FS...
Nov 25 10:50:53 nebula2 kernel: [ 277.198095] GFS2: fsid=one:datastores.2: jid=2, already locked for use
Nov 25 10:50:53 nebula2 kernel: [ 277.198101] GFS2: fsid=one:datastores.2: jid=2: Looking at journal...
Nov 25 10:50:53 nebula2 kernel: [ 277.208807] GFS2: fsid=one:datastores.2: jid=2: Done
Nov 25 10:50:54 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation ONE-Datastores_start_0 (call=85, rc=0, cib-update=27, confirmed=true) ok
Nov 25 10:50:54 nebula2 crmd[5100]: notice: process_lrm_event: LRM operation ONE-Datastores_monitor_20000 (call=89, rc=0, cib-update=28, confirmed=false) ok

Crash self (echo c > /proc/sysrq-trigger)
--------------------------------------------

Start of corosync
-----------------
Nov 25 11:04:37 nebula2 corosync[4888]: [MAIN ] Corosync Cluster Engine ('2.3.3'): started and ready to provide service.
Nov 25 11:04:37 nebula2 corosync[4888]: [MAIN ] Corosync built-in features: dbus testagents rdma watchdog augeas pie relro bindnow
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] Initializing transport (UDP/IP Unicast).
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha256
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] The network interface [192.168.231.71] is now up.
Nov 25 11:04:37 nebula2 corosync[4889]: [SERV ] Service engine loaded: corosync configuration map access [0]
Nov 25 11:04:37 nebula2 corosync[4889]: [QB ] server name: cmap
Nov 25 11:04:37 nebula2 corosync[4889]: [SERV ] Service engine loaded: corosync configuration service [1]
Nov 25 11:04:37 nebula2 corosync[4889]: [QB ] server name: cfg
Nov 25 11:04:37 nebula2 corosync[4889]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 25 11:04:37 nebula2 corosync[4889]: [QB ] server name: cpg
Nov 25 11:04:37 nebula2 corosync[4889]: [SERV ] Service engine loaded: corosync profile loading service [4]
Nov 25 11:04:37 nebula2 corosync[4889]: [WD ] No Watchdog, try modprobe
Nov 25 11:04:37 nebula2 corosync[4889]: [WD ] no resources configured.
Nov 25 11:04:37 nebula2 corosync[4889]: [SERV ] Service engine loaded: corosync watchdog service [7]
Nov 25 11:04:37 nebula2 corosync[4889]: [QUORUM] Using quorum provider corosync_votequorum
Nov 25 11:04:37 nebula2 corosync[4889]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 11:04:37 nebula2 corosync[4889]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 25 11:04:37 nebula2 corosync[4889]: [QB ] server name: votequorum
Nov 25 11:04:37 nebula2 corosync[4889]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 25 11:04:37 nebula2 corosync[4889]: [QB ] server name: quorum
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] adding new UDPU member {192.168.231.70}
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] adding new UDPU member {192.168.231.71}
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] adding new UDPU member {192.168.231.72}
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] adding new UDPU member {192.168.231.110}
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] adding new UDPU member {192.168.231.111}
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] A new membership (192.168.231.71:81396) was formed. Members joined: 1084811079
Nov 25 11:04:37 nebula2 corosync[4889]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 11:04:37 nebula2 corosync[4889]: message repeated 2 times: [ [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3]
Nov 25 11:04:37 nebula2 corosync[4889]: [QUORUM] Members[1]: 1084811079
Nov 25 11:04:37 nebula2 corosync[4889]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 11:04:37 nebula2 corosync[4889]: [TOTEM ] A new membership (192.168.231.70:81400) was formed. Members joined: 1084811078 1084811080
Nov 25 11:04:37 nebula2 corosync[4889]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 11:04:37 nebula2 corosync[4889]: [QUORUM] This node is within the primary component and will provide service.
Nov 25 11:04:37 nebula2 corosync[4889]: [QUORUM] Members[3]: 1084811078 1084811079 1084811080
Nov 25 11:04:37 nebula2 corosync[4889]: [MAIN ] Completed service synchronization, ready to provide service.

Start of pacemaker
------------------
Nov 25 11:04:50 nebula2 pacemakerd[4926]: notice: mcp_read_config: Configured corosync to accept connections from group 113: OK (1)
Nov 25 11:04:50 nebula2 pacemakerd[4926]: notice: main: Starting Pacemaker 1.1.10 (Build: 42f2063): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart nagios heartbeat corosync-native snmp libesmtp
Nov 25 11:04:50 nebula2 pacemakerd[4926]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 11:04:50 nebula2 pacemakerd[4926]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:50 nebula2 pacemakerd[4926]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:51 nebula2 pacemakerd[4926]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 11:04:51 nebula2 pacemakerd[4926]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811078] - state is now member (was (null))
Nov 25 11:04:51 nebula2 pacemakerd[4926]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now member (was (null))
Nov 25 11:04:51 nebula2 pacemakerd[4926]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 11:04:51 nebula2 pacemakerd[4926]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811080] - state is now member (was (null))
Nov 25 11:04:51 nebula2 attrd[4931]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 11:04:51 nebula2 stonith-ng[4929]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 11:04:51 nebula2 crmd[4933]: notice: main: CRM Git Version: 42f2063
Nov 25 11:04:51 nebula2 attrd[4931]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:51 nebula2 attrd[4931]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:51 nebula2 attrd[4931]: notice: main: Starting mainloop...
Nov 25 11:04:51 nebula2 stonith-ng[4929]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:51 nebula2 stonith-ng[4929]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:51 nebula2 cib[4928]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 11:04:51 nebula2 cib[4928]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:51 nebula2 cib[4928]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:52 nebula2 crmd[4933]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 11:04:52 nebula2 stonith-ng[4929]: notice: setup_cib: Watching for stonith topology changes
Nov 25 11:04:52 nebula2 crmd[4933]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:52 nebula2 crmd[4933]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:52 nebula2 stonith-ng[4929]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:52 nebula2 stonith-ng[4929]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:52 nebula2 crmd[4933]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 11:04:52 nebula2 cib[4928]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:52 nebula2 cib[4928]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:52 nebula2 crmd[4933]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 11:04:52 nebula2 crmd[4933]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811078] - state is now member (was (null))
Nov 25 11:04:52 nebula2 crmd[4933]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now member (was (null))
Nov 25 11:04:52 nebula2 crmd[4933]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 11:04:52 nebula2 crmd[4933]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811080] - state is now member (was (null))
Nov 25 11:04:52 nebula2 crmd[4933]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:52 nebula2 crmd[4933]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 11:04:52 nebula2 crmd[4933]: notice: do_started: The local CRM is operational
Nov 25 11:04:52 nebula2 crmd[4933]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 25 11:04:53 nebula2 stonith-ng[4929]: notice: stonith_device_register: Added 'Stonith-nebula1-IPMILAN' to the device list (1 active devices)
Nov 25 11:04:54 nebula2 stonith-ng[4929]: notice: stonith_device_register: Added 'Stonith-nebula3-IPMILAN' to the device list (2 active devices)
Nov 25 11:04:55 nebula2 stonith-ng[4929]: notice: stonith_device_register: Added 'Stonith-ONE-Frontend' to the device list (3 active devices)
Nov 25 11:04:56 nebula2 stonith-ng[4929]: notice: stonith_device_register: Added 'Stonith-Quorum-Node' to the device list (4 active devices)
Nov 25 11:04:56 nebula2 crmd[4933]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 25 11:04:56 nebula2 attrd[4931]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Nov 25 11:04:56 nebula2 crmd[4933]:
notice: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_JOIN_OFFER cause=C_HA_MESSAGE origin=route_message ] Nov 25 11:04:56 nebula2 crmd[4933]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ] Nov 25 11:04:56 nebula2 attrd[4931]: notice: attrd_local_callback: Sending full refresh (origin=crmd) Nov 25 11:04:56 nebula2 attrd[4931]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079 Nov 25 11:04:56 nebula2 attrd[4931]: notice: get_node_name: Defaulting to uname -n for the local corosync node name Nov 25 11:04:57 nebula2 Filesystem(ONE-Datastores)[4951]: WARNING: Couldn't find device [/dev/one-fs/datastores]. Expected /dev/??? to exist Nov 25 11:04:57 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation clvm_monitor_0 (call=39, rc=7, cib-update=16, confirmed=true) not running Nov 25 11:04:57 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation dlm_monitor_0 (call=34, rc=7, cib-update=17, confirmed=true) not running Nov 25 11:04:57 nebula2 LVM(ONE-vg)[4950]: WARNING: LVM Volume one-fs is not available (stopped) Nov 25 11:04:57 nebula2 LVM(ONE-vg)[4950]: INFO: LVM Volume one-fs is offline Nov 25 11:04:57 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation ONE-Datastores_monitor_0 (call=49, rc=7, cib-update=18, confirmed=true) not running Nov 25 11:04:57 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation ONE-vg_monitor_0 (call=44, rc=7, cib-update=19, confirmed=true) not running Nov 25 11:04:57 nebula2 VirtualDomain(ONE-Frontend-VM)[4946]: INFO: Configuration file /var/lib/one/datastores/one/one.xml not readable during probe. 
Nov 25 11:04:57 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation ONE-Frontend-VM_monitor_0 (call=21, rc=7, cib-update=20, confirmed=true) not running
Nov 25 11:04:57 nebula2 VirtualDomain(Quorum-Node-VM)[4947]: INFO: Domain name "quorum" saved to /var/run/resource-agents/VirtualDomain-Quorum-Node-VM.state.
Nov 25 11:04:57 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation Quorum-Node-VM_monitor_0 (call=29, rc=7, cib-update=21, confirmed=true) not running
Nov 25 11:04:57 nebula2 attrd[4931]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Nov 25 11:04:57 nebula2 attrd[4931]: notice: attrd_perform_update: Sent update 5: probe_complete=true
Nov 25 11:04:57 nebula2 kernel: [ 359.431437] sctp: Hash tables configured (established 65536 bind 65536)
Nov 25 11:04:57 nebula2 kernel: [ 359.455571] DLM installed
Nov 25 11:04:57 nebula2 dlm_controld[5139]: 359 dlm_controld 4.0.1 started
Nov 25 11:04:58 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation dlm_start_0 (call=62, rc=0, cib-update=22, confirmed=true) ok
Nov 25 11:04:58 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation dlm_monitor_60000 (call=65, rc=0, cib-update=23, confirmed=false) ok
Nov 25 11:04:58 nebula2 clvmd(clvm)[5146]: INFO: Starting clvm
Nov 25 11:04:58 nebula2 clvmd[5162]: CLVMD started
Nov 25 11:04:58 nebula2 kernel: [ 360.554365] dlm: Using TCP for communications
Nov 25 11:04:58 nebula2 kernel: [ 360.561694] dlm: connecting to 1084811080
Nov 25 11:04:58 nebula2 kernel: [ 360.562631] dlm: connecting to 1084811078
Nov 25 11:04:59 nebula2 clvmd[5162]: Created DLM lockspace for CLVMD.
Nov 25 11:04:59 nebula2 clvmd[5162]: DLM initialisation complete
Nov 25 11:04:59 nebula2 clvmd[5162]: Our local node id is 1084811079
Nov 25 11:04:59 nebula2 clvmd[5162]: Connected to Corosync
Nov 25 11:04:59 nebula2 clvmd[5162]: Cluster LVM daemon started - connected to Corosync
Nov 25 11:04:59 nebula2 clvmd[5162]: Cluster ready, doing some more initialisation
Nov 25 11:04:59 nebula2 clvmd[5162]: starting LVM thread
Nov 25 11:04:59 nebula2 clvmd[5162]: LVM thread function started
Nov 25 11:04:59 nebula2 lvm[5162]: clvmd ready for work
Nov 25 11:04:59 nebula2 lvm[5162]: Sub thread ready for work.
Nov 25 11:04:59 nebula2 lvm[5162]: Using timeout of 60 seconds
Nov 25 11:04:59 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:04:59 nebula2 lvm[5162]: confchg callback. 1 joined, 0 left, 3 members
Nov 25 11:05:01 nebula2 lrmd[4930]: notice: operation_finished: clvm_start_0:5146:stderr [ local socket: connect failed: No such file or directory ]
Nov 25 11:05:01 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation clvm_start_0 (call=67, rc=0, cib-update=24, confirmed=true) ok
Nov 25 11:05:01 nebula2 LVM(ONE-vg)[5181]: INFO: Activating volume group one-fs
Nov 25 11:05:01 nebula2 lvm[5162]: Got new connection on fd 5
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 29
Nov 25 11:05:01 nebula2 lvm[5162]: check_all_clvmds_running
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811079, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811078, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811080, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: creating pipe, [13, 14]
Nov 25 11:05:01 nebula2 lvm[5162]: Creating pre&post thread
Nov 25 11:05:01 nebula2 lvm[5162]: Created pre&post thread, state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: in sub thread: client = 0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: doing PRE command LOCK_VG 'P_#global' at 4 (client=0x1d89ac0)
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource 'P_#global', flags=0, mode=4
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource returning 0, lock_id=1
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 0, flags=0x0 ()
Nov 25 11:05:01 nebula2 lvm[5162]: num_nodes = 3
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d8a0a0. client=0x1d89ac0, msg=0x1d89bd0, len=29, csid=(nil), xid=0
Nov 25 11:05:01 nebula2 lvm[5162]: Sending message to all cluster nodes
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e40, msglen =29, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula2 lvm[5162]: Refreshing context
Nov 25 11:05:01 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation clvm_monitor_60000 (call=71, rc=0, cib-update=25, confirmed=false) ok
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811079 for 0. len 29
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e746: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 2 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 3 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 32
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: doing PRE command LOCK_VG 'V_nebula2-vg' at 1 (client=0x1d89ac0)
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource 'V_nebula2-vg', flags=0, mode=3
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource returning 0, lock_id=2
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 1, flags=0x1 (LOCAL)
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89e40. client=0x1d89ac0, msg=0x1d89bd0, len=32, csid=(nil), xid=1
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e80, msglen =32, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: do_lock_vg: resource 'V_nebula2-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula2 lvm[5162]: Invalidating cached metadata for VG nebula2-vg
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 1
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 31
Nov 25 11:05:01 nebula2 lvm[5162]: check_all_clvmds_running
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811079, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811078, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811080, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 2, flags=0x0 ()
Nov 25 11:05:01 nebula2 lvm[5162]: num_nodes = 3
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d8a0a0. client=0x1d89ac0, msg=0x1d89bd0, len=31, csid=(nil), xid=2
Nov 25 11:05:01 nebula2 lvm[5162]: Sending message to all cluster nodes
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: SYNC_NAMES (0x2d) msg=0x1d89e40, msglen =31, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Syncing device names
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e746: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 2 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 3 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 32
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: doing PRE command LOCK_VG 'V_nebula2-vg' at 6 (client=0x1d89ac0)
Nov 25 11:05:01 nebula2 lvm[5162]: unlock_resource: V_nebula2-vg lockid: 2
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 3, flags=0x1 (LOCAL)
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89e40. client=0x1d89ac0, msg=0x1d89bd0, len=32, csid=(nil), xid=3
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e80, msglen =32, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: do_lock_vg: resource 'V_nebula2-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula2 lvm[5162]: Invalidating cached metadata for VG nebula2-vg
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 1
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 29
Nov 25 11:05:01 nebula2 lvm[5162]: check_all_clvmds_running
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811079, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811078, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811080, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x1d89ac0)
Nov 25 11:05:01 nebula2 lvm[5162]: unlock_resource: P_#global lockid: 1
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 4, flags=0x0 ()
Nov 25 11:05:01 nebula2 lvm[5162]: num_nodes = 3
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d8a0a0. client=0x1d89ac0, msg=0x1d89bd0, len=29, csid=(nil), xid=4
Nov 25 11:05:01 nebula2 lvm[5162]: Sending message to all cluster nodes
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e40, msglen =29, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula2 lvm[5162]: Refreshing context
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811079 for 0. len 29
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e746: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 2 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 3 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 0
Nov 25 11:05:01 nebula2 lvm[5162]: EOF on local socket: inprogress=0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for child thread
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Subthread finished
Nov 25 11:05:01 nebula2 lvm[5162]: Joined child thread
Nov 25 11:05:01 nebula2 lvm[5162]: ret == 0, errno = 0. removing client
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89bd0. client=0x1d89ac0, msg=(nil), len=0, csid=(nil), xid=4
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: free fd -1
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 LVM(ONE-vg)[5181]: INFO: Reading all physical volumes. This may take a while... Found volume group "nebula2-vg" using metadata type lvm2
Nov 25 11:05:01 nebula2 lvm[5162]: Got new connection on fd 5
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 28
Nov 25 11:05:01 nebula2 lvm[5162]: creating pipe, [13, 14]
Nov 25 11:05:01 nebula2 lvm[5162]: Creating pre&post thread
Nov 25 11:05:01 nebula2 lvm[5162]: Created pre&post thread, state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: in sub thread: client = 0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x1d89ac0)
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource returning 0, lock_id=1
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 5, flags=0x1 (LOCAL)
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89e40. client=0x1d89ac0, msg=0x1d89bd0, len=28, csid=(nil), xid=5
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e80, msglen =28, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 11:05:01 nebula2 lvm[5162]: Invalidating cached metadata for VG one-fs
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 1
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 31
Nov 25 11:05:01 nebula2 lvm[5162]: check_all_clvmds_running
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811079, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811078, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: down_callback. node 1084811080, state = 3
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 6, flags=0x0 ()
Nov 25 11:05:01 nebula2 lvm[5162]: num_nodes = 3
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d8a0a0. client=0x1d89ac0, msg=0x1d89bd0, len=31, csid=(nil), xid=6
Nov 25 11:05:01 nebula2 lvm[5162]: Sending message to all cluster nodes
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: SYNC_NAMES (0x2d) msg=0x1d89e40, msglen =31, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Syncing device names
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e746: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 2 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 3 replies, expecting: 3
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 28
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x1d89ac0)
Nov 25 11:05:01 nebula2 lvm[5162]: unlock_resource: V_one-fs lockid: 1
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 14
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 7, flags=0x1 (LOCAL)
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89e40. client=0x1d89ac0, msg=0x1d89bd0, len=28, csid=(nil), xid=7
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e80, msglen =28, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 11:05:01 nebula2 lvm[5162]: Invalidating cached metadata for VG one-fs
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 1
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 13: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 0
Nov 25 11:05:01 nebula2 lvm[5162]: EOF on local socket: inprogress=0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for child thread
Nov 25 11:05:01 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Subthread finished
Nov 25 11:05:01 nebula2 lvm[5162]: Joined child thread
Nov 25 11:05:01 nebula2 lvm[5162]: ret == 0, errno = 0. removing client
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89bd0. client=0x1d89ac0, msg=(nil), len=0, csid=(nil), xid=7
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: free fd -1
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 LVM(ONE-vg)[5181]: ERROR: Volume group "one-fs" not found
Nov 25 11:05:01 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation ONE-vg_start_0 (call=73, rc=1, cib-update=26, confirmed=true) unknown error
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_cs_dispatch: Update relayed from nebula1
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-ONE-vg (INFINITY)
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_perform_update: Sent update 8: fail-count-ONE-vg=INFINITY
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_cs_dispatch: Update relayed from nebula1
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-ONE-vg (1416909901)
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_perform_update: Sent update 11: last-failure-ONE-vg=1416909901
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_cs_dispatch: Update relayed from nebula1
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-ONE-vg (INFINITY)
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_perform_update: Sent update 14: fail-count-ONE-vg=INFINITY
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_cs_dispatch: Update relayed from nebula1
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-ONE-vg (1416909901)
Nov 25 11:05:01 nebula2 attrd[4931]: notice: attrd_perform_update: Sent update 17: last-failure-ONE-vg=1416909901
Nov 25 11:05:01 nebula2 lvm[5162]: Got new connection on fd 5
Nov 25 11:05:01 nebula2 lvm[5162]: Read on local socket 5, len = 28
Nov 25 11:05:01 nebula2 lvm[5162]: creating pipe, [12, 13]
Nov 25 11:05:01 nebula2 lvm[5162]: Creating pre&post thread
Nov 25 11:05:01 nebula2 lvm[5162]: Created pre&post thread, state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: in sub thread: client = 0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x1d89ac0)
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 11:05:01 nebula2 lvm[5162]: lock_resource returning 0, lock_id=1
Nov 25 11:05:01 nebula2 lvm[5162]: Writing status 0 down pipe 13
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:01 nebula2 lvm[5162]: distribute command: XID = 8, flags=0x1 (LOCAL)
Nov 25 11:05:01 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89e40. client=0x1d89ac0, msg=0x1d89bd0, len=28, csid=(nil), xid=8
Nov 25 11:05:01 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:01 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e80, msglen =28, client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula2 lvm[5162]: Invalidating cached metadata for VG one-fs
Nov 25 11:05:01 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:01 nebula2 lvm[5162]: Got 1 replies, expecting: 1
Nov 25 11:05:01 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:01 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:01 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:01 nebula2 lvm[5162]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:01 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:01 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:02 nebula2 lvm[5162]: Read on local socket 5, len = 31
Nov 25 11:05:02 nebula2 lvm[5162]: check_all_clvmds_running
Nov 25 11:05:02 nebula2 lvm[5162]: down_callback. node 1084811079, state = 3
Nov 25 11:05:02 nebula2 lvm[5162]: down_callback. node 1084811078, state = 3
Nov 25 11:05:02 nebula2 lvm[5162]: down_callback. node 1084811080, state = 3
Nov 25 11:05:02 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:02 nebula2 lvm[5162]: Writing status 0 down pipe 13
Nov 25 11:05:02 nebula2 lvm[5162]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:02 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:02 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:02 nebula2 lvm[5162]: distribute command: XID = 9, flags=0x0 ()
Nov 25 11:05:02 nebula2 lvm[5162]: num_nodes = 3
Nov 25 11:05:02 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d8a0a0. client=0x1d89ac0, msg=0x1d89bd0, len=31, csid=(nil), xid=9
Nov 25 11:05:02 nebula2 lvm[5162]: Sending message to all cluster nodes
Nov 25 11:05:02 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:02 nebula2 lvm[5162]: process_local_command: SYNC_NAMES (0x2d) msg=0x1d89e40, msglen =31, client=0x1d89ac0
Nov 25 11:05:02 nebula2 lvm[5162]: Syncing device names
Nov 25 11:05:02 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:02 nebula2 lvm[5162]: Got 1 replies, expecting: 3
Nov 25 11:05:02 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:02 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:02 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:02 nebula2 lvm[5162]: Reply from node 40a8e746: 0 bytes
Nov 25 11:05:02 nebula2 lvm[5162]: Got 2 replies, expecting: 3
Nov 25 11:05:02 nebula2 lvm[5162]: 1084811079 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:02 nebula2 lvm[5162]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:02 nebula2 lvm[5162]: Got 3 replies, expecting: 3
Nov 25 11:05:02 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:02 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:02 nebula2 lvm[5162]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:02 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:02 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:02 nebula2 lvm[5162]: Read on local socket 5, len = 28
Nov 25 11:05:02 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:02 nebula2 lvm[5162]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x1d89ac0)
Nov 25 11:05:02 nebula2 lvm[5162]: unlock_resource: V_one-fs lockid: 1
Nov 25 11:05:02 nebula2 lvm[5162]: Writing status 0 down pipe 13
Nov 25 11:05:02 nebula2 lvm[5162]: Waiting to do post command - state = 0
Nov 25 11:05:02 nebula2 lvm[5162]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:02 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:02 nebula2 lvm[5162]: distribute command: XID = 10, flags=0x1 (LOCAL)
Nov 25 11:05:02 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89e40. client=0x1d89ac0, msg=0x1d89bd0, len=28, csid=(nil), xid=10
Nov 25 11:05:02 nebula2 lvm[5162]: process_work_item: local
Nov 25 11:05:02 nebula2 lvm[5162]: process_local_command: LOCK_VG (0x33) msg=0x1d89e80, msglen =28, client=0x1d89ac0
Nov 25 11:05:02 nebula2 lvm[5162]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:02 nebula2 lvm[5162]: Invalidating cached metadata for VG one-fs
Nov 25 11:05:02 nebula2 lvm[5162]: Reply from node 40a8e747: 0 bytes
Nov 25 11:05:02 nebula2 lvm[5162]: Got 1 replies, expecting: 1
Nov 25 11:05:02 nebula2 lvm[5162]: LVM thread waiting for work
Nov 25 11:05:02 nebula2 lvm[5162]: Got post command condition...
Nov 25 11:05:02 nebula2 lvm[5162]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:02 nebula2 lvm[5162]: background routine status was 0, sock_client=0x1d89ac0
Nov 25 11:05:02 nebula2 lvm[5162]: Waiting for next pre command
Nov 25 11:05:02 nebula2 lvm[5162]: Send local reply
Nov 25 11:05:02 nebula2 lvm[5162]: Read on local socket 5, len = 0
Nov 25 11:05:02 nebula2 lvm[5162]: EOF on local socket: inprogress=0
Nov 25 11:05:02 nebula2 lvm[5162]: Waiting for child thread
Nov 25 11:05:02 nebula2 lvm[5162]: Got pre command condition...
Nov 25 11:05:02 nebula2 lvm[5162]: Subthread finished
Nov 25 11:05:02 nebula2 lvm[5162]: Joined child thread
Nov 25 11:05:02 nebula2 lvm[5162]: ret == 0, errno = 0. removing client
Nov 25 11:05:02 nebula2 lvm[5162]: add_to_lvmqueue: cmd=0x1d89bd0.
client=0x1d89ac0, msg=(nil), len=0, csid=(nil), xid=10 Nov 25 11:05:02 nebula2 lvm[5162]: process_work_item: free fd -1 Nov 25 11:05:02 nebula2 lvm[5162]: LVM thread waiting for work Nov 25 11:05:02 nebula2 LVM(ONE-vg)[5208]: INFO: Volume group one-fs not found Nov 25 11:05:02 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation ONE-vg_stop_0 (call=77, rc=0, cib-update=27, confirmed=true) ok Nov 25 11:05:02 nebula2 lrmd[4930]: error: crm_abort: crm_glib_handler: Forked child 5221 to record non-fatal assert at logging.c:63 : Source ID 53 was not found when attempting to remove it Nov 25 11:05:02 nebula2 clvmd(clvm)[5222]: INFO: Stopping clvm Nov 25 11:05:02 nebula2 clvmd(clvm)[5222]: INFO: Stopping clvmd Nov 25 11:05:02 nebula2 lvm[5162]: SIGTERM received Nov 25 11:05:02 nebula2 lvm[5162]: cluster_closedown Nov 25 11:05:02 nebula2 dlm_controld[5139]: 363 cpg_dispatch error 9 Nov 25 11:05:03 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation clvm_stop_0 (call=81, rc=0, cib-update=28, confirmed=true) ok Nov 25 11:05:03 nebula2 lrmd[4930]: error: crm_abort: crm_glib_handler: Forked child 5244 to record non-fatal assert at logging.c:63 : Source ID 46 was not found when attempting to remove it Nov 25 11:05:03 nebula2 kernel: [ 364.951701] dlm: closing connection to node 1084811080 Nov 25 11:05:03 nebula2 kernel: [ 364.952896] dlm: closing connection to node 1084811079 Nov 25 11:05:03 nebula2 kernel: [ 364.954064] dlm: closing connection to node 1084811078 Nov 25 11:05:05 nebula2 crmd[4933]: notice: process_lrm_event: LRM operation dlm_stop_0 (call=86, rc=0, cib-update=29, confirmed=true) ok