[Pacemaker] problem setting up STONITH for DRAC

Sander van Vugt mail at sandervanvugt.nl
Sun Jan 10 05:58:48 EST 2010


Hi,

In two different environments I seem to have the same problem setting up
STONITH: I get an "unknown error (rc=1)". My configuration is as follows,
and a log file showing the error is attached. I would greatly appreciate
any help with this.

DRAC PARAMETERS (as configured in the DRAC BIOS). I'm using similar
parameters on a DRAC5 in one environment and on a DRAC6 in the other; the
log files shown here come from the DRAC6. A sketch of the resulting STONITH
primitive follows the list.
*	IP address: 192.168.1.10 (reachable directly from the server console)
*	username: root
*	password: novell
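
For context, the STONITH resource I'm trying to create looks roughly like
the following (crm shell syntax; the external/drac5 plugin name and its
hostname/ipaddr/userid/passwd parameter names are written from memory, so
treat this as a sketch rather than a copy of my actual CIB, which is in the
attached cluster.xml):

  # STONITH primitive pointing at the DRAC listed above
  primitive stonith-drac stonith:external/drac5 \
        params hostname="node1" ipaddr="192.168.1.10" \
               userid="root" passwd="novell" \
        op monitor interval="60s"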

The output of cibadmin -Q is attached to this mail as the file
/tmp/cluster.xml.
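
For what it's worth, the plugin can also be exercised outside the cluster
with the stonith command-line tool from cluster-glue, along these lines (the
option letters are from memory, so please double-check against stonith(8);
hostname/ipaddr/userid/passwd are again assumed parameter names):

  # list the parameters the plugin expects
  stonith -t external/drac5 -n
  # query device status using the parameters above
  stonith -t external/drac5 hostname=node1 ipaddr=192.168.1.10 \
          userid=root passwd=novell -S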

Thanks in advance!
Sander


-------------- next part --------------
Dec  4 21:30:47 linux syslog-ng[5903]: syslog-ng starting up; version='2.0.9'
Dec  4 21:30:52 linux kernel: klogd 1.4.1, log source = /proc/kmsg started.
Dec  4 21:30:52 linux kernel: bootsplash: status on console 0 changed to on
Dec  4 21:30:54 linux kernel: NET: Registered protocol family 10
Dec  4 21:30:54 linux kernel: lo: Disabled Privacy Extensions
Dec  4 21:31:01 linux kernel: CE: hpet increasing min_delta_ns to 22500 nsec
Dec  4 21:33:48 linux kernel: bnx2: eth0: using MSIX
Dec  4 21:33:48 linux kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 21:33:48 linux kernel: bnx2: eth1: using MSIX
Dec  4 21:33:48 linux kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 21:33:48 linux kernel: bnx2: eth2: using MSIX
Dec  4 21:33:48 linux kernel: ADDRCONF(NETDEV_UP): eth2: link is not ready
Dec  4 21:33:48 linux kernel: bnx2: eth3: using MSIX
Dec  4 21:33:48 linux kernel: ADDRCONF(NETDEV_UP): eth3: link is not ready
Dec  4 21:33:48 linux kernel: st: Version 20080504, fixed bufsize 32768, s/g segs 256
Dec  4 21:33:51 linux kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 21:33:51 linux kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 21:33:51 linux kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 21:33:51 linux kernel: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
Dec  4 21:33:53 linux kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 21:33:53 linux kernel: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
Dec  4 21:34:01 linux kernel: eth0: no IPv6 routers present
Dec  4 21:34:02 linux kernel: eth3: no IPv6 routers present
Dec  4 21:34:04 linux kernel: eth2: no IPv6 routers present
Dec  4 21:34:11 linux su: (to root) root on none
Dec  4 21:34:18 linux kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Dec  4 21:34:21 linux kernel: NET: Registered protocol family 17
Dec  4 21:36:30 linux ifdown:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 21:36:30 linux ifdown:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 21:36:31 linux ifdown:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 21:36:31 linux ifdown:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 21:36:31 linux ifdown:               No configuration found for eth3 
Dec  4 21:36:31 linux ifdown:               Nevertheless the interface will be shut down.
Dec  4 21:36:32 linux SuSEfirewall2: SuSEfirewall2 not active
Dec  4 21:36:32 linux ifup:     lo        
Dec  4 21:36:32 linux ifup:     lo        
Dec  4 21:36:32 linux ifup: IP address: 127.0.0.1/8  
Dec  4 21:36:32 linux ifup:  
Dec  4 21:36:32 linux ifup:               
Dec  4 21:36:32 linux ifup: IP address: 127.0.0.2/8  
Dec  4 21:36:32 linux ifup:  
Dec  4 21:36:32 linux ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 21:36:32 linux kernel: bnx2: eth0: using MSIX
Dec  4 21:36:32 linux kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 21:36:32 linux ifup:     eth0      
Dec  4 21:36:32 linux ifup: IP address: 10.0.0.10/24  
Dec  4 21:36:32 linux ifup:  
Dec  4 21:36:32 linux SuSEfirewall2: SuSEfirewall2 not active
Dec  4 21:36:33 linux ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 21:36:33 linux kernel: bnx2: eth1: using MSIX
Dec  4 21:36:33 linux kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 21:36:33 linux ifup:     eth1      
Dec  4 21:36:33 linux ifup: IP address: 10.0.0.11/24  
Dec  4 21:36:33 linux ifup:  
Dec  4 21:36:33 linux SuSEfirewall2: SuSEfirewall2 not active
Dec  4 21:36:33 linux ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 21:36:33 linux kernel: bnx2: eth2: using MSIX
Dec  4 21:36:33 linux kernel: ADDRCONF(NETDEV_UP): eth2: link is not ready
Dec  4 21:36:33 linux ifup:     eth2      
Dec  4 21:36:33 linux ifup: IP address: 192.168.1.150/24  
Dec  4 21:36:33 linux ifup:  
Dec  4 21:36:33 linux SuSEfirewall2: SuSEfirewall2 not active
Dec  4 21:36:33 linux ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 21:36:33 linux ifup:               No configuration found for eth3
Dec  4 21:36:34 linux SuSEfirewall2: SuSEfirewall2 not active
Dec  4 21:36:35 linux kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 21:36:35 linux kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 21:36:36 linux kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 21:36:36 linux kernel: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
Dec  4 21:36:45 linux kernel: eth0: no IPv6 routers present
Dec  4 21:36:46 linux kernel: eth2: no IPv6 routers present
Dec  4 21:56:46 linux -- MARK --
Dec  4 21:59:52 linux init: Re-reading inittab
Dec  4 22:00:26 linux kernel: lp: driver loaded but no devices found
Dec  4 22:00:26 linux kernel: ppdev: user-space parallel port driver
Dec  4 22:00:47 linux ifdown:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:00:48 linux ifdown:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:00:48 linux ifdown:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:00:49 linux ifdown:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:00:49 linux ifdown:               No configuration found for eth3 
Dec  4 22:00:49 linux ifdown:               Nevertheless the interface will be shut down.
Dec  4 22:00:49 linux init: Entering runlevel: 5
Dec  4 22:00:50 linux kernel: Kernel logging (proc) stopped.
Dec  4 22:00:50 linux kernel: Kernel log daemon terminating.
Dec  4 22:00:50 linux syslog-ng[5903]: Termination requested via signal, terminating;
Dec  4 22:00:50 linux syslog-ng[5903]: syslog-ng shutting down; version='2.0.9'
Dec  4 22:00:50 node1 syslog-ng[12341]: syslog-ng starting up; version='2.0.9'
Dec  4 22:00:50 node1 firmware.sh[12504]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  4 22:00:50 node1 firmware.sh[12635]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  4 22:00:50 node1 firmware.sh[12641]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  4 22:00:50 node1 firmware.sh[12646]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  4 22:00:50 node1 ifup:     lo        
Dec  4 22:00:50 node1 ifup:     lo        
Dec  4 22:00:50 node1 ifup: IP address: 127.0.0.1/8  
Dec  4 22:00:50 node1 ifup:  
Dec  4 22:00:50 node1 ifup:               
Dec  4 22:00:50 node1 ifup: IP address: 127.0.0.2/8  
Dec  4 22:00:50 node1 ifup:  
Dec  4 22:00:50 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:00:51 node1 ifup:     eth0      
Dec  4 22:00:51 node1 ifup: IP address: 10.0.0.10/24  
Dec  4 22:00:51 node1 ifup:  
Dec  4 22:00:51 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:00:51 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:00:51 node1 ifup:     eth1      
Dec  4 22:00:51 node1 ifup: IP address: 10.0.0.11/24  
Dec  4 22:00:51 node1 ifup:  
Dec  4 22:00:51 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:00:51 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:00:51 node1 ifup:     eth2      
Dec  4 22:00:51 node1 ifup: IP address: 192.168.1.150/24  
Dec  4 22:00:52 node1 ifup:  
Dec  4 22:00:52 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:00:52 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:00:52 node1 ifup:               No configuration found for eth3
Dec  4 22:00:53 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Dec  4 22:00:53 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Dec  4 22:00:54 node1 smartd[13939]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Dec  4 22:00:54 node1 smartd[13939]: Opened configuration file /etc/smartd.conf
Dec  4 22:00:54 node1 smartd[13939]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Dec  4 22:00:54 node1 smartd[13939]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Dec  4 22:00:54 node1 smartd[13939]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Dec  4 22:00:54 node1 smartd[13939]: Device: /dev/sda [SAT], opened
Dec  4 22:00:54 node1 smartd[13939]: Device: /dev/sda [SAT], found in smartd database.
Dec  4 22:00:54 node1 smartd[13939]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Dec  4 22:00:54 node1 smartd[13939]: Monitoring 1 ATA and 0 SCSI devices
Dec  4 22:00:55 node1 smartd[13939]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  4 22:00:55 node1 smartd[14023]: smartd has fork()ed into background mode. New PID=14023.
Dec  4 22:00:55 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Dec  4 22:00:55 node1 kernel: IA-32 Microcode Update Driver: v1.14a <tigran at aivazian.fsnet.co.uk>
Dec  4 22:00:55 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  4 22:00:55 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  4 22:00:55 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  4 22:00:55 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  4 22:00:55 node1 kernel: bnx2: eth0: using MSIX
Dec  4 22:00:55 node1 kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 22:00:55 node1 kernel: bnx2: eth1: using MSIX
Dec  4 22:00:55 node1 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 22:00:55 node1 kernel: bnx2: eth2: using MSIX
Dec  4 22:00:55 node1 kernel: ADDRCONF(NETDEV_UP): eth2: link is not ready
Dec  4 22:00:55 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 22:00:55 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 22:00:55 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:00:55 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
Dec  4 22:00:55 node1 sshd[14070]: Server listening on 0.0.0.0 port 22.
Dec  4 22:00:55 node1 sshd[14070]: Server listening on :: port 22.
Dec  4 22:00:55 node1 /usr/sbin/cron[14075]: (CRON) STARTUP (V5.0)
Dec  4 22:00:58 node1 gdm-simple-greeter[14182]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Dec  4 22:00:58 node1 gdm-simple-greeter[14182]: WARNING: Could not ask power manager if user can suspend: Launch helper exited with unknown return code 0
Dec  4 22:00:58 node1 gdm-simple-greeter[14182]: WARNING: Could not ask power manager if user can suspend: Launch helper exited with unknown return code 0
Dec  4 22:00:58 node1 gdm-simple-greeter[14182]: WARNING: Could not ask power manager if user can suspend: Launch helper exited with unknown return code 0
Dec  4 22:01:03 node1 kernel: eth0: no IPv6 routers present
Dec  4 22:01:04 node1 kernel: eth2: no IPv6 routers present
Dec  4 22:07:07 node1 python: hp-systray(init)[14387]: error: hp-systray cannot be run as root. Exiting.
Dec  4 22:07:11 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec  4 22:07:11 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Dec  4 22:07:11 node1 hald: mounted /dev/sr0 on behalf of uid 0
Dec  4 22:07:11 node1 gnome-keyring-daemon[14353]: adding removable location: volume_label_SUSE_SLES_11_0_0_001 at /media/SUSE_SLES-11-0-0.001
Dec  4 22:07:44 node1 kernel: bnx2: eth3: using MSIX
Dec  4 22:07:44 node1 kernel: ADDRCONF(NETDEV_UP): eth3: link is not ready
Dec  4 22:07:46 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:07:46 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
Dec  4 22:07:57 node1 kernel: eth3: no IPv6 routers present
Dec  4 22:08:07 node1 ifprobe:     eth0      changed config file: ifcfg-eth0 --> restart interface!
Dec  4 22:08:07 node1 ifdown:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:08:07 node1 ifprobe:     eth1      changed config file: ifcfg-eth1 --> restart interface!
Dec  4 22:08:08 node1 ifdown:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:08:08 node1 ifprobe:     eth2      changed config file: ifcfg-eth2 --> restart interface!
Dec  4 22:08:08 node1 ifdown:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:08:08 node1 ifprobe:     eth3      config file created: --> restart interface!
Dec  4 22:08:08 node1 ifdown:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:08:09 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:08:09 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:08:09 node1 kernel: bnx2: eth0: using MSIX
Dec  4 22:08:09 node1 kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 22:08:09 node1 ifup:     eth0      
Dec  4 22:08:09 node1 ifup: IP address: 10.0.0.10/24  
Dec  4 22:08:09 node1 ifup:  
Dec  4 22:08:09 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:08:10 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:08:10 node1 kernel: bnx2: eth1: using MSIX
Dec  4 22:08:10 node1 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 22:08:10 node1 ifup:     eth1      
Dec  4 22:08:10 node1 ifup: IP address: 10.0.0.11/24  
Dec  4 22:08:10 node1 ifup:  
Dec  4 22:08:10 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:08:10 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:08:10 node1 kernel: bnx2: eth2: using MSIX
Dec  4 22:08:10 node1 kernel: ADDRCONF(NETDEV_UP): eth2: link is not ready
Dec  4 22:08:10 node1 ifup:     eth2      
Dec  4 22:08:10 node1 ifup: IP address: 192.168.1.150/24  
Dec  4 22:08:10 node1 ifup:  
Dec  4 22:08:10 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:08:10 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:08:11 node1 kernel: bnx2: eth3: using MSIX
Dec  4 22:08:11 node1 kernel: ADDRCONF(NETDEV_UP): eth3: link is not ready
Dec  4 22:08:11 node1 ifup:     eth3      
Dec  4 22:08:11 node1 ifup: IP address: 192.168.1.151/24  
Dec  4 22:08:11 node1 ifup:  
Dec  4 22:08:11 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:08:11 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:08:12 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 22:08:12 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 22:08:13 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:08:13 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
Dec  4 22:08:13 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:08:13 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
Dec  4 22:08:23 node1 kernel: eth0: no IPv6 routers present
Dec  4 22:08:23 node1 kernel: eth2: no IPv6 routers present
Dec  4 22:08:24 node1 kernel: eth3: no IPv6 routers present
Dec  4 22:08:54 node1 kernel: bnx2: eth0 NIC Copper Link is Down
Dec  4 22:09:02 node1 kernel: bnx2: eth3 NIC Copper Link is Down
Dec  4 22:09:08 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 22:09:17 node1 kernel: bnx2: eth3 NIC Copper Link is Down
Dec  4 22:09:23 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 22:09:32 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:13:03 node1 ifdown:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:13:04 node1 ifdown:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:13:04 node1 ifdown:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:13:04 node1 ifdown:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:13:05 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:13:06 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:13:06 node1 kernel: bnx2: eth0: using MSIX
Dec  4 22:13:06 node1 kernel: ADDRCONF(NETDEV_UP): eth0: link is not ready
Dec  4 22:13:06 node1 ifup:     eth0      
Dec  4 22:13:06 node1 ifup: IP address: 10.0.0.10/24  
Dec  4 22:13:06 node1 ifup:  
Dec  4 22:13:06 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:13:06 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  4 22:13:06 node1 kernel: bnx2: eth1: using MSIX
Dec  4 22:13:06 node1 kernel: ADDRCONF(NETDEV_UP): eth1: link is not ready
Dec  4 22:13:06 node1 ifup:     eth1      
Dec  4 22:13:06 node1 ifup: IP address: 10.0.1.11/24  
Dec  4 22:13:06 node1 ifup:  
Dec  4 22:13:06 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:13:06 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:13:06 node1 kernel: bnx2: eth2: using MSIX
Dec  4 22:13:06 node1 kernel: ADDRCONF(NETDEV_UP): eth2: link is not ready
Dec  4 22:13:06 node1 ifup:     eth2      
Dec  4 22:13:06 node1 ifup: IP address: 192.168.1.150/24  
Dec  4 22:13:06 node1 ifup:  
Dec  4 22:13:07 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:13:07 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  4 22:13:07 node1 kernel: bnx2: eth3: using MSIX
Dec  4 22:13:07 node1 kernel: ADDRCONF(NETDEV_UP): eth3: link is not ready
Dec  4 22:13:07 node1 ifup:     eth3      
Dec  4 22:13:07 node1 ifup: IP address: 192.168.1.151/24  
Dec  4 22:13:07 node1 ifup:  
Dec  4 22:13:07 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:13:07 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  4 22:13:08 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 22:13:08 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec  4 22:13:09 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:13:09 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
Dec  4 22:13:09 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:13:09 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth3: link becomes ready
Dec  4 22:13:18 node1 kernel: eth0: no IPv6 routers present
Dec  4 22:13:19 node1 kernel: eth2: no IPv6 routers present
Dec  4 22:13:20 node1 kernel: eth3: no IPv6 routers present
Dec  4 22:13:54 node1 kernel: bnx2: eth0 NIC Copper Link is Down
Dec  4 22:14:01 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 22:14:28 node1 kernel: bnx2: eth0 NIC Copper Link is Down
Dec  4 22:14:31 node1 kernel: bnx2: eth3 NIC Copper Link is Down
Dec  4 22:14:35 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:14:35 node1 kernel: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec  4 22:14:38 node1 kernel: bnx2: eth2 NIC Copper Link is Down
Dec  4 22:14:43 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  4 22:14:46 node1 kernel: eth1: no IPv6 routers present
Dec  4 22:14:49 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  4 22:16:32 node1 kernel: No iBFT detected.
Dec  4 22:16:35 node1 kernel: Loading iSCSI transport class v2.0-870.
Dec  4 22:16:35 node1 kernel: iscsi: registered transport (tcp)
Dec  4 22:16:35 node1 kernel: iscsi: registered transport (iser)
Dec  4 22:16:35 node1 iscsid: iSCSI logger with pid=4150 started!
Dec  4 22:16:36 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Dec  4 22:16:36 node1 iscsid: iSCSI daemon with pid=4151 started!
Dec  4 22:16:55 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Dec  4 22:16:55 node1 kernel: scsi 4:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] 104857600 512-byte hardware sectors: (53.6GB/50.0GiB)
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] 104857600 512-byte hardware sectors: (53.6GB/50.0GiB)
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:16:55 node1 kernel:  sdb: unknown partition table
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: [sdb] Attached SCSI disk
Dec  4 22:16:55 node1 kernel: sd 4:0:0:0: Attached scsi generic sg2 type 0
Dec  4 22:16:55 node1 kernel: scsi 4:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] 209715200 512-byte hardware sectors: (107GB/100GiB)
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] Write Protect is off
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] Mode Sense: 77 00 00 08
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] 209715200 512-byte hardware sectors: (107GB/100GiB)
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] Write Protect is off
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] Mode Sense: 77 00 00 08
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:16:55 node1 kernel:  sdc: unknown partition table
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: [sdc] Attached SCSI disk
Dec  4 22:16:55 node1 kernel: sd 4:0:0:1: Attached scsi generic sg3 type 0
Dec  4 22:16:55 node1 iscsid: connection1:0 is operational now
Dec  4 22:18:14 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Dec  4 22:18:14 node1 kernel: scsi 5:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] 104857600 512-byte hardware sectors: (53.6GB/50.0GiB)
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] Write Protect is off
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] Mode Sense: 77 00 00 08
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] 104857600 512-byte hardware sectors: (53.6GB/50.0GiB)
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] Write Protect is off
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] Mode Sense: 77 00 00 08
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:18:14 node1 kernel:  sdd: unknown partition table
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: [sdd] Attached SCSI disk
Dec  4 22:18:14 node1 kernel: sd 5:0:0:0: Attached scsi generic sg4 type 0
Dec  4 22:18:14 node1 kernel: scsi 5:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] 209715200 512-byte hardware sectors: (107GB/100GiB)
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] 209715200 512-byte hardware sectors: (107GB/100GiB)
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Dec  4 22:18:14 node1 kernel:  sde: unknown partition table
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: [sde] Attached SCSI disk
Dec  4 22:18:14 node1 kernel: sd 5:0:0:1: Attached scsi generic sg5 type 0
Dec  4 22:18:15 node1 iscsid: connection2:0 is operational now
Dec  4 22:18:47 node1 kernel: device-mapper: multipath: version 1.0.5 loaded
Dec  4 22:18:47 node1 kernel: device-mapper: multipath round-robin: version 1.0.0 loaded
Dec  4 22:18:47 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  4 22:18:47 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  4 22:18:48 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  4 22:18:48 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  4 22:38:06 node1 hald: unmounted /dev/sr0 from '/media/SUSE_SLES-11-0-0.001' on behalf of uid 0
Dec  4 22:38:06 node1 gnome-keyring-daemon[14353]: removing removable location: volume_label_SUSE_SLES_11_0_0_001
Dec  4 22:38:17 node1 shutdown[4806]: shutting down for system halt
Dec  4 22:38:17 node1 init: Switching to runlevel: 0
Dec  4 22:38:19 node1 kernel: bootsplash: status on console 0 changed to on
Dec  4 22:38:19 node1 smartd[14023]: smartd received signal 15: Terminated
Dec  4 22:38:19 node1 smartd[14023]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  4 22:38:19 node1 smartd[14023]: smartd is exiting (exit status 0)
Dec  4 22:38:19 node1 libvirtd: Shutting down on signal 15
Dec  4 22:38:19 node1 sshd[14070]: Received signal 15; terminating.
Dec  4 22:38:19 node1 multipathd: 149455400000000000000000001000000498e00000f000000: stop event checker thread
Dec  4 22:38:20 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Dec  4 22:38:20 node1 kernel: Kernel logging (proc) stopped.
Dec  4 22:38:20 node1 kernel: Kernel log daemon terminating.
Dec  4 22:38:20 node1 syslog-ng[12341]: Termination requested via signal, terminating;
Dec  4 22:38:20 node1 syslog-ng[12341]: syslog-ng shutting down; version='2.0.9'
Dec  5 09:30:01 node1 syslog-ng[2391]: syslog-ng starting up; version='2.0.9'
Dec  5 09:30:01 node1 rchal: CPU frequency scaling is not supported by your processor.
Dec  5 09:30:01 node1 rchal: boot with 'CPUFREQ=no' in to avoid this warning.
Dec  5 09:30:01 node1 rchal: Cannot load cpufreq governors - No cpufreq driver available
Dec  5 09:30:01 node1 ifup:     lo        
Dec  5 09:30:01 node1 ifup:     lo        
Dec  5 09:30:01 node1 ifup: IP address: 127.0.0.1/8  
Dec  5 09:30:01 node1 ifup:  
Dec  5 09:30:01 node1 ifup:               
Dec  5 09:30:01 node1 ifup: IP address: 127.0.0.2/8  
Dec  5 09:30:01 node1 ifup:  
Dec  5 09:30:02 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  5 09:30:02 node1 ifup:     eth0      
Dec  5 09:30:02 node1 ifup: IP address: 10.0.0.10/24  
Dec  5 09:30:02 node1 ifup:  
Dec  5 09:30:02 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:30:02 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  5 09:30:03 node1 ifup:     eth1      
Dec  5 09:30:03 node1 ifup: IP address: 10.0.1.11/24  
Dec  5 09:30:03 node1 ifup:  
Dec  5 09:30:03 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:30:03 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  5 09:30:03 node1 ifup:     eth2      
Dec  5 09:30:03 node1 ifup: IP address: 192.168.1.150/24  
Dec  5 09:30:03 node1 ifup:  
Dec  5 09:30:03 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:30:03 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  5 09:30:03 node1 ifup:     eth3      
Dec  5 09:30:03 node1 ifup: IP address: 192.168.1.151/24  
Dec  5 09:30:03 node1 ifup:  
Dec  5 09:30:04 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:30:04 node1 rpcbind: cannot create socket for udp6
Dec  5 09:30:04 node1 rpcbind: cannot create socket for tcp6
Dec  5 09:30:05 node1 iscsid: iSCSI logger with pid=3608 started!
Dec  5 09:30:06 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Dec  5 09:30:06 node1 iscsid: iSCSI daemon with pid=3609 started!
Dec  5 09:30:06 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Dec  5 09:30:06 node1 kernel: IA-32 Microcode Update Driver: v1.14a-xen <tigran at aivazian.fsnet.co.uk>
Dec  5 09:30:06 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 09:30:06 node1 kernel: bnx2: eth0: using MSIX
Dec  5 09:30:06 node1 kernel: bnx2: eth1: using MSIX
Dec  5 09:30:06 node1 kernel: bnx2: eth2: using MSIX
Dec  5 09:30:06 node1 kernel: bnx2: eth3: using MSIX
Dec  5 09:30:06 node1 kernel: Loading iSCSI transport class v2.0-870.
Dec  5 09:30:06 node1 kernel: iscsi: registered transport (tcp)
Dec  5 09:30:06 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 09:30:06 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 09:30:06 node1 kernel: iscsi: registered transport (iser)
Dec  5 09:30:06 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 09:30:06 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 09:30:06 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  5 09:30:06 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  5 09:30:06 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Dec  5 09:30:07 node1 iscsid: connection1:0 is operational now
Dec  5 09:30:07 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  5 09:30:07 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Dec  5 09:30:08 node1 iscsid: connection2:0 is operational now
Dec  5 09:30:09 node1 sshd[4197]: Server listening on 0.0.0.0 port 22.
Dec  5 09:30:09 node1 xenstored: Checking store ...
Dec  5 09:30:09 node1 xenstored: Checking store complete.
Dec  5 09:30:09 node1 kernel: suspend: event channel 52
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:795: blktapctrl: v1.0.0
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [raw image (aio)]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [raw image (sync)]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [vmware image (vmdk)]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [ramdisk image (ram)]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [qcow disk (qcow)]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [qcow2 disk (qcow2)]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [ioemu disk]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl.c:797: Found driver: [raw image (cdrom)]
Dec  5 09:30:09 node1 BLKTAPCTRL[4212]: blktapctrl_linux.c:23: /dev/xen/blktap0 device already exists
Dec  5 09:30:09 node1 kernel: Bridge firewalling registered
Dec  5 09:30:09 node1 smartd[4154]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Dec  5 09:30:10 node1 smartd[4154]: Opened configuration file /etc/smartd.conf
Dec  5 09:30:10 node1 smartd[4154]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Dec  5 09:30:10 node1 smartd[4154]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Dec  5 09:30:10 node1 smartd[4154]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Dec  5 09:30:10 node1 smartd[4154]: Device: /dev/sda [SAT], opened
Dec  5 09:30:10 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Dec  5 09:30:10 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Dec  5 09:30:10 node1 smartd[4154]: Device: /dev/sda [SAT], found in smartd database.
Dec  5 09:30:10 node1 smartd[4154]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Dec  5 09:30:10 node1 /usr/sbin/cron[4555]: (CRON) STARTUP (V5.0)
Dec  5 09:30:10 node1 smartd[4154]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 09:30:10 node1 smartd[4154]: Monitoring 1 ATA and 0 SCSI devices
Dec  5 09:30:11 node1 smartd[4154]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 113 to 121
Dec  5 09:30:11 node1 smartd[4154]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 09:30:11 node1 smartd[4557]: smartd has fork()ed into background mode. New PID=4557.
Dec  5 09:30:22 node1 gdm-simple-greeter[4726]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Dec  5 09:30:22 node1 gdm-session-worker[4729]: PAM pam_putenv: NULL pam handle passed
Dec  5 09:30:33 node1 python: hp-systray(init)[4893]: error: hp-systray cannot be run as root. Exiting.
Dec  5 09:31:15 node1 kernel: No iBFT detected.
Dec  5 09:32:29 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 09:32:29 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 09:32:29 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 09:32:29 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 09:34:26 node1 kernel:  connection1:0: detected conn error (1011)
Dec  5 09:34:27 node1 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)
Dec  5 09:34:27 node1 kernel:  connection2:0: detected conn error (1011)
Dec  5 09:34:28 node1 iscsid: Kernel reported iSCSI connection 2:0 error (1011) state (3)
Dec  5 09:34:30 node1 iscsid: connection1:0 is operational after recovery (1 attempts)
Dec  5 09:34:31 node1 iscsid: connection2:0 is operational after recovery (1 attempts)
Dec  5 09:34:56 node1 kernel: Loading iSCSI transport class v2.0-870.
Dec  5 09:34:56 node1 kernel: iscsi: registered transport (tcp)
Dec  5 09:34:56 node1 kernel: iscsi: registered transport (iser)
Dec  5 09:34:56 node1 iscsid: iSCSI logger with pid=5540 started!
Dec  5 09:34:56 node1 kernel: scsi6 : iSCSI Initiator over TCP/IP
Dec  5 09:34:56 node1 kernel: scsi7 : iSCSI Initiator over TCP/IP
Dec  5 09:34:57 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Dec  5 09:34:57 node1 iscsid: iSCSI daemon with pid=5541 started!
Dec  5 09:34:57 node1 iscsid: connection1:0 is operational now
Dec  5 09:34:57 node1 iscsid: connection2:0 is operational now
Dec  5 09:36:07 node1 kernel:  connection2:0: detected conn error (1011)
Dec  5 09:36:07 node1 kernel:  connection1:0: detected conn error (1011)
Dec  5 09:36:07 node1 iscsid: Kernel reported iSCSI connection 2:0 error (1011) state (3)
Dec  5 09:36:07 node1 iscsid: Kernel reported iSCSI connection 1:0 error (1011) state (3)
Dec  5 09:36:10 node1 iscsid: connection2:0 is operational after recovery (1 attempts)
Dec  5 09:36:10 node1 iscsid: connection1:0 is operational after recovery (1 attempts)
Dec  5 09:37:11 node1 shutdown[5563]: shutting down for system reboot
Dec  5 09:37:11 node1 init: Switching to runlevel: 6
Dec  5 09:37:13 node1 smartd[4557]: smartd received signal 15: Terminated
Dec  5 09:37:13 node1 smartd[4557]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 09:37:13 node1 smartd[4557]: smartd is exiting (exit status 0)
Dec  5 09:37:13 node1 libvirtd: Shutting down on signal 15
Dec  5 09:37:13 node1 sshd[4197]: Received signal 15; terminating.
Dec  5 09:37:13 node1 bonobo-activation-server (root-5745): could not associate with desktop session: Failed to connect to socket /tmp/dbus-SLqfq8gyPo: Connection refused
Dec  5 09:37:14 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Dec  5 09:37:14 node1 kernel: Kernel logging (proc) stopped.
Dec  5 09:37:14 node1 kernel: Kernel log daemon terminating.
Dec  5 09:37:14 node1 syslog-ng[2391]: Termination requested via signal, terminating;
Dec  5 09:37:14 node1 syslog-ng[2391]: syslog-ng shutting down; version='2.0.9'
Dec  5 09:38:32 node1 syslog-ng[2234]: syslog-ng starting up; version='2.0.9'
Dec  5 09:38:33 node1 ifup:     lo        
Dec  5 09:38:33 node1 ifup:     lo        
Dec  5 09:38:33 node1 ifup: IP address: 127.0.0.1/8  
Dec  5 09:38:33 node1 ifup:  
Dec  5 09:38:33 node1 ifup:               
Dec  5 09:38:33 node1 ifup: IP address: 127.0.0.2/8  
Dec  5 09:38:33 node1 ifup:  
Dec  5 09:38:33 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  5 09:38:33 node1 ifup:     eth0      
Dec  5 09:38:33 node1 ifup: IP address: 10.0.0.10/24  
Dec  5 09:38:33 node1 ifup:  
Dec  5 09:38:34 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:38:34 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  5 09:38:34 node1 ifup:     eth1      
Dec  5 09:38:34 node1 ifup: IP address: 10.0.1.11/24  
Dec  5 09:38:34 node1 ifup:  
Dec  5 09:38:34 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:38:34 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  5 09:38:34 node1 firmware.sh[2989]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 09:38:34 node1 firmware.sh[3016]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 09:38:34 node1 firmware.sh[3031]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 09:38:34 node1 ifup:     eth2      
Dec  5 09:38:34 node1 firmware.sh[3037]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 09:38:34 node1 ifup: IP address: 192.168.1.150/24  
Dec  5 09:38:34 node1 ifup:  
Dec  5 09:38:35 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:38:35 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  5 09:38:35 node1 ifup:     eth3      
Dec  5 09:38:35 node1 ifup: IP address: 192.168.1.151/24  
Dec  5 09:38:35 node1 ifup:  
Dec  5 09:38:35 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 09:38:36 node1 rpcbind: cannot create socket for udp6
Dec  5 09:38:36 node1 rpcbind: cannot create socket for tcp6
Dec  5 09:38:37 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Dec  5 09:38:37 node1 kernel: bnx2: eth0: using MSIX
Dec  5 09:38:37 node1 kernel: bnx2: eth1: using MSIX
Dec  5 09:38:37 node1 kernel: IA-32 Microcode Update Driver: v1.14a <tigran at aivazian.fsnet.co.uk>
Dec  5 09:38:37 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 09:38:37 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 09:38:37 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 09:38:37 node1 kernel: bnx2: eth2: using MSIX
Dec  5 09:38:37 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 09:38:37 node1 kernel: bnx2: eth3: using MSIX
Dec  5 09:38:37 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  5 09:38:37 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  5 09:38:38 node1 kernel: Loading iSCSI transport class v2.0-870.
Dec  5 09:38:38 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  5 09:38:38 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 09:38:38 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 09:38:38 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 09:38:38 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 09:38:38 node1 kernel: iscsi: registered transport (tcp)
Dec  5 09:38:38 node1 sshd[4201]: Server listening on 0.0.0.0 port 22.
Dec  5 09:38:38 node1 smartd[3880]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Dec  5 09:38:38 node1 smartd[3880]: Opened configuration file /etc/smartd.conf
Dec  5 09:38:38 node1 smartd[3880]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Dec  5 09:38:38 node1 smartd[3880]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Dec  5 09:38:38 node1 smartd[3880]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Dec  5 09:38:38 node1 smartd[3880]: Device: /dev/sda [SAT], opened
Dec  5 09:38:39 node1 smartd[3880]: Device: /dev/sda [SAT], found in smartd database.
Dec  5 09:38:39 node1 smartd[3880]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Dec  5 09:38:39 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Dec  5 09:38:39 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Dec  5 09:38:39 node1 smartd[3880]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 09:38:39 node1 smartd[3880]: Monitoring 1 ATA and 0 SCSI devices
Dec  5 09:38:39 node1 smartd[3880]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 121 to 118
Dec  5 09:38:39 node1 kernel: iscsi: registered transport (iser)
Dec  5 09:38:40 node1 iscsid: iSCSI logger with pid=4312 started!
Dec  5 09:38:40 node1 smartd[3880]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 09:38:40 node1 smartd[4316]: smartd has fork()ed into background mode. New PID=4316.
Dec  5 09:38:40 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Dec  5 09:38:41 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Dec  5 09:38:41 node1 /usr/sbin/cron[4419]: (CRON) STARTUP (V5.0)
Dec  5 09:38:41 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Dec  5 09:38:41 node1 iscsid: iSCSI daemon with pid=4313 started!
Dec  5 09:38:42 node1 kernel: bootsplash: status on console 0 changed to on
Dec  5 09:38:42 node1 iscsid: connection1:0 is operational now
Dec  5 09:38:42 node1 iscsid: connection2:0 is operational now
Dec  5 09:38:48 node1 gdm-simple-greeter[4596]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Dec  5 09:45:53 node1 kernel: CE: hpet increasing min_delta_ns to 22500 nsec
Dec  5 09:46:35 node1 python: hp-systray(init)[4815]: error: hp-systray cannot be run as root. Exiting.
Dec  5 09:46:54 node1 shutdown[4912]: shutting down for system halt
Dec  5 09:46:54 node1 init: Switching to runlevel: 0
Dec  5 09:46:56 node1 kernel: bootsplash: status on console 0 changed to on
Dec  5 09:46:56 node1 smartd[4316]: smartd received signal 15: Terminated
Dec  5 09:46:56 node1 smartd[4316]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 09:46:56 node1 smartd[4316]: smartd is exiting (exit status 0)
Dec  5 09:46:56 node1 libvirtd: Shutting down on signal 15
Dec  5 09:46:56 node1 sshd[4201]: Received signal 15; terminating.
Dec  5 09:46:57 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Dec  5 09:46:57 node1 kernel: Kernel logging (proc) stopped.
Dec  5 09:46:57 node1 kernel: Kernel log daemon terminating.
Dec  5 09:46:57 node1 syslog-ng[2234]: Termination requested via signal, terminating;
Dec  5 09:46:57 node1 syslog-ng[2234]: syslog-ng shutting down; version='2.0.9'
Dec  5 10:24:06 node1 syslog-ng[2235]: syslog-ng starting up; version='2.0.9'
Dec  5 10:24:07 node1 ifup:     lo        
Dec  5 10:24:07 node1 firmware.sh[2634]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 10:24:07 node1 ifup:     lo        
Dec  5 10:24:07 node1 firmware.sh[2665]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 10:24:07 node1 ifup: IP address: 127.0.0.1/8  
Dec  5 10:24:07 node1 firmware.sh[2674]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 10:24:07 node1 ifup:  
Dec  5 10:24:07 node1 firmware.sh[2684]: Cannot find  firmware file 'intel-ucode/06-1e-05'
Dec  5 10:24:07 node1 ifup:               
Dec  5 10:24:07 node1 ifup: IP address: 127.0.0.2/8  
Dec  5 10:24:07 node1 ifup:  
Dec  5 10:24:07 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  5 10:24:07 node1 ifup:     eth0      
Dec  5 10:24:07 node1 ifup: IP address: 10.0.0.10/24  
Dec  5 10:24:07 node1 ifup:  
Dec  5 10:24:08 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 10:24:08 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Dec  5 10:24:08 node1 ifup:     eth1      
Dec  5 10:24:08 node1 ifup: IP address: 10.0.1.11/24  
Dec  5 10:24:08 node1 ifup:  
Dec  5 10:24:08 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 10:24:08 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  5 10:24:09 node1 ifup:     eth2      
Dec  5 10:24:09 node1 ifup: IP address: 192.168.1.150/24  
Dec  5 10:24:09 node1 ifup:  
Dec  5 10:24:09 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 10:24:09 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Dec  5 10:24:09 node1 ifup:     eth3      
Dec  5 10:24:09 node1 ifup: IP address: 192.168.1.151/24  
Dec  5 10:24:09 node1 ifup:  
Dec  5 10:24:09 node1 SuSEfirewall2: SuSEfirewall2 not active
Dec  5 10:24:10 node1 rpcbind: cannot create socket for udp6
Dec  5 10:24:10 node1 iscsid: iSCSI logger with pid=3528 started!
Dec  5 10:24:10 node1 rpcbind: cannot create socket for tcp6
Dec  5 10:24:11 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Dec  5 10:24:11 node1 iscsid: iSCSI daemon with pid=3529 started!
Dec  5 10:24:11 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Dec  5 10:24:11 node1 kernel: IA-32 Microcode Update Driver: v1.14a <tigran at aivazian.fsnet.co.uk>
Dec  5 10:24:11 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 10:24:11 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 10:24:11 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 10:24:11 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Dec  5 10:24:11 node1 kernel: bnx2: eth0: using MSIX
Dec  5 10:24:11 node1 kernel: bnx2: eth1: using MSIX
Dec  5 10:24:11 node1 kernel: bnx2: eth2: using MSIX
Dec  5 10:24:11 node1 kernel: CE: hpet increasing min_delta_ns to 15000 nsec
Dec  5 10:24:11 node1 kernel: bnx2: eth3: using MSIX
Dec  5 10:24:11 node1 kernel: Loading iSCSI transport class v2.0-870.
Dec  5 10:24:11 node1 kernel: iscsi: registered transport (tcp)
Dec  5 10:24:11 node1 kernel: iscsi: registered transport (iser)
Dec  5 10:24:11 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  5 10:24:11 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Dec  5 10:24:12 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
Dec  5 10:24:12 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Dec  5 10:24:12 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Dec  5 10:24:13 node1 iscsid: connection1:0 is operational now
Dec  5 10:24:13 node1 iscsid: connection2:0 is operational now
Dec  5 10:24:14 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 10:24:14 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 10:24:15 node1 smartd[3972]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Dec  5 10:24:15 node1 smartd[3972]: Opened configuration file /etc/smartd.conf
Dec  5 10:24:15 node1 smartd[3972]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Dec  5 10:24:15 node1 smartd[3972]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Dec  5 10:24:15 node1 smartd[3972]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Dec  5 10:24:15 node1 smartd[3972]: Device: /dev/sda [SAT], opened
Dec  5 10:24:15 node1 smartd[3972]: Device: /dev/sda [SAT], found in smartd database.
Dec  5 10:24:15 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Dec  5 10:24:15 node1 kernel: device-mapper: ioctl: error adding target to table
Dec  5 10:24:15 node1 smartd[3972]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Dec  5 10:24:15 node1 smartd[3972]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 10:24:15 node1 smartd[3972]: Monitoring 1 ATA and 0 SCSI devices
Dec  5 10:24:16 node1 smartd[3972]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 118 to 120
Dec  5 10:24:16 node1 smartd[3972]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 10:24:16 node1 smartd[4299]: smartd has fork()ed into background mode. New PID=4299.
Dec  5 10:24:16 node1 sshd[4328]: Server listening on 0.0.0.0 port 22.
Dec  5 10:24:16 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Dec  5 10:24:16 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Dec  5 10:24:18 node1 /usr/sbin/cron[4482]: (CRON) STARTUP (V5.0)
Dec  5 10:24:24 node1 gdm-simple-greeter[4589]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Dec  5 10:24:36 node1 python: hp-systray(init)[4763]: error: hp-systray cannot be run as root. Exiting.
Dec  5 10:24:37 node1 kernel: CE: hpet increasing min_delta_ns to 22500 nsec
Dec  5 10:24:45 node1 shutdown[4851]: shutting down for system halt
Dec  5 10:24:46 node1 init: Switching to runlevel: 0
Dec  5 10:24:48 node1 kernel: bootsplash: status on console 0 changed to on
Dec  5 10:24:48 node1 smartd[4299]: smartd received signal 15: Terminated
Dec  5 10:24:48 node1 smartd[4299]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Dec  5 10:24:48 node1 smartd[4299]: smartd is exiting (exit status 0)
Dec  5 10:24:48 node1 libvirtd: Shutting down on signal 15
Dec  5 10:24:48 node1 sshd[4328]: Received signal 15; terminating.
Dec  5 10:24:49 node1 bonobo-activation-server (root-4982): could not associate with desktop session: Failed to connect to socket /tmp/dbus-S0I9R9im99: Connection refused
Dec  5 10:24:49 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Dec  5 10:24:49 node1 kernel: Kernel logging (proc) stopped.
Dec  5 10:24:49 node1 kernel: Kernel log daemon terminating.
Dec  5 10:24:49 node1 syslog-ng[2235]: Termination requested via signal, terminating;
Dec  5 10:24:49 node1 syslog-ng[2235]: syslog-ng shutting down; version='2.0.9'
Jan  9 20:28:48 node1 syslog-ng[2328]: syslog-ng starting up; version='2.0.9'
Jan  9 20:28:49 node1 ifup:     lo        
Jan  9 20:28:49 node1 ifup:     lo        
Jan  9 20:28:49 node1 ifup: IP address: 127.0.0.1/8  
Jan  9 20:28:49 node1 ifup:  
Jan  9 20:28:49 node1 ifup:               
Jan  9 20:28:49 node1 ifup: IP address: 127.0.0.2/8  
Jan  9 20:28:49 node1 ifup:  
Jan  9 20:28:49 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 20:28:49 node1 ifup:     eth0      
Jan  9 20:28:49 node1 ifup: IP address: 10.0.0.10/24  
Jan  9 20:28:49 node1 ifup:  
Jan  9 20:28:49 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 20:28:50 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 20:28:50 node1 ifup:     eth1      
Jan  9 20:28:50 node1 ifup: IP address: 10.0.1.11/24  
Jan  9 20:28:50 node1 ifup:  
Jan  9 20:28:50 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 20:28:50 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 20:28:50 node1 ifup:     eth2      
Jan  9 20:28:50 node1 ifup: IP address: 192.168.1.150/24  
Jan  9 20:28:50 node1 ifup:  
Jan  9 20:28:50 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 20:28:50 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 20:28:51 node1 ifup:     eth3      
Jan  9 20:28:51 node1 ifup: IP address: 192.168.1.151/24  
Jan  9 20:28:51 node1 ifup:  
Jan  9 20:28:51 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 20:28:51 node1 rpcbind: cannot create socket for udp6
Jan  9 20:28:51 node1 rpcbind: cannot create socket for tcp6
Jan  9 20:28:53 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Jan  9 20:28:53 node1 kernel: IA-32 Microcode Update Driver: v1.14a <tigran at aivazian.fsnet.co.uk>
Jan  9 20:28:53 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 20:28:53 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 20:28:53 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 20:28:53 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 20:28:53 node1 kernel: bnx2: eth0: using MSIX
Jan  9 20:28:53 node1 kernel: bnx2: eth1: using MSIX
Jan  9 20:28:53 node1 kernel: bnx2: eth2: using MSIX
Jan  9 20:28:53 node1 kernel: bnx2: eth3: using MSIX
Jan  9 20:28:53 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 20:28:53 node1 kernel: Loading iSCSI transport class v2.0-870.
Jan  9 20:28:53 node1 smartd[3790]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Jan  9 20:28:53 node1 smartd[3790]: Opened configuration file /etc/smartd.conf
Jan  9 20:28:53 node1 smartd[3790]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Jan  9 20:28:53 node1 smartd[3790]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Jan  9 20:28:53 node1 smartd[3790]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Jan  9 20:28:53 node1 smartd[3790]: Device: /dev/sda [SAT], opened
Jan  9 20:28:53 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 20:28:53 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 20:28:53 node1 smartd[3790]: Device: /dev/sda [SAT], found in smartd database.
Jan  9 20:28:53 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 20:28:53 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 20:28:54 node1 kernel: iscsi: registered transport (tcp)
Jan  9 20:28:54 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 20:28:54 node1 smartd[3790]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Jan  9 20:28:54 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 20:28:54 node1 smartd[3790]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 20:28:54 node1 smartd[3790]: Monitoring 1 ATA and 0 SCSI devices
Jan  9 20:28:54 node1 smartd[3790]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 120 to 122
Jan  9 20:28:54 node1 smartd[3790]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 20:28:54 node1 smartd[4224]: smartd has fork()ed into background mode. New PID=4224.
Jan  9 20:28:54 node1 sshd[4251]: Server listening on 0.0.0.0 port 22.
Jan  9 20:28:55 node1 kernel: iscsi: registered transport (iser)
Jan  9 20:28:55 node1 iscsid: iSCSI logger with pid=4282 started!
Jan  9 20:28:55 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Jan  9 20:28:55 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Jan  9 20:28:55 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Jan  9 20:28:55 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Jan  9 20:28:56 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Jan  9 20:28:56 node1 iscsid: iSCSI daemon with pid=4283 started!
Jan  9 20:28:56 node1 iscsid: conn 0 login rejected: initiator error - target not found (02/03)
Jan  9 20:28:56 node1 iscsid: conn 0 login rejected: initiator error - target not found (02/03)
Jan  9 20:28:56 node1 kernel: CE: hpet increasing min_delta_ns to 15000 nsec
Jan  9 20:28:56 node1 /usr/sbin/cron[4460]: (CRON) STARTUP (V5.0)
Jan  9 20:28:57 node1 kernel: bootsplash: status on console 0 changed to on
Jan  9 20:29:03 node1 gdm-simple-greeter[4612]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Jan  9 20:39:03 node1 gdm-session-worker[4615]: PAM pam_putenv: NULL pam handle passed
Jan  9 20:39:06 node1 gdm-session-worker[5249]: PAM pam_putenv: NULL pam handle passed
Jan  9 20:45:14 node1 python: hp-systray(init)[5451]: error: hp-systray cannot be run as root. Exiting.
Jan  9 20:45:15 node1 kernel: CE: hpet increasing min_delta_ns to 22500 nsec
Jan  9 20:53:26 node1 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jan  9 20:56:23 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 20:56:46 node1 kernel: bnx2: eth2 NIC Copper Link is Down
Jan  9 20:56:48 node1 kernel: bnx2: eth3 NIC Copper Link is Down
Jan  9 20:56:50 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 20:57:03 node1 kernel: bnx2: eth2 NIC Copper Link is Down
Jan  9 20:57:06 node1 kernel: bnx2: eth3 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 20:57:15 node1 kernel: bnx2: eth3 NIC Copper Link is Down
Jan  9 20:57:19 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 20:58:23 node1 shutdown[5763]: shutting down for system reboot
Jan  9 20:58:23 node1 init: Switching to runlevel: 6
Jan  9 20:58:25 node1 kernel: bootsplash: status on console 0 changed to on
Jan  9 20:58:25 node1 smartd[4224]: smartd received signal 15: Terminated
Jan  9 20:58:25 node1 smartd[4224]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 20:58:25 node1 smartd[4224]: smartd is exiting (exit status 0)
Jan  9 20:58:25 node1 sshd[4251]: Received signal 15; terminating.
Jan  9 20:58:25 node1 libvirtd: Shutting down on signal 15
Jan  9 20:58:26 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Jan  9 20:58:26 node1 kernel: Kernel logging (proc) stopped.
Jan  9 20:58:26 node1 kernel: Kernel log daemon terminating.
Jan  9 20:58:26 node1 syslog-ng[2328]: Termination requested via signal, terminating;
Jan  9 20:58:26 node1 syslog-ng[2328]: syslog-ng shutting down; version='2.0.9'
Jan  9 21:02:27 node1 syslog-ng[2376]: syslog-ng starting up; version='2.0.9'
Jan  9 21:02:27 node1 rchal: CPU frequency scaling is not supported by your processor.
Jan  9 21:02:27 node1 rchal: boot with 'CPUFREQ=no' in to avoid this warning.
Jan  9 21:02:27 node1 rchal: Cannot load cpufreq governors - No cpufreq driver available
Jan  9 21:02:28 node1 ifup:     lo        
Jan  9 21:02:28 node1 ifup:     lo        
Jan  9 21:02:28 node1 ifup: IP address: 127.0.0.1/8  
Jan  9 21:02:28 node1 ifup:  
Jan  9 21:02:28 node1 ifup:               
Jan  9 21:02:28 node1 ifup: IP address: 127.0.0.2/8  
Jan  9 21:02:28 node1 ifup:  
Jan  9 21:02:28 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 21:02:29 node1 ifup:     eth0      
Jan  9 21:02:29 node1 ifup: IP address: 10.0.0.10/24  
Jan  9 21:02:29 node1 ifup:  
Jan  9 21:02:29 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:02:30 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 21:02:30 node1 ifup:     eth1      
Jan  9 21:02:30 node1 ifup: IP address: 10.0.1.11/24  
Jan  9 21:02:30 node1 ifup:  
Jan  9 21:02:30 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:02:30 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 21:02:31 node1 ifup:     eth2      
Jan  9 21:02:31 node1 ifup: IP address: 192.168.1.150/24  
Jan  9 21:02:31 node1 ifup:  
Jan  9 21:02:31 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:02:31 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 21:02:31 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Jan  9 21:02:31 node1 kernel: IA-32 Microcode Update Driver: v1.14a-xen <tigran at aivazian.fsnet.co.uk>
Jan  9 21:02:31 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 21:02:31 node1 kernel: bnx2: eth0: using MSIX
Jan  9 21:02:31 node1 kernel: bnx2: eth1: using MSIX
Jan  9 21:02:31 node1 kernel: bnx2: eth2: using MSIX
Jan  9 21:02:31 node1 kernel: bnx2: eth3: using MSIX
Jan  9 21:02:31 node1 ifup:     eth3      
Jan  9 21:02:31 node1 ifup: IP address: 192.168.1.151/24  
Jan  9 21:02:31 node1 ifup:  
Jan  9 21:02:32 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:02:32 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 21:02:32 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 21:02:32 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:02:32 node1 kernel: Loading iSCSI transport class v2.0-870.
Jan  9 21:02:33 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 21:02:33 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:02:33 node1 kernel: iscsi: registered transport (tcp)
Jan  9 21:02:33 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 21:02:33 node1 rpcbind: cannot create socket for udp6
Jan  9 21:02:33 node1 rpcbind: cannot create socket for tcp6
Jan  9 21:02:33 node1 multipathd: dm-0: remove map (uevent)
Jan  9 21:02:33 node1 kernel: iscsi: registered transport (iser)
Jan  9 21:02:33 node1 iscsid: iSCSI logger with pid=3595 started!
Jan  9 21:02:33 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Jan  9 21:02:34 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 21:02:34 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Jan  9 21:02:34 node1 iscsid: iSCSI daemon with pid=3596 started!
Jan  9 21:02:34 node1 iscsid: conn 0 login rejected: initiator error - target not found (02/03)
Jan  9 21:02:34 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Jan  9 21:02:35 node1 iscsid: conn 0 login rejected: initiator error - target not found (02/03)
Jan  9 21:02:37 node1 smartd[4023]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Jan  9 21:02:37 node1 smartd[4023]: Opened configuration file /etc/smartd.conf
Jan  9 21:02:37 node1 smartd[4023]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Jan  9 21:02:37 node1 smartd[4023]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Jan  9 21:02:37 node1 smartd[4023]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Jan  9 21:02:37 node1 smartd[4023]: Device: /dev/sda [SAT], opened
Jan  9 21:02:37 node1 sshd[4185]: Server listening on 0.0.0.0 port 22.
Jan  9 21:02:37 node1 smartd[4023]: Device: /dev/sda [SAT], found in smartd database.
Jan  9 21:02:37 node1 smartd[4023]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Jan  9 21:02:37 node1 smartd[4023]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 21:02:37 node1 smartd[4023]: Monitoring 1 ATA and 0 SCSI devices
Jan  9 21:02:37 node1 smartd[4023]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 122 to 114
Jan  9 21:02:37 node1 smartd[4023]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 21:02:37 node1 smartd[4392]: smartd has fork()ed into background mode. New PID=4392.
Jan  9 21:02:38 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Jan  9 21:02:38 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Jan  9 21:02:38 node1 xenstored: Checking store ...
Jan  9 21:02:38 node1 xenstored: Checking store complete.
Jan  9 21:02:38 node1 kernel: suspend: event channel 52
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:795: blktapctrl: v1.0.0
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [raw image (aio)]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [raw image (sync)]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [vmware image (vmdk)]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [ramdisk image (ram)]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [qcow disk (qcow)]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [qcow2 disk (qcow2)]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [ioemu disk]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl.c:797: Found driver: [raw image (cdrom)]
Jan  9 21:02:38 node1 BLKTAPCTRL[4427]: blktapctrl_linux.c:23: /dev/xen/blktap0 device already exists
Jan  9 21:02:39 node1 kernel: Bridge firewalling registered
Jan  9 21:02:39 node1 /usr/sbin/cron[4545]: (CRON) STARTUP (V5.0)
Jan  9 21:02:50 node1 gdm-simple-greeter[4723]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Jan  9 21:03:45 node1 python: hp-systray(init)[4902]: error: hp-systray cannot be run as root. Exiting.
Jan  9 21:23:45 node1 -- MARK --
Jan  9 21:32:38 node1 smartd[4392]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 114 to 113
Jan  9 21:36:37 node1 kernel: No iBFT detected.
Jan  9 21:37:05 node1 kernel: scsi6 : iSCSI Initiator over TCP/IP
Jan  9 21:37:05 node1 kernel: scsi 6:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] Write Protect is off
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] Write Protect is off
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:05 node1 kernel:  sdb: unknown partition table
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: [sdb] Attached SCSI disk
Jan  9 21:37:05 node1 kernel: sd 6:0:0:0: Attached scsi generic sg2 type 0
Jan  9 21:37:05 node1 kernel: scsi 6:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] Write Protect is off
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] Mode Sense: 77 00 00 08
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] Write Protect is off
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] Mode Sense: 77 00 00 08
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:05 node1 kernel:  sdc: unknown partition table
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: [sdc] Attached SCSI disk
Jan  9 21:37:05 node1 kernel: sd 6:0:0:1: Attached scsi generic sg3 type 0
Jan  9 21:37:05 node1 iscsid: connection3:0 is operational now
Jan  9 21:37:06 node1 multipathd: 1494554000000000000000000010000008a0500000f000000: event checker started
Jan  9 21:37:06 node1 multipathd: sdc path added to devmap 1494554000000000000000000010000008a0500000f000000
Jan  9 21:37:06 node1 multipathd: dm-0: add map (uevent)
Jan  9 21:37:06 node1 kernel: device-mapper: table: device 8:16 too small for target
Jan  9 21:37:06 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:37:06 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:37:06 node1 multipathd: 149455400000000000000000001000000900500000f000000: event checker started
Jan  9 21:37:06 node1 multipathd: sdb path added to devmap 149455400000000000000000001000000900500000f000000
Jan  9 21:37:06 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan  9 21:37:06 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:37:06 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:37:06 node1 multipathd: dm-3: add map (uevent)
Jan  9 21:37:21 node1 kernel: scsi7 : iSCSI Initiator over TCP/IP
Jan  9 21:37:21 node1 kernel: scsi 7:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] Write Protect is off
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] Mode Sense: 77 00 00 08
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] Write Protect is off
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] Mode Sense: 77 00 00 08
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:21 node1 kernel:  sdd: unknown partition table
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: [sdd] Attached SCSI disk
Jan  9 21:37:21 node1 kernel: sd 7:0:0:0: Attached scsi generic sg4 type 0
Jan  9 21:37:21 node1 kernel: scsi 7:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] Write Protect is off
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] Write Protect is off
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:37:21 node1 kernel:  sde: unknown partition table
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: [sde] Attached SCSI disk
Jan  9 21:37:21 node1 kernel: sd 7:0:0:1: Attached scsi generic sg5 type 0
Jan  9 21:37:21 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan  9 21:37:21 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:37:21 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:37:21 node1 multipathd: 1494554000000000000000000010000008a0500000f000000: load table [0 125853210 multipath 0 0 2 1 round-robin 0 1 1 8:32 1000 round-
Jan  9 21:37:21 node1 multipathd: sde path added to devmap 1494554000000000000000000010000008a0500000f000000
Jan  9 21:37:21 node1 multipathd: dm-0: add map (uevent)
Jan  9 21:37:21 node1 multipathd: dm-0: devmap already registered
Jan  9 21:37:21 node1 iscsid: connection4:0 is operational now
Jan  9 21:37:22 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan  9 21:37:22 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:37:22 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:37:22 node1 multipathd: sdd path added to devmap 149455400000000000000000001000000900500000f000000
Jan  9 21:37:22 node1 multipathd: dm-3: add map (uevent)
Jan  9 21:38:02 node1 kernel: device-mapper: table: 253:4: multipath: error getting device
Jan  9 21:38:02 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:38:03 node1 kernel: device-mapper: table: 253:4: multipath: error getting device
Jan  9 21:38:03 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:38:15 node1 shutdown[5690]: shutting down for system reboot
Jan  9 21:38:15 node1 init: Switching to runlevel: 6
Jan  9 21:38:18 node1 smartd[4392]: smartd received signal 15: Terminated
Jan  9 21:38:18 node1 smartd[4392]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 21:38:18 node1 smartd[4392]: smartd is exiting (exit status 0)
Jan  9 21:38:18 node1 multipathd: 149455400000000000000000001000000900500000f000000: stop event checker thread
Jan  9 21:38:18 node1 libvirtd: Shutting down on signal 15
Jan  9 21:38:18 node1 sshd[4185]: Received signal 15; terminating.
Jan  9 21:38:19 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Jan  9 21:38:19 node1 kernel: Kernel logging (proc) stopped.
Jan  9 21:38:19 node1 kernel: Kernel log daemon terminating.
Jan  9 21:38:19 node1 syslog-ng[2376]: Termination requested via signal, terminating;
Jan  9 21:38:19 node1 syslog-ng[2376]: syslog-ng shutting down; version='2.0.9'
Jan  9 21:39:39 node1 syslog-ng[2337]: syslog-ng starting up; version='2.0.9'
Jan  9 21:39:40 node1 ifup:     lo        
Jan  9 21:39:40 node1 rchal: CPU frequency scaling is not supported by your processor.
Jan  9 21:39:40 node1 rchal: boot with 'CPUFREQ=no' in to avoid this warning.
Jan  9 21:39:40 node1 rchal: Cannot load cpufreq governors - No cpufreq driver available
Jan  9 21:39:40 node1 ifup:     lo        
Jan  9 21:39:40 node1 ifup: IP address: 127.0.0.1/8  
Jan  9 21:39:40 node1 ifup:  
Jan  9 21:39:40 node1 ifup:               
Jan  9 21:39:40 node1 ifup: IP address: 127.0.0.2/8  
Jan  9 21:39:40 node1 ifup:  
Jan  9 21:39:41 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 21:39:41 node1 ifup:     eth0      
Jan  9 21:39:41 node1 ifup: IP address: 10.0.0.10/24  
Jan  9 21:39:41 node1 ifup:  
Jan  9 21:39:41 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:39:41 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 21:39:41 node1 ifup:     eth1      
Jan  9 21:39:41 node1 ifup: IP address: 10.0.1.11/24  
Jan  9 21:39:41 node1 ifup:  
Jan  9 21:39:42 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:39:42 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 21:39:42 node1 ifup:     eth2      
Jan  9 21:39:42 node1 ifup: IP address: 192.168.1.150/24  
Jan  9 21:39:42 node1 ifup:  
Jan  9 21:39:42 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:39:43 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 21:39:43 node1 ifup:     eth3      
Jan  9 21:39:43 node1 ifup: IP address: 192.168.1.151/24  
Jan  9 21:39:43 node1 ifup:  
Jan  9 21:39:43 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 21:39:43 node1 rpcbind: cannot create socket for udp6
Jan  9 21:39:43 node1 rpcbind: cannot create socket for tcp6
Jan  9 21:39:44 node1 iscsid: iSCSI logger with pid=3584 started!
Jan  9 21:39:44 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Jan  9 21:39:44 node1 kernel: IA-32 Microcode Update Driver: v1.14a <tigran at aivazian.fsnet.co.uk>
Jan  9 21:39:44 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 21:39:44 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 21:39:44 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 21:39:44 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 21:39:44 node1 kernel: bnx2: eth0: using MSIX
Jan  9 21:39:44 node1 kernel: bnx2: eth1: using MSIX
Jan  9 21:39:44 node1 kernel: bnx2: eth2: using MSIX
Jan  9 21:39:44 node1 kernel: bnx2: eth3: using MSIX
Jan  9 21:39:44 node1 kernel: Loading iSCSI transport class v2.0-870.
Jan  9 21:39:44 node1 kernel: iscsi: registered transport (tcp)
Jan  9 21:39:44 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 21:39:44 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:39:44 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 21:39:44 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:39:44 node1 kernel: iscsi: registered transport (iser)
Jan  9 21:39:44 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 21:39:45 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 21:39:45 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Jan  9 21:39:45 node1 iscsid: iSCSI daemon with pid=3585 started!
Jan  9 21:39:45 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Jan  9 21:39:45 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Jan  9 21:39:45 node1 kernel: scsi 4:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:39:45 node1 kernel: scsi 5:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel:  sdb: unknown partition table
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: [sdb] Attached SCSI disk
Jan  9 21:39:45 node1 kernel: sd 4:0:0:0: Attached scsi generic sg2 type 0
Jan  9 21:39:45 node1 kernel: scsi 4:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel:  sdc: unknown partition table
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: [sdc] Attached SCSI disk
Jan  9 21:39:45 node1 kernel: sd 5:0:0:0: Attached scsi generic sg3 type 0
Jan  9 21:39:45 node1 kernel: scsi 5:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel:  sde: unknown partition table
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: [sde] Attached SCSI disk
Jan  9 21:39:45 node1 kernel: sd 5:0:0:1: Attached scsi generic sg4 type 0
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] Write Protect is off
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] Mode Sense: 77 00 00 08
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 21:39:45 node1 kernel:  sdd: unknown partition table
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: [sdd] Attached SCSI disk
Jan  9 21:39:45 node1 kernel: sd 4:0:0:1: Attached scsi generic sg5 type 0
Jan  9 21:39:45 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 21:39:46 node1 iscsid: connection1:0 is operational now
Jan  9 21:39:46 node1 iscsid: connection2:0 is operational now
Jan  9 21:39:46 node1 multipathd: 1494554000000000000000000010000008a0500000f000000: event checker started
Jan  9 21:39:46 node1 multipathd: sde path added to devmap 1494554000000000000000000010000008a0500000f000000
Jan  9 21:39:47 node1 kernel: device-mapper: table: device 8:32 too small for target
Jan  9 21:39:47 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:39:47 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:39:47 node1 kernel: device-mapper: table: device 8:32 too small for target
Jan  9 21:39:47 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:39:47 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:39:47 node1 multipathd: 149455400000000000000000001000000900500000f000000: event checker started
Jan  9 21:39:47 node1 multipathd: sdc path added to devmap 149455400000000000000000001000000900500000f000000
Jan  9 21:39:47 node1 multipathd: dm-1: add map (uevent)
Jan  9 21:39:47 node1 multipathd: dm-4: remove map (uevent)
Jan  9 21:39:47 node1 multipathd: dm-4: devmap not registered, can't remove
Jan  9 21:39:48 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan  9 21:39:48 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:39:48 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:39:48 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan  9 21:39:48 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 21:39:48 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 21:39:48 node1 sshd[4506]: Server listening on 0.0.0.0 port 22.
Jan  9 21:39:49 node1 smartd[4375]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Jan  9 21:39:49 node1 smartd[4375]: Opened configuration file /etc/smartd.conf
Jan  9 21:39:49 node1 smartd[4375]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Jan  9 21:39:49 node1 smartd[4375]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sda [SAT], opened
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sda [SAT], found in smartd database.
Jan  9 21:39:49 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Jan  9 21:39:49 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sdb, opened
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sdb, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdb' to turn on SMART features
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sdc, opened
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sdc, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdc' to turn on SMART features
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sdd, opened
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sdd, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdd' to turn on SMART features
Jan  9 21:39:49 node1 smartd[4375]: Device: /dev/sde, opened
Jan  9 21:39:50 node1 smartd[4375]: Device: /dev/sde, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sde' to turn on SMART features
Jan  9 21:39:50 node1 smartd[4375]: Monitoring 1 ATA and 0 SCSI devices
Jan  9 21:39:50 node1 smartd[4375]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 21:39:50 node1 smartd[4601]: smartd has fork()ed into background mode. New PID=4601.
Jan  9 21:39:50 node1 /usr/sbin/cron[4660]: (CRON) STARTUP (V5.0)
Jan  9 21:39:51 node1 kernel: bootsplash: status on console 0 changed to on
Jan  9 21:39:57 node1 gdm-simple-greeter[4787]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Jan  9 21:39:58 node1 gdm-session-worker[4790]: PAM pam_putenv: NULL pam handle passed
Jan  9 21:43:41 node1 sshd[4802]: Accepted keyboard-interactive/pam for root from 192.168.1.61 port 43186 ssh2
Jan  9 21:48:23 node1 sshd[4877]: Accepted keyboard-interactive/pam for root from 192.168.1.61 port 51844 ssh2
Jan  9 21:49:58 node1 gdm-session-worker[4790]: PAM pam_putenv: NULL pam handle passed
Jan  9 21:53:12 node1 sshd[4928]: Accepted keyboard-interactive/pam for root from 192.168.1.160 port 47864 ssh2
Jan  9 21:58:51 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:58:51 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:58:51 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:58:51 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:58:52 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:58:52 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:58:52 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:58:52 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:58:52 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:58:52 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:58:59 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:58:59 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:24 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:28 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:28 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:29 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:29 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:34 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:34 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:41 node1 kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan  9 21:59:41 node1 kernel: ISO 9660 Extensions: RRIP_1991A
Jan  9 21:59:59 node1 useradd[6031]: new account added - account=ais, uid=39, gid=100, home=/, shell=/bin/false, by=0
Jan  9 21:59:59 node1 useradd[6031]: running USERADD_CMD command - script=/usr/sbin/useradd.local, account=ais, uid=39, gid=100, home=/, by=0
Jan  9 22:00:09 node1 shadow[6101]: new group added - group=haclient, gid=90, by=0
Jan  9 22:00:09 node1 shadow[6101]: running GROUPADD_CMD command - script=/usr/sbin/groupadd.local, account=haclient, uid=90, gid=0, home=, by=0
Jan  9 22:00:09 node1 useradd[6104]: new account added - account=hacluster, uid=90, gid=90, home=/var/lib/heartbeat/cores/hacluster, shell=/bin/false, by=0
Jan  9 22:00:09 node1 useradd[6104]: running USERADD_CMD command - script=/usr/sbin/useradd.local, account=hacluster, uid=90, gid=90, home=/var/lib/heartbeat/cores/hacluster, by=0
Jan  9 22:00:59 node1 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] AIS Executive Service RELEASE 'subrev 1152 version 0.80'
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] Copyright (C) 2002-2006 MontaVista Software, Inc and contributors.
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] Copyright (C) 2006 Red Hat, Inc.
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] AIS Executive Service: started and ready to provide service.
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] Token Timeout (5000 ms) retransmit timeout (490 ms)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] token hold (382 ms) retransmits before loss (10 retrans)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] join (1000 ms) send_join (45 ms) consensus (2500 ms) merge (200 ms)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] downcheck (1000 ms) fail to recv const (50 msgs)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] seqno unchanged const (30 rotations) Maximum network MTU 1500
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] window size per rotation (50 messages) maximum messages per rotation (20 messages)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] send threads (0 threads)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] RRP token expired timeout (490 ms)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] RRP token problem counter (2000 ms)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] RRP threshold (10 problem count)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] RRP mode set to none.
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] heartbeat_failures_allowed (0)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] max_network_delay (50 ms)
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes).
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] The network interface [192.168.1.150] is now up.
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] Created or loaded sequence id 0.192.168.1.150 for this ring.
Jan  9 22:01:23 node1 openais[6728]: [TOTEM] entering GATHER state from 15.
Jan  9 22:01:23 node1 openais[6728]: [crm  ] info: process_ais_conf: Reading configure
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: config_find_next: Processing additional logging options...
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: get_config_opt: Found 'off' for option: debug
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: get_config_opt: Found 'yes' for option: to_syslog
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: get_config_opt: Found 'daemon' for option: syslog_facility
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: config_find_next: Processing additional service options...
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: get_config_opt: Found 'yes' for option: use_logd
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: get_config_opt: Found 'yes' for option: use_mgmtd
Jan  9 22:01:23 node1 openais[6728]: [crm  ] info: pcmk_plugin_init: CRM: Initialized
Jan  9 22:01:23 node1 openais[6728]: [crm  ] Logging: Initialized pcmk_plugin_init
Jan  9 22:01:23 node1 openais[6728]: [crm  ] info: pcmk_plugin_init: Service: 9
Jan  9 22:01:23 node1 openais[6728]: [crm  ] info: pcmk_plugin_init: Local node id: 369207488
Jan  9 22:01:23 node1 openais[6728]: [crm  ] info: pcmk_plugin_init: Local hostname: node1
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: update_member: Creating entry for node 369207488 born on 0
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: update_member: 0x772400 Node 369207488 now known as node1 (was: (null))
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: update_member: Node node1 now has 1 quorum votes (was 0)
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: update_member: Node 369207488/node1 is now: member
Jan  9 22:01:23 node1 lrmd: [6736]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:01:23 node1 lrmd: [6736]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jan  9 22:01:23 node1 stonithd: [6734]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:01:23 node1 stonithd: [6734]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan  9 22:01:23 node1 stonithd: [6734]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan  9 22:01:23 node1 mgmtd: [6740]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:01:23 node1 mgmtd: [6740]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jan  9 22:01:23 node1 mgmtd: [6740]: debug: Enabling coredumps
Jan  9 22:01:23 node1 cib: [6735]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:01:23 node1 lrmd: [6736]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:01:23 node1 cib: [6735]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:01:23 node1 lrmd: [6736]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan  9 22:01:23 node1 cib: [6735]: info: G_main_add_TriggerHandler: Added signal manual handler
Jan  9 22:01:23 node1 cib: [6735]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:01:23 node1 mgmtd: [6740]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan  9 22:01:23 node1 cib: [6735]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jan  9 22:01:23 node1 cib: [6735]: WARN: retrieveCib: Cluster configuration not found: /var/lib/heartbeat/crm/cib.xml
Jan  9 22:01:23 node1 pengine: [6738]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: spawn_child: Forked child 6734 for process stonithd
Jan  9 22:01:23 node1 lrmd: [6736]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan  9 22:01:23 node1 mgmtd: [6740]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan  9 22:01:23 node1 cib: [6735]: WARN: readCibXmlFile: Primary configuration corrupt or unusable, trying backup...
Jan  9 22:01:23 node1 attrd: [6737]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:01:23 node1 pengine: [6738]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:01:23 node1 lrmd: [6736]: info: Started.
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: spawn_child: Forked child 6735 for process cib
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: spawn_child: Forked child 6736 for process lrmd
Jan  9 22:01:23 node1 crmd: [6739]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:01:23 node1 cib: [6735]: WARN: readCibXmlFile: Continuing with an empty configuration.
Jan  9 22:01:23 node1 attrd: [6737]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:01:23 node1 attrd: [6737]: info: main: Starting up....
Jan  9 22:01:23 node1 stonithd: [6734]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:01:23 node1 mgmtd: [6740]: info: init_crm
Jan  9 22:01:23 node1 mgmtd: [6740]: info: login to cib: 0, ret:-10
Jan  9 22:01:23 node1 pengine: [6738]: info: main: Starting pengine
Jan  9 22:01:23 node1 cib: [6735]: info: startCib: CIB Initialization completed successfully
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: spawn_child: Forked child 6737 for process attrd
Jan  9 22:01:23 node1 crmd: [6739]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:01:23 node1 attrd: [6737]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:01:23 node1 cib: [6735]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:01:23 node1 attrd: [6737]: info: init_ais_connection: AIS connection established
Jan  9 22:01:23 node1 crmd: [6739]: info: main: CRM Hg Version: 0080ec086ae9c20ad5c4c3562000c0ad68374f0a
Jan  9 22:01:23 node1 attrd: [6737]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:01:23 node1 cib: [6735]: info: init_ais_connection: AIS connection established
Jan  9 22:01:24 node1 attrd: [6737]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:01:24 node1 crmd: [6739]: info: crmd_init: Starting crmd
Jan  9 22:01:24 node1 attrd: [6737]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:01:24 node1 crmd: [6739]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:01:24 node1 cib: [6735]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:01:24 node1 cib: [6735]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:01:24 node1 cib: [6735]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:01:23 node1 openais[6728]: [MAIN ] info: spawn_child: Forked child 6738 for process pengine
Jan  9 22:01:24 node1 openais[6728]: [MAIN ] info: spawn_child: Forked child 6739 for process crmd
Jan  9 22:01:24 node1 cib: [6735]: info: cib_init: Starting cib mainloop
Jan  9 22:01:24 node1 cib: [6735]: info: ais_dispatch: Membership 4: quorum still lost
Jan  9 22:01:24 node1 openais[6728]: [MAIN ] info: spawn_child: Forked child 6740 for process mgmtd
Jan  9 22:01:24 node1 openais[6728]: [crm  ] info: pcmk_startup: CRM: Initialized
Jan  9 22:01:24 node1 cib: [6735]: info: crm_update_peer: Node node1: id=369207488 state=member (new) addr=r(0) ip(192.168.1.150)  (new) votes=1 (new) born=0 seen=4 proc=00000000000000000000000000053312 (new)
Jan  9 22:01:24 node1 openais[6728]: [MAIN ] Service initialized 'Pacemaker Cluster Manager'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais extended virtual synchrony service'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais cluster membership service B.01.01'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais availability management framework B.01.01'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais checkpoint service B.01.01'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais event service B.01.01'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais distributed locking service B.01.01'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais message service B.01.01'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais configuration service'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais cluster closed process group service v1.01'
Jan  9 22:01:24 node1 openais[6728]: [SERV ] Service initialized 'openais cluster config database access v1.01'
Jan  9 22:01:24 node1 openais[6728]: [SYNC ] Not using a virtual synchrony filter.
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] Creating commit token because I am the rep.
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] Saving state aru 0 high seq received 0
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] Storing new sequence id for ring 4
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] entering COMMIT state.
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] entering RECOVERY state.
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] position [0] member 192.168.1.150:
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] previous ring seq 0 rep 192.168.1.150
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] aru 0 high delivered 0 received flag 1
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] Did not need to originate any messages in recovery.
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] Sending initial ORF token
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] New Configuration:
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] Members Left:
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] Members Joined:
Jan  9 22:01:24 node1 openais[6728]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 4: memb=0, new=0, lost=0
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] New Configuration:
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] Members Left:
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] Members Joined:
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:01:24 node1 openais[6728]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 4: memb=1, new=1, lost=0
Jan  9 22:01:24 node1 openais[6728]: [crm  ] info: pcmk_peer_update: NEW:  node1 369207488
Jan  9 22:01:24 node1 openais[6728]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan  9 22:01:24 node1 openais[6728]: [MAIN ] info: update_member: Node node1 now has process list: 00000000000000000000000000053312 (340754)
Jan  9 22:01:24 node1 openais[6728]: [SYNC ] This node is within the primary component and will provide service.
Jan  9 22:01:24 node1 openais[6728]: [TOTEM] entering OPERATIONAL state.
Jan  9 22:01:24 node1 openais[6728]: [CLM  ] got nodejoin message 192.168.1.150
Jan  9 22:01:24 node1 openais[6728]: [crm  ] info: pcmk_ipc: Recorded connection 0x77e120 for attrd/6737
Jan  9 22:01:24 node1 openais[6728]: [crm  ] info: pcmk_ipc: Recorded connection 0x77e2a0 for cib/6735
Jan  9 22:01:24 node1 openais[6728]: [crm  ] info: pcmk_ipc: Sending membership update 4 to cib
Jan  9 22:01:23 node1 stonithd: [6734]: info: init_ais_connection: AIS connection established
Jan  9 22:01:24 node1 openais[6728]: [crm  ] info: pcmk_ipc: Recorded connection 0x77e020 for stonithd/6734
Jan  9 22:01:24 node1 stonithd: [6734]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:01:24 node1 stonithd: [6734]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:01:24 node1 stonithd: [6734]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:01:24 node1 stonithd: [6734]: notice: /usr/lib64/heartbeat/stonithd start up successfully.
Jan  9 22:01:24 node1 stonithd: [6734]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:01:24 node1 cib: [6744]: info: write_cib_contents: Wrote version 0.0.0 of the CIB to disk (digest: 36355366b21803051156a92d5c175b07)
Jan  9 22:01:24 node1 cib: [6744]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.8Z0md8 (digest: /var/lib/heartbeat/crm/cib.KwC5c3)
Jan  9 22:01:25 node1 crmd: [6739]: info: do_cib_control: CIB connection established
Jan  9 22:01:25 node1 crmd: [6739]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:01:25 node1 crmd: [6739]: info: init_ais_connection: AIS connection established
Jan  9 22:01:25 node1 openais[6728]: [crm  ] info: pcmk_ipc: Recorded connection 0x77f7a0 for crmd/6739
Jan  9 22:01:25 node1 openais[6728]: [crm  ] info: pcmk_ipc: Sending membership update 4 to crmd
Jan  9 22:01:25 node1 crmd: [6739]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:01:25 node1 crmd: [6739]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:01:25 node1 crmd: [6739]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:01:25 node1 crmd: [6739]: info: do_ha_control: Connected to the cluster
Jan  9 22:01:25 node1 crmd: [6739]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Jan  9 22:01:25 node1 crmd: [6739]: info: crmd_init: Starting crmd's mainloop
Jan  9 22:01:25 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:01:25 node1 openais[6728]: [crm  ] info: update_expected_votes: Expected quorum votes 1024 -> 2
Jan  9 22:01:25 node1 crmd: [6739]: info: ais_dispatch: Membership 4: quorum still lost
Jan  9 22:01:25 node1 crmd: [6739]: info: crm_update_peer: Node node1: id=369207488 state=member (new) addr=r(0) ip(192.168.1.150)  (new) votes=1 (new) born=0 seen=4 proc=00000000000000000000000000053312 (new)
Jan  9 22:01:25 node1 crmd: [6739]: info: do_started: The local CRM is operational
Jan  9 22:01:25 node1 crmd: [6739]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jan  9 22:01:25 node1 mgmtd: [6740]: debug: main: run the loop...
Jan  9 22:01:25 node1 mgmtd: [6740]: info: Started.
Jan  9 22:01:26 node1 crmd: [6739]: info: ais_dispatch: Membership 4: quorum still lost
Jan  9 22:01:34 node1 attrd: [6737]: info: main: Sending full refresh
Jan  9 22:01:34 node1 attrd: [6737]: info: main: Starting mainloop...
Jan  9 22:01:36 node1 crmd: [6739]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
Jan  9 22:01:36 node1 crmd: [6739]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jan  9 22:01:36 node1 crmd: [6739]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan  9 22:01:36 node1 crmd: [6739]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jan  9 22:01:36 node1 crmd: [6739]: info: do_te_control: Registering TE UUID: 9b736a73-9fb9-4706-9a4e-5880beca4fb0
Jan  9 22:01:36 node1 crmd: [6739]: WARN: cib_client_add_notify_callback: Callback already present
Jan  9 22:01:36 node1 crmd: [6739]: info: set_graph_functions: Setting custom graph functions
Jan  9 22:01:36 node1 crmd: [6739]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Jan  9 22:01:36 node1 crmd: [6739]: info: do_dc_takeover: Taking over DC status for this partition
Jan  9 22:01:36 node1 cib: [6735]: info: cib_process_readwrite: We are now in R/W mode
Jan  9 22:01:36 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=0.0.0): ok (rc=0)
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="0" num_updates="0" />
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib crm_feature_set="3.0.1" admin_epoch="0" epoch="1" num_updates="1" />
Jan  9 22:01:36 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=0.1.1): ok (rc=0)
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="1" num_updates="1" />
Jan  9 22:01:36 node1 crmd: [6739]: info: join_make_offer: Making join offers based on membership 4
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="2" num_updates="1" >
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:01:36 node1 crmd: [6739]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" __crm_diff_marker__="added:top" >
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.3-0080ec086ae9c20ad5c4c3562000c0ad68374f0a" />
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:01:36 node1 crmd: [6739]: info: ais_dispatch: Membership 4: quorum still lost
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:01:36 node1 cib: [6835]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-0.raw
Jan  9 22:01:36 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/10, version=0.2.1): ok (rc=0)
Jan  9 22:01:36 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:01:36 node1 crmd: [6739]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:01:36 node1 crmd: [6739]: info: ais_dispatch: Membership 4: quorum still lost
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="2" num_updates="1" />
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="3" num_updates="1" >
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="2" __crm_diff_marker__="added:top" />
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:01:36 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:01:36 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/14, version=0.3.1): ok (rc=0)
Jan  9 22:01:36 node1 crmd: [6739]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:01:36 node1 crmd: [6739]: info: do_state_transition: All 1 cluster nodes responded to the join offer.
Jan  9 22:01:36 node1 crmd: [6739]: info: do_dc_join_finalize: join-1: Syncing the CIB from node1 to the rest of the cluster
Jan  9 22:01:36 node1 crmd: [6739]: info: te_connect_stonith: Attempting connection to fencing daemon...
Jan  9 22:01:36 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/17, version=0.3.1): ok (rc=0)
Jan  9 22:01:36 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/18, version=0.3.1): ok (rc=0)
Jan  9 22:01:36 node1 cib: [6835]: info: write_cib_contents: Wrote version 0.1.0 of the CIB to disk (digest: 28eb20e8139f7a31d8c4ab2884e6e335)
Jan  9 22:01:36 node1 cib: [6835]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.FwIExO (digest: /var/lib/heartbeat/crm/cib.WZVPbf)
Jan  9 22:01:36 node1 cib: [6836]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-1.raw
Jan  9 22:01:36 node1 cib: [6836]: info: write_cib_contents: Wrote version 0.3.0 of the CIB to disk (digest: d86d722b55d4f5a78dde3b7b01f114d0)
Jan  9 22:01:36 node1 cib: [6836]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ISsmlX (digest: /var/lib/heartbeat/crm/cib.KkJYno)
Jan  9 22:01:37 node1 crmd: [6739]: info: te_connect_stonith: Connected
Jan  9 22:01:37 node1 crmd: [6739]: info: update_attrd: Connecting to attrd...
Jan  9 22:01:37 node1 crmd: [6739]: info: update_attrd: Updating terminate=<none> via attrd for node1
Jan  9 22:01:37 node1 crmd: [6739]: info: update_attrd: Updating shutdown=<none> via attrd for node1
Jan  9 22:01:37 node1 attrd: [6737]: info: find_hash_entry: Creating hash entry for terminate
Jan  9 22:01:37 node1 attrd: [6737]: info: find_hash_entry: Creating hash entry for shutdown
Jan  9 22:01:37 node1 crmd: [6739]: info: do_dc_join_ack: join-1: Updating node state to member for node1
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="3" num_updates="1" />
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="4" num_updates="1" >
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: +     <nodes >
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: +       <node id="node1" uname="node1" type="normal" __crm_diff_marker__="added:top" />
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: +     </nodes>
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:01:37 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/19, version=0.4.1): ok (rc=0)
Jan  9 22:01:37 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=local/crmd/20, version=0.4.1): ok (rc=0)
Jan  9 22:01:37 node1 crmd: [6739]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
Jan  9 22:01:37 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/21, version=0.4.1): ok (rc=0)
Jan  9 22:01:37 node1 crmd: [6739]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan  9 22:01:37 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/22, version=0.4.1): ok (rc=0)
Jan  9 22:01:37 node1 crmd: [6739]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan  9 22:01:37 node1 crmd: [6739]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:01:37 node1 crmd: [6739]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jan  9 22:01:37 node1 crmd: [6739]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Jan  9 22:01:37 node1 crmd: [6739]: info: crm_update_quorum: Updating quorum status to false (call=26)
Jan  9 22:01:37 node1 attrd: [6737]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Jan  9 22:01:37 node1 crmd: [6739]: info: abort_transition_graph: do_te_invoke:190 - Triggered transition abort (complete=1) : Peer Cancelled
Jan  9 22:01:37 node1 attrd: [6737]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate
Jan  9 22:01:37 node1 crmd: [6739]: info: do_pe_invoke: Query 27: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:01:37 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/24, version=0.4.2): ok (rc=0)
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="4" num_updates="2" />
Jan  9 22:01:37 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib have-quorum="0" dc-uuid="node1" admin_epoch="0" epoch="5" num_updates="1" />
Jan  9 22:01:37 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/26, version=0.5.1): ok (rc=0)
Jan  9 22:01:37 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:01:37 node1 crmd: [6739]: info: need_abort: Aborting on change to have-quorum
Jan  9 22:01:37 node1 crmd: [6739]: info: do_pe_invoke: Query 28: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:01:37 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263070897-7, seq=4, quorate=0
Jan  9 22:01:37 node1 attrd: [6737]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan  9 22:01:37 node1 pengine: [6738]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan  9 22:01:37 node1 pengine: [6738]: WARN: unpack_resources: No STONITH resources have been defined
Jan  9 22:01:37 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:01:37 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:01:37 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 0: 1 actions in 1 synapses
Jan  9 22:01:37 node1 crmd: [6739]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1263070897-7) derived from /var/lib/pengine/pe-input-0.bz2
Jan  9 22:01:37 node1 crmd: [6739]: info: te_rsc_command: Initiating action 2: probe_complete probe_complete on node1 (local) - no waiting
Jan  9 22:01:37 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:01:37 node1 crmd: [6739]: notice: run_graph: Transition 0 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-0.bz2): Complete
Jan  9 22:01:37 node1 crmd: [6739]: info: te_graph_trigger: Transition 0 is now complete
Jan  9 22:01:37 node1 crmd: [6739]: info: notify_crmd: Transition 0 status: done - <null>
Jan  9 22:01:37 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:01:37 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:01:37 node1 cib: [6838]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-2.raw
Jan  9 22:01:37 node1 pengine: [6738]: info: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-0.bz2
Jan  9 22:01:37 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:01:37 node1 cib: [6838]: info: write_cib_contents: Wrote version 0.5.0 of the CIB to disk (digest: 99eff5992e53c20b26a44e1336f16a01)
Jan  9 22:01:37 node1 cib: [6838]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.LbeWQ2 (digest: /var/lib/heartbeat/crm/cib.J1Jkxw)
Jan  9 22:01:56 node1 passwd[6840]: password changed - account=hacluster, uid=90, by=0
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] entering GATHER state from 11.
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] Creating commit token because I am the rep.
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] Saving state aru 22 high seq received 22
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] Storing new sequence id for ring 8
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] entering COMMIT state.
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] entering RECOVERY state.
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] position [0] member 192.168.1.150:
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] previous ring seq 4 rep 192.168.1.150
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] aru 22 high delivered 22 received flag 1
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] position [1] member 192.168.1.160:
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] previous ring seq 4 rep 192.168.1.160
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] aru 3 high delivered 1 received flag 1
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] Did not need to originate any messages in recovery.
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] Sending initial ORF token
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] New Configuration:
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:02:30 node1 crmd: [6739]: notice: ais_dispatch: Membership 8: quorum aquired
Jan  9 22:02:30 node1 crmd: [6739]: info: crm_new_peer: Node <null> now has id: 536979648
Jan  9 22:02:30 node1 crmd: [6739]: info: crm_update_peer: Node (null): id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=0 born=0 seen=8 proc=00000000000000000000000000000000
Jan  9 22:02:30 node1 cib: [6735]: notice: ais_dispatch: Membership 8: quorum aquired
Jan  9 22:02:30 node1 cib: [6735]: info: crm_new_peer: Node <null> now has id: 536979648
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] Members Left:
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] Members Joined:
Jan  9 22:02:30 node1 openais[6728]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 8: memb=1, new=0, lost=0
Jan  9 22:02:30 node1 crmd: [6739]: info: crm_update_quorum: Updating quorum status to true (call=34)
Jan  9 22:02:30 node1 cib: [6735]: info: crm_update_peer: Node (null): id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=0 born=0 seen=8 proc=00000000000000000000000000000000
Jan  9 22:02:30 node1 openais[6728]: [crm  ] info: pcmk_peer_update: memb: node1 369207488
Jan  9 22:02:30 node1 cib: [6735]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:02:30 node1 cib: [6735]: info: crm_get_peer: Node 536979648 is now known as node2
Jan  9 22:02:30 node1 cib: [6735]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 (new) born=8 seen=8 proc=00000000000000000000000000053312 (new)
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] New Configuration:
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] Members Left:
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] Members Joined:
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:02:30 node1 openais[6728]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 8: memb=2, new=1, lost=0
Jan  9 22:02:30 node1 openais[6728]: [MAIN ] info: update_member: Creating entry for node 536979648 born on 8
Jan  9 22:02:30 node1 openais[6728]: [MAIN ] info: update_member: Node 536979648/unknown is now: member
Jan  9 22:02:30 node1 openais[6728]: [crm  ] info: pcmk_peer_update: NEW:  .pending. 536979648
Jan  9 22:02:30 node1 openais[6728]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan  9 22:02:30 node1 openais[6728]: [crm  ] info: pcmk_peer_update: MEMB: .pending. 536979648
Jan  9 22:02:30 node1 openais[6728]: [crm  ] info: send_member_notification: Sending membership update 8 to 2 children
Jan  9 22:02:30 node1 openais[6728]: [MAIN ] info: update_member: 0x772400 Node 369207488 ((null)) born on: 8
Jan  9 22:02:30 node1 openais[6728]: [SYNC ] This node is within the primary component and will provide service.
Jan  9 22:02:30 node1 openais[6728]: [TOTEM] entering OPERATIONAL state.
Jan  9 22:02:30 node1 openais[6728]: [MAIN ] info: update_member: 0x771fd0 Node 536979648 (node2) born on: 8
Jan  9 22:02:30 node1 openais[6728]: [MAIN ] info: update_member: 0x771fd0 Node 536979648 now known as node2 (was: (null))
Jan  9 22:02:30 node1 openais[6728]: [MAIN ] info: update_member: Node node2 now has process list: 00000000000000000000000000053312 (340754)
Jan  9 22:02:30 node1 openais[6728]: [MAIN ] info: update_member: Node node2 now has 1 quorum votes (was 0)
Jan  9 22:02:30 node1 openais[6728]: [crm  ] info: send_member_notification: Sending membership update 8 to 2 children
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] got nodejoin message 192.168.1.150
Jan  9 22:02:30 node1 openais[6728]: [CLM  ] got nodejoin message 192.168.1.160
Jan  9 22:02:30 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/32, version=0.5.2): ok (rc=0)
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib have-quorum="0" admin_epoch="0" epoch="5" num_updates="2" />
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib have-quorum="1" admin_epoch="0" epoch="6" num_updates="1" />
Jan  9 22:02:30 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/34, version=0.6.1): ok (rc=0)
Jan  9 22:02:30 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:02:30 node1 crmd: [6739]: info: need_abort: Aborting on change to have-quorum
Jan  9 22:02:30 node1 crmd: [6739]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:02:30 node1 crmd: [6739]: info: crm_get_peer: Node 536979648 is now known as node2
Jan  9 22:02:30 node1 crmd: [6739]: info: ais_status_callback: status: node2 is now member
Jan  9 22:02:30 node1 crmd: [6739]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 (new) born=8 seen=8 proc=00000000000000000000000000053312 (new)
Jan  9 22:02:30 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/36, version=0.6.1): ok (rc=0)
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="6" num_updates="1" />
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="7" num_updates="1" >
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: +     <nodes >
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: +       <node id="node2" uname="node2" type="normal" __crm_diff_marker__="added:top" />
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: +     </nodes>
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:02:30 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:02:30 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/37, version=0.7.1): ok (rc=0)
Jan  9 22:02:30 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:02:30 node1 crmd: [6739]: info: do_state_transition: Membership changed: 4 -> 8 - join restart
Jan  9 22:02:30 node1 crmd: [6739]: info: do_pe_invoke: Query 41: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:02:30 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=do_state_transition ]
Jan  9 22:02:30 node1 crmd: [6739]: info: update_dc: Unset DC node1
Jan  9 22:02:30 node1 crmd: [6739]: info: join_make_offer: Making join offers based on membership 8
Jan  9 22:02:30 node1 crmd: [6739]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Jan  9 22:02:30 node1 crmd: [6739]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:02:30 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/40, version=0.7.2): ok (rc=0)
Jan  9 22:02:30 node1 cib: [6843]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-3.raw
Jan  9 22:02:30 node1 cib: [6843]: info: write_cib_contents: Wrote version 0.7.0 of the CIB to disk (digest: 805cfdbea00bd570b73835c694f43e95)
Jan  9 22:02:30 node1 cib: [6843]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ViwBOo (digest: /var/lib/heartbeat/crm/cib.0upif7)
Jan  9 22:02:32 node1 crmd: [6739]: info: update_dc: Unset DC node1
Jan  9 22:02:32 node1 crmd: [6739]: info: do_dc_join_offer_all: A new node joined the cluster
Jan  9 22:02:32 node1 crmd: [6739]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Jan  9 22:02:32 node1 crmd: [6739]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:02:32 node1 crmd: [6739]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:02:32 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Jan  9 22:02:32 node1 crmd: [6739]: info: do_dc_join_finalize: join-3: Syncing the CIB from node1 to the rest of the cluster
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/44, version=0.7.2): ok (rc=0)
Jan  9 22:02:32 node1 crmd: [6739]: info: do_dc_join_ack: join-3: Updating node state to member for node1
Jan  9 22:02:32 node1 crmd: [6739]: info: do_dc_join_ack: join-3: Updating node state to member for node2
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/45, version=0.7.2): ok (rc=0)
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/46, version=0.7.2): ok (rc=0)
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/47, version=0.7.3): ok (rc=0)
Jan  9 22:02:32 node1 crmd: [6739]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan  9 22:02:32 node1 crmd: [6739]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:02:32 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:02:32 node1 crmd: [6739]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Jan  9 22:02:32 node1 crmd: [6739]: info: crm_update_quorum: Updating quorum status to true (call=53)
Jan  9 22:02:32 node1 crmd: [6739]: info: abort_transition_graph: do_te_invoke:190 - Triggered transition abort (complete=1) : Peer Cancelled
Jan  9 22:02:32 node1 attrd: [6737]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Jan  9 22:02:32 node1 crmd: [6739]: info: do_pe_invoke: Query 54: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:02:32 node1 attrd: [6737]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/49, version=0.7.4): ok (rc=0)
Jan  9 22:02:32 node1 crmd: [6739]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/transient_attributes (origin=node2/crmd/6, version=0.7.4): ok (rc=0)
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=node2/crmd/7, version=0.7.4): ok (rc=0)
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/51, version=0.7.5): ok (rc=0)
Jan  9 22:02:32 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/53, version=0.7.5): ok (rc=0)
Jan  9 22:02:32 node1 attrd: [6737]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan  9 22:02:32 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263070952-20, seq=8, quorate=1
Jan  9 22:02:32 node1 pengine: [6738]: WARN: unpack_resources: No STONITH resources have been defined
Jan  9 22:02:32 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:02:32 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:02:32 node1 pengine: [6738]: info: stage6: Delaying fencing operations until there are resources to manage
Jan  9 22:02:32 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:02:32 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 1: 1 actions in 1 synapses
Jan  9 22:02:32 node1 crmd: [6739]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1263070952-20) derived from /var/lib/pengine/pe-input-1.bz2
Jan  9 22:02:32 node1 crmd: [6739]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on node2 - no waiting
Jan  9 22:02:32 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:02:32 node1 crmd: [6739]: notice: run_graph: Transition 1 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-1.bz2): Complete
Jan  9 22:02:32 node1 crmd: [6739]: info: te_graph_trigger: Transition 1 is now complete
Jan  9 22:02:32 node1 crmd: [6739]: info: notify_crmd: Transition 1 status: done - <null>
Jan  9 22:02:32 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:02:32 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:02:32 node1 pengine: [6738]: info: process_pe_message: Transition 1: PEngine Input stored in: /var/lib/pengine/pe-input-1.bz2
Jan  9 22:02:32 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:02:40 node1 attrd: [6737]: info: crm_new_peer: Node node2 now has id: 536979648
Jan  9 22:02:40 node1 attrd: [6737]: info: crm_new_peer: Node 536979648 is now known as node2
Jan  9 22:04:17 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:08:17 node1 lrmd: [6736]: debug: stonithRA plugin: provider attribute is not needed and will be ignored.
Jan  9 22:09:50 node1 smartd[4601]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 113 to 112
Jan  9 22:11:24 node1 cib: [6735]: info: cib_stats: Processed 72 operations (6250.00us average, 0% utilization) in the last 10min
Jan  9 22:15:39 node1 mgmtd: [6740]: info: CIB replace: resources
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="7" num_updates="6" />
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="8" num_updates="1" >
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +     <resources >
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +       <primitive class="stonith" id="STONITH-node1" type="external/drac5" __crm_diff_marker__="added:top" >
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +         <meta_attributes id="STONITH-node1-meta_attributes" >
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-72eb7c22-71cc-4d2e-82bd-1d079853ef42" name="target-role" value="Started" />
Jan  9 22:15:39 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +         </meta_attributes>
Jan  9 22:15:39 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +         <operations id="STONITH-node1-operations" >
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +           <op id="STONITH-node1-op-monitor-15" interval="15" name="monitor" start-delay="15" timeout="15" />
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +         </operations>
Jan  9 22:15:39 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +         <instance_attributes id="STONITH-node1-instance_attributes" >
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-785957f5-14b7-4a83-91c7-bff3617c057e" name="hostname" value="node1" />
Jan  9 22:15:39 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-2ae0af41-33b3-480a-ab6b-b423523138ac" name="ipaddr" value="192.168.1.10" />
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-8c7f719b-60e4-456f-a394-2bc05239075f" name="userid" value="root" />
Jan  9 22:15:39 node1 crmd: [6739]: info: do_pe_invoke: Query 55: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-4a0b1725-470c-4425-b667-5acf96e603fc" name="passwd" value="novell" />
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +         </instance_attributes>
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +       </primitive>
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +     </resources>
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:15:39 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:15:39 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/mgmtd/4, version=0.8.1): ok (rc=0)
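
The cib:diff above shows the STONITH-node1 primitive exactly as it went into the CIB: class stonith, type external/drac5, with hostname, ipaddr, userid and passwd set. As a sanity check outside the cluster, the same parameters can be handed to the stonith(8) command-line tool (from the heartbeat/cluster-glue packages) by hand. This is only a sketch; the parameter names are taken from the nvpairs above and may need adjusting to whatever your copy of the external/drac5 plugin actually expects:

    # query device status, then the hosts the plugin thinks it can fence
    stonith -t external/drac5 hostname=node1 ipaddr=192.168.1.10 \
            userid=root passwd=novell -S
    stonith -t external/drac5 hostname=node1 ipaddr=192.168.1.10 \
            userid=root passwd=novell -l

If these fail in the same way, the problem is between the plugin and the DRAC rather than in the cluster configuration itself.
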
Jan  9 22:15:39 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263071739-22, seq=8, quorate=1
Jan  9 22:15:39 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:15:39 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:15:39 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:15:39 node1 pengine: [6738]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node1 on node1
Jan  9 22:15:39 node1 pengine: [6738]: notice: LogActions: Start STONITH-node1	(node1)
Jan  9 22:15:39 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:15:39 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 2: 7 actions in 7 synapses
Jan  9 22:15:39 node1 crmd: [6739]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1263071739-22) derived from /var/lib/pengine/pe-input-2.bz2
Jan  9 22:15:39 node1 crmd: [6739]: info: te_rsc_command: Initiating action 4: monitor STONITH-node1_monitor_0 on node1 (local)
Jan  9 22:15:39 node1 lrmd: [6736]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan  9 22:15:39 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=4:2:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node1_monitor_0 )
Jan  9 22:15:39 node1 lrmd: [6736]: info: rsc:STONITH-node1: monitor
Jan  9 22:15:39 node1 haclient: on_event:evt:cib_changed
Jan  9 22:15:39 node1 crmd: [6739]: info: te_rsc_command: Initiating action 6: monitor STONITH-node1_monitor_0 on node2
Jan  9 22:15:39 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node1_monitor_0 (call=2, rc=7, cib-update=56, confirmed=true) complete not running
Jan  9 22:15:39 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_monitor_0 (4) confirmed on node1 (rc=0)
Jan  9 22:15:39 node1 haclient: on_event:evt:cib_changed
Jan  9 22:15:39 node1 crmd: [6739]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on node1 (local) - no waiting
Jan  9 22:15:39 node1 haclient: on_event:evt:cib_changed
Jan  9 22:15:39 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_monitor_0 (6) confirmed on node2 (rc=0)
Jan  9 22:15:39 node1 crmd: [6739]: info: te_rsc_command: Initiating action 5: probe_complete probe_complete on node2 - no waiting
Jan  9 22:15:39 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:15:39 node1 crmd: [6739]: info: te_rsc_command: Initiating action 7: start STONITH-node1_start_0 on node1 (local)
Jan  9 22:15:39 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=7:2:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node1_start_0 )
Jan  9 22:15:39 node1 lrmd: [6736]: info: rsc:STONITH-node1: start
Jan  9 22:15:39 node1 lrmd: [6970]: info: Try to start STONITH resource <rsc_id=STONITH-node1> : Device=external/drac5
Jan  9 22:15:39 node1 cib: [6967]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-4.raw
Jan  9 22:15:39 node1 pengine: [6738]: info: process_pe_message: Transition 2: PEngine Input stored in: /var/lib/pengine/pe-input-2.bz2
Jan  9 22:15:39 node1 cib: [6967]: info: write_cib_contents: Wrote version 0.8.0 of the CIB to disk (digest: 2607c05c350adc69f9e1889595d63721)
Jan  9 22:15:39 node1 cib: [6967]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.4bass3 (digest: /var/lib/heartbeat/crm/cib.JfElx2)
Jan  9 22:15:39 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:15:43 node1 stonithd: [6988]: info: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/drac5 status' returned 65280
Jan  9 22:15:43 node1 stonithd: [6734]: WARN: start STONITH-node1 failed, because its hostlist is empty
Jan  9 22:15:43 node1 lrmd: [6736]: debug: stonithRA plugin: provider attribute is not needed and will be ignored.
Jan  9 22:15:43 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node1_start_0 (call=3, rc=1, cib-update=59, confirmed=true) complete unknown error
Jan  9 22:15:43 node1 crmd: [6739]: WARN: status_from_rc: Action 7 (STONITH-node1_start_0) on node1 failed (target: 0 vs. rc: 1): Error
Jan  9 22:15:43 node1 crmd: [6739]: WARN: update_failcount: Updating failcount for STONITH-node1 on node1 after failed start: rc=1 (update=INFINITY, time=1263071743)
Jan  9 22:15:43 node1 haclient: on_event:evt:cib_changed
Jan  9 22:15:43 node1 crmd: [6739]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node1_start_0, magic=0:1;7:2:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Event failed
Jan  9 22:15:43 node1 crmd: [6739]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan  9 22:15:43 node1 crmd: [6739]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:15:43 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_start_0 (7) confirmed on node1 (rc=4)
Jan  9 22:15:43 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:15:43 node1 crmd: [6739]: notice: run_graph: Transition 2 (Complete=6, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-input-2.bz2): Stopped
Jan  9 22:15:43 node1 crmd: [6739]: info: te_graph_trigger: Transition 2 is now complete
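
The stonithd lines above are where the start actually fails: the plugin's status call exits non-zero, the start is then rejected with "hostlist is empty" (presumably because the plugin could not report the hosts it controls either), and the LRM reports rc=1 "unknown error". Note that the number stonithd logs is the raw wait status, not the exit code; a minimal sketch of decoding it, assuming a bash shell:

    echo $(( 65280 >> 8 ))   # 255 - exit code of the drac5 status call for STONITH-node1
    echo $((   256 >> 8 ))   # 1   - exit code of the later status call for STONITH-node2

An exit code of 255 is what ssh itself returns on connection or host-key problems, so if this plugin reaches the DRAC over ssh it may be worth trying the same login from node1 by hand.
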
Jan  9 22:15:43 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:15:43 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:15:43 node1 crmd: [6739]: info: do_pe_invoke: Query 66: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:15:43 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263071743-28, seq=8, quorate=1
Jan  9 22:15:43 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:15:43 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:15:43 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:15:43 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:15:43 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Started node1 FAILED
Jan  9 22:15:43 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:15:43 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:15:43 node1 pengine: [6738]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node1 on node2
Jan  9 22:15:43 node1 pengine: [6738]: notice: LogActions: Move resource STONITH-node1	(Started node1 -> node2)
Jan  9 22:15:43 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:15:43 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 3: 3 actions in 3 synapses
Jan  9 22:15:43 node1 crmd: [6739]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1263071743-28) derived from /var/lib/pengine/pe-input-3.bz2
Jan  9 22:15:43 node1 crmd: [6739]: info: te_rsc_command: Initiating action 1: stop STONITH-node1_stop_0 on node1 (local)
Jan  9 22:15:43 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=1:3:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node1_stop_0 )
Jan  9 22:15:43 node1 lrmd: [6736]: info: rsc:STONITH-node1: stop
Jan  9 22:15:43 node1 lrmd: [7060]: info: Try to stop STONITH resource <rsc_id=STONITH-node1> : Device=external/drac5
Jan  9 22:15:43 node1 stonithd: [6734]: notice: try to stop a resource STONITH-node1 who is not in started resource queue.
Jan  9 22:15:43 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node1_stop_0 (call=4, rc=0, cib-update=67, confirmed=true) complete ok
Jan  9 22:15:43 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_stop_0 (1) confirmed on node1 (rc=0)
Jan  9 22:15:43 node1 haclient: on_event:evt:cib_changed
Jan  9 22:15:43 node1 pengine: [6738]: info: process_pe_message: Transition 3: PEngine Input stored in: /var/lib/pengine/pe-input-3.bz2
Jan  9 22:15:43 node1 crmd: [6739]: info: te_rsc_command: Initiating action 6: start STONITH-node1_start_0 on node2
Jan  9 22:15:44 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:15:46 node1 crmd: [6739]: WARN: status_from_rc: Action 6 (STONITH-node1_start_0) on node2 failed (target: 0 vs. rc: 1): Error
Jan  9 22:15:46 node1 crmd: [6739]: WARN: update_failcount: Updating failcount for STONITH-node1 on node2 after failed start: rc=1 (update=INFINITY, time=1263071746)
Jan  9 22:15:46 node1 haclient: on_event:evt:cib_changed
Jan  9 22:15:46 node1 crmd: [6739]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node1_start_0, magic=0:1;6:3:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Event failed
Jan  9 22:15:46 node1 crmd: [6739]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan  9 22:15:46 node1 crmd: [6739]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:15:46 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_start_0 (6) confirmed on node2 (rc=4)
Jan  9 22:15:46 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:15:46 node1 crmd: [6739]: notice: run_graph: Transition 3 (Complete=2, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-input-3.bz2): Stopped
Jan  9 22:15:46 node1 crmd: [6739]: info: te_graph_trigger: Transition 3 is now complete
Jan  9 22:15:46 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:15:46 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:15:46 node1 crmd: [6739]: info: do_pe_invoke: Query 74: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:15:46 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263071746-31, seq=8, quorate=1
Jan  9 22:15:46 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:15:46 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:15:46 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:15:46 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:15:46 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:15:46 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:15:46 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Started node2 FAILED
Jan  9 22:15:46 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:15:46 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:15:46 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:15:46 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:15:46 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:15:46 node1 pengine: [6738]: notice: LogActions: Stop resource STONITH-node1	(node2)
Jan  9 22:15:46 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:15:46 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 4: 2 actions in 2 synapses
Jan  9 22:15:46 node1 crmd: [6739]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1263071746-31) derived from /var/lib/pengine/pe-warn-0.bz2
Jan  9 22:15:46 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:15:46 node1 crmd: [6739]: info: te_rsc_command: Initiating action 1: stop STONITH-node1_stop_0 on node2
Jan  9 22:15:46 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_stop_0 (1) confirmed on node2 (rc=0)
Jan  9 22:15:46 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:15:46 node1 pengine: [6738]: WARN: process_pe_message: Transition 4: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-0.bz2
Jan  9 22:15:46 node1 crmd: [6739]: notice: run_graph: Transition 4 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-0.bz2): Complete
Jan  9 22:15:46 node1 crmd: [6739]: info: te_graph_trigger: Transition 4 is now complete
Jan  9 22:15:46 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:15:46 node1 crmd: [6739]: info: notify_crmd: Transition 4 status: done - <null>
Jan  9 22:15:46 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:15:46 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:15:46 node1 haclient: on_event:evt:cib_changed
Jan  9 22:15:46 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:16:22 node1 mgmtd: [6740]: info: CIB replace: resources
Jan  9 22:16:22 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/mgmtd/9, version=0.8.11): ok (rc=0)
Jan  9 22:20:07 node1 gdm-session-worker[7071]: PAM pam_putenv: NULL pam handle passed
Jan  9 22:21:24 node1 cib: [6735]: info: cib_stats: Processed 31 operations (4516.00us average, 0% utilization) in the last 10min
Jan  9 22:23:27 node1 mgmtd: [6740]: info: CIB replace: constraints
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="8" num_updates="11" />
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="9" num_updates="1" >
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: +     <constraints >
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: +       <rsc_location id="STONITH-node1-location" node="node1" rsc="STONITH-node1" score="-INFINITY" __crm_diff_marker__="added:top" />
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: +     </constraints>
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:23:27 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:23:27 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:23:27 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:23:27 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_replace for section constraints (origin=local/mgmtd/11, version=0.9.1): ok (rc=0)
Jan  9 22:23:27 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:23:27 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:23:27 node1 crmd: [6739]: info: do_pe_invoke: Query 75: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:23:27 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072207-33, seq=8, quorate=1
Jan  9 22:23:27 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:23:27 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:23:27 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:23:27 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:23:27 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:23:27 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:23:27 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:23:27 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:23:27 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:23:27 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:23:27 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:23:27 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:23:27 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:23:27 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:23:27 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 5: 0 actions in 0 synapses
Jan  9 22:23:27 node1 crmd: [6739]: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1263072207-33) derived from /var/lib/pengine/pe-warn-1.bz2
Jan  9 22:23:27 node1 haclient: on_event:evt:cib_changed
Jan  9 22:23:27 node1 cib: [7085]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-5.raw
Jan  9 22:23:27 node1 pengine: [6738]: WARN: process_pe_message: Transition 5: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-1.bz2
Jan  9 22:23:27 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:23:27 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:23:27 node1 crmd: [6739]: notice: run_graph: Transition 5 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-1.bz2): Complete
Jan  9 22:23:27 node1 crmd: [6739]: info: te_graph_trigger: Transition 5 is now complete
Jan  9 22:23:27 node1 crmd: [6739]: info: notify_crmd: Transition 5 status: done - <null>
Jan  9 22:23:27 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:23:27 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:23:27 node1 cib: [7085]: info: write_cib_contents: Wrote version 0.9.0 of the CIB to disk (digest: f05497986246d951c657aef6795fa8df)
Jan  9 22:23:27 node1 cib: [7085]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Ic3CgW (digest: /var/lib/heartbeat/crm/cib.v2txxC)
Jan  9 22:23:27 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:24:21 node1 mgmtd: [6740]: info: CIB replace: resources
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="9" num_updates="1" />
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="10" num_updates="1" >
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +     <resources >
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +       <primitive class="stonith" id="STONITH-node2" type="external/drac5" __crm_diff_marker__="added:top" >
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +         <meta_attributes id="STONITH-node2-meta_attributes" >
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-fa459867-08b3-49cb-af63-b570a4c4afe8" name="target-role" value="Started" />
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +         </meta_attributes>
Jan  9 22:24:21 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +         <operations id="STONITH-node2-operations" >
Jan  9 22:24:21 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +           <op id="STONITH-node2-op-monitor-15" interval="15" name="monitor" start-delay="15" timeout="15" />
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +         </operations>
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +         <instance_attributes id="STONITH-node2-instance_attributes" >
Jan  9 22:24:21 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-73504e93-d98b-49c8-865c-79c0cc266435" name="hostname" value="node2" />
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-8e9088e0-9aa9-4be0-8c3d-9e1118ce47c6" name="ipaddr" value="192.168.1.20" />
Jan  9 22:24:21 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +         </instance_attributes>
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +       </primitive>
Jan  9 22:24:21 node1 crmd: [6739]: info: do_pe_invoke: Query 76: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +     </resources>
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:24:21 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:24:21 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/mgmtd/14, version=0.10.1): ok (rc=0)
Jan  9 22:24:21 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:21 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072261-34, seq=8, quorate=1
Jan  9 22:24:21 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:24:21 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:21 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:24:21 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:24:21 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:21 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:24:21 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:24:21 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:24:21 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:24:21 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:24:21 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:24:21 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:24:21 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:24:21 node1 pengine: [6738]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node2 on node1
Jan  9 22:24:21 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:24:21 node1 pengine: [6738]: notice: LogActions: Start STONITH-node2	(node1)
Jan  9 22:24:21 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:24:21 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 6: 7 actions in 7 synapses
Jan  9 22:24:21 node1 crmd: [6739]: info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1263072261-34) derived from /var/lib/pengine/pe-warn-2.bz2
Jan  9 22:24:21 node1 crmd: [6739]: info: te_rsc_command: Initiating action 4: monitor STONITH-node2_monitor_0 on node1 (local)
Jan  9 22:24:21 node1 lrmd: [6736]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan  9 22:24:21 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=4:6:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node2_monitor_0 )
Jan  9 22:24:21 node1 cib: [7088]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-6.raw
Jan  9 22:24:21 node1 pengine: [6738]: WARN: process_pe_message: Transition 6: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-2.bz2
Jan  9 22:24:21 node1 lrmd: [6736]: info: rsc:STONITH-node2: monitor
Jan  9 22:24:21 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:24:21 node1 crmd: [6739]: info: te_rsc_command: Initiating action 6: monitor STONITH-node2_monitor_0 on node2
Jan  9 22:24:21 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node2_monitor_0 (call=5, rc=7, cib-update=77, confirmed=true) complete not running
Jan  9 22:24:21 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_monitor_0 (4) confirmed on node1 (rc=0)
Jan  9 22:24:21 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:21 node1 crmd: [6739]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on node1 (local) - no waiting
Jan  9 22:24:21 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_monitor_0 (6) confirmed on node2 (rc=0)
Jan  9 22:24:21 node1 cib: [7088]: info: write_cib_contents: Wrote version 0.10.0 of the CIB to disk (digest: f7633320c6f89e05eedbbe0da9b249c4)
Jan  9 22:24:21 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:21 node1 crmd: [6739]: info: te_rsc_command: Initiating action 5: probe_complete probe_complete on node2 - no waiting
Jan  9 22:24:21 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:24:21 node1 crmd: [6739]: info: te_rsc_command: Initiating action 7: start STONITH-node2_start_0 on node1 (local)
Jan  9 22:24:21 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=7:6:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node2_start_0 )
Jan  9 22:24:21 node1 lrmd: [6736]: info: rsc:STONITH-node2: start
Jan  9 22:24:21 node1 lrmd: [7091]: info: Try to start STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan  9 22:24:21 node1 stonithd: [6734]: info: Cannot get parameter userid from StonithNVpair
Jan  9 22:24:21 node1 cib: [7088]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.XXGAGL (digest: /var/lib/heartbeat/crm/cib.hXRScK)
Jan  9 22:24:21 node1 stonithd: [7109]: info: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/drac5 status' returned 256
Jan  9 22:24:21 node1 stonithd: [6734]: WARN: start STONITH-node2 failed, because its hostlist is empty
Jan  9 22:24:21 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node2_start_0 (call=6, rc=1, cib-update=80, confirmed=true) complete unknown error
Jan  9 22:24:21 node1 crmd: [6739]: WARN: status_from_rc: Action 7 (STONITH-node2_start_0) on node1 failed (target: 0 vs. rc: 1): Error
Jan  9 22:24:21 node1 crmd: [6739]: WARN: update_failcount: Updating failcount for STONITH-node2 on node1 after failed start: rc=1 (update=INFINITY, time=1263072261)
Jan  9 22:24:21 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:21 node1 crmd: [6739]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node2_start_0, magic=0:1;7:6:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Event failed
Jan  9 22:24:21 node1 crmd: [6739]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan  9 22:24:21 node1 crmd: [6739]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:24:21 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_start_0 (7) confirmed on node1 (rc=4)
Jan  9 22:24:21 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:24:21 node1 crmd: [6739]: notice: run_graph: Transition 6 (Complete=6, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-2.bz2): Stopped
Jan  9 22:24:21 node1 crmd: [6739]: info: te_graph_trigger: Transition 6 is now complete
Jan  9 22:24:21 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:24:21 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:24:21 node1 crmd: [6739]: info: do_pe_invoke: Query 87: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:24:21 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072261-40, seq=8, quorate=1
Jan  9 22:24:21 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:24:21 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:21 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:24:21 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:21 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:24:21 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:24:21 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:21 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:24:21 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:24:21 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Started node1 FAILED
Jan  9 22:24:21 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:24:21 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:24:21 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:24:21 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:24:21 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:24:21 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:24:21 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:24:21 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:24:21 node1 pengine: [6738]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node2 on node2
Jan  9 22:24:21 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:24:21 node1 pengine: [6738]: notice: LogActions: Move resource STONITH-node2	(Started node1 -> node2)
Jan  9 22:24:21 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:24:21 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 7: 3 actions in 3 synapses
Jan  9 22:24:21 node1 crmd: [6739]: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1263072261-40) derived from /var/lib/pengine/pe-warn-3.bz2
Jan  9 22:24:21 node1 crmd: [6739]: info: te_rsc_command: Initiating action 1: stop STONITH-node2_stop_0 on node1 (local)
Jan  9 22:24:21 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=1:7:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node2_stop_0 )
Jan  9 22:24:21 node1 lrmd: [6736]: info: rsc:STONITH-node2: stop
Jan  9 22:24:21 node1 pengine: [6738]: WARN: process_pe_message: Transition 7: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-3.bz2
Jan  9 22:24:22 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:24:22 node1 lrmd: [7127]: info: Try to stop STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan  9 22:24:22 node1 stonithd: [6734]: notice: try to stop a resource STONITH-node2 who is not in started resource queue.
Jan  9 22:24:22 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node2_stop_0 (call=7, rc=0, cib-update=88, confirmed=true) complete ok
Jan  9 22:24:22 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_stop_0 (1) confirmed on node1 (rc=0)
Jan  9 22:24:22 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:22 node1 crmd: [6739]: info: te_rsc_command: Initiating action 6: start STONITH-node2_start_0 on node2
Jan  9 22:24:22 node1 crmd: [6739]: WARN: status_from_rc: Action 6 (STONITH-node2_start_0) on node2 failed (target: 0 vs. rc: 1): Error
Jan  9 22:24:22 node1 crmd: [6739]: WARN: update_failcount: Updating failcount for STONITH-node2 on node2 after failed start: rc=1 (update=INFINITY, time=1263072262)
Jan  9 22:24:22 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:22 node1 crmd: [6739]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node2_start_0, magic=0:1;6:7:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Event failed
Jan  9 22:24:22 node1 crmd: [6739]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan  9 22:24:22 node1 crmd: [6739]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:24:22 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_start_0 (6) confirmed on node2 (rc=4)
Jan  9 22:24:22 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:24:22 node1 crmd: [6739]: notice: run_graph: Transition 7 (Complete=2, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-3.bz2): Stopped
Jan  9 22:24:22 node1 crmd: [6739]: info: te_graph_trigger: Transition 7 is now complete
Jan  9 22:24:22 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:24:22 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:24:22 node1 crmd: [6739]: info: do_pe_invoke: Query 95: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:24:22 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072262-43, seq=8, quorate=1
Jan  9 22:24:22 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:24:22 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:22 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:24:22 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:22 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:24:22 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:24:22 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:22 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:24:22 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:22 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node2: unknown error
Jan  9 22:24:22 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:24:22 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Started node2 FAILED
Jan  9 22:24:22 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:24:22 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:24:22 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:24:22 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:24:22 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:24:22 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:24:22 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node2
Jan  9 22:24:22 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:24:22 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:24:22 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:24:22 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:24:22 node1 pengine: [6738]: notice: LogActions: Stop resource STONITH-node2	(node2)
Jan  9 22:24:22 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:24:22 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 8: 2 actions in 2 synapses
Jan  9 22:24:22 node1 crmd: [6739]: info: do_te_invoke: Processing graph 8 (ref=pe_calc-dc-1263072262-43) derived from /var/lib/pengine/pe-warn-4.bz2
Jan  9 22:24:22 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:24:22 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:24:22 node1 crmd: [6739]: info: te_rsc_command: Initiating action 1: stop STONITH-node2_stop_0 on node2
Jan  9 22:24:22 node1 pengine: [6738]: WARN: process_pe_message: Transition 8: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-4.bz2
Jan  9 22:24:22 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:24:22 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_stop_0 (1) confirmed on node2 (rc=0)
Jan  9 22:24:22 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:24:22 node1 crmd: [6739]: notice: run_graph: Transition 8 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-4.bz2): Complete
Jan  9 22:24:22 node1 crmd: [6739]: info: te_graph_trigger: Transition 8 is now complete
Jan  9 22:24:22 node1 crmd: [6739]: info: notify_crmd: Transition 8 status: done - <null>
Jan  9 22:24:22 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:24:22 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:24:22 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:22 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:24:51 node1 mgmtd: [6740]: info: CIB replace: resources
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="10" num_updates="11" />
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="11" num_updates="1" >
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +     <resources >
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +       <primitive id="STONITH-node2" >
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +         <instance_attributes id="STONITH-node2-instance_attributes" >
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-e3d5e41f-febd-43f3-aed4-e0e0742096cc" name="userid" value="root" __crm_diff_marker__="added:top" />
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +           <nvpair id="nvpair-5b17beca-8fc3-4385-b806-ad2eef76a074" name="passwd" value="novell" __crm_diff_marker__="added:top" />
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +         </instance_attributes>
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +       </primitive>
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +     </resources>
Jan  9 22:24:51 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:24:51 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:24:51 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:24:51 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/mgmtd/19, version=0.11.1): ok (rc=0)
Jan  9 22:24:51 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:24:51 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:24:51 node1 crmd: [6739]: info: do_pe_invoke: Query 96: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:24:51 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072291-45, seq=8, quorate=1
Jan  9 22:24:51 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:24:51 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:51 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:24:51 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:51 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:24:51 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:24:51 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:51 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:24:51 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:24:51 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node2: unknown error
Jan  9 22:24:51 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:24:51 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:24:51 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:24:51 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:24:51 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:24:51 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:24:51 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:24:52 node1 cib: [7130]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-7.raw
Jan  9 22:24:52 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:24:52 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node2
Jan  9 22:24:52 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:24:52 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:24:52 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:24:52 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:24:52 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:24:52 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:24:52 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 9: 0 actions in 0 synapses
Jan  9 22:24:52 node1 cib: [7130]: info: write_cib_contents: Wrote version 0.11.0 of the CIB to disk (digest: 06e72882d9fba03804757f68b41c9ba5)
Jan  9 22:24:51 node1 haclient: on_event:evt:cib_changed
Jan  9 22:24:52 node1 crmd: [6739]: info: do_te_invoke: Processing graph 9 (ref=pe_calc-dc-1263072291-45) derived from /var/lib/pengine/pe-warn-5.bz2
Jan  9 22:24:52 node1 pengine: [6738]: WARN: process_pe_message: Transition 9: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-5.bz2
Jan  9 22:24:52 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:24:52 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:24:52 node1 crmd: [6739]: notice: run_graph: Transition 9 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-5.bz2): Complete
Jan  9 22:24:52 node1 crmd: [6739]: info: te_graph_trigger: Transition 9 is now complete
Jan  9 22:24:52 node1 crmd: [6739]: info: notify_crmd: Transition 9 status: done - <null>
Jan  9 22:24:52 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:24:52 node1 cib: [7130]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.W21lVu (digest: /var/lib/heartbeat/crm/cib.EtTVbL)
Jan  9 22:24:52 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:24:52 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:25:00 node1 mgmtd: [6740]: info: CIB replace: resources
Jan  9 22:25:00 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/mgmtd/22, version=0.11.1): ok (rc=0)
Jan  9 22:25:26 node1 mgmtd: [6740]: info: CIB replace: constraints
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="11" num_updates="1" />
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="12" num_updates="1" >
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: +     <constraints >
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: +       <rsc_location id="STONITH-node2-location" node="node2" rsc="STONITH-node2" score="-INFINITY" __crm_diff_marker__="added:top" />
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: +     </constraints>
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:25:26 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:25:26 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:25:26 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:25:26 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_replace for section constraints (origin=local/mgmtd/23, version=0.12.1): ok (rc=0)
Jan  9 22:25:26 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:25:26 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:25:26 node1 crmd: [6739]: info: do_pe_invoke: Query 97: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:25:26 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072326-46, seq=8, quorate=1
Jan  9 22:25:26 node1 haclient: on_event:evt:cib_changed
Jan  9 22:25:26 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:25:26 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:25:26 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node1: unknown error
Jan  9 22:25:26 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:25:26 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:25:26 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:25:26 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:25:26 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:25:26 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:25:26 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node2: unknown error
Jan  9 22:25:26 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:25:26 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:25:26 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node1
Jan  9 22:25:26 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:25:26 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:25:26 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:25:26 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:25:26 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:25:26 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node2
Jan  9 22:25:26 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:25:26 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:25:26 node1 cib: [7132]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-8.raw
Jan  9 22:25:26 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:25:26 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:25:26 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:25:26 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:25:26 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 10: 0 actions in 0 synapses
Jan  9 22:25:26 node1 crmd: [6739]: info: do_te_invoke: Processing graph 10 (ref=pe_calc-dc-1263072326-46) derived from /var/lib/pengine/pe-warn-6.bz2
Jan  9 22:25:26 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:25:26 node1 crmd: [6739]: notice: run_graph: Transition 10 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-6.bz2): Complete
Jan  9 22:25:26 node1 crmd: [6739]: info: te_graph_trigger: Transition 10 is now complete
Jan  9 22:25:26 node1 crmd: [6739]: info: notify_crmd: Transition 10 status: done - <null>
Jan  9 22:25:26 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:25:26 node1 cib: [7132]: info: write_cib_contents: Wrote version 0.12.0 of the CIB to disk (digest: cec9f0055496185ee9b29fae236ae8f0)
Jan  9 22:25:26 node1 pengine: [6738]: WARN: process_pe_message: Transition 10: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-6.bz2
Jan  9 22:25:27 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:25:27 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:25:27 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:25:27 node1 cib: [7132]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.GLBOmU (digest: /var/lib/heartbeat/crm/cib.4zOzBD)
Jan  9 22:25:45 node1 pengine: [7134]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jan  9 22:25:45 node1 pengine: [7134]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:25:46 node1 crmd: [7135]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jan  9 22:25:46 node1 crmd: [7135]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:26:02 node1 crmd: [6739]: info: do_lrm_invoke: Removing resource STONITH-node1 from the LRM
Jan  9 22:26:02 node1 crmd: [6739]: info: send_direct_ack: ACK'ing resource op STONITH-node1_delete_0 from mgmtd-6740: lrm_invoke-lrmd-1263072362-48
Jan  9 22:26:02 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='STONITH-node1'] (origin=local/crmd/98, version=0.12.2): ok (rc=0)
Jan  9 22:26:02 node1 mgmtd: [6740]: info: Delete fail-count for STONITH-node1 from node1
Jan  9 22:26:02 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=STONITH-node1_monitor_0, magic=0:7;4:2:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Resource op removal
Jan  9 22:26:02 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=node1, magic=NA) : Transient attribute: removal
Jan  9 22:26:02 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:26:02 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:26:02 node1 crmd: [6739]: info: do_pe_invoke: Query 102: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:02 node1 crmd: [6739]: info: do_pe_invoke: Query 103: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:02 node1 crmd: [6739]: info: do_lrm_invoke: Forcing a local LRM refresh
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="12" num_updates="3" />
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="13" num_updates="1" >
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1263072362" __crm_diff_marker__="added:top" />
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:26:03 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:26:03 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:26:03 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:26:03 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/101, version=0.13.1): ok (rc=0)
Jan  9 22:26:03 node1 crmd: [6739]: info: do_pe_invoke: Query 105: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:03 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072363-49, seq=8, quorate=1
Jan  9 22:26:03 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:26:03 node1 crmd: [6739]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:26:03 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:26:03 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:03 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:26:03 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:26:03 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:03 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:26:03 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:03 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node2: unknown error
Jan  9 22:26:03 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:26:03 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:26:03 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:26:03 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:26:03 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:26:03 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:03 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node2
Jan  9 22:26:03 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:03 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:26:03 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:26:03 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:26:03 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:26:03 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:26:03 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 11: 3 actions in 3 synapses
Jan  9 22:26:03 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/108, version=0.13.1): ok (rc=0)
Jan  9 22:26:03 node1 pengine: [6738]: WARN: process_pe_message: Transition 11: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-7.bz2
Jan  9 22:26:03 node1 cib: [7137]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-9.raw
Jan  9 22:26:03 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:26:03 node1 crmd: [6739]: info: do_te_invoke: Processing graph 11 (ref=pe_calc-dc-1263072363-49) derived from /var/lib/pengine/pe-warn-7.bz2
Jan  9 22:26:03 node1 crmd: [6739]: info: te_rsc_command: Initiating action 4: monitor STONITH-node1_monitor_0 on node1 (local)
Jan  9 22:26:03 node1 lrmd: [6736]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan  9 22:26:03 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=4:11:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node1_monitor_0 )
Jan  9 22:26:03 node1 lrmd: [6736]: info: rsc:STONITH-node1: monitor
Jan  9 22:26:03 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node1_monitor_0 (call=8, rc=7, cib-update=109, confirmed=true) complete not running
Jan  9 22:26:03 node1 cib: [7137]: info: write_cib_contents: Wrote version 0.13.0 of the CIB to disk (digest: 30a5ce4f844a5242f10b838fb0014a6f)
Jan  9 22:26:03 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_monitor_0 (4) confirmed on node1 (rc=0)
Jan  9 22:26:03 node1 crmd: [6739]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on node1 (local) - no waiting
Jan  9 22:26:03 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:26:03 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:26:03 node1 crmd: [6739]: notice: run_graph: Transition 11 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-7.bz2): Complete
Jan  9 22:26:03 node1 crmd: [6739]: info: te_graph_trigger: Transition 11 is now complete
Jan  9 22:26:03 node1 crmd: [6739]: info: notify_crmd: Transition 11 status: done - <null>
Jan  9 22:26:03 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:26:03 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:26:03 node1 cib: [7137]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.EB5hxC (digest: /var/lib/heartbeat/crm/cib.zAyUGS)
Jan  9 22:26:05 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/mgmtd/30, version=0.13.2): ok (rc=0)
Jan  9 22:26:05 node1 openais[6728]: [crm  ] ERROR: route_ais_message: Child 7140 spawned to record non-fatal assertion failure line 1299: dest > 0 && dest < SIZEOF(pcmk_children)
Jan  9 22:26:05 node1 openais[6728]: [crm  ] ERROR: route_ais_message: Invalid destination: 0
Jan  9 22:26:05 node1 openais[6728]: [MAIN ] Msg[6] (dest=local:unknown, from=node2:crmd.4836, remote=true, size=857): <create_request_adv origin="send_direct_ack" t="crmd" version="3.0.1" subt="request" refer
Jan  9 22:26:05 node1 crmd: [6739]: info: do_lrm_invoke: Forcing a local LRM refresh
Jan  9 22:26:05 node1 mgmtd: [6740]: info: Delete fail-count for STONITH-node1 from node2
Jan  9 22:26:05 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=node2, magic=NA) : Transient attribute: removal
Jan  9 22:26:05 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:26:05 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:26:05 node1 crmd: [6739]: info: do_pe_invoke: Query 113: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:05 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='STONITH-node1'] (origin=node2/crmd/22, version=0.13.4): ok (rc=0)
Jan  9 22:26:05 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=STONITH-node1_monitor_0, magic=0:7;6:2:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Resource op removal
Jan  9 22:26:05 node1 crmd: [6739]: info: do_pe_invoke: Query 114: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="13" num_updates="4" >
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: -   <configuration >
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: -     <crm_config >
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: -       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:05 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: -         <nvpair value="1263072362" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:05 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: -       </cluster_property_set>
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: -     </crm_config>
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: -   </configuration>
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: - </cib>
Jan  9 22:26:05 node1 crmd: [6739]: info: do_pe_invoke: Query 115: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="14" num_updates="1" >
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair value="1263072369" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:26:05 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:26:05 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node2/crmd/24, version=0.14.1): ok (rc=0)
Jan  9 22:26:05 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072365-53, seq=8, quorate=1
Jan  9 22:26:05 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:26:05 node1 crmd: [6739]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:26:05 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:26:05 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:05 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:26:05 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:26:05 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:05 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node2: unknown error
Jan  9 22:26:05 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:26:05 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:26:05 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:26:05 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:26:05 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node2
Jan  9 22:26:05 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:05 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:26:05 node1 pengine: [6738]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node1 on node2
Jan  9 22:26:05 node1 pengine: [6738]: notice: LogActions: Start STONITH-node1	(node2)
Jan  9 22:26:05 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:26:05 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:26:05 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 12: 5 actions in 5 synapses
Jan  9 22:26:05 node1 crmd: [6739]: info: do_te_invoke: Processing graph 12 (ref=pe_calc-dc-1263072365-53) derived from /var/lib/pengine/pe-warn-8.bz2
Jan  9 22:26:05 node1 crmd: [6739]: info: te_rsc_command: Initiating action 5: monitor STONITH-node1_monitor_0 on node2
Jan  9 22:26:05 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/118, version=0.14.1): ok (rc=0)
Jan  9 22:26:05 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_monitor_0 (5) confirmed on node2 (rc=0)
Jan  9 22:26:05 node1 cib: [7141]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-10.raw
Jan  9 22:26:05 node1 crmd: [6739]: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on node2 - no waiting
Jan  9 22:26:05 node1 pengine: [6738]: WARN: process_pe_message: Transition 12: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-8.bz2
Jan  9 22:26:05 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:26:05 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:26:05 node1 crmd: [6739]: info: te_rsc_command: Initiating action 6: start STONITH-node1_start_0 on node2
Jan  9 22:26:05 node1 cib: [7141]: info: write_cib_contents: Wrote version 0.14.0 of the CIB to disk (digest: 2624f10f058c62c1153085bdbde953f0)
Jan  9 22:26:05 node1 cib: [7141]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ejAvRS (digest: /var/lib/heartbeat/crm/cib.RO1wye)
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="14" num_updates="2" >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: -   <configuration >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: -     <crm_config >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: -       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: -         <nvpair value="1263072369" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: -       </cluster_property_set>
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: -     </crm_config>
Jan  9 22:26:07 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: -   </configuration>
Jan  9 22:26:07 node1 crmd: [6739]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Jan  9 22:26:07 node1 crmd: [6739]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:26:07 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: - </cib>
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="15" num_updates="1" >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair value="1263072365" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:26:07 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:26:07 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/mgmtd/35, version=0.15.1): ok (rc=0)
Jan  9 22:26:07 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:26:07 node1 crmd: [6739]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:26:07 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:07 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:07 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:07 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:07 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/121, version=0.15.1): ok (rc=0)
Jan  9 22:26:07 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:07 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:07 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:07 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:07 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:07 node1 cib: [7142]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-11.raw
Jan  9 22:26:07 node1 cib: [7142]: info: write_cib_contents: Wrote version 0.15.0 of the CIB to disk (digest: a7267b5762f82e40a0adc24a0ce3e557)
Jan  9 22:26:07 node1 cib: [7142]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.jdJehH (digest: /var/lib/heartbeat/crm/cib.p8k2B7)
Jan  9 22:26:07 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:26:07 node1 crmd: [6739]: WARN: status_from_rc: Action 6 (STONITH-node1_start_0) on node2 failed (target: 0 vs. rc: 1): Error
Jan  9 22:26:07 node1 crmd: [6739]: WARN: update_failcount: Updating failcount for STONITH-node1 on node2 after failed start: rc=1 (update=INFINITY, time=1263072367)
Jan  9 22:26:07 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:07 node1 crmd: [6739]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node1_start_0, magic=0:1;6:12:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Event failed
Jan  9 22:26:07 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_start_0 (6) confirmed on node2 (rc=4)
Jan  9 22:26:07 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:26:07 node1 crmd: [6739]: notice: run_graph: Transition 12 (Complete=4, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-8.bz2): Stopped
Jan  9 22:26:07 node1 crmd: [6739]: info: te_graph_trigger: Transition 12 is now complete
Jan  9 22:26:07 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:26:07 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:26:07 node1 crmd: [6739]: info: do_pe_invoke: Query 127: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:07 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072367-57, seq=8, quorate=1
Jan  9 22:26:07 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:26:07 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:07 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:26:07 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:26:07 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:07 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node2: unknown error
Jan  9 22:26:07 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:07 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:26:07 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Started node2 FAILED
Jan  9 22:26:07 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:26:07 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:26:07 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:26:07 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:26:07 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:07 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node2
Jan  9 22:26:07 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:07 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:26:07 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:26:07 node1 pengine: [6738]: notice: LogActions: Stop resource STONITH-node1	(node2)
Jan  9 22:26:07 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:26:07 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:26:07 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 13: 2 actions in 2 synapses
Jan  9 22:26:08 node1 pengine: [6738]: WARN: process_pe_message: Transition 13: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-9.bz2
Jan  9 22:26:08 node1 crmd: [6739]: info: do_te_invoke: Processing graph 13 (ref=pe_calc-dc-1263072367-57) derived from /var/lib/pengine/pe-warn-9.bz2
Jan  9 22:26:08 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:26:08 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:26:08 node1 crmd: [6739]: info: te_rsc_command: Initiating action 1: stop STONITH-node1_stop_0 on node2
Jan  9 22:26:08 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:26:08 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node1_stop_0 (1) confirmed on node2 (rc=0)
Jan  9 22:26:08 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:26:08 node1 crmd: [6739]: notice: run_graph: Transition 13 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-9.bz2): Complete
Jan  9 22:26:08 node1 crmd: [6739]: info: te_graph_trigger: Transition 13 is now complete
Jan  9 22:26:08 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:08 node1 crmd: [6739]: info: notify_crmd: Transition 13 status: done - <null>
Jan  9 22:26:08 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:26:08 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:26:08 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:26:20 node1 crmd: [6739]: info: do_lrm_invoke: Removing resource STONITH-node2 from the LRM
Jan  9 22:26:20 node1 crmd: [6739]: info: send_direct_ack: ACK'ing resource op STONITH-node2_delete_0 from mgmtd-6740: lrm_invoke-lrmd-1263072380-60
Jan  9 22:26:20 node1 mgmtd: [6740]: info: Delete fail-count for STONITH-node2 from node1
Jan  9 22:26:20 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='STONITH-node2'] (origin=local/crmd/128, version=0.15.7): ok (rc=0)
Jan  9 22:26:20 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=node1, magic=NA) : Transient attribute: removal
Jan  9 22:26:20 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=STONITH-node2_monitor_0, magic=0:7;4:6:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Resource op removal
Jan  9 22:26:20 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:26:20 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:26:20 node1 crmd: [6739]: info: do_pe_invoke: Query 131: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:20 node1 crmd: [6739]: info: do_pe_invoke: Query 132: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:20 node1 crmd: [6739]: info: do_lrm_invoke: Forcing a local LRM refresh
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="15" num_updates="7" >
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: -   <configuration >
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: -     <crm_config >
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: -       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: -         <nvpair value="1263072365" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: -       </cluster_property_set>
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: -     </crm_config>
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: -   </configuration>
Jan  9 22:26:20 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: - </cib>
Jan  9 22:26:20 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="16" num_updates="1" >
Jan  9 22:26:20 node1 crmd: [6739]: info: do_pe_invoke: Query 134: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair value="1263072380" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:26:20 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:26:20 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/130, version=0.16.1): ok (rc=0)
Jan  9 22:26:20 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072380-61, seq=8, quorate=1
Jan  9 22:26:20 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:26:20 node1 crmd: [6739]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:26:20 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:26:20 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:26:20 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:20 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node2: unknown error
Jan  9 22:26:20 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:20 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:26:20 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:26:20 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:26:20 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:26:20 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:20 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node2
Jan  9 22:26:20 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:20 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:26:20 node1 pengine: [6738]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node2 on node1
Jan  9 22:26:20 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:26:20 node1 pengine: [6738]: notice: LogActions: Start STONITH-node2	(node1)
Jan  9 22:26:20 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:26:20 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 14: 5 actions in 5 synapses
Jan  9 22:26:20 node1 crmd: [6739]: info: do_te_invoke: Processing graph 14 (ref=pe_calc-dc-1263072380-61) derived from /var/lib/pengine/pe-warn-10.bz2
Jan  9 22:26:20 node1 crmd: [6739]: info: te_rsc_command: Initiating action 4: monitor STONITH-node2_monitor_0 on node1 (local)
Jan  9 22:26:20 node1 lrmd: [6736]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan  9 22:26:20 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=4:14:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node2_monitor_0 )
Jan  9 22:26:20 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/137, version=0.16.1): ok (rc=0)
Jan  9 22:26:20 node1 cib: [7143]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-12.raw
Jan  9 22:26:20 node1 pengine: [6738]: WARN: process_pe_message: Transition 14: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-10.bz2
Jan  9 22:26:20 node1 lrmd: [6736]: info: rsc:STONITH-node2: monitor
Jan  9 22:26:20 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:26:20 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node2_monitor_0 (call=9, rc=7, cib-update=138, confirmed=true) complete not running
Jan  9 22:26:20 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_monitor_0 (4) confirmed on node1 (rc=0)
Jan  9 22:26:20 node1 crmd: [6739]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on node1 (local) - no waiting
Jan  9 22:26:20 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:26:20 node1 crmd: [6739]: info: te_rsc_command: Initiating action 6: start STONITH-node2_start_0 on node1 (local)
Jan  9 22:26:20 node1 cib: [7143]: info: write_cib_contents: Wrote version 0.16.0 of the CIB to disk (digest: 46084383c549411092c9a7065039f186)
Jan  9 22:26:20 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=6:14:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node2_start_0 )
Jan  9 22:26:20 node1 lrmd: [6736]: info: rsc:STONITH-node2: start
Jan  9 22:26:20 node1 lrmd: [7146]: info: Try to start STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan  9 22:26:20 node1 cib: [7143]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.9H6rav (digest: /var/lib/heartbeat/crm/cib.mRBv7t)
Jan  9 22:26:22 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/mgmtd/45, version=0.16.2): ok (rc=0)
Jan  9 22:26:22 node1 openais[6728]: [crm  ] ERROR: route_ais_message: Child 7179 spawned to record non-fatal assertion failure line 1299: dest > 0 && dest < SIZEOF(pcmk_children)
Jan  9 22:26:22 node1 openais[6728]: [crm  ] ERROR: route_ais_message: Invalid destination: 0
Jan  9 22:26:22 node1 openais[6728]: [MAIN ] Msg[7] (dest=local:unknown, from=node2:crmd.4836, remote=true, size=857): <create_request_adv origin="send_direct_ack" t="crmd" version="3.0.1" subt="request" refer
Jan  9 22:26:22 node1 crmd: [6739]: WARN: msg_to_op(1224): failed to get the value of field lrm_opstatus from a ha_msg
Jan  9 22:26:22 node1 crmd: [6739]: info: msg_to_op: Message follows:
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG: Dumping message with 16 fields
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[0] : [lrm_t=op]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[1] : [lrm_rid=STONITH-node2]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[2] : [lrm_op=start]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[3] : [lrm_timeout=20000]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[4] : [lrm_interval=0]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[5] : [lrm_delay=0]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[6] : [lrm_copyparams=1]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[7] : [lrm_t_run=0]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[8] : [lrm_t_rcchange=0]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[9] : [lrm_exec_time=0]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[10] : [lrm_queue_time=0]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[11] : [lrm_targetrc=-1]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[12] : [lrm_app=crmd]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[13] : [lrm_userdata=6:14:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[14] : [(2)lrm_param=0x657700(129 163)]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG: Dumping message with 6 fields
Jan  9 22:26:22 node1 mgmtd: [6740]: info: Delete fail-count for STONITH-node2 from node2
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[0] : [ipaddr=192.168.1.20]
Jan  9 22:26:22 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']//lrm_resource[@id='STONITH-node2'] (origin=node2/crmd/32, version=0.16.4): ok (rc=0)
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[1] : [CRM_meta_timeout=20000]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[2] : [crm_feature_set=3.0.1]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[3] : [hostname=node2]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[4] : [passwd=novell]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[5] : [userid=root]
Jan  9 22:26:22 node1 crmd: [6739]: info: MSG[15] : [lrm_callid=10]
Jan  9 22:26:22 node1 crmd: [6739]: info: do_lrm_invoke: Forcing a local LRM refresh
Jan  9 22:26:22 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=0, tag=transient_attributes, id=node2, magic=NA) : Transient attribute: removal
Jan  9 22:26:22 node1 crmd: [6739]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Jan  9 22:26:22 node1 crmd: [6739]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:26:22 node1 crmd: [6739]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node2_monitor_0, magic=0:7;6:6:7:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Resource op removal
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="16" num_updates="4" >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: -   <configuration >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: -     <crm_config >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: -       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: -         <nvpair value="1263072380" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: -       </cluster_property_set>
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: -     </crm_config>
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: -   </configuration>
Jan  9 22:26:22 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=0) : Non-status change
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: - </cib>
Jan  9 22:26:22 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="17" num_updates="1" >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair value="1263072386" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:26:22 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:26:22 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=node2/crmd/34, version=0.17.1): ok (rc=0)
Jan  9 22:26:22 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:26:22 node1 crmd: [6739]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:26:22 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/144, version=0.17.2): ok (rc=0)
Jan  9 22:26:22 node1 cib: [7180]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-13.raw
Jan  9 22:26:22 node1 cib: [7180]: info: write_cib_contents: Wrote version 0.17.0 of the CIB to disk (digest: c4f43ac23c36b43eca06205b746c967a)
Jan  9 22:26:22 node1 cib: [7180]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ARgXGD (digest: /var/lib/heartbeat/crm/cib.Gz3XTH)
Jan  9 22:26:22 node1 stonithd: [7164]: info: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/drac5 status' returned 65280
Jan  9 22:26:22 node1 stonithd: [6734]: WARN: start STONITH-node2 failed, because its hostlist is empty
Jan  9 22:26:22 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node2_start_0 (call=10, rc=1, cib-update=145, confirmed=true) complete unknown error
Jan  9 22:26:22 node1 crmd: [6739]: WARN: status_from_rc: Action 6 (STONITH-node2_start_0) on node1 failed (target: 0 vs. rc: 1): Error
Jan  9 22:26:22 node1 crmd: [6739]: WARN: update_failcount: Updating failcount for STONITH-node2 on node1 after failed start: rc=1 (update=INFINITY, time=1263072382)
Jan  9 22:26:22 node1 crmd: [6739]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node2_start_0, magic=0:1;6:14:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0) : Event failed
Jan  9 22:26:22 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_start_0 (6) confirmed on node1 (rc=4)
Jan  9 22:26:22 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:26:22 node1 crmd: [6739]: notice: run_graph: Transition 14 (Complete=4, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-10.bz2): Stopped
Jan  9 22:26:22 node1 crmd: [6739]: info: te_graph_trigger: Transition 14 is now complete
Jan  9 22:26:22 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:26:22 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:26:22 node1 crmd: [6739]: info: do_pe_invoke: Query 151: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:22 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072382-66, seq=8, quorate=1
Jan  9 22:26:22 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:26:22 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:22 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:26:22 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:26:22 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:22 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:26:22 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:26:22 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Started node1 FAILED
Jan  9 22:26:22 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:26:22 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:26:22 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:26:22 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:22 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:26:22 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:26:22 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:26:22 node1 pengine: [6738]: notice: LogActions: Stop resource STONITH-node2	(node1)
Jan  9 22:26:22 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:26:22 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 15: 5 actions in 5 synapses
Jan  9 22:26:22 node1 crmd: [6739]: info: do_te_invoke: Processing graph 15 (ref=pe_calc-dc-1263072382-66) derived from /var/lib/pengine/pe-warn-11.bz2
Jan  9 22:26:23 node1 pengine: [6738]: WARN: process_pe_message: Transition 15: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-11.bz2
Jan  9 22:26:22 node1 crmd: [6739]: info: te_rsc_command: Initiating action 6: monitor STONITH-node2_monitor_0 on node2
Jan  9 22:26:23 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:26:23 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:26:23 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_monitor_0 (6) confirmed on node2 (rc=0)
Jan  9 22:26:23 node1 crmd: [6739]: info: te_rsc_command: Initiating action 5: probe_complete probe_complete on node2 - no waiting
Jan  9 22:26:23 node1 crmd: [6739]: info: te_pseudo_action: Pseudo action 3 fired and confirmed
Jan  9 22:26:23 node1 crmd: [6739]: info: te_rsc_command: Initiating action 1: stop STONITH-node2_stop_0 on node1 (local)
Jan  9 22:26:23 node1 crmd: [6739]: info: do_lrm_rsc_op: Performing key=1:15:0:9b736a73-9fb9-4706-9a4e-5880beca4fb0 op=STONITH-node2_stop_0 )
Jan  9 22:26:23 node1 lrmd: [6736]: info: rsc:STONITH-node2: stop
Jan  9 22:26:23 node1 lrmd: [7186]: info: Try to stop STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan  9 22:26:23 node1 stonithd: [6734]: notice: try to stop a resource STONITH-node2 who is not in started resource queue.
Jan  9 22:26:23 node1 crmd: [6739]: info: process_lrm_event: LRM operation STONITH-node2_stop_0 (call=11, rc=0, cib-update=152, confirmed=true) complete ok
Jan  9 22:26:23 node1 crmd: [6739]: info: match_graph_event: Action STONITH-node2_stop_0 (1) confirmed on node1 (rc=0)
Jan  9 22:26:23 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:26:23 node1 crmd: [6739]: notice: run_graph: Transition 15 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-11.bz2): Complete
Jan  9 22:26:23 node1 crmd: [6739]: info: te_graph_trigger: Transition 15 is now complete
Jan  9 22:26:23 node1 crmd: [6739]: info: notify_crmd: Transition 15 status: done - <null>
Jan  9 22:26:23 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:26:23 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="17" num_updates="7" >
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: -   <configuration >
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: -     <crm_config >
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: -       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: -         <nvpair value="1263072386" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: -       </cluster_property_set>
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: -     </crm_config>
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: -   </configuration>
Jan  9 22:26:24 node1 crmd: [6739]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: - </cib>
Jan  9 22:26:24 node1 crmd: [6739]: info: need_abort: Aborting on change to admin_epoch
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="18" num_updates="1" >
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: +   <configuration >
Jan  9 22:26:24 node1 crmd: [6739]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: +     <crm_config >
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: +       <cluster_property_set id="cib-bootstrap-options" >
Jan  9 22:26:24 node1 crmd: [6739]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: +         <nvpair value="1263072382" id="cib-bootstrap-options-last-lrm-refresh" />
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: +       </cluster_property_set>
Jan  9 22:26:24 node1 crmd: [6739]: info: do_pe_invoke: Query 153: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: +     </crm_config>
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: +   </configuration>
Jan  9 22:26:24 node1 cib: [6735]: info: log_data_element: cib:diff: + </cib>
Jan  9 22:26:24 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/mgmtd/50, version=0.18.1): ok (rc=0)
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 crmd: [6739]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072384-70, seq=8, quorate=1
Jan  9 22:26:24 node1 crmd: [6739]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:26:24 node1 crmd: [6739]: info: ais_dispatch: Membership 8: quorum retained
Jan  9 22:26:24 node1 pengine: [6738]: info: determine_online_status: Node node1 is online
Jan  9 22:26:24 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:24 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:26:24 node1 pengine: [6738]: info: determine_online_status: Node node2 is online
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 pengine: [6738]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:26:24 node1 pengine: [6738]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:26:24 node1 pengine: [6738]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 pengine: [6738]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:26:24 node1 pengine: [6738]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:26:24 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:26:24 node1 pengine: [6738]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 pengine: [6738]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:26:24 node1 cib: [6735]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/156, version=0.18.1): ok (rc=0)
Jan  9 22:26:24 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:26:24 node1 cib: [7188]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-14.raw
Jan  9 22:26:24 node1 pengine: [6738]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:26:24 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:26:24 node1 pengine: [6738]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 crmd: [6739]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:26:24 node1 crmd: [6739]: info: unpack_graph: Unpacked transition 16: 0 actions in 0 synapses
Jan  9 22:26:24 node1 crmd: [6739]: info: do_te_invoke: Processing graph 16 (ref=pe_calc-dc-1263072384-70) derived from /var/lib/pengine/pe-warn-12.bz2
Jan  9 22:26:24 node1 pengine: [6738]: WARN: process_pe_message: Transition 16: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-12.bz2
Jan  9 22:26:24 node1 crmd: [6739]: info: run_graph: ====================================================
Jan  9 22:26:24 node1 cib: [7188]: info: write_cib_contents: Wrote version 0.18.0 of the CIB to disk (digest: c1086c0798c025156e9659094a31987b)
Jan  9 22:26:24 node1 pengine: [6738]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:26:24 node1 crmd: [6739]: notice: run_graph: Transition 16 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-12.bz2): Complete
Jan  9 22:26:24 node1 crmd: [6739]: info: te_graph_trigger: Transition 16 is now complete
Jan  9 22:26:24 node1 crmd: [6739]: info: notify_crmd: Transition 16 status: done - <null>
Jan  9 22:26:24 node1 crmd: [6739]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 crmd: [6739]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:26:24 node1 cib: [7188]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Si5mlH (digest: /var/lib/heartbeat/crm/cib.g15lNQ)
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 haclient: on_event:evt:cib_changed
Jan  9 22:26:24 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:24 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:24 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:24 node1 haclient: on_event: from message queue: evt:cib_changed
Jan  9 22:26:24 node1 mgmtd: [6740]: info: CIB query: cib
Jan  9 22:26:46 node1 attrd: [6737]: ERROR: ais_dispatch: Receiving message body failed: (-1) unknown: Resource temporarily unavailable (11)
Jan  9 22:26:46 node1 crmd: [6739]: ERROR: ais_dispatch: Receiving message body failed: (-1) unknown: Resource temporarily unavailable (11)
Jan  9 22:26:46 node1 attrd: [6737]: ERROR: ais_dispatch: AIS connection failed
Jan  9 22:26:46 node1 crmd: [6739]: ERROR: ais_dispatch: AIS connection failed
Jan  9 22:26:46 node1 crmd: [6739]: ERROR: crm_ais_destroy: AIS connection terminated
Jan  9 22:26:46 node1 attrd: [6737]: CRIT: attrd_ais_destroy: Lost connection to OpenAIS service!
Jan  9 22:26:46 node1 attrd: [6737]: info: main: Exiting...
Jan  9 22:26:46 node1 attrd: [6737]: ERROR: attrd_cib_connection_destroy: Connection to the CIB terminated...
Jan  9 22:26:46 node1 cib: [6735]: ERROR: ais_dispatch: Receiving message body failed: (-1) unknown: Resource temporarily unavailable (11)
Jan  9 22:26:46 node1 cib: [6735]: ERROR: ais_dispatch: AIS connection failed
Jan  9 22:26:46 node1 cib: [6735]: ERROR: cib_ais_destroy: AIS connection terminated
Jan  9 22:26:46 node1 mgmtd: [6740]: CRIT: cib_native_dispatch: Lost connection to the CIB service [6735/callback].
Jan  9 22:26:46 node1 mgmtd: [6740]: CRIT: cib_native_dispatch: Lost connection to the CIB service [6735/command].
Jan  9 22:26:46 node1 mgmtd: [6740]: ERROR: Connection to the CIB terminated... exiting
Jan  9 22:26:46 node1 stonithd: [6734]: ERROR: ais_dispatch: Receiving message body failed: (-1) unknown: Resource temporarily unavailable (11)
Jan  9 22:26:46 node1 stonithd: [6734]: ERROR: ais_dispatch: AIS connection failed
Jan  9 22:26:46 node1 stonithd: [6734]: ERROR: AIS connection terminated
Jan  9 22:27:07 node1 shutdown[7192]: shutting down for system reboot
Jan  9 22:27:07 node1 init: Switching to runlevel: 6
Jan  9 22:27:09 node1 kernel: bootsplash: status on console 0 changed to on
Jan  9 22:27:09 node1 smartd[4601]: smartd received signal 15: Terminated
Jan  9 22:27:09 node1 smartd[4601]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 22:27:09 node1 smartd[4601]: smartd is exiting (exit status 0)
Jan  9 22:27:10 node1 sshd[4506]: Received signal 15; terminating.
Jan  9 22:27:10 node1 libvirtd: Shutting down on signal 15
Jan  9 22:27:10 node1 multipathd: 149455400000000000000000001000000900500000f000000: stop event checker thread
Jan  9 22:27:10 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Jan  9 22:27:11 node1 kernel: Kernel logging (proc) stopped.
Jan  9 22:27:11 node1 kernel: Kernel log daemon terminating.
Jan  9 22:27:11 node1 syslog-ng[2337]: Termination requested via signal, terminating;
Jan  9 22:27:11 node1 syslog-ng[2337]: syslog-ng shutting down; version='2.0.9'
Jan  9 22:28:46 node1 syslog-ng[5207]: syslog-ng starting up; version='2.0.9'
Jan  9 22:28:47 node1 rchal: CPU frequency scaling is not supported by your processor.
Jan  9 22:28:47 node1 rchal: boot with 'CPUFREQ=no' in to avoid this warning.
Jan  9 22:28:47 node1 rchal: Cannot load cpufreq governors - No cpufreq driver available
Jan  9 22:28:47 node1 ifup:     lo        
Jan  9 22:28:47 node1 ifup:     lo        
Jan  9 22:28:47 node1 ifup: IP address: 127.0.0.1/8  
Jan  9 22:28:47 node1 ifup:  
Jan  9 22:28:47 node1 ifup:               
Jan  9 22:28:47 node1 ifup: IP address: 127.0.0.2/8  
Jan  9 22:28:47 node1 ifup:  
Jan  9 22:28:48 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 22:28:48 node1 ifup:     eth0      
Jan  9 22:28:48 node1 ifup: IP address: 10.0.0.10/24  
Jan  9 22:28:48 node1 ifup:  
Jan  9 22:28:49 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 22:28:49 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan  9 22:28:49 node1 ifup:     eth1      
Jan  9 22:28:49 node1 ifup: IP address: 10.0.0.11/24  
Jan  9 22:28:49 node1 ifup:  
Jan  9 22:28:50 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 22:28:50 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 22:28:50 node1 ifup:     eth2      
Jan  9 22:28:50 node1 ifup: IP address: 192.168.1.150/24  
Jan  9 22:28:50 node1 ifup:  
Jan  9 22:28:51 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 22:28:51 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan  9 22:28:51 node1 ifup:     eth3      
Jan  9 22:28:51 node1 ifup: IP address: 192.168.1.151/24  
Jan  9 22:28:51 node1 ifup:  
Jan  9 22:28:51 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Jan  9 22:28:51 node1 kernel: IA-32 Microcode Update Driver: v1.14a-xen <tigran at aivazian.fsnet.co.uk>
Jan  9 22:28:51 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan  9 22:28:51 node1 kernel: bnx2: eth0: using MSIX
Jan  9 22:28:51 node1 kernel: bnx2: eth1: using MSIX
Jan  9 22:28:51 node1 kernel: bnx2: eth2: using MSIX
Jan  9 22:28:51 node1 kernel: bnx2: eth3: using MSIX
Jan  9 22:28:51 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 22:28:51 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan  9 22:28:52 node1 kernel: Loading iSCSI transport class v2.0-870.
Jan  9 22:28:52 node1 kernel: iscsi: registered transport (tcp)
Jan  9 22:28:52 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 22:28:52 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 22:28:52 node1 rpcbind: cannot create socket for udp6
Jan  9 22:28:52 node1 rpcbind: cannot create socket for tcp6
Jan  9 22:28:52 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan  9 22:28:53 node1 kernel: iscsi: registered transport (iser)
Jan  9 22:28:53 node1 iscsid: iSCSI logger with pid=6418 started!
Jan  9 22:28:53 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan  9 22:28:53 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 22:28:53 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Jan  9 22:28:53 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Jan  9 22:28:53 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
Jan  9 22:28:54 node1 kernel: scsi 4:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel: scsi 5:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 kernel:  sdb:<5>sd 5:0:0:0: [sdc] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel:  unknown partition table
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: [sdb] Attached SCSI disk
Jan  9 22:28:54 node1 kernel: sd 4:0:0:0: Attached scsi generic sg2 type 0
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 kernel:  sdc:<5>scsi 4:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 22:28:54 node1 kernel:  unknown partition table
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: [sdc] Attached SCSI disk
Jan  9 22:28:54 node1 kernel: sd 5:0:0:0: Attached scsi generic sg3 type 0
Jan  9 22:28:54 node1 kernel: scsi 5:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] Write Protect is off
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] Mode Sense: 77 00 00 08
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 kernel:  sdd:<5>sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan  9 22:28:54 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Jan  9 22:28:54 node1 iscsid: iSCSI daemon with pid=6419 started!
Jan  9 22:28:54 node1 multipathd: 1494554000000000000000000010000008a0500000f000000: event checker started
Jan  9 22:28:54 node1 multipathd: sde path added to devmap 1494554000000000000000000010000008a0500000f000000
Jan  9 22:28:54 node1 iscsid: connection1:0 is operational now
Jan  9 22:28:54 node1 iscsid: connection2:0 is operational now
Jan  9 22:28:54 node1 kernel:  sde: unknown partition table
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: [sde] Attached SCSI disk
Jan  9 22:28:54 node1 kernel: sd 5:0:0:1: Attached scsi generic sg4 type 0
Jan  9 22:28:54 node1 kernel:  unknown partition table
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: [sdd] Attached SCSI disk
Jan  9 22:28:54 node1 kernel: sd 4:0:0:1: Attached scsi generic sg5 type 0
Jan  9 22:28:54 node1 kernel: device-mapper: table: device 8:32 too small for target
Jan  9 22:28:54 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 22:28:54 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 22:28:54 node1 kernel: device-mapper: table: device 8:32 too small for target
Jan  9 22:28:54 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 22:28:54 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 22:28:54 node1 multipathd: 149455400000000000000000001000000900500000f000000: event checker started
Jan  9 22:28:54 node1 multipathd: sdb path added to devmap 149455400000000000000000001000000900500000f000000
Jan  9 22:28:55 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan  9 22:28:55 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 22:28:55 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 22:28:55 node1 multipathd: sdc path added to devmap 149455400000000000000000001000000900500000f000000
Jan  9 22:28:55 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan  9 22:28:55 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan  9 22:28:55 node1 kernel: device-mapper: ioctl: error adding target to table
Jan  9 22:28:56 node1 smartd[7024]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Jan  9 22:28:56 node1 smartd[7024]: Opened configuration file /etc/smartd.conf
Jan  9 22:28:56 node1 smartd[7024]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Jan  9 22:28:56 node1 smartd[7024]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sda [SAT], opened
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sda [SAT], found in smartd database.
Jan  9 22:28:56 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Jan  9 22:28:56 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sdb, opened
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sdb, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdb' to turn on SMART features
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sdc, opened
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sdc, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdc' to turn on SMART features
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sdd, opened
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sdd, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdd' to turn on SMART features
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sde, opened
Jan  9 22:28:56 node1 smartd[7024]: Device: /dev/sde, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sde' to turn on SMART features
Jan  9 22:28:56 node1 smartd[7024]: Monitoring 1 ATA and 0 SCSI devices
Jan  9 22:28:57 node1 smartd[7024]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 22:28:57 node1 smartd[7382]: smartd has fork()ed into background mode. New PID=7382.
Jan  9 22:28:57 node1 openais[7395]: [MAIN ] AIS Executive Service RELEASE 'subrev 1152 version 0.80'
Jan  9 22:28:57 node1 openais[7395]: [MAIN ] Copyright (C) 2002-2006 MontaVista Software, Inc and contributors.
Jan  9 22:28:57 node1 openais[7395]: [MAIN ] Copyright (C) 2006 Red Hat, Inc.
Jan  9 22:28:57 node1 openais[7395]: [MAIN ] AIS Executive Service: started and ready to provide service.
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] Token Timeout (5000 ms) retransmit timeout (490 ms)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] token hold (382 ms) retransmits before loss (10 retrans)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] join (1000 ms) send_join (45 ms) consensus (2500 ms) merge (200 ms)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] downcheck (1000 ms) fail to recv const (50 msgs)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] seqno unchanged const (30 rotations) Maximum network MTU 1500
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] window size per rotation (50 messages) maximum messages per rotation (20 messages)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] send threads (0 threads)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] RRP token expired timeout (490 ms)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] RRP token problem counter (2000 ms)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] RRP threshold (10 problem count)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] RRP mode set to none.
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] heartbeat_failures_allowed (0)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] max_network_delay (50 ms)
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes).
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] The network interface [192.168.1.150] is now up.
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] Created or loaded sequence id 8.192.168.1.150 for this ring.
Jan  9 22:28:57 node1 openais[7395]: [TOTEM] entering GATHER state from 15.
Jan  9 22:28:57 node1 sshd[7411]: Server listening on 0.0.0.0 port 22.
Jan  9 22:28:57 node1 xenstored: Checking store ...
Jan  9 22:28:57 node1 xenstored: Checking store complete.
Jan  9 22:28:57 node1 kernel: suspend: event channel 52
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:795: blktapctrl: v1.0.0
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [raw image (aio)]
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [raw image (sync)]
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [vmware image (vmdk)]
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [ramdisk image (ram)]
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [qcow disk (qcow)]
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [qcow2 disk (qcow2)]
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [ioemu disk]
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl.c:797: Found driver: [raw image (cdrom)]
Jan  9 22:28:58 node1 openais[7395]: [crm  ] info: process_ais_conf: Reading configure
Jan  9 22:28:58 node1 openais[7395]: [MAIN ] info: config_find_next: Processing additional logging options...
Jan  9 22:28:58 node1 openais[7395]: [MAIN ] info: get_config_opt: Found 'off' for option: debug
Jan  9 22:28:58 node1 openais[7395]: [MAIN ] info: get_config_opt: Found 'yes' for option: to_syslog
Jan  9 22:28:58 node1 openais[7395]: [MAIN ] info: get_config_opt: Found 'daemon' for option: syslog_facility
Jan  9 22:28:58 node1 openais[7395]: [MAIN ] info: config_find_next: Processing additional service options...
Jan  9 22:28:58 node1 openais[7395]: [MAIN ] info: get_config_opt: Found 'yes' for option: use_logd
Jan  9 22:28:58 node1 openais[7395]: [MAIN ] info: get_config_opt: Found 'yes' for option: use_mgmtd
Jan  9 22:28:58 node1 openais[7395]: [crm  ] info: pcmk_plugin_init: CRM: Initialized
Jan  9 22:28:58 node1 openais[7395]: [crm  ] Logging: Initialized pcmk_plugin_init
Jan  9 22:28:57 node1 BLKTAPCTRL[7426]: blktapctrl_linux.c:23: /dev/xen/blktap0 device already exists
Jan  9 22:28:58 node1 openais[7395]: [crm  ] info: pcmk_plugin_init: Service: 9
Jan  9 22:28:58 node1 lrmd: [7461]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:28:58 node1 attrd: [7462]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:28:58 node1 pengine: [7463]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:28:58 node1 cib: [7460]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:28:58 node1 crmd: [7464]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan  9 22:28:58 node1 mgmtd: [7465]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:28:58 node1 stonithd: [7459]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:28:59 node1 kernel: Bridge firewalling registered
Jan  9 22:28:59 node1 lrmd: [7461]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jan  9 22:28:59 node1 openais[7395]: [crm  ] info: pcmk_plugin_init: Local node id: 369207488
Jan  9 22:28:59 node1 attrd: [7462]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:28:59 node1 cib: [7460]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:28:59 node1 stonithd: [7459]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan  9 22:28:59 node1 stonithd: [7459]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan  9 22:28:59 node1 crmd: [7464]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:28:59 node1 pengine: [7463]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan  9 22:28:59 node1 attrd: [7462]: info: main: Starting up....
Jan  9 22:28:59 node1 cib: [7460]: info: G_main_add_TriggerHandler: Added signal manual handler
Jan  9 22:29:00 node1 cib: [7460]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:28:59 node1 mgmtd: [7465]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jan  9 22:28:59 node1 crmd: [7464]: info: main: CRM Hg Version: 0080ec086ae9c20ad5c4c3562000c0ad68374f0a
Jan  9 22:28:59 node1 stonithd: [7459]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:28:59 node1 lrmd: [7461]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:29:00 node1 /usr/sbin/cron[7552]: (CRON) STARTUP (V5.0)
Jan  9 22:29:00 node1 lrmd: [7461]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan  9 22:29:00 node1 attrd: [7462]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:29:00 node1 attrd: [7462]: info: init_ais_connection: AIS connection established
Jan  9 22:29:00 node1 mgmtd: [7465]: debug: Enabling coredumps
Jan  9 22:29:00 node1 crmd: [7464]: info: crmd_init: Starting crmd
Jan  9 22:29:00 node1 lrmd: [7461]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan  9 22:29:00 node1 stonithd: [7459]: info: init_ais_connection: AIS connection established
Jan  9 22:29:00 node1 pengine: [7463]: info: main: Starting pengine
Jan  9 22:29:00 node1 cib: [7460]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jan  9 22:28:59 node1 openais[7395]: [crm  ] info: pcmk_plugin_init: Local hostname: node1
Jan  9 22:29:00 node1 attrd: [7462]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:29:00 node1 attrd: [7462]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:29:00 node1 crmd: [7464]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:29:00 node1 lrmd: [7461]: info: Started.
Jan  9 22:29:00 node1 stonithd: [7459]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: update_member: Creating entry for node 369207488 born on 0
Jan  9 22:29:00 node1 cib: [7460]: info: startCib: CIB Initialization completed successfully
Jan  9 22:29:00 node1 mgmtd: [7465]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan  9 22:29:00 node1 attrd: [7462]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: update_member: 0x73f380 Node 369207488 now known as node1 (was: (null))
Jan  9 22:29:00 node1 stonithd: [7459]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:29:00 node1 cib: [7460]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:29:00 node1 mgmtd: [7465]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: update_member: Node node1 now has 1 quorum votes (was 0)
Jan  9 22:29:00 node1 stonithd: [7459]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:29:00 node1 cib: [7460]: info: init_ais_connection: AIS connection established
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: update_member: Node 369207488/node1 is now: member
Jan  9 22:29:00 node1 mgmtd: [7465]: info: init_crm
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: spawn_child: Forked child 7459 for process stonithd
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: spawn_child: Forked child 7460 for process cib
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: spawn_child: Forked child 7461 for process lrmd
Jan  9 22:29:00 node1 mgmtd: [7465]: info: login to cib: 0, ret:-10
Jan  9 22:29:00 node1 stonithd: [7459]: notice: /usr/lib64/heartbeat/stonithd start up successfully.
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: spawn_child: Forked child 7462 for process attrd
Jan  9 22:29:00 node1 cib: [7460]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: spawn_child: Forked child 7463 for process pengine
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: spawn_child: Forked child 7464 for process crmd
Jan  9 22:29:00 node1 cib: [7460]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: spawn_child: Forked child 7465 for process mgmtd
Jan  9 22:29:00 node1 cib: [7460]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:29:00 node1 cib: [7460]: info: cib_init: Starting cib mainloop
Jan  9 22:29:00 node1 openais[7395]: [crm  ] info: pcmk_startup: CRM: Initialized
Jan  9 22:29:00 node1 cib: [7460]: info: ais_dispatch: Membership 12: quorum still lost
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] Service initialized 'Pacemaker Cluster Manager'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais extended virtual synchrony service'
Jan  9 22:29:00 node1 cib: [7460]: info: crm_update_peer: Node node1: id=369207488 state=member (new) addr=r(0) ip(192.168.1.150)  (new) votes=1 (new) born=0 seen=12 proc=00000000000000000000000000053312 (new)
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais cluster membership service B.01.01'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais availability management framework B.01.01'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais checkpoint service B.01.01'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais event service B.01.01'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais distributed locking service B.01.01'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais message service B.01.01'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais configuration service'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais cluster closed process group service v1.01'
Jan  9 22:29:00 node1 openais[7395]: [SERV ] Service initialized 'openais cluster config database access v1.01'
Jan  9 22:29:00 node1 openais[7395]: [SYNC ] Not using a virtual synchrony filter.
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] Creating commit token because I am the rep.
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] Saving state aru 0 high seq received 0
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] Storing new sequence id for ring c
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] entering COMMIT state.
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] entering RECOVERY state.
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] position [0] member 192.168.1.150:
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] previous ring seq 8 rep 192.168.1.150
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] aru 0 high delivered 0 received flag 1
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] Did not need to originate any messages in recovery.
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] Sending initial ORF token
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:29:00 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 12: memb=0, new=0, lost=0
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:29:00 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 12: memb=1, new=1, lost=0
Jan  9 22:29:00 node1 openais[7395]: [crm  ] info: pcmk_peer_update: NEW:  node1 369207488
Jan  9 22:29:00 node1 openais[7395]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan  9 22:29:00 node1 openais[7395]: [MAIN ] info: update_member: Node node1 now has process list: 00000000000000000000000000053312 (340754)
Jan  9 22:29:00 node1 openais[7395]: [SYNC ] This node is within the primary component and will provide service.
Jan  9 22:29:00 node1 openais[7395]: [TOTEM] entering OPERATIONAL state.
Jan  9 22:29:00 node1 openais[7395]: [CLM  ] got nodejoin message 192.168.1.150
Jan  9 22:29:00 node1 openais[7395]: [crm  ] info: pcmk_ipc: Recorded connection 0x7fb22c034c10 for attrd/7462
Jan  9 22:29:00 node1 openais[7395]: [crm  ] info: pcmk_ipc: Recorded connection 0x7fb22c0346f0 for stonithd/7459
Jan  9 22:29:00 node1 openais[7395]: [crm  ] info: pcmk_ipc: Recorded connection 0x7fb22c034fa0 for cib/7460
Jan  9 22:29:00 node1 openais[7395]: [crm  ] info: pcmk_ipc: Sending membership update 12 to cib
Jan  9 22:29:00 node1 stonithd: [7459]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan  9 22:29:00 node1 cib: [7583]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-15.raw
Jan  9 22:29:01 node1 cib: [7583]: info: write_cib_contents: Wrote version 0.18.0 of the CIB to disk (digest: 0a563c9b784a98f23d11b093063056a2)
Jan  9 22:29:01 node1 cib: [7583]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.ovM25h (digest: /var/lib/heartbeat/crm/cib.4s0FZj)
Jan  9 22:29:01 node1 crmd: [7464]: info: do_cib_control: CIB connection established
Jan  9 22:29:01 node1 crmd: [7464]: info: init_ais_connection: Creating connection to our AIS plugin
Jan  9 22:29:01 node1 crmd: [7464]: info: init_ais_connection: AIS connection established
Jan  9 22:29:01 node1 openais[7395]: [crm  ] info: pcmk_ipc: Recorded connection 0x7fb22c0345f0 for crmd/7464
Jan  9 22:29:01 node1 crmd: [7464]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan  9 22:29:01 node1 crmd: [7464]: info: crm_new_peer: Node node1 now has id: 369207488
Jan  9 22:29:01 node1 crmd: [7464]: info: crm_new_peer: Node 369207488 is now known as node1
Jan  9 22:29:01 node1 crmd: [7464]: info: do_ha_control: Connected to the cluster
Jan  9 22:29:01 node1 openais[7395]: [crm  ] info: pcmk_ipc: Sending membership update 12 to crmd
Jan  9 22:29:01 node1 crmd: [7464]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Jan  9 22:29:01 node1 crmd: [7464]: info: crmd_init: Starting crmd's mainloop
Jan  9 22:29:01 node1 crmd: [7464]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:29:01 node1 openais[7395]: [crm  ] info: update_expected_votes: Expected quorum votes 1024 -> 2
Jan  9 22:29:01 node1 crmd: [7464]: info: ais_dispatch: Membership 12: quorum still lost
Jan  9 22:29:01 node1 crmd: [7464]: info: crm_update_peer: Node node1: id=369207488 state=member (new) addr=r(0) ip(192.168.1.150)  (new) votes=1 (new) born=0 seen=12 proc=00000000000000000000000000053312 (new)
Jan  9 22:29:01 node1 crmd: [7464]: info: do_started: The local CRM is operational
Jan  9 22:29:01 node1 crmd: [7464]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jan  9 22:29:02 node1 crmd: [7464]: info: ais_dispatch: Membership 12: quorum still lost
Jan  9 22:29:02 node1 mgmtd: [7465]: debug: main: run the loop...
Jan  9 22:29:02 node1 mgmtd: [7465]: info: Started.
Jan  9 22:29:10 node1 gdm-simple-greeter[7725]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Jan  9 22:29:10 node1 gdm-session-worker[7728]: PAM pam_putenv: NULL pam handle passed
Jan  9 22:29:10 node1 attrd: [7462]: info: main: Sending full refresh
Jan  9 22:29:10 node1 attrd: [7462]: info: main: Starting mainloop...
Jan  9 22:29:12 node1 crmd: [7464]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
Jan  9 22:29:12 node1 crmd: [7464]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jan  9 22:29:12 node1 crmd: [7464]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan  9 22:29:12 node1 crmd: [7464]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jan  9 22:29:12 node1 crmd: [7464]: info: do_te_control: Registering TE UUID: 4df55898-0fdc-4334-a05d-8c6c56d80d35
Jan  9 22:29:12 node1 crmd: [7464]: WARN: cib_client_add_notify_callback: Callback already present
Jan  9 22:29:12 node1 crmd: [7464]: info: set_graph_functions: Setting custom graph functions
Jan  9 22:29:12 node1 crmd: [7464]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Jan  9 22:29:12 node1 crmd: [7464]: info: do_dc_takeover: Taking over DC status for this partition
Jan  9 22:29:12 node1 cib: [7460]: info: cib_process_readwrite: We are now in R/W mode
Jan  9 22:29:12 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=0.18.0): ok (rc=0)
Jan  9 22:29:12 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=0.18.0): ok (rc=0)
Jan  9 22:29:12 node1 crmd: [7464]: info: join_make_offer: Making join offers based on membership 12
Jan  9 22:29:12 node1 crmd: [7464]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jan  9 22:29:12 node1 crmd: [7464]: info: ais_dispatch: Membership 12: quorum still lost
Jan  9 22:29:12 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/9, version=0.18.0): ok (rc=0)
Jan  9 22:29:12 node1 crmd: [7464]: info: config_query_callback: Checking for expired actions every 900000ms
Jan  9 22:29:12 node1 crmd: [7464]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:29:12 node1 crmd: [7464]: info: ais_dispatch: Membership 12: quorum still lost
Jan  9 22:29:12 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/12, version=0.18.0): ok (rc=0)
Jan  9 22:29:12 node1 crmd: [7464]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:29:12 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/15, version=0.18.0): ok (rc=0)
Jan  9 22:29:12 node1 crmd: [7464]: info: do_state_transition: All 1 cluster nodes responded to the join offer.
Jan  9 22:29:12 node1 crmd: [7464]: info: do_dc_join_finalize: join-1: Syncing the CIB from node1 to the rest of the cluster
Jan  9 22:29:12 node1 crmd: [7464]: info: te_connect_stonith: Attempting connection to fencing daemon...
Jan  9 22:29:12 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/16, version=0.18.0): ok (rc=0)
Jan  9 22:29:13 node1 crmd: [7464]: info: te_connect_stonith: Connected
Jan  9 22:29:13 node1 crmd: [7464]: info: update_attrd: Connecting to attrd...
Jan  9 22:29:13 node1 crmd: [7464]: info: update_attrd: Updating terminate=<none> via attrd for node1
Jan  9 22:29:13 node1 crmd: [7464]: info: update_attrd: Updating shutdown=<none> via attrd for node1
Jan  9 22:29:13 node1 attrd: [7462]: info: find_hash_entry: Creating hash entry for terminate
Jan  9 22:29:13 node1 attrd: [7462]: info: find_hash_entry: Creating hash entry for shutdown
Jan  9 22:29:13 node1 crmd: [7464]: info: do_dc_join_ack: join-1: Updating node state to member for node1
Jan  9 22:29:13 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/17, version=0.18.0): ok (rc=0)
Jan  9 22:29:13 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=local/crmd/18, version=0.18.0): ok (rc=0)
Jan  9 22:29:13 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
Jan  9 22:29:13 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/19, version=0.18.0): ok (rc=0)
Jan  9 22:29:13 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan  9 22:29:13 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/20, version=0.18.0): ok (rc=0)
Jan  9 22:29:13 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan  9 22:29:13 node1 crmd: [7464]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:29:13 node1 crmd: [7464]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jan  9 22:29:13 node1 crmd: [7464]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Jan  9 22:29:13 node1 crmd: [7464]: info: crm_update_quorum: Updating quorum status to false (call=24)
Jan  9 22:29:13 node1 attrd: [7462]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Jan  9 22:29:13 node1 crmd: [7464]: info: abort_transition_graph: do_te_invoke:190 - Triggered transition abort (complete=1) : Peer Cancelled
Jan  9 22:29:13 node1 attrd: [7462]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate
Jan  9 22:29:13 node1 crmd: [7464]: info: do_pe_invoke: Query 25: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:29:13 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/22, version=0.18.1): ok (rc=0)
Jan  9 22:29:13 node1 cib: [7460]: info: log_data_element: cib:diff: - <cib have-quorum="1" admin_epoch="0" epoch="18" num_updates="1" />
Jan  9 22:29:13 node1 cib: [7460]: info: log_data_element: cib:diff: + <cib have-quorum="0" dc-uuid="node1" admin_epoch="0" epoch="19" num_updates="1" />
Jan  9 22:29:13 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/24, version=0.19.1): ok (rc=0)
Jan  9 22:29:13 node1 crmd: [7464]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:29:13 node1 crmd: [7464]: info: need_abort: Aborting on change to have-quorum
Jan  9 22:29:13 node1 crmd: [7464]: info: do_pe_invoke: Query 26: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:29:13 node1 attrd: [7462]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan  9 22:29:13 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072553-7, seq=12, quorate=0
Jan  9 22:29:13 node1 pengine: [7463]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan  9 22:29:13 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:29:13 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:29:13 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:29:13 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:29:13 node1 pengine: [7463]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node2 on node1
Jan  9 22:29:13 node1 pengine: [7463]: WARN: stage6: Node node2 is unclean!
Jan  9 22:29:13 node1 pengine: [7463]: notice: stage6: Cannot fence unclean nodes until quorum is attained (or no-quorum-policy is set to ignore)
Jan  9 22:29:13 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:29:13 node1 pengine: [7463]: notice: LogActions: Start STONITH-node2	(node1)
Jan  9 22:29:13 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:29:13 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 0: 6 actions in 6 synapses
Jan  9 22:29:13 node1 crmd: [7464]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1263072553-7) derived from /var/lib/pengine/pe-warn-13.bz2
Jan  9 22:29:14 node1 crmd: [7464]: info: te_rsc_command: Initiating action 4: monitor STONITH-node1_monitor_0 on node1 (local)
Jan  9 22:29:14 node1 lrmd: [7461]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan  9 22:29:14 node1 crmd: [7464]: info: do_lrm_rsc_op: Performing key=4:0:7:4df55898-0fdc-4334-a05d-8c6c56d80d35 op=STONITH-node1_monitor_0 )
Jan  9 22:29:14 node1 lrmd: [7461]: info: rsc:STONITH-node1: monitor
Jan  9 22:29:14 node1 crmd: [7464]: info: te_rsc_command: Initiating action 5: monitor STONITH-node2_monitor_0 on node1 (local)
Jan  9 22:29:14 node1 lrmd: [7461]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan  9 22:29:14 node1 crmd: [7464]: info: do_lrm_rsc_op: Performing key=5:0:7:4df55898-0fdc-4334-a05d-8c6c56d80d35 op=STONITH-node2_monitor_0 )
Jan  9 22:29:14 node1 lrmd: [7461]: info: rsc:STONITH-node2: monitor
Jan  9 22:29:14 node1 crmd: [7464]: info: process_lrm_event: LRM operation STONITH-node1_monitor_0 (call=2, rc=7, cib-update=27, confirmed=true) complete not running
Jan  9 22:29:14 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node1_monitor_0 (4) confirmed on node1 (rc=0)
Jan  9 22:29:14 node1 cib: [7732]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-16.raw
Jan  9 22:29:14 node1 pengine: [7463]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-13.bz2
Jan  9 22:29:14 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:29:14 node1 crmd: [7464]: info: process_lrm_event: LRM operation STONITH-node2_monitor_0 (call=3, rc=7, cib-update=28, confirmed=true) complete not running
Jan  9 22:29:14 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node2_monitor_0 (5) confirmed on node1 (rc=0)
Jan  9 22:29:14 node1 crmd: [7464]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on node1 (local) - no waiting
Jan  9 22:29:14 node1 crmd: [7464]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:29:14 node1 crmd: [7464]: info: te_rsc_command: Initiating action 6: start STONITH-node2_start_0 on node1 (local)
Jan  9 22:29:14 node1 crmd: [7464]: info: do_lrm_rsc_op: Performing key=6:0:0:4df55898-0fdc-4334-a05d-8c6c56d80d35 op=STONITH-node2_start_0 )
Jan  9 22:29:14 node1 lrmd: [7461]: info: rsc:STONITH-node2: start
Jan  9 22:29:14 node1 lrmd: [7737]: info: Try to start STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan  9 22:29:14 node1 cib: [7732]: info: write_cib_contents: Wrote version 0.19.0 of the CIB to disk (digest: 61425df6beb2ebb5ece5072faeea0bf9)
Jan  9 22:29:14 node1 cib: [7732]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.5FpbfU (digest: /var/lib/heartbeat/crm/cib.s5TWAu)
Jan  9 22:29:16 node1 sshd[7770]: Accepted keyboard-interactive/pam for root from 192.168.1.61 port 42804 ssh2
Jan  9 22:29:18 node1 stonithd: [7755]: info: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/drac5 status' returned 65280
Jan  9 22:29:18 node1 stonithd: [7459]: WARN: start STONITH-node2 failed, because its hostlist is empty
Jan  9 22:29:18 node1 lrmd: [7461]: debug: stonithRA plugin: provider attribute is not needed and will be ignored.
Jan  9 22:29:18 node1 crmd: [7464]: info: process_lrm_event: LRM operation STONITH-node2_start_0 (call=4, rc=1, cib-update=32, confirmed=true) complete unknown error
Jan  9 22:29:18 node1 crmd: [7464]: WARN: status_from_rc: Action 6 (STONITH-node2_start_0) on node1 failed (target: 0 vs. rc: 1): Error
Jan  9 22:29:18 node1 crmd: [7464]: WARN: update_failcount: Updating failcount for STONITH-node2 on node1 after failed start: rc=1 (update=INFINITY, time=1263072558)
Jan  9 22:29:18 node1 crmd: [7464]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node2_start_0, magic=0:1;6:0:0:4df55898-0fdc-4334-a05d-8c6c56d80d35) : Event failed
Jan  9 22:29:18 node1 crmd: [7464]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan  9 22:29:18 node1 crmd: [7464]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:29:18 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node2_start_0 (6) confirmed on node1 (rc=4)
Jan  9 22:29:18 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:29:18 node1 crmd: [7464]: notice: run_graph: Transition 0 (Complete=5, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-13.bz2): Stopped
Jan  9 22:29:18 node1 crmd: [7464]: info: te_graph_trigger: Transition 0 is now complete
Jan  9 22:29:18 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:29:18 node1 crmd: [7464]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jan  9 22:29:18 node1 crmd: [7464]: info: do_pe_invoke: Query 39: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:29:18 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072558-12, seq=12, quorate=0
Jan  9 22:29:18 node1 pengine: [7463]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan  9 22:29:18 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:29:18 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:29:18 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:29:18 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:29:18 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Started node1 FAILED
Jan  9 22:29:18 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:29:18 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:29:18 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:29:18 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:29:18 node1 pengine: [7463]: WARN: stage6: Node node2 is unclean!
Jan  9 22:29:18 node1 pengine: [7463]: notice: stage6: Cannot fence unclean nodes until quorum is attained (or no-quorum-policy is set to ignore)
Jan  9 22:29:18 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:29:18 node1 pengine: [7463]: notice: LogActions: Stop resource STONITH-node2	(node1)
Jan  9 22:29:18 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:29:18 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 1: 2 actions in 2 synapses
Jan  9 22:29:18 node1 crmd: [7464]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1263072558-12) derived from /var/lib/pengine/pe-warn-14.bz2
Jan  9 22:29:18 node1 crmd: [7464]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:29:18 node1 crmd: [7464]: info: te_rsc_command: Initiating action 1: stop STONITH-node2_stop_0 on node1 (local)
Jan  9 22:29:18 node1 crmd: [7464]: info: do_lrm_rsc_op: Performing key=1:1:0:4df55898-0fdc-4334-a05d-8c6c56d80d35 op=STONITH-node2_stop_0 )
Jan  9 22:29:18 node1 lrmd: [7461]: info: rsc:STONITH-node2: stop
Jan  9 22:29:18 node1 pengine: [7463]: WARN: process_pe_message: Transition 1: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-14.bz2
Jan  9 22:29:18 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:29:18 node1 lrmd: [7870]: info: Try to stop STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan  9 22:29:18 node1 stonithd: [7459]: notice: try to stop a resource STONITH-node2 who is not in started resource queue.
Jan  9 22:29:18 node1 crmd: [7464]: info: process_lrm_event: LRM operation STONITH-node2_stop_0 (call=5, rc=0, cib-update=40, confirmed=true) complete ok
Jan  9 22:29:18 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node2_stop_0 (1) confirmed on node1 (rc=0)
Jan  9 22:29:18 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:29:18 node1 crmd: [7464]: notice: run_graph: Transition 1 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-14.bz2): Complete
Jan  9 22:29:18 node1 crmd: [7464]: info: te_graph_trigger: Transition 1 is now complete
Jan  9 22:29:18 node1 crmd: [7464]: info: notify_crmd: Transition 1 status: done - <null>
Jan  9 22:29:18 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:29:18 node1 crmd: [7464]: info: do_state_transition: Starting PEngine Recheck Timer
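
(The decisive entries are a few lines up: the external/drac5 "status" call exiting with 65280 and stonithd refusing to start STONITH-node2 because its hostlist is empty. It can help to test the plugin by hand, outside the cluster -- a sketch only, assuming the cluster-glue stonith(8) tool and the usual external/drac5 parameter names; the "-n" call prints the names your plugin really expects:

    # show which parameters external/drac5 wants
    stonith -t external/drac5 -n
    # manual hostlist/status check with explicit parameters
    # (replace <drac-ip>, <drac-user>, <drac-password> with your DRAC settings)
    stonith -t external/drac5 -p "hostname=node2 ipaddr=<drac-ip> userid=<drac-user> passwd=<drac-password>" -lS

If that fails as well, the problem sits between the plugin and the DRAC rather than in the CIB.)
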
Jan  9 22:30:13 node1 openais[7395]: [TOTEM] entering GATHER state from 11.
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] Creating commit token because I am the rep.
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] Saving state aru 28 high seq received 28
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] Storing new sequence id for ring 14
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] entering COMMIT state.
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] entering RECOVERY state.
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] position [0] member 192.168.1.150:
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] previous ring seq 12 rep 192.168.1.150
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] aru 28 high delivered 28 received flag 1
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] position [1] member 192.168.1.160:
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] previous ring seq 16 rep 192.168.1.160
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] aru a high delivered a received flag 1
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] Did not need to originate any messages in recovery.
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] Sending initial ORF token
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:30:15 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 20: memb=1, new=0, lost=0
Jan  9 22:30:15 node1 openais[7395]: [crm  ] info: pcmk_peer_update: memb: node1 369207488
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:30:15 node1 cib: [7460]: notice: ais_dispatch: Membership 20: quorum aquired
Jan  9 22:30:15 node1 cib: [7460]: info: crm_new_peer: Node <null> now has id: 536979648
Jan  9 22:30:15 node1 cib: [7460]: info: crm_update_peer: Node (null): id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=0 born=0 seen=20 proc=00000000000000000000000000000000
Jan  9 22:30:15 node1 crmd: [7464]: notice: ais_dispatch: Membership 20: quorum aquired
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:30:15 node1 crmd: [7464]: info: crm_new_peer: Node <null> now has id: 536979648
Jan  9 22:30:15 node1 cib: [7460]: info: ais_dispatch: Membership 20: quorum retained
Jan  9 22:30:15 node1 cib: [7460]: info: crm_get_peer: Node 536979648 is now known as node2
Jan  9 22:30:15 node1 crmd: [7464]: info: crm_update_peer: Node (null): id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=0 born=0 seen=20 proc=00000000000000000000000000000000
Jan  9 22:30:15 node1 cib: [7460]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 (new) born=20 seen=20 proc=00000000000000000000000000053312 (new)
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:30:15 node1 crmd: [7464]: info: crm_update_quorum: Updating quorum status to true (call=43)
Jan  9 22:30:15 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 20: memb=2, new=1, lost=0
Jan  9 22:30:15 node1 openais[7395]: [MAIN ] info: update_member: Creating entry for node 536979648 born on 20
Jan  9 22:30:15 node1 openais[7395]: [MAIN ] info: update_member: Node 536979648/unknown is now: member
Jan  9 22:30:15 node1 openais[7395]: [crm  ] info: pcmk_peer_update: NEW:  .pending. 536979648
Jan  9 22:30:15 node1 openais[7395]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan  9 22:30:15 node1 openais[7395]: [crm  ] info: pcmk_peer_update: MEMB: .pending. 536979648
Jan  9 22:30:15 node1 openais[7395]: [crm  ] info: send_member_notification: Sending membership update 20 to 2 children
Jan  9 22:30:15 node1 openais[7395]: [MAIN ] info: update_member: 0x73f380 Node 369207488 ((null)) born on: 20
Jan  9 22:30:15 node1 openais[7395]: [SYNC ] This node is within the primary component and will provide service.
Jan  9 22:30:15 node1 openais[7395]: [TOTEM] entering OPERATIONAL state.
Jan  9 22:30:15 node1 openais[7395]: [MAIN ] info: update_member: 0x7fb22c035290 Node 536979648 (node2) born on: 20
Jan  9 22:30:15 node1 openais[7395]: [MAIN ] info: update_member: 0x7fb22c035290 Node 536979648 now known as node2 (was: (null))
Jan  9 22:30:15 node1 openais[7395]: [MAIN ] info: update_member: Node node2 now has process list: 00000000000000000000000000053312 (340754)
Jan  9 22:30:15 node1 openais[7395]: [MAIN ] info: update_member: Node node2 now has 1 quorum votes (was 0)
Jan  9 22:30:15 node1 openais[7395]: [crm  ] info: send_member_notification: Sending membership update 20 to 2 children
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] got nodejoin message 192.168.1.150
Jan  9 22:30:15 node1 openais[7395]: [CLM  ] got nodejoin message 192.168.1.160
Jan  9 22:30:15 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/41, version=0.19.8): ok (rc=0)
Jan  9 22:30:15 node1 cib: [7460]: info: log_data_element: cib:diff: - <cib have-quorum="0" admin_epoch="0" epoch="19" num_updates="8" />
Jan  9 22:30:15 node1 cib: [7460]: info: log_data_element: cib:diff: + <cib have-quorum="1" admin_epoch="0" epoch="20" num_updates="1" />
Jan  9 22:30:15 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/43, version=0.20.1): ok (rc=0)
Jan  9 22:30:15 node1 crmd: [7464]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:30:15 node1 crmd: [7464]: info: need_abort: Aborting on change to have-quorum
Jan  9 22:30:15 node1 crmd: [7464]: info: ais_dispatch: Membership 20: quorum retained
Jan  9 22:30:15 node1 crmd: [7464]: info: crm_get_peer: Node 536979648 is now known as node2
Jan  9 22:30:15 node1 crmd: [7464]: info: ais_status_callback: status: node2 is now member
Jan  9 22:30:15 node1 crmd: [7464]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 (new) born=20 seen=20 proc=00000000000000000000000000053312 (new)
Jan  9 22:30:15 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/45, version=0.20.1): ok (rc=0)
Jan  9 22:30:15 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/46, version=0.20.1): ok (rc=0)
Jan  9 22:30:15 node1 crmd: [7464]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:30:15 node1 crmd: [7464]: info: do_state_transition: Membership changed: 12 -> 20 - join restart
Jan  9 22:30:15 node1 crmd: [7464]: info: do_pe_invoke: Query 50: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:30:15 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=do_state_transition ]
Jan  9 22:30:15 node1 crmd: [7464]: info: update_dc: Unset DC node1
Jan  9 22:30:15 node1 crmd: [7464]: info: join_make_offer: Making join offers based on membership 20
Jan  9 22:30:15 node1 crmd: [7464]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Jan  9 22:30:15 node1 crmd: [7464]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:30:15 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/49, version=0.20.2): ok (rc=0)
Jan  9 22:30:15 node1 cib: [7910]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-17.raw
Jan  9 22:30:15 node1 cib: [7910]: info: write_cib_contents: Wrote version 0.20.0 of the CIB to disk (digest: 5804e197c0a974cb4f6715e743b2acba)
Jan  9 22:30:15 node1 cib: [7910]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.MnSlyd (digest: /var/lib/heartbeat/crm/cib.7HuvVo)
Jan  9 22:30:17 node1 crmd: [7464]: info: update_dc: Unset DC node1
Jan  9 22:30:17 node1 crmd: [7464]: info: do_dc_join_offer_all: A new node joined the cluster
Jan  9 22:30:17 node1 crmd: [7464]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Jan  9 22:30:17 node1 crmd: [7464]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:30:17 node1 crmd: [7464]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:30:17 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Jan  9 22:30:17 node1 crmd: [7464]: info: do_dc_join_finalize: join-3: Syncing the CIB from node1 to the rest of the cluster
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/53, version=0.20.2): ok (rc=0)
Jan  9 22:30:17 node1 crmd: [7464]: info: do_dc_join_ack: join-3: Updating node state to member for node2
Jan  9 22:30:17 node1 crmd: [7464]: info: do_dc_join_ack: join-3: Updating node state to member for node1
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/54, version=0.20.2): ok (rc=0)
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/55, version=0.20.2): ok (rc=0)
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/56, version=0.20.2): ok (rc=0)
Jan  9 22:30:17 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Jan  9 22:30:17 node1 crmd: [7464]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:30:17 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:30:17 node1 crmd: [7464]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Jan  9 22:30:17 node1 crmd: [7464]: info: crm_update_quorum: Updating quorum status to true (call=62)
Jan  9 22:30:17 node1 crmd: [7464]: info: abort_transition_graph: do_te_invoke:190 - Triggered transition abort (complete=1) : Peer Cancelled
Jan  9 22:30:17 node1 attrd: [7462]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Jan  9 22:30:17 node1 crmd: [7464]: info: do_pe_invoke: Query 63: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/58, version=0.20.4): ok (rc=0)
Jan  9 22:30:17 node1 attrd: [7462]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate
Jan  9 22:30:17 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=STONITH-node1_monitor_0, magic=0:7;4:0:7:4df55898-0fdc-4334-a05d-8c6c56d80d35) : Resource op removal
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/transient_attributes (origin=node2/crmd/6, version=0.20.4): ok (rc=0)
Jan  9 22:30:17 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=node2/crmd/7, version=0.20.5): ok (rc=0)
Jan  9 22:30:17 node1 crmd: [7464]: info: do_pe_invoke: Query 64: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:30:17 node1 crmd: [7464]: info: te_update_diff: Detected LRM refresh - 2 resources updated: Skipping all resource events
Jan  9 22:30:17 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA) : LRM Refresh
Jan  9 22:30:17 node1 crmd: [7464]: info: do_pe_invoke: Query 65: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/60, version=0.20.6): ok (rc=0)
Jan  9 22:30:17 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/62, version=0.20.6): ok (rc=0)
Jan  9 22:30:17 node1 attrd: [7462]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan  9 22:30:17 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072617-25, seq=20, quorate=1
Jan  9 22:30:17 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:30:17 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:30:17 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:30:17 node1 pengine: [7463]: info: determine_online_status: Node node2 is online
Jan  9 22:30:17 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:30:17 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:30:17 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:30:17 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:30:17 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:30:17 node1 pengine: [7463]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node1 on node2
Jan  9 22:30:17 node1 pengine: [7463]: notice: LogActions: Start STONITH-node1	(node2)
Jan  9 22:30:17 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:30:17 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:30:17 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 2: 6 actions in 6 synapses
Jan  9 22:30:17 node1 crmd: [7464]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1263072617-25) derived from /var/lib/pengine/pe-warn-15.bz2
Jan  9 22:30:17 node1 crmd: [7464]: info: te_rsc_command: Initiating action 5: monitor STONITH-node1_monitor_0 on node2
Jan  9 22:30:17 node1 crmd: [7464]: info: te_rsc_command: Initiating action 6: monitor STONITH-node2_monitor_0 on node2
Jan  9 22:30:17 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node1_monitor_0 (5) confirmed on node2 (rc=0)
Jan  9 22:30:17 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node2_monitor_0 (6) confirmed on node2 (rc=0)
Jan  9 22:30:17 node1 crmd: [7464]: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on node2 - no waiting
Jan  9 22:30:17 node1 crmd: [7464]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:30:17 node1 crmd: [7464]: info: te_rsc_command: Initiating action 7: start STONITH-node1_start_0 on node2
Jan  9 22:30:17 node1 pengine: [7463]: WARN: process_pe_message: Transition 2: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-15.bz2
Jan  9 22:30:17 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:30:20 node1 crmd: [7464]: WARN: status_from_rc: Action 7 (STONITH-node1_start_0) on node2 failed (target: 0 vs. rc: 1): Error
Jan  9 22:30:20 node1 crmd: [7464]: WARN: update_failcount: Updating failcount for STONITH-node1 on node2 after failed start: rc=1 (update=INFINITY, time=1263072620)
Jan  9 22:30:20 node1 crmd: [7464]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node1_start_0, magic=0:1;7:2:0:4df55898-0fdc-4334-a05d-8c6c56d80d35) : Event failed
Jan  9 22:30:20 node1 crmd: [7464]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan  9 22:30:20 node1 crmd: [7464]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:30:20 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node1_start_0 (7) confirmed on node2 (rc=4)
Jan  9 22:30:20 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:30:20 node1 crmd: [7464]: notice: run_graph: Transition 2 (Complete=5, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-15.bz2): Stopped
Jan  9 22:30:20 node1 crmd: [7464]: info: te_graph_trigger: Transition 2 is now complete
Jan  9 22:30:20 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:30:20 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:30:20 node1 crmd: [7464]: info: do_pe_invoke: Query 72: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:30:20 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072620-30, seq=20, quorate=1
Jan  9 22:30:20 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:30:20 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:30:20 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:30:20 node1 pengine: [7463]: info: determine_online_status: Node node2 is online
Jan  9 22:30:20 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:30:20 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:30:20 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Started node2 FAILED
Jan  9 22:30:20 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:30:20 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:30:20 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:30:20 node1 pengine: [7463]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:30:20 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:30:20 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:30:20 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:30:20 node1 pengine: [7463]: notice: LogActions: Stop resource STONITH-node1	(node2)
Jan  9 22:30:20 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:30:20 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:30:20 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 3: 2 actions in 2 synapses
Jan  9 22:30:20 node1 crmd: [7464]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1263072620-30) derived from /var/lib/pengine/pe-warn-16.bz2
Jan  9 22:30:20 node1 crmd: [7464]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:30:20 node1 crmd: [7464]: info: te_rsc_command: Initiating action 1: stop STONITH-node1_stop_0 on node2
Jan  9 22:30:20 node1 pengine: [7463]: WARN: process_pe_message: Transition 3: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-16.bz2
Jan  9 22:30:20 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:30:20 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node1_stop_0 (1) confirmed on node2 (rc=0)
Jan  9 22:30:20 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:30:20 node1 crmd: [7464]: notice: run_graph: Transition 3 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-16.bz2): Complete
Jan  9 22:30:20 node1 crmd: [7464]: info: te_graph_trigger: Transition 3 is now complete
Jan  9 22:30:20 node1 crmd: [7464]: info: notify_crmd: Transition 3 status: done - <null>
Jan  9 22:30:20 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:30:20 node1 crmd: [7464]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:30:25 node1 attrd: [7462]: info: crm_new_peer: Node node2 now has id: 536979648
Jan  9 22:30:25 node1 attrd: [7462]: info: crm_new_peer: Node 536979648 is now known as node2
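
(Note also the failcount lines: each failed start pushes the failcount for the STONITH resource to 1000000, so the policy engine keeps that resource away from the node even after the underlying plugin problem is fixed. Once the manual test above succeeds, the failcounts need to be cleared -- a sketch, assuming the crm shell shipped with this Pacemaker release:

    crm resource cleanup STONITH-node1
    crm resource cleanup STONITH-node2

Otherwise the "Forcing STONITH-nodeX away from ..." warnings will keep reappearing in the pengine output.)
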
Jan  9 22:32:23 node1 openais[7395]: [TOTEM] The token was lost in the OPERATIONAL state.
Jan  9 22:32:23 node1 openais[7395]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes).
Jan  9 22:32:23 node1 openais[7395]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
Jan  9 22:32:23 node1 openais[7395]: [TOTEM] entering GATHER state from 2.
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] entering GATHER state from 0.
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] Creating commit token because I am the rep.
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] Saving state aru 5b high seq received 5b
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] Storing new sequence id for ring 18
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] entering COMMIT state.
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] entering RECOVERY state.
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] position [0] member 192.168.1.150:
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] previous ring seq 20 rep 192.168.1.150
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] aru 5b high delivered 5b received flag 1
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] Did not need to originate any messages in recovery.
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] Sending initial ORF token
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:32:25 node1 crmd: [7464]: notice: ais_dispatch: Membership 24: quorum lost
Jan  9 22:32:25 node1 cib: [7460]: notice: ais_dispatch: Membership 24: quorum lost
Jan  9 22:32:25 node1 crmd: [7464]: info: ais_status_callback: status: node2 is now lost (was member)
Jan  9 22:32:25 node1 crmd: [7464]: info: crm_update_peer: Node node2: id=536979648 state=lost (new) addr=r(0) ip(192.168.1.160)  votes=1 born=20 seen=20 proc=00000000000000000000000000053312
Jan  9 22:32:25 node1 cib: [7460]: info: crm_update_peer: Node node2: id=536979648 state=lost (new) addr=r(0) ip(192.168.1.160)  votes=1 born=20 seen=20 proc=00000000000000000000000000053312
Jan  9 22:32:25 node1 crmd: [7464]: info: erase_node_from_join: Removed node node2 from join calculations: welcomed=0 itegrated=0 finalized=0 confirmed=1
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:32:25 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 24: memb=1, new=0, lost=1
Jan  9 22:32:25 node1 openais[7395]: [crm  ] info: pcmk_peer_update: memb: node1 369207488
Jan  9 22:32:25 node1 openais[7395]: [crm  ] info: pcmk_peer_update: lost: node2 536979648
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:32:25 node1 crmd: [7464]: info: crm_update_quorum: Updating quorum status to false (call=75)
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:32:25 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 24: memb=1, new=0, lost=0
Jan  9 22:32:25 node1 openais[7395]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan  9 22:32:25 node1 openais[7395]: [crm  ] info: ais_mark_unseen_peer_dead: Node node2 was not seen in the previous transition
Jan  9 22:32:25 node1 openais[7395]: [MAIN ] info: update_member: Node 536979648/node2 is now: lost
Jan  9 22:32:25 node1 openais[7395]: [crm  ] info: send_member_notification: Sending membership update 24 to 2 children
Jan  9 22:32:25 node1 openais[7395]: [SYNC ] This node is within the primary component and will provide service.
Jan  9 22:32:25 node1 openais[7395]: [TOTEM] entering OPERATIONAL state.
Jan  9 22:32:25 node1 openais[7395]: [CLM  ] got nodejoin message 192.168.1.150
Jan  9 22:32:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/73, version=0.20.13): ok (rc=0)
Jan  9 22:32:26 node1 cib: [7460]: info: log_data_element: cib:diff: - <cib have-quorum="1" admin_epoch="0" epoch="20" num_updates="14" />
Jan  9 22:32:26 node1 cib: [7460]: info: log_data_element: cib:diff: + <cib have-quorum="0" admin_epoch="0" epoch="21" num_updates="1" />
Jan  9 22:32:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/75, version=0.21.1): ok (rc=0)
Jan  9 22:32:26 node1 crmd: [7464]: WARN: match_down_event: No match for shutdown action on node2
Jan  9 22:32:26 node1 crmd: [7464]: info: te_update_diff: Stonith/shutdown of node2 not matched
Jan  9 22:32:26 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:191 - Triggered transition abort (complete=1, tag=node_state, id=node2, magic=NA) : Node failure
Jan  9 22:32:26 node1 crmd: [7464]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:32:26 node1 crmd: [7464]: info: need_abort: Aborting on change to have-quorum
Jan  9 22:32:26 node1 crmd: [7464]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:32:26 node1 crmd: [7464]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jan  9 22:32:26 node1 crmd: [7464]: info: do_pe_invoke: Query 78: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:32:26 node1 crmd: [7464]: info: do_pe_invoke: Query 79: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:32:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/77, version=0.21.1): ok (rc=0)
Jan  9 22:32:26 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072746-33, seq=24, quorate=0
Jan  9 22:32:26 node1 pengine: [7463]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan  9 22:32:26 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:32:26 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:32:26 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:32:26 node1 pengine: [7463]: WARN: determine_online_status_fencing: Node node2 (node2) is un-expectedly down
Jan  9 22:32:26 node1 pengine: [7463]: info: determine_online_status_fencing: 	ha_state=active, ccm_state=false, crm_state=online, join_state=member, expected=member
Jan  9 22:32:26 node1 pengine: [7463]: WARN: determine_online_status: Node node2 is unclean
Jan  9 22:32:26 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:32:26 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:32:26 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:32:26 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:32:26 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:32:26 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:32:26 node1 pengine: [7463]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:32:26 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:32:26 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:32:26 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:32:26 node1 cib: [7917]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-18.raw
Jan  9 22:32:26 node1 pengine: [7463]: WARN: stage6: Node node2 is unclean!
Jan  9 22:32:26 node1 pengine: [7463]: notice: stage6: Cannot fence unclean nodes until quorum is attained (or no-quorum-policy is set to ignore)
Jan  9 22:32:26 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:32:26 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:32:26 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:32:26 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 4: 0 actions in 0 synapses
Jan  9 22:32:26 node1 crmd: [7464]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1263072746-33) derived from /var/lib/pengine/pe-warn-17.bz2
Jan  9 22:32:26 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:32:26 node1 crmd: [7464]: notice: run_graph: Transition 4 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-17.bz2): Complete
Jan  9 22:32:26 node1 cib: [7917]: info: write_cib_contents: Wrote version 0.21.0 of the CIB to disk (digest: f599cbfb8916d4a890ee61980d29fc7c)
Jan  9 22:32:26 node1 pengine: [7463]: WARN: process_pe_message: Transition 4: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-17.bz2
Jan  9 22:32:26 node1 crmd: [7464]: info: te_graph_trigger: Transition 4 is now complete
Jan  9 22:32:26 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:32:26 node1 crmd: [7464]: info: notify_crmd: Transition 4 status: done - <null>
Jan  9 22:32:26 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:32:26 node1 crmd: [7464]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:32:26 node1 cib: [7917]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.U1W4c7 (digest: /var/lib/heartbeat/crm/cib.uLRLxN)
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] entering GATHER state from 11.
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] Creating commit token because I am the rep.
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] Saving state aru 10 high seq received 10
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] Storing new sequence id for ring 1c
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] entering COMMIT state.
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] entering RECOVERY state.
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] position [0] member 192.168.1.150:
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] previous ring seq 24 rep 192.168.1.150
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] aru 10 high delivered 10 received flag 1
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] position [1] member 192.168.1.160:
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] previous ring seq 24 rep 192.168.1.160
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] aru a high delivered a received flag 1
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] Did not need to originate any messages in recovery.
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] Sending initial ORF token
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:33:26 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 28: memb=1, new=0, lost=0
Jan  9 22:33:26 node1 openais[7395]: [crm  ] info: pcmk_peer_update: memb: node1 369207488
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:33:26 node1 crmd: [7464]: notice: ais_dispatch: Membership 28: quorum aquired
Jan  9 22:33:26 node1 crmd: [7464]: info: ais_status_callback: status: node2 is now member (was lost)
Jan  9 22:33:26 node1 crmd: [7464]: info: crm_update_peer: Node node2: id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=1 born=20 seen=28 proc=00000000000000000000000000053312
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:33:26 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 28: memb=2, new=1, lost=0
Jan  9 22:33:26 node1 openais[7395]: [MAIN ] info: update_member: Node 536979648/node2 is now: member
Jan  9 22:33:26 node1 openais[7395]: [crm  ] info: pcmk_peer_update: NEW:  node2 536979648
Jan  9 22:33:26 node1 openais[7395]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan  9 22:33:26 node1 openais[7395]: [crm  ] info: pcmk_peer_update: MEMB: node2 536979648
Jan  9 22:33:26 node1 openais[7395]: [crm  ] info: send_member_notification: Sending membership update 28 to 2 children
Jan  9 22:33:26 node1 openais[7395]: [SYNC ] This node is within the primary component and will provide service.
Jan  9 22:33:26 node1 openais[7395]: [TOTEM] entering OPERATIONAL state.
Jan  9 22:33:26 node1 crmd: [7464]: info: crm_update_quorum: Updating quorum status to true (call=84)
Jan  9 22:33:26 node1 cib: [7460]: notice: ais_dispatch: Membership 28: quorum aquired
Jan  9 22:33:26 node1 cib: [7460]: info: crm_update_peer: Node node2: id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=1 born=20 seen=28 proc=00000000000000000000000000053312
Jan  9 22:33:26 node1 openais[7395]: [MAIN ] info: update_member: 0x7fb22c035290 Node 536979648 (node2) born on: 28
Jan  9 22:33:26 node1 openais[7395]: [crm  ] info: send_member_notification: Sending membership update 28 to 2 children
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] got nodejoin message 192.168.1.150
Jan  9 22:33:26 node1 openais[7395]: [CLM  ] got nodejoin message 192.168.1.160
Jan  9 22:33:26 node1 cib: [7460]: info: ais_dispatch: Membership 28: quorum retained
Jan  9 22:33:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/80, version=0.21.2): ok (rc=0)
Jan  9 22:33:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/transient_attributes (origin=local/crmd/81, version=0.21.3): ok (rc=0)
Jan  9 22:33:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/82, version=0.21.3): ok (rc=0)
Jan  9 22:33:26 node1 cib: [7460]: info: log_data_element: cib:diff: - <cib have-quorum="0" admin_epoch="0" epoch="21" num_updates="4" />
Jan  9 22:33:26 node1 cib: [7460]: info: log_data_element: cib:diff: + <cib have-quorum="1" admin_epoch="0" epoch="22" num_updates="1" />
Jan  9 22:33:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/84, version=0.22.1): ok (rc=0)
Jan  9 22:33:26 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=STONITH-node1_monitor_0, magic=0:7;5:2:7:4df55898-0fdc-4334-a05d-8c6c56d80d35) : Resource op removal
Jan  9 22:33:26 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Jan  9 22:33:26 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=1, tag=transient_attributes, id=node2, magic=NA) : Transient attribute: removal
Jan  9 22:33:26 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/transient_attributes": ok (rc=0)
Jan  9 22:33:26 node1 crmd: [7464]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:33:26 node1 crmd: [7464]: info: need_abort: Aborting on change to have-quorum
Jan  9 22:33:26 node1 crmd: [7464]: info: ais_dispatch: Membership 28: quorum retained
Jan  9 22:33:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/86, version=0.22.1): ok (rc=0)
Jan  9 22:33:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/87, version=0.22.1): ok (rc=0)
Jan  9 22:33:26 node1 crmd: [7464]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:33:26 node1 crmd: [7464]: info: do_state_transition: Membership changed: 20 -> 28 - join restart
Jan  9 22:33:26 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/90, version=0.22.1): ok (rc=0)
Jan  9 22:33:26 node1 crmd: [7464]: info: do_pe_invoke: Query 91: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:33:26 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=do_state_transition ]
Jan  9 22:33:26 node1 crmd: [7464]: info: update_dc: Unset DC node1
Jan  9 22:33:26 node1 crmd: [7464]: info: join_make_offer: Making join offers based on membership 28
Jan  9 22:33:26 node1 crmd: [7464]: info: do_dc_join_offer_all: join-4: Waiting on 2 outstanding join acks
Jan  9 22:33:26 node1 crmd: [7464]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:33:26 node1 cib: [7921]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-19.raw
Jan  9 22:33:26 node1 cib: [7921]: info: write_cib_contents: Wrote version 0.22.0 of the CIB to disk (digest: fffeb465779e299419f2627a68c4205f)
Jan  9 22:33:26 node1 cib: [7921]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.KpsHrb (digest: /var/lib/heartbeat/crm/cib.0tCYEp)
Jan  9 22:33:28 node1 crmd: [7464]: info: update_dc: Unset DC node1
Jan  9 22:33:28 node1 crmd: [7464]: info: do_dc_join_offer_all: A new node joined the cluster
Jan  9 22:33:28 node1 crmd: [7464]: info: do_dc_join_offer_all: join-5: Waiting on 2 outstanding join acks
Jan  9 22:33:28 node1 crmd: [7464]: info: update_dc: Set DC to node1 (3.0.1)
Jan  9 22:33:28 node1 crmd: [7464]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:33:28 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Jan  9 22:33:28 node1 crmd: [7464]: info: do_dc_join_finalize: join-5: Syncing the CIB from node1 to the rest of the cluster
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/94, version=0.22.1): ok (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: do_dc_join_ack: join-5: Updating node state to member for node2
Jan  9 22:33:28 node1 crmd: [7464]: info: do_dc_join_ack: join-5: Updating node state to member for node1
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/95, version=0.22.1): ok (rc=0)
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/96, version=0.22.1): ok (rc=0)
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/97, version=0.22.1): ok (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan  9 22:33:28 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:33:28 node1 crmd: [7464]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Jan  9 22:33:28 node1 crmd: [7464]: info: crm_update_quorum: Updating quorum status to true (call=103)
Jan  9 22:33:28 node1 crmd: [7464]: info: abort_transition_graph: do_te_invoke:190 - Triggered transition abort (complete=1) : Peer Cancelled
Jan  9 22:33:28 node1 attrd: [7462]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Jan  9 22:33:28 node1 attrd: [7462]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate
Jan  9 22:33:28 node1 crmd: [7464]: info: do_pe_invoke: Query 104: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/99, version=0.22.3): ok (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=STONITH-node1_monitor_0, magic=0:7;4:0:7:4df55898-0fdc-4334-a05d-8c6c56d80d35) : Resource op removal
Jan  9 22:33:28 node1 crmd: [7464]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: do_pe_invoke: Query 105: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/transient_attributes (origin=node2/crmd/6, version=0.22.3): ok (rc=0)
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=node2/crmd/7, version=0.22.4): ok (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: te_update_diff: Detected LRM refresh - 2 resources updated: Skipping all resource events
Jan  9 22:33:28 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA) : LRM Refresh
Jan  9 22:33:28 node1 crmd: [7464]: info: do_pe_invoke: Query 106: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/101, version=0.22.5): ok (rc=0)
Jan  9 22:33:28 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/103, version=0.22.5): ok (rc=0)
Jan  9 22:33:28 node1 attrd: [7462]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan  9 22:33:28 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072808-45, seq=28, quorate=1
Jan  9 22:33:28 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:33:28 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:33:28 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:33:28 node1 pengine: [7463]: info: determine_online_status: Node node2 is online
Jan  9 22:33:28 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:33:28 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:33:28 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:33:28 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:33:28 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:33:28 node1 pengine: [7463]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node1 on node2
Jan  9 22:33:28 node1 pengine: [7463]: notice: LogActions: Start STONITH-node1	(node2)
Jan  9 22:33:28 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:33:28 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:33:28 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 5: 6 actions in 6 synapses
Jan  9 22:33:28 node1 crmd: [7464]: info: do_te_invoke: Processing graph 5 (ref=pe_calc-dc-1263072808-45) derived from /var/lib/pengine/pe-warn-18.bz2
Jan  9 22:33:28 node1 crmd: [7464]: info: te_rsc_command: Initiating action 5: monitor STONITH-node1_monitor_0 on node2
Jan  9 22:33:28 node1 crmd: [7464]: info: te_rsc_command: Initiating action 6: monitor STONITH-node2_monitor_0 on node2
Jan  9 22:33:28 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node1_monitor_0 (5) confirmed on node2 (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node2_monitor_0 (6) confirmed on node2 (rc=0)
Jan  9 22:33:28 node1 crmd: [7464]: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on node2 - no waiting
Jan  9 22:33:28 node1 crmd: [7464]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:33:28 node1 crmd: [7464]: info: te_rsc_command: Initiating action 7: start STONITH-node1_start_0 on node2
Jan  9 22:33:28 node1 pengine: [7463]: WARN: process_pe_message: Transition 5: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-18.bz2
Jan  9 22:33:29 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:33:37 node1 crmd: [7464]: WARN: status_from_rc: Action 7 (STONITH-node1_start_0) on node2 failed (target: 0 vs. rc: 1): Error
Jan  9 22:33:37 node1 crmd: [7464]: WARN: update_failcount: Updating failcount for STONITH-node1 on node2 after failed start: rc=1 (update=INFINITY, time=1263072817)
Jan  9 22:33:37 node1 crmd: [7464]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node1_start_0, magic=0:1;7:5:0:4df55898-0fdc-4334-a05d-8c6c56d80d35) : Event failed
Jan  9 22:33:37 node1 crmd: [7464]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan  9 22:33:37 node1 crmd: [7464]: info: update_abort_priority: Abort action done superceeded by restart
Jan  9 22:33:37 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node1_start_0 (7) confirmed on node2 (rc=4)
Jan  9 22:33:37 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:33:37 node1 crmd: [7464]: notice: run_graph: Transition 5 (Complete=5, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-18.bz2): Stopped
Jan  9 22:33:37 node1 crmd: [7464]: info: te_graph_trigger: Transition 5 is now complete
Jan  9 22:33:37 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:33:37 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:33:37 node1 crmd: [7464]: info: do_pe_invoke: Query 113: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:33:37 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072817-50, seq=28, quorate=1
Jan  9 22:33:37 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:33:37 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:33:37 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:33:37 node1 pengine: [7463]: info: determine_online_status: Node node2 is online
Jan  9 22:33:37 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:33:37 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:33:37 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Started node2 FAILED
Jan  9 22:33:37 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:33:37 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:33:37 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:33:37 node1 pengine: [7463]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:33:37 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:33:37 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:33:37 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:33:37 node1 pengine: [7463]: notice: LogActions: Stop resource STONITH-node1	(node2)
Jan  9 22:33:37 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:33:37 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:33:37 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 6: 2 actions in 2 synapses
Jan  9 22:33:37 node1 crmd: [7464]: info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1263072817-50) derived from /var/lib/pengine/pe-warn-19.bz2
Jan  9 22:33:37 node1 crmd: [7464]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan  9 22:33:37 node1 crmd: [7464]: info: te_rsc_command: Initiating action 1: stop STONITH-node1_stop_0 on node2
Jan  9 22:33:37 node1 pengine: [7463]: WARN: process_pe_message: Transition 6: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-19.bz2
Jan  9 22:33:37 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:33:37 node1 crmd: [7464]: info: match_graph_event: Action STONITH-node1_stop_0 (1) confirmed on node2 (rc=0)
Jan  9 22:33:37 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:33:37 node1 crmd: [7464]: notice: run_graph: Transition 6 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-19.bz2): Complete
Jan  9 22:33:37 node1 crmd: [7464]: info: te_graph_trigger: Transition 6 is now complete
Jan  9 22:33:37 node1 crmd: [7464]: info: notify_crmd: Transition 6 status: done - <null>
Jan  9 22:33:37 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:33:37 node1 crmd: [7464]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:35:04 node1 crmd: [7464]: info: handle_shutdown_request: Creating shutdown request for node2 (state=S_IDLE)
Jan  9 22:35:04 node1 crmd: [7464]: info: update_attrd: Updating shutdown=1263072904 via attrd for node2
Jan  9 22:35:04 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=node2, magic=NA) : Transient attribute: update
Jan  9 22:35:04 node1 crmd: [7464]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:35:04 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:35:04 node1 crmd: [7464]: info: do_pe_invoke: Query 115: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:35:04 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072904-52, seq=28, quorate=1
Jan  9 22:35:04 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:35:04 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:35:04 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:35:04 node1 pengine: [7463]: info: determine_online_status: Node node2 is shutting down
Jan  9 22:35:04 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:35:04 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:35:04 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:35:04 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:35:04 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:35:04 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:35:04 node1 pengine: [7463]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:35:04 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:35:04 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:35:04 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:35:04 node1 pengine: [7463]: info: stage6: Scheduling Node node2 for shutdown
Jan  9 22:35:04 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:35:04 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:35:04 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:35:04 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 7: 1 actions in 1 synapses
Jan  9 22:35:04 node1 crmd: [7464]: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1263072904-52) derived from /var/lib/pengine/pe-warn-20.bz2
Jan  9 22:35:04 node1 crmd: [7464]: info: te_crm_command: Executing crm-event (6): do_shutdown on node2
Jan  9 22:35:04 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:35:04 node1 pengine: [7463]: WARN: process_pe_message: Transition 7: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-20.bz2
Jan  9 22:35:04 node1 crmd: [7464]: notice: run_graph: Transition 7 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-20.bz2): Complete
Jan  9 22:35:04 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:35:04 node1 crmd: [7464]: info: te_graph_trigger: Transition 7 is now complete
Jan  9 22:35:04 node1 crmd: [7464]: info: notify_crmd: Transition 7 status: done - <null>
Jan  9 22:35:04 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:35:04 node1 crmd: [7464]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:35:06 node1 shutdown[7925]: shutting down for system halt
Jan  9 22:35:06 node1 init: Switching to runlevel: 0
Jan  9 22:35:08 node1 cib: [7460]: info: cib_process_shutdown_req: Shutdown REQ from node2
Jan  9 22:35:08 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_shutdown_req for section 'all' (origin=node2/node2/(null), version=0.22.14): ok (rc=0)
Jan  9 22:35:08 node1 smartd[7382]: smartd received signal 15: Terminated
Jan  9 22:35:08 node1 smartd[7382]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan  9 22:35:08 node1 smartd[7382]: smartd is exiting (exit status 0)
Jan  9 22:35:09 node1 sshd[7411]: Received signal 15; terminating.
Jan  9 22:35:09 node1 libvirtd: Shutting down on signal 15
Jan  9 22:35:09 node1 gdm-session-worker[7728]: PAM pam_putenv: NULL pam handle passed
Jan  9 22:35:09 node1 mgmtd: [7465]: info: mgmtd is shutting down
Jan  9 22:35:09 node1 mgmtd: [7465]: debug: [mgmtd] stopped
Jan  9 22:35:09 node1 openais[7395]: [SERV ] Unloading all openais components
Jan  9 22:35:09 node1 openais[7395]: [SERV ] Unloading slot 10: openais cluster config database access v1.01
Jan  9 22:35:09 node1 openais[7395]: [SERV ] Unloading openais component: openais_confdb v0 (16/10)
Jan  9 22:35:09 node1 openais[7395]: [SERV ] Unloading slot 9: Pacemaker Cluster Manager
Jan  9 22:35:09 node1 openais[7395]: [SERV ] Unloading openais component: pacemaker v0 (2/9)
Jan  9 22:35:09 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: Begining shutdown
Jan  9 22:35:09 node1 openais[7395]: [MAIN ] notice: stop_child: Sent -15 to mgmtd: [7465]
Jan  9 22:35:10 node1 rpcbind: rpcbind terminating on signal. Restart with "rpcbind -w"
Jan  9 22:35:10 node1 openais[7395]: [MAIN ] info: update_member: Node node2 now has process list: 00000000000000000000000000000002 (2)
Jan  9 22:35:10 node1 crmd: [7464]: info: ais_dispatch: Membership 28: quorum retained
Jan  9 22:35:10 node1 cib: [7460]: info: ais_dispatch: Membership 28: quorum retained
Jan  9 22:35:10 node1 crmd: [7464]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 born=28 seen=28 proc=00000000000000000000000000000002 (new)
Jan  9 22:35:10 node1 cib: [7460]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 born=28 seen=28 proc=00000000000000000000000000000002 (new)
Jan  9 22:35:10 node1 openais[7395]: [crm  ] info: send_member_notification: Sending membership update 28 to 2 children
Jan  9 22:35:10 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/116, version=0.22.14): ok (rc=0)
Jan  9 22:35:10 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/119, version=0.22.15): ok (rc=0)
Jan  9 22:35:10 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: mgmtd (pid=7465) confirmed dead
Jan  9 22:35:10 node1 openais[7395]: [MAIN ] notice: stop_child: Sent -15 to crmd: [7464]
Jan  9 22:35:10 node1 crmd: [7464]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jan  9 22:35:10 node1 crmd: [7464]: info: crm_shutdown: Requesting shutdown
Jan  9 22:35:10 node1 crmd: [7464]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Jan  9 22:35:10 node1 crmd: [7464]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan  9 22:35:10 node1 crmd: [7464]: info: do_shutdown_req: Sending shutdown request to DC: node1
Jan  9 22:35:15 node1 openais[7395]: [TOTEM] The token was lost in the OPERATIONAL state.
Jan  9 22:35:15 node1 openais[7395]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes).
Jan  9 22:35:15 node1 openais[7395]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
Jan  9 22:35:15 node1 openais[7395]: [TOTEM] entering GATHER state from 2.
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] entering GATHER state from 0.
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] Creating commit token because I am the rep.
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] Saving state aru 6d high seq received 6d
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] Storing new sequence id for ring 20
Jan  9 22:35:18 node1 crmd: [7464]: notice: ais_dispatch: Membership 32: quorum lost
Jan  9 22:35:18 node1 crmd: [7464]: info: ais_status_callback: status: node2 is now lost (was member)
Jan  9 22:35:18 node1 crmd: [7464]: info: crm_update_peer: Node node2: id=536979648 state=lost (new) addr=r(0) ip(192.168.1.160)  votes=1 born=28 seen=28 proc=00000000000000000000000000000002
Jan  9 22:35:18 node1 crmd: [7464]: info: erase_node_from_join: Removed node node2 from join calculations: welcomed=0 itegrated=0 finalized=0 confirmed=1
Jan  9 22:35:18 node1 crmd: [7464]: info: crm_update_quorum: Updating quorum status to false (call=122)
Jan  9 22:35:18 node1 cib: [7460]: notice: ais_dispatch: Membership 32: quorum lost
Jan  9 22:35:18 node1 cib: [7460]: info: crm_update_peer: Node node2: id=536979648 state=lost (new) addr=r(0) ip(192.168.1.160)  votes=1 born=28 seen=28 proc=00000000000000000000000000000002
Jan  9 22:35:18 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/120, version=0.22.15): ok (rc=0)
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] entering COMMIT state.
Jan  9 22:35:18 node1 cib: [7460]: info: log_data_element: cib:diff: - <cib have-quorum="1" admin_epoch="0" epoch="22" num_updates="16" />
Jan  9 22:35:18 node1 cib: [7460]: info: log_data_element: cib:diff: + <cib have-quorum="0" admin_epoch="0" epoch="23" num_updates="1" />
Jan  9 22:35:18 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/122, version=0.23.1): ok (rc=0)
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] entering RECOVERY state.
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] position [0] member 192.168.1.150:
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] previous ring seq 28 rep 192.168.1.150
Jan  9 22:35:18 node1 crmd: [7464]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan  9 22:35:18 node1 crmd: [7464]: info: need_abort: Aborting on change to have-quorum
Jan  9 22:35:18 node1 crmd: [7464]: info: do_pe_invoke: Query 125: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:35:18 node1 crmd: [7464]: info: handle_shutdown_request: Creating shutdown request for node1 (state=S_POLICY_ENGINE)
Jan  9 22:35:18 node1 crmd: [7464]: info: update_attrd: Updating shutdown=1263072918 via attrd for node1
Jan  9 22:35:18 node1 attrd: [7462]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan  9 22:35:18 node1 crmd: [7464]: info: te_graph_trigger: Transition 7 is now complete
Jan  9 22:35:18 node1 crmd: [7464]: info: notify_crmd: Transition 7 status: done - <null>
Jan  9 22:35:18 node1 cib: [7460]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/124, version=0.23.1): ok (rc=0)
Jan  9 22:35:18 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072918-57, seq=32, quorate=0
Jan  9 22:35:18 node1 pengine: [7463]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan  9 22:35:18 node1 pengine: [7463]: info: determine_online_status: Node node1 is online
Jan  9 22:35:18 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:35:18 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:35:18 node1 pengine: [7463]: info: determine_online_status_fencing: Node node2 is down
Jan  9 22:35:18 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:35:18 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:35:18 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:35:18 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:35:18 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:35:18 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:35:18 node1 pengine: [7463]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:35:18 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:35:18 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:35:18 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:35:18 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:35:18 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:35:18 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 8: 0 actions in 0 synapses
Jan  9 22:35:18 node1 crmd: [7464]: info: do_te_invoke: Processing graph 8 (ref=pe_calc-dc-1263072918-57) derived from /var/lib/pengine/pe-warn-21.bz2
Jan  9 22:35:18 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:35:18 node1 crmd: [7464]: notice: run_graph: Transition 8 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-21.bz2): Complete
Jan  9 22:35:18 node1 crmd: [7464]: info: te_graph_trigger: Transition 8 is now complete
Jan  9 22:35:18 node1 attrd: [7462]: info: attrd_perform_update: Sent update 21: shutdown=1263072918
Jan  9 22:35:18 node1 pengine: [7463]: WARN: process_pe_message: Transition 8: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-21.bz2
Jan  9 22:35:18 node1 cib: [8686]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-20.raw
Jan  9 22:35:18 node1 crmd: [7464]: info: notify_crmd: Transition 8 status: done - <null>
Jan  9 22:35:18 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: (Re)Issuing shutdown request now that we are the DC
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: Starting PEngine Recheck Timer
Jan  9 22:35:18 node1 crmd: [7464]: info: do_shutdown_req: Sending shutdown request to DC: node1
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] aru 6d high delivered 6d received flag 1
Jan  9 22:35:18 node1 cib: [8686]: info: write_cib_contents: Wrote version 0.23.0 of the CIB to disk (digest: b6a5bbcb9a638f38a341c937594495d6)
Jan  9 22:35:18 node1 crmd: [7464]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=node1, magic=NA) : Transient attribute: update
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jan  9 22:35:18 node1 crmd: [7464]: info: do_pe_invoke: Query 127: Requesting the current CIB: S_POLICY_ENGINE
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] Did not need to originate any messages in recovery.
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] Sending initial ORF token
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:35:18 node1 crmd: [7464]: info: handle_shutdown_request: Creating shutdown request for node1 (state=S_POLICY_ENGINE)
Jan  9 22:35:18 node1 cib: [8686]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.NsbT4s (digest: /var/lib/heartbeat/crm/cib.oRUO0o)
Jan  9 22:35:18 node1 crmd: [7464]: info: update_attrd: Updating shutdown=1263072918 via attrd for node1
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:35:18 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 32: memb=1, new=0, lost=1
Jan  9 22:35:18 node1 openais[7395]: [crm  ] info: pcmk_peer_update: memb: node1 369207488
Jan  9 22:35:18 node1 openais[7395]: [crm  ] info: pcmk_peer_update: lost: node2 536979648
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] CLM CONFIGURATION CHANGE
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] New Configuration:
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] Members Left:
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] Members Joined:
Jan  9 22:35:18 node1 crmd: [7464]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263072918-59, seq=32, quorate=0
Jan  9 22:35:18 node1 pengine: [7463]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan  9 22:35:18 node1 pengine: [7463]: info: determine_online_status: Node node1 is shutting down
Jan  9 22:35:18 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:35:18 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan  9 22:35:18 node1 pengine: [7463]: info: determine_online_status_fencing: Node node2 is down
Jan  9 22:35:18 node1 pengine: [7463]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan  9 22:35:18 node1 pengine: [7463]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan  9 22:35:18 node1 pengine: [7463]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan  9 22:35:18 node1 pengine: [7463]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan  9 22:35:18 node1 pengine: [7463]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan  9 22:35:18 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan  9 22:35:18 node1 pengine: [7463]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan  9 22:35:18 node1 pengine: [7463]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan  9 22:35:18 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan  9 22:35:18 node1 pengine: [7463]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan  9 22:35:18 node1 pengine: [7463]: info: stage6: Scheduling Node node1 for shutdown
Jan  9 22:35:18 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan  9 22:35:18 node1 pengine: [7463]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan  9 22:35:18 node1 openais[7395]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 32: memb=1, new=0, lost=0
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan  9 22:35:18 node1 crmd: [7464]: info: unpack_graph: Unpacked transition 9: 1 actions in 1 synapses
Jan  9 22:35:18 node1 pengine: [7463]: WARN: process_pe_message: Transition 9: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-22.bz2
Jan  9 22:35:18 node1 crmd: [7464]: info: do_te_invoke: Processing graph 9 (ref=pe_calc-dc-1263072918-59) derived from /var/lib/pengine/pe-warn-22.bz2
Jan  9 22:35:18 node1 pengine: [7463]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan  9 22:35:18 node1 crmd: [7464]: info: te_crm_command: Executing crm-event (5): do_shutdown on node1
Jan  9 22:35:18 node1 crmd: [7464]: info: te_crm_command: crm-event (5) is a local shutdown
Jan  9 22:35:18 node1 crmd: [7464]: info: run_graph: ====================================================
Jan  9 22:35:18 node1 crmd: [7464]: notice: run_graph: Transition 9 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-22.bz2): Complete
Jan  9 22:35:18 node1 crmd: [7464]: info: te_graph_trigger: Transition 9 is now complete
Jan  9 22:35:18 node1 crmd: [7464]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_STOPPING [ input=I_STOP cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan  9 22:35:18 node1 crmd: [7464]: info: do_dc_release: DC role released
Jan  9 22:35:18 node1 crmd: [7464]: info: pe_connection_destroy: Connection to the Policy Engine released
Jan  9 22:35:18 node1 openais[7395]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan  9 22:35:18 node1 openais[7395]: [crm  ] info: ais_mark_unseen_peer_dead: Node node2 was not seen in the previous transition
Jan  9 22:35:18 node1 crmd: [7464]: info: do_te_control: Transitioner is now inactive
Jan  9 22:35:18 node1 crmd: [7464]: info: do_te_control: Disconnecting STONITH...
Jan  9 22:35:18 node1 crmd: [7464]: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
Jan  9 22:35:18 node1 crmd: [7464]: notice: Not currently connected.
Jan  9 22:35:18 node1 openais[7395]: [MAIN ] info: update_member: Node 536979648/node2 is now: lost
Jan  9 22:35:18 node1 crmd: [7464]: info: do_lrm_control: Disconnected from the LRM
Jan  9 22:35:18 node1 crmd: [7464]: info: do_ha_control: Disconnected from OpenAIS
Jan  9 22:35:18 node1 crmd: [7464]: info: do_cib_control: Disconnecting CIB
Jan  9 22:35:18 node1 crmd: [7464]: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
Jan  9 22:35:18 node1 crmd: [7464]: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
Jan  9 22:35:18 node1 crmd: [7464]: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_dc_release ]
Jan  9 22:35:18 node1 crmd: [7464]: info: free_mem: Dropping I_TERMINATE: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_stop ]
Jan  9 22:35:18 node1 crmd: [7464]: info: do_exit: [crmd] stopped (0)
Jan  9 22:35:18 node1 cib: [7460]: info: cib_process_readwrite: We are now in R/O mode
Jan  9 22:35:18 node1 cib: [7460]: WARN: send_ipc_message: IPC Channel to 7464 is not connected
Jan  9 22:35:18 node1 cib: [7460]: WARN: send_via_callback_channel: Delivery of reply to client 7464/7b5b0bf4-eeca-4b73-b72d-10eed2414bfa failed
Jan  9 22:35:18 node1 cib: [7460]: WARN: do_local_notify: A-Sync reply to crmd failed: reply failed
Jan  9 22:35:18 node1 openais[7395]: [crm  ] info: send_member_notification: Sending membership update 32 to 2 children
Jan  9 22:35:18 node1 openais[7395]: [MAIN ] info: update_member: Node node1 now has process list: 00000000000000000000000000013312 (78610)
Jan  9 22:35:18 node1 openais[7395]: [SYNC ] This node is within the primary component and will provide service.
Jan  9 22:35:18 node1 openais[7395]: [TOTEM] entering OPERATIONAL state.
Jan  9 22:35:18 node1 openais[7395]: [CLM  ] got nodejoin message 192.168.1.150
Jan  9 22:35:18 node1 openais[7395]: [crm  ] info: pcmk_ipc_exit: Client crmd (conn=0x7fb22c0345f0, async-conn=0x7fb22c0345f0) left
Jan  9 22:35:19 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: crmd (pid=7464) confirmed dead
Jan  9 22:35:19 node1 openais[7395]: [MAIN ] notice: stop_child: Sent -15 to pengine: [7463]
Jan  9 22:35:19 node1 pengine: [7463]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jan  9 22:35:20 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: pengine (pid=7463) confirmed dead
Jan  9 22:35:20 node1 openais[7395]: [MAIN ] notice: stop_child: Sent -15 to attrd: [7462]
Jan  9 22:35:20 node1 attrd: [7462]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jan  9 22:35:20 node1 attrd: [7462]: info: attrd_shutdown: Exiting
Jan  9 22:35:20 node1 attrd: [7462]: info: main: Exiting...
Jan  9 22:35:20 node1 attrd: [7462]: info: attrd_cib_connection_destroy: Connection to the CIB terminated...
Jan  9 22:35:20 node1 openais[7395]: [crm  ] info: pcmk_ipc_exit: Client attrd (conn=0x7fb22c034c10, async-conn=0x7fb22c034c10) left
Jan  9 22:35:21 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: attrd (pid=7462) confirmed dead
Jan  9 22:35:21 node1 openais[7395]: [MAIN ] notice: stop_child: Sent -15 to lrmd: [7461]
Jan  9 22:35:21 node1 lrmd: [7461]: info: lrmd is shutting down
Jan  9 22:35:22 node1 cib: [7460]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jan  9 22:35:22 node1 cib: [7460]: info: cib_shutdown: Disconnected 0 clients
Jan  9 22:35:22 node1 cib: [7460]: info: cib_process_disconnect: All clients disconnected...
Jan  9 22:35:22 node1 cib: [7460]: info: cib_ha_connection_destroy: Heartbeat disconnection complete... exiting
Jan  9 22:35:22 node1 cib: [7460]: info: cib_ha_connection_destroy: Exiting...
Jan  9 22:35:22 node1 cib: [7460]: info: main: Done
Jan  9 22:35:22 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: lrmd (pid=7461) confirmed dead
Jan  9 22:35:22 node1 openais[7395]: [MAIN ] notice: stop_child: Sent -15 to cib: [7460]
Jan  9 22:35:22 node1 openais[7395]: [crm  ] info: pcmk_ipc_exit: Client cib (conn=0x7fb22c034fa0, async-conn=0x7fb22c034fa0) left
Jan  9 22:35:23 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: cib (pid=7460) confirmed dead
Jan  9 22:35:23 node1 openais[7395]: [MAIN ] notice: stop_child: Sent -15 to stonithd: [7459]
Jan  9 22:35:23 node1 stonithd: [7459]: notice: /usr/lib64/heartbeat/stonithd normally quit.
Jan  9 22:35:23 node1 openais[7395]: [crm  ] info: pcmk_ipc_exit: Client stonithd (conn=0x7fb22c0346f0, async-conn=0x7fb22c0346f0) left
Jan  9 22:35:24 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: stonithd (pid=7459) confirmed dead
Jan  9 22:35:24 node1 openais[7395]: [MAIN ] info: update_member: Node node1 now has process list: 00000000000000000000000000000002 (2)
Jan  9 22:35:24 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: Shutdown complete
Jan  9 22:35:24 node1 openais[7395]: [crm  ] notice: pcmk_shutdown: Forcing clean exit of OpenAIS
Jan  9 22:35:25 node1 kernel: Kernel logging (proc) stopped.
Jan  9 22:35:25 node1 kernel: Kernel log daemon terminating.
Jan  9 22:35:25 node1 syslog-ng[5207]: Termination requested via signal, terminating;
Jan  9 22:35:25 node1 syslog-ng[5207]: syslog-ng shutting down; version='2.0.9'
Jan 10 11:44:56 node1 syslog-ng[2412]: syslog-ng starting up; version='2.0.9'
Jan 10 11:44:57 node1 rchal: CPU frequency scaling is not supported by your processor.
Jan 10 11:44:57 node1 rchal: boot with 'CPUFREQ=no' in to avoid this warning.
Jan 10 11:44:57 node1 rchal: Cannot load cpufreq governors - No cpufreq driver available
Jan 10 11:44:57 node1 ifup:     lo        
Jan 10 11:44:58 node1 ifup:     lo        
Jan 10 11:44:58 node1 ifup: IP address: 127.0.0.1/8  
Jan 10 11:44:58 node1 ifup:  
Jan 10 11:44:58 node1 ifup:               
Jan 10 11:44:58 node1 ifup: IP address: 127.0.0.2/8  
Jan 10 11:44:58 node1 ifup:  
Jan 10 11:44:58 node1 ifup:     eth0      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan 10 11:44:58 node1 ifup:     eth0      
Jan 10 11:44:58 node1 ifup: IP address: 10.0.0.10/24  
Jan 10 11:44:58 node1 ifup:  
Jan 10 11:44:59 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan 10 11:44:59 node1 ifup:     eth1      device: Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet (rev 20)
Jan 10 11:45:00 node1 ifup:     eth1      
Jan 10 11:45:00 node1 ifup: IP address: 10.0.0.11/24  
Jan 10 11:45:00 node1 ifup:  
Jan 10 11:45:00 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan 10 11:45:00 node1 ifup:     eth2      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan 10 11:45:00 node1 ifup:     eth2      
Jan 10 11:45:00 node1 ifup: IP address: 192.168.1.150/24  
Jan 10 11:45:00 node1 ifup:  
Jan 10 11:45:01 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan 10 11:45:01 node1 ifup:     eth3      device: Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Jan 10 11:45:01 node1 kernel: klogd 1.4.1, log source = /proc/kmsg started.
Jan 10 11:45:01 node1 kernel: IA-32 Microcode Update Driver: v1.14a-xen <tigran at aivazian.fsnet.co.uk>
Jan 10 11:45:01 node1 kernel: firmware: requesting intel-ucode/06-1e-05
Jan 10 11:45:01 node1 kernel: bnx2: eth0: using MSIX
Jan 10 11:45:01 node1 kernel: bnx2: eth1: using MSIX
Jan 10 11:45:01 node1 kernel: bnx2: eth2: using MSIX
Jan 10 11:45:01 node1 kernel: bnx2: eth3: using MSIX
Jan 10 11:45:01 node1 ifup:     eth3      
Jan 10 11:45:01 node1 ifup: IP address: 192.168.1.151/24  
Jan 10 11:45:01 node1 ifup:  
Jan 10 11:45:01 node1 kernel: bnx2: eth0 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan 10 11:45:02 node1 SuSEfirewall2: SuSEfirewall2 not active
Jan 10 11:45:02 node1 kernel: Loading iSCSI transport class v2.0-870.
Jan 10 11:45:02 node1 kernel: iscsi: registered transport (tcp)
Jan 10 11:45:03 node1 rpcbind: cannot create socket for udp6
Jan 10 11:45:03 node1 kernel: bnx2: eth1 NIC Copper Link is Up, 1000 Mbps full duplex, receive & transmit flow control ON
Jan 10 11:45:03 node1 rpcbind: cannot create socket for tcp6
Jan 10 11:45:03 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan 10 11:45:03 node1 kernel: device-mapper: ioctl: error adding target to table
Jan 10 11:45:03 node1 kernel: iscsi: registered transport (iser)
Jan 10 11:45:03 node1 iscsid: iSCSI logger with pid=3621 started!
Jan 10 11:45:03 node1 kernel: device-mapper: table: 253:0: multipath: error getting device
Jan 10 11:45:03 node1 kernel: device-mapper: ioctl: error adding target to table
Jan 10 11:45:03 node1 kernel: scsi4 : iSCSI Initiator over TCP/IP
Jan 10 11:45:03 node1 kernel: scsi5 : iSCSI Initiator over TCP/IP
Jan 10 11:45:04 node1 kernel: bnx2: eth2 NIC Copper Link is Up, 1000 Mbps full duplex
Jan 10 11:45:04 node1 kernel: scsi 4:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: scsi 5:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel:  sdb:<5>sd 5:0:0:0: [sdc] 104872257 512-byte hardware sectors: (53.6GB/50.0GiB)
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel:  sdc: unknown partition table
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: [sdb] Attached SCSI disk
Jan 10 11:45:04 node1 kernel: sd 4:0:0:0: Attached scsi generic sg2 type 0
Jan 10 11:45:04 node1 kernel: scsi 4:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan 10 11:45:04 node1 kernel:  unknown partition table
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: [sdc] Attached SCSI disk
Jan 10 11:45:04 node1 kernel: sd 5:0:0:0: Attached scsi generic sg3 type 0
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel: scsi 5:0:0:1: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel:  sdd:<5>sd 5:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] 125853210 512-byte hardware sectors: (64.4GB/60.0GiB)
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] Write Protect is off
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] Mode Sense: 77 00 00 08
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 10 11:45:04 node1 kernel:  sde: unknown partition table
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: [sde] Attached SCSI disk
Jan 10 11:45:04 node1 kernel: sd 5:0:0:1: Attached scsi generic sg4 type 0
Jan 10 11:45:04 node1 kernel:  unknown partition table
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: [sdd] Attached SCSI disk
Jan 10 11:45:04 node1 kernel: sd 4:0:0:1: Attached scsi generic sg5 type 0
Jan 10 11:45:04 node1 iscsid: transport class version 2.0-870. iscsid version 2.0-870
Jan 10 11:45:04 node1 iscsid: iSCSI daemon with pid=3622 started!
Jan 10 11:45:04 node1 iscsid: connection1:0 is operational now
Jan 10 11:45:04 node1 iscsid: connection2:0 is operational now
Jan 10 11:45:04 node1 multipathd: 149455400000000000000000001000000990500000f000000: event checker started
Jan 10 11:45:04 node1 multipathd: sde path added to devmap 149455400000000000000000001000000990500000f000000
Jan 10 11:45:05 node1 kernel: device-mapper: table: device 8:32 too small for target
Jan 10 11:45:05 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan 10 11:45:05 node1 kernel: device-mapper: ioctl: error adding target to table
Jan 10 11:45:05 node1 multipathd: 1494554000000000000000000010000008c0500000f000000: event checker started
Jan 10 11:45:05 node1 multipathd: sdb path added to devmap 1494554000000000000000000010000008c0500000f000000
Jan 10 11:45:05 node1 kernel: device-mapper: table: device 253:3 too small for target
Jan 10 11:45:05 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan 10 11:45:05 node1 kernel: device-mapper: ioctl: error adding target to table
Jan 10 11:45:05 node1 multipathd: sdc path added to devmap 1494554000000000000000000010000008c0500000f000000
Jan 10 11:45:06 node1 multipathd: dm-3: mapname not found for 253:3
Jan 10 11:45:06 node1 sshd[4259]: Server listening on 0.0.0.0 port 22.
Jan 10 11:45:06 node1 smartd[4213]: smartd 5.39 2008-10-24 22:33 [x86_64-suse-linux-gnu] (openSUSE RPM) Copyright (C) 2002-8 by Bruce Allen, http://smartmontools.sourceforge.net
Jan 10 11:45:06 node1 smartd[4213]: Opened configuration file /etc/smartd.conf
Jan 10 11:45:06 node1 smartd[4213]: Drive: DEVICESCAN, implied '-a' Directive on line 26 of file /etc/smartd.conf
Jan 10 11:45:06 node1 smartd[4213]: Configuration file /etc/smartd.conf was parsed, found DEVICESCAN, scanning devices
Jan 10 11:45:06 node1 smartd[4213]: Device: /dev/sda, type changed from 'scsi' to 'sat'
Jan 10 11:45:06 node1 smartd[4213]: Device: /dev/sda [SAT], opened
Jan 10 11:45:06 node1 kernel: device-mapper: table: device 8:32 too small for target
Jan 10 11:45:06 node1 kernel: device-mapper: table: 253:2: linear: dm-linear: Device lookup failed
Jan 10 11:45:06 node1 kernel: device-mapper: ioctl: error adding target to table
Jan 10 11:45:06 node1 smartd[4213]: Device: /dev/sda [SAT], found in smartd database.
Jan 10 11:45:06 node1 multipathd: dm-3: mapname not found for 253:3
Jan 10 11:45:06 node1 multipathd: dm-3: remove map (uevent)
Jan 10 11:45:07 node1 smartd[4213]: Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list.
Jan 10 11:45:07 node1 xenstored: Checking store ...
Jan 10 11:45:07 node1 xenstored: Checking store complete.
Jan 10 11:45:07 node1 kernel: suspend: event channel 52
Jan 10 11:45:07 node1 smartd[4213]: Device: /dev/sda [SAT], state read from /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan 10 11:45:07 node1 smartd[4213]: Device: /dev/sdb, opened
Jan 10 11:45:07 node1 smartd[4213]: Device: /dev/sdb, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdb' to turn on SMART features
Jan 10 11:45:07 node1 smartd[4213]: Device: /dev/sdc, opened
Jan 10 11:45:07 node1 BLKTAPCTRL[4471]: blktapctrl.c:795: blktapctrl: v1.0.0
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [raw image (aio)]
Jan 10 11:45:08 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/Xservers
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [raw image (sync)]
Jan 10 11:45:08 node1 logger: /etc/init.d/xdm: No changes for /etc/X11/xdm/xdm-config
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [vmware image (vmdk)]
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [ramdisk image (ram)]
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [qcow disk (qcow)]
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [qcow2 disk (qcow2)]
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [ioemu disk]
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl.c:797: Found driver: [raw image (cdrom)]
Jan 10 11:45:07 node1 smartd[4213]: Device: /dev/sdc, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdc' to turn on SMART features
Jan 10 11:45:08 node1 kernel: Bridge firewalling registered
Jan 10 11:45:08 node1 BLKTAPCTRL[4471]: blktapctrl_linux.c:23: /dev/xen/blktap0 device already exists
Jan 10 11:45:08 node1 smartd[4213]: Device: /dev/sdd, opened
Jan 10 11:45:08 node1 smartd[4213]: Device: /dev/sdd, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sdd' to turn on SMART features
Jan 10 11:45:08 node1 smartd[4213]: Device: /dev/sde, opened
Jan 10 11:45:08 node1 smartd[4213]: Device: /dev/sde, IE (SMART) not enabled, skip device Try 'smartctl -s on /dev/sde' to turn on SMART features
Jan 10 11:45:08 node1 smartd[4213]: Monitoring 1 ATA and 0 SCSI devices
Jan 10 11:45:08 node1 smartd[4213]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 112 to 122
Jan 10 11:45:08 node1 smartd[4213]: Device: /dev/sda [SAT], state written to /var/lib/smartmontools/smartd.WDC_WD5002ABYS_18B1B0-WD_WCASY8429759.ata.state
Jan 10 11:45:08 node1 smartd[4713]: smartd has fork()ed into background mode. New PID=4713.
Jan 10 11:45:08 node1 openais[4695]: [MAIN ] AIS Executive Service RELEASE 'subrev 1152 version 0.80'
Jan 10 11:45:08 node1 openais[4695]: [MAIN ] Copyright (C) 2002-2006 MontaVista Software, Inc and contributors.
Jan 10 11:45:08 node1 openais[4695]: [MAIN ] Copyright (C) 2006 Red Hat, Inc.
Jan 10 11:45:08 node1 openais[4695]: [MAIN ] AIS Executive Service: started and ready to provide service.
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] Token Timeout (5000 ms) retransmit timeout (490 ms)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] token hold (382 ms) retransmits before loss (10 retrans)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] join (1000 ms) send_join (45 ms) consensus (2500 ms) merge (200 ms)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] downcheck (1000 ms) fail to recv const (50 msgs)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] seqno unchanged const (30 rotations) Maximum network MTU 1500
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] window size per rotation (50 messages) maximum messages per rotation (20 messages)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] send threads (0 threads)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] RRP token expired timeout (490 ms)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] RRP token problem counter (2000 ms)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] RRP threshold (10 problem count)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] RRP mode set to none.
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] heartbeat_failures_allowed (0)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] max_network_delay (50 ms)
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] Receive multicast socket recv buffer size (262142 bytes).
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] Transmit multicast socket send buffer size (262142 bytes).
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] The network interface [192.168.1.150] is now up.
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] Created or loaded sequence id 32.192.168.1.150 for this ring.
Jan 10 11:45:08 node1 openais[4695]: [TOTEM] entering GATHER state from 15.
Jan 10 11:45:09 node1 openais[4695]: [crm  ] info: process_ais_conf: Reading configure
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: config_find_next: Processing additional logging options...
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: get_config_opt: Found 'off' for option: debug
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: get_config_opt: Found 'yes' for option: to_syslog
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: get_config_opt: Found 'daemon' for option: syslog_facility
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: config_find_next: Processing additional service options...
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: get_config_opt: Found 'yes' for option: use_logd
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: get_config_opt: Found 'yes' for option: use_mgmtd
Jan 10 11:45:09 node1 openais[4695]: [crm  ] info: pcmk_plugin_init: CRM: Initialized
Jan 10 11:45:09 node1 openais[4695]: [crm  ] Logging: Initialized pcmk_plugin_init
Jan 10 11:45:09 node1 openais[4695]: [crm  ] info: pcmk_plugin_init: Service: 9
Jan 10 11:45:09 node1 openais[4695]: [crm  ] info: pcmk_plugin_init: Local node id: 369207488
Jan 10 11:45:09 node1 openais[4695]: [crm  ] info: pcmk_plugin_init: Local hostname: node1
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: update_member: Creating entry for node 369207488 born on 0
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: update_member: 0x771fa0 Node 369207488 now known as node1 (was: (null))
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: update_member: Node node1 now has 1 quorum votes (was 0)
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: update_member: Node 369207488/node1 is now: member
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: spawn_child: Forked child 4746 for process stonithd
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: spawn_child: Forked child 4747 for process cib
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: spawn_child: Forked child 4748 for process lrmd
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: spawn_child: Forked child 4749 for process attrd
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: spawn_child: Forked child 4750 for process pengine
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: spawn_child: Forked child 4751 for process crmd
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] info: spawn_child: Forked child 4752 for process mgmtd
Jan 10 11:45:09 node1 openais[4695]: [crm  ] info: pcmk_startup: CRM: Initialized
Jan 10 11:45:09 node1 openais[4695]: [MAIN ] Service initialized 'Pacemaker Cluster Manager'
Jan 10 11:45:09 node1 lrmd: [4748]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan 10 11:45:09 node1 lrmd: [4748]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jan 10 11:45:09 node1 mgmtd: [4752]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan 10 11:45:09 node1 mgmtd: [4752]: info: G_main_add_SignalHandler: Added signal handler for signal 15
Jan 10 11:45:09 node1 mgmtd: [4752]: debug: Enabling coredumps
Jan 10 11:45:09 node1 lrmd: [4748]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 10 11:45:09 node1 attrd: [4749]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 10 11:45:09 node1 pengine: [4750]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 10 11:45:09 node1 crmd: [4751]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 10 11:45:09 node1 cib: [4747]: info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jan 10 11:45:09 node1 attrd: [4749]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan 10 11:45:09 node1 pengine: [4750]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan 10 11:45:09 node1 cib: [4747]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan 10 11:45:09 node1 attrd: [4749]: info: main: Starting up....
Jan 10 11:45:09 node1 lrmd: [4748]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan 10 11:45:09 node1 cib: [4747]: info: G_main_add_TriggerHandler: Added signal manual handler
Jan 10 11:45:09 node1 crmd: [4751]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan 10 11:45:09 node1 mgmtd: [4752]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan 10 11:45:09 node1 stonithd: [4746]: WARN: Initializing connection to logging daemon failed. Logging daemon may not be running
Jan 10 11:45:09 node1 pengine: [4750]: info: main: Starting pengine
Jan 10 11:45:10 node1 attrd: [4749]: info: init_ais_connection: Creating connection to our AIS plugin
Jan 10 11:45:10 node1 lrmd: [4748]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan 10 11:45:10 node1 lrmd: [4748]: info: Started.
Jan 10 11:45:10 node1 crmd: [4751]: info: main: CRM Hg Version: 0080ec086ae9c20ad5c4c3562000c0ad68374f0a
Jan 10 11:45:10 node1 mgmtd: [4752]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan 10 11:45:10 node1 stonithd: [4746]: info: G_main_add_SignalHandler: Added signal handler for signal 10
Jan 10 11:45:10 node1 attrd: [4749]: info: init_ais_connection: AIS connection established
Jan 10 11:45:09 node1 openais[4695]: [SERV ] Service initialized 'openais extended virtual synchrony service'
Jan 10 11:45:10 node1 cib: [4747]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 10 11:45:10 node1 crmd: [4751]: info: crmd_init: Starting crmd
Jan 10 11:45:10 node1 stonithd: [4746]: info: G_main_add_SignalHandler: Added signal handler for signal 12
Jan 10 11:45:10 node1 attrd: [4749]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan 10 11:45:10 node1 attrd: [4749]: info: crm_new_peer: Node node1 now has id: 369207488
Jan 10 11:45:10 node1 mgmtd: [4752]: info: init_crm
Jan 10 11:45:10 node1 crmd: [4751]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 10 11:45:10 node1 stonithd: [4746]: info: init_ais_connection: Creating connection to our AIS plugin
Jan 10 11:45:10 node1 cib: [4747]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Jan 10 11:45:10 node1 attrd: [4749]: info: crm_new_peer: Node 369207488 is now known as node1
Jan 10 11:45:10 node1 mgmtd: [4752]: info: login to cib: 0, ret:-10
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais cluster membership service B.01.01'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais availability management framework B.01.01'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais checkpoint service B.01.01'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais event service B.01.01'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais distributed locking service B.01.01'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais message service B.01.01'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais configuration service'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais cluster closed process group service v1.01'
Jan 10 11:45:10 node1 openais[4695]: [SERV ] Service initialized 'openais cluster config database access v1.01'
Jan 10 11:45:10 node1 openais[4695]: [SYNC ] Not using a virtual synchrony filter.
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] Creating commit token because I am the rep.
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] Saving state aru 0 high seq received 0
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] Storing new sequence id for ring 24
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] entering COMMIT state.
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] entering RECOVERY state.
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] position [0] member 192.168.1.150:
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] previous ring seq 32 rep 192.168.1.150
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] aru 0 high delivered 0 received flag 1
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] Did not need to originate any messages in recovery.
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] Sending initial ORF token
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] CLM CONFIGURATION CHANGE
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] New Configuration:
Jan 10 11:45:10 node1 stonithd: [4746]: info: init_ais_connection: AIS connection established
Jan 10 11:45:10 node1 stonithd: [4746]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan 10 11:45:10 node1 stonithd: [4746]: info: crm_new_peer: Node node1 now has id: 369207488
Jan 10 11:45:10 node1 stonithd: [4746]: info: crm_new_peer: Node 369207488 is now known as node1
Jan 10 11:45:10 node1 stonithd: [4746]: notice: /usr/lib64/heartbeat/stonithd start up successfully.
Jan 10 11:45:10 node1 stonithd: [4746]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] Members Left:
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] Members Joined:
Jan 10 11:45:10 node1 openais[4695]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 36: memb=0, new=0, lost=0
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] CLM CONFIGURATION CHANGE
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] New Configuration:
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] Members Left:
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] Members Joined:
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan 10 11:45:10 node1 openais[4695]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 36: memb=1, new=1, lost=0
Jan 10 11:45:10 node1 openais[4695]: [crm  ] info: pcmk_peer_update: NEW:  node1 369207488
Jan 10 11:45:10 node1 openais[4695]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan 10 11:45:10 node1 openais[4695]: [MAIN ] info: update_member: Node node1 now has process list: 00000000000000000000000000053312 (340754)
Jan 10 11:45:10 node1 openais[4695]: [SYNC ] This node is within the primary component and will provide service.
Jan 10 11:45:10 node1 openais[4695]: [TOTEM] entering OPERATIONAL state.
Jan 10 11:45:10 node1 openais[4695]: [CLM  ] got nodejoin message 192.168.1.150
Jan 10 11:45:10 node1 openais[4695]: [crm  ] info: pcmk_ipc: Recorded connection 0x77dc30 for attrd/4749
Jan 10 11:45:10 node1 openais[4695]: [crm  ] info: pcmk_ipc: Recorded connection 0x77d790 for stonithd/4746
Jan 10 11:45:10 node1 /usr/sbin/cron[4798]: (CRON) STARTUP (V5.0)
Jan 10 11:45:10 node1 cib: [4747]: info: startCib: CIB Initialization completed successfully
Jan 10 11:45:10 node1 cib: [4747]: info: init_ais_connection: Creating connection to our AIS plugin
Jan 10 11:45:10 node1 cib: [4747]: info: init_ais_connection: AIS connection established
Jan 10 11:45:10 node1 openais[4695]: [crm  ] info: pcmk_ipc: Recorded connection 0x77f030 for cib/4747
Jan 10 11:45:10 node1 openais[4695]: [crm  ] info: pcmk_ipc: Sending membership update 36 to cib
Jan 10 11:45:10 node1 cib: [4747]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan 10 11:45:10 node1 cib: [4747]: info: crm_new_peer: Node node1 now has id: 369207488
Jan 10 11:45:10 node1 cib: [4747]: info: crm_new_peer: Node 369207488 is now known as node1
Jan 10 11:45:10 node1 cib: [4747]: info: cib_init: Starting cib mainloop
Jan 10 11:45:10 node1 cib: [4747]: info: ais_dispatch: Membership 36: quorum still lost
Jan 10 11:45:10 node1 cib: [4747]: info: crm_update_peer: Node node1: id=369207488 state=member (new) addr=r(0) ip(192.168.1.150)  (new) votes=1 (new) born=0 seen=36 proc=00000000000000000000000000053312 (new)
Jan 10 11:45:10 node1 cib: [4800]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-21.raw
Jan 10 11:45:11 node1 cib: [4800]: info: write_cib_contents: Wrote version 0.23.0 of the CIB to disk (digest: 26d20aa2df1d6a58eff4a1ab8b4c4fa3)
Jan 10 11:45:11 node1 cib: [4800]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.MhB0wv (digest: /var/lib/heartbeat/crm/cib.jgfkuP)
Jan 10 11:45:11 node1 crmd: [4751]: info: do_cib_control: CIB connection established
Jan 10 11:45:11 node1 crmd: [4751]: info: init_ais_connection: Creating connection to our AIS plugin
Jan 10 11:45:11 node1 crmd: [4751]: info: init_ais_connection: AIS connection established
Jan 10 11:45:11 node1 openais[4695]: [crm  ] info: pcmk_ipc: Recorded connection 0x77ce00 for crmd/4751
Jan 10 11:45:11 node1 crmd: [4751]: info: get_ais_nodeid: Server details: id=369207488 uname=node1
Jan 10 11:45:11 node1 crmd: [4751]: info: crm_new_peer: Node node1 now has id: 369207488
Jan 10 11:45:11 node1 crmd: [4751]: info: crm_new_peer: Node 369207488 is now known as node1
Jan 10 11:45:11 node1 crmd: [4751]: info: do_ha_control: Connected to the cluster
Jan 10 11:45:11 node1 openais[4695]: [crm  ] info: pcmk_ipc: Sending membership update 36 to crmd
Jan 10 11:45:11 node1 crmd: [4751]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Jan 10 11:45:11 node1 crmd: [4751]: info: crmd_init: Starting crmd's mainloop
Jan 10 11:45:11 node1 crmd: [4751]: info: config_query_callback: Checking for expired actions every 900000ms
Jan 10 11:45:11 node1 openais[4695]: [crm  ] info: update_expected_votes: Expected quorum votes 1024 -> 2
Jan 10 11:45:11 node1 crmd: [4751]: info: ais_dispatch: Membership 36: quorum still lost
Jan 10 11:45:11 node1 crmd: [4751]: info: crm_update_peer: Node node1: id=369207488 state=member (new) addr=r(0) ip(192.168.1.150)  (new) votes=1 (new) born=0 seen=36 proc=00000000000000000000000000053312 (new)
Jan 10 11:45:11 node1 crmd: [4751]: info: do_started: The local CRM is operational
Jan 10 11:45:11 node1 crmd: [4751]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jan 10 11:45:12 node1 mgmtd: [4752]: debug: main: run the loop...
Jan 10 11:45:12 node1 mgmtd: [4752]: info: Started.
Jan 10 11:45:12 node1 crmd: [4751]: info: ais_dispatch: Membership 36: quorum still lost
Jan 10 11:45:20 node1 attrd: [4749]: info: main: Sending full refresh
Jan 10 11:45:20 node1 attrd: [4749]: info: main: Starting mainloop...
Jan 10 11:45:21 node1 gdm-simple-greeter[4938]: libglade-WARNING: Unexpected element <requires-version> inside <glade-interface>.
Jan 10 11:45:22 node1 crmd: [4751]: info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped!
Jan 10 11:45:22 node1 crmd: [4751]: WARN: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jan 10 11:45:22 node1 crmd: [4751]: info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jan 10 11:45:22 node1 crmd: [4751]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jan 10 11:45:22 node1 crmd: [4751]: info: do_te_control: Registering TE UUID: 5e27d565-64bd-4eb5-b656-d9f821eff632
Jan 10 11:45:22 node1 crmd: [4751]: WARN: cib_client_add_notify_callback: Callback already present
Jan 10 11:45:22 node1 crmd: [4751]: info: set_graph_functions: Setting custom graph functions
Jan 10 11:45:22 node1 crmd: [4751]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses
Jan 10 11:45:22 node1 crmd: [4751]: info: do_dc_takeover: Taking over DC status for this partition
Jan 10 11:45:22 node1 cib: [4747]: info: cib_process_readwrite: We are now in R/W mode
Jan 10 11:45:22 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/6, version=0.23.0): ok (rc=0)
Jan 10 11:45:22 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/7, version=0.23.0): ok (rc=0)
Jan 10 11:45:22 node1 crmd: [4751]: info: join_make_offer: Making join offers based on membership 36
Jan 10 11:45:22 node1 crmd: [4751]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jan 10 11:45:22 node1 crmd: [4751]: info: ais_dispatch: Membership 36: quorum still lost
Jan 10 11:45:22 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/9, version=0.23.0): ok (rc=0)
Jan 10 11:45:22 node1 crmd: [4751]: info: config_query_callback: Checking for expired actions every 900000ms
Jan 10 11:45:22 node1 crmd: [4751]: info: update_dc: Set DC to node1 (3.0.1)
Jan 10 11:45:22 node1 crmd: [4751]: info: ais_dispatch: Membership 36: quorum still lost
Jan 10 11:45:22 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/12, version=0.23.0): ok (rc=0)
Jan 10 11:45:22 node1 crmd: [4751]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 10 11:45:22 node1 crmd: [4751]: info: do_state_transition: All 1 cluster nodes responded to the join offer.
Jan 10 11:45:22 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/15, version=0.23.0): ok (rc=0)
Jan 10 11:45:22 node1 crmd: [4751]: info: do_dc_join_finalize: join-1: Syncing the CIB from node1 to the rest of the cluster
Jan 10 11:45:22 node1 crmd: [4751]: info: te_connect_stonith: Attempting connection to fencing daemon...
Jan 10 11:45:22 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/16, version=0.23.0): ok (rc=0)
Jan 10 11:45:23 node1 crmd: [4751]: info: te_connect_stonith: Connected
Jan 10 11:45:23 node1 crmd: [4751]: info: update_attrd: Connecting to attrd...
Jan 10 11:45:23 node1 crmd: [4751]: info: update_attrd: Updating terminate=<none> via attrd for node1
Jan 10 11:45:23 node1 crmd: [4751]: info: update_attrd: Updating shutdown=<none> via attrd for node1
Jan 10 11:45:23 node1 attrd: [4749]: info: find_hash_entry: Creating hash entry for terminate
Jan 10 11:45:23 node1 attrd: [4749]: info: find_hash_entry: Creating hash entry for shutdown
Jan 10 11:45:23 node1 crmd: [4751]: info: do_dc_join_ack: join-1: Updating node state to member for node1
Jan 10 11:45:23 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/17, version=0.23.0): ok (rc=0)
Jan 10 11:45:23 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=local/crmd/18, version=0.23.0): ok (rc=0)
Jan 10 11:45:23 node1 crmd: [4751]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
Jan 10 11:45:23 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/19, version=0.23.0): ok (rc=0)
Jan 10 11:45:23 node1 crmd: [4751]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan 10 11:45:23 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/20, version=0.23.0): ok (rc=0)
Jan 10 11:45:23 node1 crmd: [4751]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan 10 11:45:23 node1 crmd: [4751]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 10 11:45:23 node1 crmd: [4751]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jan 10 11:45:23 node1 crmd: [4751]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Jan 10 11:45:23 node1 crmd: [4751]: info: crm_update_quorum: Updating quorum status to false (call=24)
Jan 10 11:45:23 node1 attrd: [4749]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Jan 10 11:45:23 node1 crmd: [4751]: info: abort_transition_graph: do_te_invoke:190 - Triggered transition abort (complete=1) : Peer Cancelled
Jan 10 11:45:23 node1 attrd: [4749]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate
Jan 10 11:45:23 node1 crmd: [4751]: info: do_pe_invoke: Query 25: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:45:23 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/22, version=0.23.1): ok (rc=0)
Jan 10 11:45:23 node1 cib: [4747]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="23" num_updates="1" />
Jan 10 11:45:23 node1 cib: [4747]: info: log_data_element: cib:diff: + <cib dc-uuid="node1" admin_epoch="0" epoch="24" num_updates="1" />
Jan 10 11:45:23 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/24, version=0.24.1): ok (rc=0)
Jan 10 11:45:23 node1 crmd: [4751]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan 10 11:45:23 node1 crmd: [4751]: info: need_abort: Aborting on change to admin_epoch
Jan 10 11:45:23 node1 crmd: [4751]: info: do_pe_invoke: Query 26: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:45:23 node1 attrd: [4749]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan 10 11:45:23 node1 crmd: [4751]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263120323-7, seq=36, quorate=0
Jan 10 11:45:24 node1 pengine: [4750]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan 10 11:45:24 node1 pengine: [4750]: info: determine_online_status: Node node1 is online
Jan 10 11:45:24 node1 pengine: [4750]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan 10 11:45:24 node1 pengine: [4750]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan 10 11:45:24 node1 pengine: [4750]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan 10 11:45:24 node1 pengine: [4750]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node2 on node1
Jan 10 11:45:24 node1 pengine: [4750]: WARN: stage6: Node node2 is unclean!
Jan 10 11:45:24 node1 pengine: [4750]: notice: stage6: Cannot fence unclean nodes until quorum is attained (or no-quorum-policy is set to ignore)
Jan 10 11:45:24 node1 pengine: [4750]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan 10 11:45:24 node1 pengine: [4750]: notice: LogActions: Start STONITH-node2	(node1)
Jan 10 11:45:24 node1 crmd: [4751]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 10 11:45:24 node1 crmd: [4751]: info: unpack_graph: Unpacked transition 0: 6 actions in 6 synapses
Jan 10 11:45:24 node1 crmd: [4751]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1263120323-7) derived from /var/lib/pengine/pe-warn-23.bz2
Jan 10 11:45:24 node1 crmd: [4751]: info: te_rsc_command: Initiating action 4: monitor STONITH-node1_monitor_0 on node1 (local)
Jan 10 11:45:24 node1 lrmd: [4748]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan 10 11:45:24 node1 crmd: [4751]: info: do_lrm_rsc_op: Performing key=4:0:7:5e27d565-64bd-4eb5-b656-d9f821eff632 op=STONITH-node1_monitor_0 )
Jan 10 11:45:24 node1 lrmd: [4748]: info: rsc:STONITH-node1: monitor
Jan 10 11:45:24 node1 crmd: [4751]: info: te_rsc_command: Initiating action 5: monitor STONITH-node2_monitor_0 on node1 (local)
Jan 10 11:45:24 node1 lrmd: [4748]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Jan 10 11:45:24 node1 crmd: [4751]: info: do_lrm_rsc_op: Performing key=5:0:7:5e27d565-64bd-4eb5-b656-d9f821eff632 op=STONITH-node2_monitor_0 )
Jan 10 11:45:24 node1 lrmd: [4748]: info: rsc:STONITH-node2: monitor
Jan 10 11:45:24 node1 crmd: [4751]: info: process_lrm_event: LRM operation STONITH-node1_monitor_0 (call=2, rc=7, cib-update=27, confirmed=true) complete not running
Jan 10 11:45:24 node1 cib: [4945]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-22.raw
Jan 10 11:45:24 node1 pengine: [4750]: WARN: process_pe_message: Transition 0: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-23.bz2
Jan 10 11:45:24 node1 pengine: [4750]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan 10 11:45:24 node1 crmd: [4751]: info: process_lrm_event: LRM operation STONITH-node2_monitor_0 (call=3, rc=7, cib-update=28, confirmed=true) complete not running
Jan 10 11:45:24 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node1_monitor_0 (4) confirmed on node1 (rc=0)
Jan 10 11:45:24 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node2_monitor_0 (5) confirmed on node1 (rc=0)
Jan 10 11:45:24 node1 crmd: [4751]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on node1 (local) - no waiting
Jan 10 11:45:24 node1 crmd: [4751]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan 10 11:45:24 node1 crmd: [4751]: info: te_rsc_command: Initiating action 6: start STONITH-node2_start_0 on node1 (local)
Jan 10 11:45:24 node1 crmd: [4751]: info: do_lrm_rsc_op: Performing key=6:0:0:5e27d565-64bd-4eb5-b656-d9f821eff632 op=STONITH-node2_start_0 )
Jan 10 11:45:24 node1 lrmd: [4748]: info: rsc:STONITH-node2: start
Jan 10 11:45:24 node1 lrmd: [4950]: info: Try to start STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan 10 11:45:24 node1 cib: [4945]: info: write_cib_contents: Wrote version 0.24.0 of the CIB to disk (digest: 7ac6cb2c999ee37b4a51f8fbfa32b59f)
Jan 10 11:45:24 node1 cib: [4945]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.fofEvg (digest: /var/lib/heartbeat/crm/cib.WzytT8)
Jan 10 11:45:26 node1 stonithd: [4968]: info: external_run_cmd: Calling '/usr/lib64/stonith/plugins/external/drac5 status' returned 65280
Jan 10 11:45:26 node1 stonithd: [4746]: WARN: start STONITH-node2 failed, because its hostlist is empty
Jan 10 11:45:26 node1 lrmd: [4748]: debug: stonithRA plugin: provider attribute is not needed and will be ignored.
Jan 10 11:45:26 node1 crmd: [4751]: info: process_lrm_event: LRM operation STONITH-node2_start_0 (call=4, rc=1, cib-update=32, confirmed=true) complete unknown error
Jan 10 11:45:26 node1 crmd: [4751]: WARN: status_from_rc: Action 6 (STONITH-node2_start_0) on node1 failed (target: 0 vs. rc: 1): Error
Jan 10 11:45:26 node1 crmd: [4751]: WARN: update_failcount: Updating failcount for STONITH-node2 on node1 after failed start: rc=1 (update=INFINITY, time=1263120326)
Jan 10 11:45:26 node1 crmd: [4751]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node2_start_0, magic=0:1;6:0:0:5e27d565-64bd-4eb5-b656-d9f821eff632) : Event failed
Jan 10 11:45:26 node1 crmd: [4751]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan 10 11:45:26 node1 crmd: [4751]: info: update_abort_priority: Abort action done superceeded by restart
Jan 10 11:45:26 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node2_start_0 (6) confirmed on node1 (rc=4)
Jan 10 11:45:26 node1 crmd: [4751]: info: run_graph: ====================================================
Jan 10 11:45:26 node1 crmd: [4751]: notice: run_graph: Transition 0 (Complete=5, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-23.bz2): Stopped
Jan 10 11:45:26 node1 crmd: [4751]: info: te_graph_trigger: Transition 0 is now complete
Jan 10 11:45:26 node1 crmd: [4751]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 10 11:45:26 node1 crmd: [4751]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.
Jan 10 11:45:26 node1 crmd: [4751]: info: do_pe_invoke: Query 39: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:45:26 node1 crmd: [4751]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263120326-12, seq=36, quorate=0
Jan 10 11:45:26 node1 pengine: [4750]: WARN: cluster_status: We do not have quorum - fencing and resource management disabled
Jan 10 11:45:26 node1 pengine: [4750]: info: determine_online_status: Node node1 is online
Jan 10 11:45:26 node1 pengine: [4750]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan 10 11:45:26 node1 pengine: [4750]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan 10 11:45:26 node1 pengine: [4750]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan 10 11:45:26 node1 pengine: [4750]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Started node1 FAILED
Jan 10 11:45:26 node1 pengine: [4750]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan 10 11:45:26 node1 pengine: [4750]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan 10 11:45:26 node1 pengine: [4750]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan 10 11:45:26 node1 pengine: [4750]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan 10 11:45:26 node1 pengine: [4750]: WARN: stage6: Node node2 is unclean!
Jan 10 11:45:26 node1 pengine: [4750]: notice: stage6: Cannot fence unclean nodes until quorum is attained (or no-quorum-policy is set to ignore)
Jan 10 11:45:26 node1 pengine: [4750]: notice: LogActions: Leave resource STONITH-node1	(Stopped)
Jan 10 11:45:26 node1 pengine: [4750]: notice: LogActions: Stop resource STONITH-node2	(node1)
Jan 10 11:45:26 node1 crmd: [4751]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 10 11:45:26 node1 crmd: [4751]: info: unpack_graph: Unpacked transition 1: 2 actions in 2 synapses
Jan 10 11:45:26 node1 crmd: [4751]: info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1263120326-12) derived from /var/lib/pengine/pe-warn-24.bz2
Jan 10 11:45:26 node1 crmd: [4751]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan 10 11:45:26 node1 crmd: [4751]: info: te_rsc_command: Initiating action 1: stop STONITH-node2_stop_0 on node1 (local)
Jan 10 11:45:26 node1 crmd: [4751]: info: do_lrm_rsc_op: Performing key=1:1:0:5e27d565-64bd-4eb5-b656-d9f821eff632 op=STONITH-node2_stop_0 )
Jan 10 11:45:26 node1 lrmd: [4748]: info: rsc:STONITH-node2: stop
Jan 10 11:45:27 node1 pengine: [4750]: WARN: process_pe_message: Transition 1: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-24.bz2
Jan 10 11:45:27 node1 pengine: [4750]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan 10 11:45:27 node1 lrmd: [5039]: info: Try to stop STONITH resource <rsc_id=STONITH-node2> : Device=external/drac5
Jan 10 11:45:27 node1 stonithd: [4746]: notice: try to stop a resource STONITH-node2 who is not in started resource queue.
Jan 10 11:45:27 node1 crmd: [4751]: info: process_lrm_event: LRM operation STONITH-node2_stop_0 (call=5, rc=0, cib-update=40, confirmed=true) complete ok
Jan 10 11:45:27 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node2_stop_0 (1) confirmed on node1 (rc=0)
Jan 10 11:45:27 node1 crmd: [4751]: info: run_graph: ====================================================
Jan 10 11:45:27 node1 crmd: [4751]: notice: run_graph: Transition 1 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-24.bz2): Complete
Jan 10 11:45:27 node1 crmd: [4751]: info: te_graph_trigger: Transition 1 is now complete
Jan 10 11:45:27 node1 crmd: [4751]: info: notify_crmd: Transition 1 status: done - <null>
Jan 10 11:45:27 node1 crmd: [4751]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 10 11:45:27 node1 crmd: [4751]: info: do_state_transition: Starting PEngine Recheck Timer
Jan 10 11:46:44 node1 openais[4695]: [TOTEM] entering GATHER state from 11.
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] Creating commit token because I am the rep.
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] Saving state aru 28 high seq received 28
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] Storing new sequence id for ring 28
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] entering COMMIT state.
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] entering RECOVERY state.
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] position [0] member 192.168.1.150:
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] previous ring seq 36 rep 192.168.1.150
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] aru 28 high delivered 28 received flag 1
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] position [1] member 192.168.1.160:
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] previous ring seq 32 rep 192.168.1.160
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] aru a high delivered a received flag 1
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] Did not need to originate any messages in recovery.
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] Sending initial ORF token
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] CLM CONFIGURATION CHANGE
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] New Configuration:
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] Members Left:
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] Members Joined:
Jan 10 11:46:45 node1 openais[4695]: [crm  ] notice: pcmk_peer_update: Transitional membership event on ring 40: memb=1, new=0, lost=0
Jan 10 11:46:45 node1 openais[4695]: [crm  ] info: pcmk_peer_update: memb: node1 369207488
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] CLM CONFIGURATION CHANGE
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] New Configuration:
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] 	r(0) ip(192.168.1.150) 
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan 10 11:46:45 node1 cib: [4747]: notice: ais_dispatch: Membership 40: quorum aquired
Jan 10 11:46:45 node1 crmd: [4751]: notice: ais_dispatch: Membership 40: quorum aquired
Jan 10 11:46:45 node1 cib: [4747]: info: crm_new_peer: Node <null> now has id: 536979648
Jan 10 11:46:45 node1 crmd: [4751]: info: crm_new_peer: Node <null> now has id: 536979648
Jan 10 11:46:45 node1 cib: [4747]: info: crm_update_peer: Node (null): id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=0 born=0 seen=40 proc=00000000000000000000000000000000
Jan 10 11:46:45 node1 crmd: [4751]: info: crm_update_peer: Node (null): id=536979648 state=member (new) addr=r(0) ip(192.168.1.160)  votes=0 born=0 seen=40 proc=00000000000000000000000000000000
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] Members Left:
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] Members Joined:
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] 	r(0) ip(192.168.1.160) 
Jan 10 11:46:45 node1 openais[4695]: [crm  ] notice: pcmk_peer_update: Stable membership event on ring 40: memb=2, new=1, lost=0
Jan 10 11:46:45 node1 openais[4695]: [MAIN ] info: update_member: Creating entry for node 536979648 born on 40
Jan 10 11:46:45 node1 cib: [4747]: info: ais_dispatch: Membership 40: quorum retained
Jan 10 11:46:45 node1 crmd: [4751]: info: crm_update_quorum: Updating quorum status to true (call=43)
Jan 10 11:46:45 node1 cib: [4747]: info: crm_get_peer: Node 536979648 is now known as node2
Jan 10 11:46:45 node1 cib: [4747]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 (new) born=40 seen=40 proc=00000000000000000000000000053312 (new)
Jan 10 11:46:45 node1 openais[4695]: [MAIN ] info: update_member: Node 536979648/unknown is now: member
Jan 10 11:46:45 node1 openais[4695]: [crm  ] info: pcmk_peer_update: NEW:  .pending. 536979648
Jan 10 11:46:45 node1 openais[4695]: [crm  ] info: pcmk_peer_update: MEMB: node1 369207488
Jan 10 11:46:45 node1 openais[4695]: [crm  ] info: pcmk_peer_update: MEMB: .pending. 536979648
Jan 10 11:46:45 node1 openais[4695]: [crm  ] info: send_member_notification: Sending membership update 40 to 2 children
Jan 10 11:46:45 node1 openais[4695]: [MAIN ] info: update_member: 0x771fa0 Node 369207488 ((null)) born on: 40
Jan 10 11:46:45 node1 openais[4695]: [SYNC ] This node is within the primary component and will provide service.
Jan 10 11:46:45 node1 openais[4695]: [TOTEM] entering OPERATIONAL state.
Jan 10 11:46:45 node1 openais[4695]: [MAIN ] info: update_member: 0x77a670 Node 536979648 (node2) born on: 40
Jan 10 11:46:45 node1 openais[4695]: [MAIN ] info: update_member: 0x77a670 Node 536979648 now known as node2 (was: (null))
Jan 10 11:46:45 node1 openais[4695]: [MAIN ] info: update_member: Node node2 now has process list: 00000000000000000000000000053312 (340754)
Jan 10 11:46:45 node1 openais[4695]: [MAIN ] info: update_member: Node node2 now has 1 quorum votes (was 0)
Jan 10 11:46:45 node1 openais[4695]: [crm  ] info: send_member_notification: Sending membership update 40 to 2 children
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] got nodejoin message 192.168.1.150
Jan 10 11:46:45 node1 openais[4695]: [CLM  ] got nodejoin message 192.168.1.160
Jan 10 11:46:46 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/41, version=0.24.8): ok (rc=0)
Jan 10 11:46:46 node1 cib: [4747]: info: log_data_element: cib:diff: - <cib have-quorum="0" admin_epoch="0" epoch="24" num_updates="8" />
Jan 10 11:46:46 node1 cib: [4747]: info: log_data_element: cib:diff: + <cib have-quorum="1" admin_epoch="0" epoch="25" num_updates="1" />
Jan 10 11:46:46 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/43, version=0.25.1): ok (rc=0)
Jan 10 11:46:46 node1 crmd: [4751]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Jan 10 11:46:46 node1 crmd: [4751]: info: need_abort: Aborting on change to have-quorum
Jan 10 11:46:46 node1 crmd: [4751]: info: ais_dispatch: Membership 40: quorum retained
Jan 10 11:46:46 node1 crmd: [4751]: info: crm_get_peer: Node 536979648 is now known as node2
Jan 10 11:46:46 node1 crmd: [4751]: info: ais_status_callback: status: node2 is now member
Jan 10 11:46:46 node1 crmd: [4751]: info: crm_update_peer: Node node2: id=536979648 state=member addr=r(0) ip(192.168.1.160)  votes=1 (new) born=40 seen=40 proc=00000000000000000000000000053312 (new)
Jan 10 11:46:46 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/45, version=0.25.1): ok (rc=0)
Jan 10 11:46:46 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/46, version=0.25.1): ok (rc=0)
Jan 10 11:46:46 node1 crmd: [4751]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jan 10 11:46:46 node1 crmd: [4751]: info: do_state_transition: Membership changed: 36 -> 40 - join restart
Jan 10 11:46:46 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/49, version=0.25.2): ok (rc=0)
Jan 10 11:46:46 node1 crmd: [4751]: info: do_pe_invoke: Query 50: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:46:46 node1 crmd: [4751]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=do_state_transition ]
Jan 10 11:46:46 node1 crmd: [4751]: info: update_dc: Unset DC node1
Jan 10 11:46:46 node1 crmd: [4751]: info: join_make_offer: Making join offers based on membership 40
Jan 10 11:46:46 node1 crmd: [4751]: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Jan 10 11:46:46 node1 crmd: [4751]: info: update_dc: Set DC to node1 (3.0.1)
Jan 10 11:46:46 node1 cib: [5043]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-23.raw
Jan 10 11:46:46 node1 cib: [5043]: info: write_cib_contents: Wrote version 0.25.0 of the CIB to disk (digest: bb17643887c24490d9c26c86574315ec)
Jan 10 11:46:46 node1 cib: [5043]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.Y62n1F (digest: /var/lib/heartbeat/crm/cib.EEZyA0)
Jan 10 11:46:48 node1 crmd: [4751]: info: update_dc: Unset DC node1
Jan 10 11:46:48 node1 crmd: [4751]: info: do_dc_join_offer_all: A new node joined the cluster
Jan 10 11:46:48 node1 crmd: [4751]: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Jan 10 11:46:48 node1 crmd: [4751]: info: update_dc: Set DC to node1 (3.0.1)
Jan 10 11:46:48 node1 crmd: [4751]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 10 11:46:48 node1 crmd: [4751]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Jan 10 11:46:48 node1 crmd: [4751]: info: do_dc_join_finalize: join-3: Syncing the CIB from node1 to the rest of the cluster
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/53, version=0.25.2): ok (rc=0)
Jan 10 11:46:48 node1 crmd: [4751]: info: do_dc_join_ack: join-3: Updating node state to member for node2
Jan 10 11:46:48 node1 crmd: [4751]: info: do_dc_join_ack: join-3: Updating node state to member for node1
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/54, version=0.25.2): ok (rc=0)
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/55, version=0.25.2): ok (rc=0)
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/56, version=0.25.2): ok (rc=0)
Jan 10 11:46:48 node1 crmd: [4751]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Jan 10 11:46:48 node1 crmd: [4751]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jan 10 11:46:48 node1 crmd: [4751]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan 10 11:46:48 node1 crmd: [4751]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Jan 10 11:46:48 node1 crmd: [4751]: info: crm_update_quorum: Updating quorum status to true (call=62)
Jan 10 11:46:48 node1 crmd: [4751]: info: abort_transition_graph: do_te_invoke:190 - Triggered transition abort (complete=1) : Peer Cancelled
Jan 10 11:46:48 node1 attrd: [4749]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Jan 10 11:46:48 node1 crmd: [4751]: info: do_pe_invoke: Query 63: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/58, version=0.25.4): ok (rc=0)
Jan 10 11:46:48 node1 attrd: [4749]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate
Jan 10 11:46:48 node1 crmd: [4751]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=STONITH-node1_monitor_0, magic=0:7;4:0:7:5e27d565-64bd-4eb5-b656-d9f821eff632) : Resource op removal
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/transient_attributes (origin=node2/crmd/6, version=0.25.4): ok (rc=0)
Jan 10 11:46:48 node1 crmd: [4751]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=node2/crmd/7, version=0.25.5): ok (rc=0)
Jan 10 11:46:48 node1 crmd: [4751]: info: do_pe_invoke: Query 64: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:46:48 node1 crmd: [4751]: info: te_update_diff: Detected LRM refresh - 2 resources updated: Skipping all resource events
Jan 10 11:46:48 node1 crmd: [4751]: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA) : LRM Refresh
Jan 10 11:46:48 node1 crmd: [4751]: info: do_pe_invoke: Query 65: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/60, version=0.25.6): ok (rc=0)
Jan 10 11:46:48 node1 cib: [4747]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/62, version=0.25.6): ok (rc=0)
Jan 10 11:46:48 node1 attrd: [4749]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown
Jan 10 11:46:48 node1 crmd: [4751]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263120408-25, seq=40, quorate=1
Jan 10 11:46:48 node1 pengine: [4750]: info: determine_online_status: Node node1 is online
Jan 10 11:46:48 node1 pengine: [4750]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan 10 11:46:48 node1 pengine: [4750]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan 10 11:46:48 node1 pengine: [4750]: info: determine_online_status: Node node2 is online
Jan 10 11:46:48 node1 pengine: [4750]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Stopped 
Jan 10 11:46:48 node1 pengine: [4750]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan 10 11:46:48 node1 pengine: [4750]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan 10 11:46:48 node1 pengine: [4750]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan 10 11:46:48 node1 pengine: [4750]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan 10 11:46:48 node1 pengine: [4750]: notice: RecurringOp:  Start recurring monitor (15s) for STONITH-node1 on node2
Jan 10 11:46:48 node1 pengine: [4750]: notice: LogActions: Start STONITH-node1	(node2)
Jan 10 11:46:48 node1 pengine: [4750]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan 10 11:46:48 node1 crmd: [4751]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 10 11:46:48 node1 crmd: [4751]: info: unpack_graph: Unpacked transition 2: 6 actions in 6 synapses
Jan 10 11:46:48 node1 crmd: [4751]: info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1263120408-25) derived from /var/lib/pengine/pe-warn-25.bz2
Jan 10 11:46:48 node1 crmd: [4751]: info: te_rsc_command: Initiating action 5: monitor STONITH-node1_monitor_0 on node2
Jan 10 11:46:48 node1 crmd: [4751]: info: te_rsc_command: Initiating action 6: monitor STONITH-node2_monitor_0 on node2
Jan 10 11:46:48 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node1_monitor_0 (5) confirmed on node2 (rc=0)
Jan 10 11:46:48 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node2_monitor_0 (6) confirmed on node2 (rc=0)
Jan 10 11:46:48 node1 crmd: [4751]: info: te_rsc_command: Initiating action 4: probe_complete probe_complete on node2 - no waiting
Jan 10 11:46:48 node1 crmd: [4751]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan 10 11:46:48 node1 crmd: [4751]: info: te_rsc_command: Initiating action 7: start STONITH-node1_start_0 on node2
Jan 10 11:46:48 node1 pengine: [4750]: WARN: process_pe_message: Transition 2: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-25.bz2
Jan 10 11:46:48 node1 pengine: [4750]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan 10 11:46:51 node1 crmd: [4751]: WARN: status_from_rc: Action 7 (STONITH-node1_start_0) on node2 failed (target: 0 vs. rc: 1): Error
Jan 10 11:46:51 node1 crmd: [4751]: WARN: update_failcount: Updating failcount for STONITH-node1 on node2 after failed start: rc=1 (update=INFINITY, time=1263120411)
Jan 10 11:46:51 node1 crmd: [4751]: info: abort_transition_graph: match_graph_event:282 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=STONITH-node1_start_0, magic=0:1;7:2:0:5e27d565-64bd-4eb5-b656-d9f821eff632) : Event failed
Jan 10 11:46:51 node1 crmd: [4751]: info: update_abort_priority: Abort priority upgraded from 0 to 1
Jan 10 11:46:51 node1 crmd: [4751]: info: update_abort_priority: Abort action done superceeded by restart
Jan 10 11:46:51 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node1_start_0 (7) confirmed on node2 (rc=4)
Jan 10 11:46:51 node1 crmd: [4751]: info: run_graph: ====================================================
Jan 10 11:46:51 node1 crmd: [4751]: notice: run_graph: Transition 2 (Complete=5, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-warn-25.bz2): Stopped
Jan 10 11:46:51 node1 crmd: [4751]: info: te_graph_trigger: Transition 2 is now complete
Jan 10 11:46:51 node1 crmd: [4751]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 10 11:46:51 node1 crmd: [4751]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Jan 10 11:46:51 node1 crmd: [4751]: info: do_pe_invoke: Query 72: Requesting the current CIB: S_POLICY_ENGINE
Jan 10 11:46:51 node1 crmd: [4751]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1263120411-30, seq=40, quorate=1
Jan 10 11:46:51 node1 pengine: [4750]: info: determine_online_status: Node node1 is online
Jan 10 11:46:51 node1 pengine: [4750]: info: unpack_rsc_op: STONITH-node2_start_0 on node1 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan 10 11:46:51 node1 pengine: [4750]: WARN: unpack_rsc_op: Processing failed op STONITH-node2_start_0 on node1: unknown error
Jan 10 11:46:51 node1 pengine: [4750]: info: determine_online_status: Node node2 is online
Jan 10 11:46:51 node1 pengine: [4750]: info: unpack_rsc_op: STONITH-node1_start_0 on node2 returned 1 (unknown error) instead of the expected value: 0 (ok)
Jan 10 11:46:51 node1 pengine: [4750]: WARN: unpack_rsc_op: Processing failed op STONITH-node1_start_0 on node2: unknown error
Jan 10 11:46:51 node1 pengine: [4750]: notice: native_print: STONITH-node1	(stonith:external/drac5):	Started node2 FAILED
Jan 10 11:46:51 node1 pengine: [4750]: notice: native_print: STONITH-node2	(stonith:external/drac5):	Stopped 
Jan 10 11:46:51 node1 pengine: [4750]: info: get_failcount: STONITH-node2 has failed 1000000 times on node1
Jan 10 11:46:51 node1 pengine: [4750]: WARN: common_apply_stickiness: Forcing STONITH-node2 away from node1 after 1000000 failures (max=1000000)
Jan 10 11:46:51 node1 pengine: [4750]: info: get_failcount: STONITH-node1 has failed 1000000 times on node2
Jan 10 11:46:51 node1 pengine: [4750]: WARN: common_apply_stickiness: Forcing STONITH-node1 away from node2 after 1000000 failures (max=1000000)
Jan 10 11:46:51 node1 pengine: [4750]: WARN: native_color: Resource STONITH-node1 cannot run anywhere
Jan 10 11:46:51 node1 pengine: [4750]: WARN: native_color: Resource STONITH-node2 cannot run anywhere
Jan 10 11:46:51 node1 pengine: [4750]: notice: LogActions: Stop resource STONITH-node1	(node2)
Jan 10 11:46:51 node1 pengine: [4750]: notice: LogActions: Leave resource STONITH-node2	(Stopped)
Jan 10 11:46:51 node1 crmd: [4751]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jan 10 11:46:51 node1 crmd: [4751]: info: unpack_graph: Unpacked transition 3: 2 actions in 2 synapses
Jan 10 11:46:51 node1 crmd: [4751]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1263120411-30) derived from /var/lib/pengine/pe-warn-26.bz2
Jan 10 11:46:51 node1 crmd: [4751]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
Jan 10 11:46:51 node1 crmd: [4751]: info: te_rsc_command: Initiating action 1: stop STONITH-node1_stop_0 on node2
Jan 10 11:46:51 node1 pengine: [4750]: WARN: process_pe_message: Transition 3: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-26.bz2
Jan 10 11:46:51 node1 pengine: [4750]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Jan 10 11:46:51 node1 crmd: [4751]: info: match_graph_event: Action STONITH-node1_stop_0 (1) confirmed on node2 (rc=0)
Jan 10 11:46:51 node1 crmd: [4751]: info: run_graph: ====================================================
Jan 10 11:46:51 node1 crmd: [4751]: notice: run_graph: Transition 3 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-26.bz2): Complete
Jan 10 11:46:51 node1 crmd: [4751]: info: te_graph_trigger: Transition 3 is now complete
Jan 10 11:46:51 node1 crmd: [4751]: info: notify_crmd: Transition 3 status: done - <null>
Jan 10 11:46:51 node1 crmd: [4751]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jan 10 11:46:51 node1 crmd: [4751]: info: do_state_transition: Starting PEngine Recheck Timer
Jan 10 11:46:56 node1 attrd: [4749]: info: crm_new_peer: Node node2 now has id: 536979648
Jan 10 11:46:56 node1 attrd: [4749]: info: crm_new_peer: Node 536979648 is now known as node2
Jan 10 11:47:30 node1 sshd[5046]: Accepted keyboard-interactive/pam for root from 192.168.1.61 port 53903 ssh2
Jan 10 11:53:04 node1 sshd[5101]: Accepted keyboard-interactive/pam for root from 192.168.1.61 port 33050 ssh2
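(End of attached log.)

A possible way to narrow down the rc=1 start failure recorded above is to run the
fencing plugin by hand with the cluster-glue stonith(8) tool, before Pacemaker gets
involved. The sketch below is only a rough outline: the flags and the parameter names
(ipaddr, userid, passwd) are as I recall them for external/drac5, and the angle-bracket
placeholders are generic stand-ins that must be replaced with the real DRAC address and
credentials. Check the names your plugin version actually expects first.

    # Ask the plugin which configuration parameters it expects
    # (parameter names used below are assumptions, verify them here):
    stonith -t external/drac5 -n

    # Query device status and list the hosts the device can fence,
    # using placeholder values for the DRAC address and credentials:
    stonith -t external/drac5 \
        -p "ipaddr=<drac-ip> userid=<user> passwd=<password>" -S
    stonith -t external/drac5 \
        -p "ipaddr=<drac-ip> userid=<user> passwd=<password>" -l

    # Once a manual test succeeds, clear the INFINITY failcounts that the
    # log shows for both resources, so Pacemaker retries the start:
    crm resource cleanup STONITH-node1
    crm resource cleanup STONITH-node2

If the manual stonith call fails the same way, the problem is in reaching or logging in
to the DRAC itself rather than in the cluster configuration.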
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cluster.xml
Type: application/xml
Size: 8429 bytes
Desc: not available
URL: <http://lists.clusterlabs.org/pipermail/pacemaker/attachments/20100110/00cefa18/attachment.wsdl>

