[ClusterLabs] unable to start fence_scsi
Marco A. Carcano
marco.carcano at itc4u.ch
Wed May 18 10:21:37 UTC 2016
Hi Ken,
by the way, I’ve just also tried with pacemaker 1.1.14 (I built it from source into a new RPM), but it doesn’t work
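If it helps, I can also post the output of "stonith_admin -L" and "stonith_admin -l apache-up003.ring0" from each node, in case that shows anything beyond what is already in the logs about whether stonith-ng considers the scsi device eligible for those hosts at all.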
> On 18 May 2016, at 11:29, Marco A. Carcano <marco.carcano at itc4u.ch> wrote:
>
> Hi Ken,
>
> thank you for the reply
>
> I tried as you suggested, and now the stonith device tries to start but fails.
>
> I tried this
>
> pcs stonith create scsi fence_scsi pcmk_host_list="apache-up001.ring0 apache-up002.ring0 apache-up003.ring0" pcmk_host_map="apache-up001.ring1=apache-up001.ring0; apache-up002.ring1=apache-up002.ring0; apache-up003.ring1=apache-up003.ring0" pcmk_reboot_action="off" devices="/dev/mapper/36001405973e201b3fdb4a999175b942f" meta provides="unfencing" op monitor interval=60s
>
> and even this, adding pcmk_monitor_action="metadata" as suggested in a post on the RH knowledge base (even though the error there was quite different)
>
> pcs stonith create scsi fence_scsi pcmk_host_list="apache-up001.ring0 apache-up002.ring0 apache-up003.ring0" pcmk_host_map="apache-up001.ring1=apache-up001.ring0; apache-up002.ring1=apache-up002.ring0; apache-up003.ring1=apache-up003.ring0" pcmk_reboot_action="off" devices="/dev/mapper/36001405973e201b3fdb4a999175b942f" meta provides="unfencing" pcmk_monitor_action="metadata" op monitor interval=60s
>
> I’m using CentOS 7.2, pacemaker-1.1.13-10 resource-agents-3.9.5-54 and fence-agents-scsi-4.0.11-27
>
> the error messages are "Couldn't find anyone to fence (on) apache-up003.ring0 with any device" and "error: Operation on of apache-up003.ring0 by <no-one> for crmd.15918 at apache-up001.ring0.0599387e: No such device"
>
> Thanks
>
> Marco
>
>
> May 18 10:37:03 apache-up001 crmd[15918]: notice: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
> May 18 10:37:03 apache-up001 pengine[15917]: notice: On loss of CCM Quorum: Ignore
> May 18 10:37:03 apache-up001 pengine[15917]: notice: Unfencing apache-up001.ring0: node discovery
> May 18 10:37:03 apache-up001 pengine[15917]: notice: Unfencing apache-up002.ring0: node discovery
> May 18 10:37:03 apache-up001 pengine[15917]: notice: Unfencing apache-up003.ring0: node discovery
> May 18 10:37:03 apache-up001 pengine[15917]: notice: Start scsia#011(apache-up001.ring0)
> May 18 10:37:03 apache-up001 pengine[15917]: notice: Calculated Transition 11: /var/lib/pacemaker/pengine/pe-input-95.bz2
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Executing on fencing operation (11) on apache-up003.ring0 (timeout=60000)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Initiating action 9: probe_complete probe_complete-apache-up003.ring0 on apache-up003.ring0 - no waiting
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Executing on fencing operation (8) on apache-up002.ring0 (timeout=60000)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Initiating action 6: probe_complete probe_complete-apache-up002.ring0 on apache-up002.ring0 - no waiting
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Executing on fencing operation (5) on apache-up001.ring0 (timeout=60000)
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Client crmd.15918.697c495e wants to fence (on) 'apache-up003.ring0' with device '(any)'
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Initiating remote operation on for apache-up003.ring0: 0599387e-0a30-4e1b-b641-adea5ba2a4ad (0)
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Client crmd.15918.697c495e wants to fence (on) 'apache-up002.ring0' with device '(any)'
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Initiating remote operation on for apache-up002.ring0: 76aba815-280e-491a-bd17-40776c8169e9 (0)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Initiating action 3: probe_complete probe_complete-apache-up001.ring0 on apache-up001.ring0 (local) - no waiting
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Client crmd.15918.697c495e wants to fence (on) 'apache-up001.ring0' with device '(any)'
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Initiating remote operation on for apache-up001.ring0: e50d7e16-9578-4964-96a3-7b36bdcfba46 (0)
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Couldn't find anyone to fence (on) apache-up003.ring0 with any device
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Couldn't find anyone to fence (on) apache-up002.ring0 with any device
> May 18 10:37:03 apache-up001 stonith-ng[15914]: error: Operation on of apache-up003.ring0 by <no-one> for crmd.15918 at apache-up001.ring0.0599387e: No such device
> May 18 10:37:03 apache-up001 stonith-ng[15914]: error: Operation on of apache-up002.ring0 by <no-one> for crmd.15918 at apache-up001.ring0.76aba815: No such device
> May 18 10:37:03 apache-up001 stonith-ng[15914]: notice: Couldn't find anyone to fence (on) apache-up001.ring0 with any device
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Stonith operation 5/11:11:0:8248cebf-c198-4ff2-bd43-7415533ce50f: No such device (-19)
> May 18 10:37:03 apache-up001 stonith-ng[15914]: error: Operation on of apache-up001.ring0 by <no-one> for crmd.15918 at apache-up001.ring0.e50d7e16: No such device
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Stonith operation 5 for apache-up003.ring0 failed (No such device): aborting transition.
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
> May 18 10:37:03 apache-up001 crmd[15918]: error: Unfencing of apache-up003.ring0 by <anyone> failed: No such device (-19)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Stonith operation 6/8:11:0:8248cebf-c198-4ff2-bd43-7415533ce50f: No such device (-19)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Stonith operation 6 for apache-up002.ring0 failed (No such device): aborting transition.
> May 18 10:37:03 apache-up001 crmd[15918]: error: Unfencing of apache-up002.ring0 by <anyone> failed: No such device (-19)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Stonith operation 7/5:11:0:8248cebf-c198-4ff2-bd43-7415533ce50f: No such device (-19)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Stonith operation 7 for apache-up001.ring0 failed (No such device): aborting transition.
> May 18 10:37:03 apache-up001 crmd[15918]: error: Unfencing of apache-up001.ring0 by <anyone> failed: No such device (-19)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Initiating action 10: monitor scsia_monitor_0 on apache-up003.ring0
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Initiating action 7: monitor scsia_monitor_0 on apache-up002.ring0
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Initiating action 4: monitor scsia_monitor_0 on apache-up001.ring0 (local)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Operation scsia_monitor_0: not running (node=apache-up001.ring0, call=19, rc=7, cib-update=59, confirmed=true)
> May 18 10:37:03 apache-up001 crmd[15918]: notice: Transition 11 (Complete=10, Pending=0, Fired=0, Skipped=1, Incomplete=2, Source=/var/lib/pacemaker/pengine/pe-input-95.bz2): Stopped
> May 18 10:37:03 apache-up001 crmd[15918]: notice: No devices found in cluster to fence apache-up001.ring0, giving up
> May 18 10:37:03 apache-up001 crmd[15918]: notice: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
>
>
>
>
>
>
>
>> On 16 May 2016, at 16:22, Ken Gaillot <kgaillot at redhat.com> wrote:
>>
>> On 05/14/2016 08:54 AM, Marco A. Carcano wrote:
>>> I hope to find someone here who can help me:
>>>
>>> I have a 3-node cluster and I’m struggling to set up GFS2 shared storage. The weird thing is that although the cluster seems OK, I’m not able to get the fence_scsi stonith device managed, and this prevents CLVMD and GFS2 from starting.
>>>
>>> I’m using CentOS 7.1, with SELinux and the firewall disabled
>>>
>>> I created the stonith device with the following command
>>>
>>> pcs stonith create scsi fence_scsi pcmk_host_list="apache-up001.ring0 apache-up002.ring0 apache-up003.ring0 apache-up001.ring1 apache-up002.ring1 apache-up003.ring1"
>>> pcmk_reboot_action="off" devices="/dev/mapper/36001405973e201b3fdb4a999175b942f" meta provides="unfencing" --force
>>>
>>> Notice that this is a 3-node cluster with a redundant ring: the hosts with the .ring1 suffix are the same as the ones with the .ring0 suffix, but with a different IP address
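>>>
>>> (For reference, the redundant ring is declared per node in corosync.conf; the sketch below is illustrative rather than a copy of the real file, but it reflects the naming scheme and node IDs described above:)
>>>
>>> nodelist {
>>>     node {
>>>         ring0_addr: apache-up001.ring0
>>>         ring1_addr: apache-up001.ring1
>>>         nodeid: 1
>>>     }
>>>     node {
>>>         ring0_addr: apache-up002.ring0
>>>         ring1_addr: apache-up002.ring1
>>>         nodeid: 2
>>>     }
>>>     node {
>>>         ring0_addr: apache-up003.ring0
>>>         ring1_addr: apache-up003.ring1
>>>         nodeid: 3
>>>     }
>>> }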
>>
>> pcmk_host_list only needs the names of the nodes as specified in the
>> Pacemaker configuration. It allows the cluster to answer the question,
>> "What device can I use to fence this particular node?"
>>
>> Sometimes the fence device itself needs to identify the node by a
>> different name than the one used by Pacemaker. In that case, use
>> pcmk_host_map, which maps each cluster node name to a fence device node
>> name.
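>>
>> For example, something like the following (made-up names, just to show the
>> syntax of colon-separated pairs delimited by semicolons):
>>
>> pcmk_host_map="node1.cluster:node1.fencename;node2.cluster:node2.fencename"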
>>
>> The one thing your command is missing is an "op monitor". I'm guessing
>> that's why it required "--force" (which shouldn't be necessary) and why
>> the cluster is treating it as unmanaged.
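>>
>> Roughly (a sketch only; adjust the device path, and keep your other options
>> such as pcmk_reboot_action as they are), the command would become something
>> like:
>>
>> pcs stonith create scsi fence_scsi \
>>     pcmk_host_list="apache-up001.ring0 apache-up002.ring0 apache-up003.ring0" \
>>     devices="/dev/mapper/36001405973e201b3fdb4a999175b942f" \
>>     meta provides="unfencing" \
>>     op monitor interval=60s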
>>
>>> /dev/mapper/36001405973e201b3fdb4a999175b942f is a multipath device for /dev/sda and /dev/sdb
>>>
>>> in the log files everything seems right. However, pcs status reports the following:
>>>
>>> Cluster name: apache-0
>>> Last updated: Sat May 14 15:35:56 2016 Last change: Sat May 14 15:18:17 2016 by root via cibadmin on apache-up001.ring0
>>> Stack: corosync
>>> Current DC: apache-up003.ring0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
>>> 3 nodes and 7 resources configured
>>>
>>> Online: [ apache-up001.ring0 apache-up002.ring0 apache-up003.ring0 ]
>>>
>>> Full list of resources:
>>>
>>> scsi (stonith:fence_scsi): Stopped (unmanaged)
>>>
>>> PCSD Status:
>>> apache-up001.ring0: Online
>>> apache-up002.ring0: Online
>>> apache-up003.ring0: Online
>>>
>>> Daemon Status:
>>> corosync: active/enabled
>>> pacemaker: active/enabled
>>> pcsd: active/enabled
>>>
>>> However, SCSI fencing and the persistent reservation keys seem right:
>>>
>>> sg_persist -n -i -r -d /dev/mapper/36001405973e201b3fdb4a999175b942f
>>> PR generation=0x37, Reservation follows:
>>> Key=0x9b0e0000
>>> scope: LU_SCOPE, type: Write Exclusive, registrants only
>>>
>>> sg_persist -n -i -k -d /dev/mapper/36001405973e201b3fdb4a999175b942f
>>> PR generation=0x37, 6 registered reservation keys follow:
>>> 0x9b0e0000
>>> 0x9b0e0000
>>> 0x9b0e0001
>>> 0x9b0e0001
>>> 0x9b0e0002
>>> 0x9b0e0002
>>>
>>> if I manually fence the second node:
>>>
>>> pcs stonith fence apache-up002.ring0
>>>
>>> I get the expected result:
>>>
>>> sg_persist -n -i -k -d /dev/mapper/36001405973e201b3fdb4a999175b942f
>>> PR generation=0x38, 4 registered reservation keys follow:
>>> 0x9b0e0000
>>> 0x9b0e0000
>>> 0x9b0e0002
>>> 0x9b0e0002
>>>
>>> Cluster configuration seems OK
>>>
>>> crm_verify -L -V reports neither errors nor warnings,
>>>
>>> corosync-cfgtool -s
>>>
>>> Printing ring status.
>>> Local node ID 1
>>> RING ID 0
>>> id = 192.168.15.9
>>> status = ring 0 active with no faults
>>> RING ID 1
>>> id = 192.168.16.9
>>> status = ring 1 active with no faults
>>>
>>> corosync-quorumtool -s
>>>
>>> Quorum information
>>> ------------------
>>> Date: Sat May 14 15:42:38 2016
>>> Quorum provider: corosync_votequorum
>>> Nodes: 3
>>> Node ID: 1
>>> Ring ID: 820
>>> Quorate: Yes
>>>
>>> Votequorum information
>>> ----------------------
>>> Expected votes: 3
>>> Highest expected: 3
>>> Total votes: 3
>>> Quorum: 2
>>> Flags: Quorate
>>>
>>> Membership information
>>> ----------------------
>>> Nodeid Votes Name
>>> 3 1 apache-up003.ring0
>>> 2 1 apache-up002.ring0
>>> 1 1 apache-up001.ring0 (local)
>>>
>>>
>>> corosync-cmapctl | grep members
>>> runtime.totem.pg.mrp.srp.members.1.config_version (u64) = 0
>>> runtime.totem.pg.mrp.srp.members.1.ip (str) = r(0) ip(192.168.15.9) r(1) ip(192.168.16.9)
>>> runtime.totem.pg.mrp.srp.members.1.join_count (u32) = 1
>>> runtime.totem.pg.mrp.srp.members.1.status (str) = joined
>>> runtime.totem.pg.mrp.srp.members.2.config_version (u64) = 0
>>> runtime.totem.pg.mrp.srp.members.2.ip (str) = r(0) ip(192.168.15.8) r(1) ip(192.168.16.8)
>>> runtime.totem.pg.mrp.srp.members.2.join_count (u32) = 1
>>> runtime.totem.pg.mrp.srp.members.2.status (str) = joined
>>> runtime.totem.pg.mrp.srp.members.3.config_version (u64) = 0
>>> runtime.totem.pg.mrp.srp.members.3.ip (str) = r(0) ip(192.168.15.7) r(1) ip(192.168.16.7)
>>> runtime.totem.pg.mrp.srp.members.3.join_count (u32) = 1
>>> runtime.totem.pg.mrp.srp.members.3.status (str) = joined
>>>
>>> here are the logs at cluster start:
>>>
>>> pcs cluster start --all
>>> apache-up003.ring0: Starting Cluster...
>>> apache-up001.ring0: Starting Cluster...
>>> apache-up002.ring0: Starting Cluster...
>>>
>>>
>>> cat /var/log/messages
>>> May 14 15:46:59 apache-up001 systemd: Starting Corosync Cluster Engine...
>>> May 14 15:46:59 apache-up001 corosync[18934]: [MAIN ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
>>> May 14 15:46:59 apache-up001 corosync[18934]: [MAIN ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] Initializing transport (UDP/IP Unicast).
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] Initializing transport (UDP/IP Unicast).
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] The network interface [192.168.15.9] is now up.
>>> May 14 15:46:59 apache-up001 corosync[18935]: [SERV ] Service engine loaded: corosync configuration map access [0]
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QB ] server name: cmap
>>> May 14 15:46:59 apache-up001 corosync[18935]: [SERV ] Service engine loaded: corosync configuration service [1]
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QB ] server name: cfg
>>> May 14 15:46:59 apache-up001 corosync[18935]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QB ] server name: cpg
>>> May 14 15:46:59 apache-up001 corosync[18935]: [SERV ] Service engine loaded: corosync profile loading service [4]
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QUORUM] Using quorum provider corosync_votequorum
>>> May 14 15:46:59 apache-up001 corosync[18935]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QB ] server name: votequorum
>>> May 14 15:46:59 apache-up001 corosync[18935]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QB ] server name: quorum
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] adding new UDPU member {192.168.15.9}
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] adding new UDPU member {192.168.15.8}
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] adding new UDPU member {192.168.15.7}
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] The network interface [192.168.16.9] is now up.
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] adding new UDPU member {192.168.16.9}
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] adding new UDPU member {192.168.16.8}
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] adding new UDPU member {192.168.16.7}
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] A new membership (192.168.15.9:824) was formed. Members joined: 1
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QUORUM] Members[1]: 1
>>> May 14 15:46:59 apache-up001 corosync[18935]: [MAIN ] Completed service synchronization, ready to provide service.
>>> May 14 15:46:59 apache-up001 corosync[18935]: [TOTEM ] A new membership (192.168.15.7:836) was formed. Members joined: 3 2
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QUORUM] This node is within the primary component and will provide service.
>>> May 14 15:46:59 apache-up001 corosync[18935]: [QUORUM] Members[3]: 3 2 1
>>> May 14 15:46:59 apache-up001 corosync[18935]: [MAIN ] Completed service synchronization, ready to provide service.
>>> May 14 15:46:59 apache-up001 corosync: Starting Corosync Cluster Engine (corosync): [ OK ]
>>> May 14 15:46:59 apache-up001 systemd: Started Corosync Cluster Engine.
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: Additional logging available in /var/log/pacemaker.log
>>> May 14 15:46:59 apache-up001 systemd: Started Pacemaker High Availability Cluster Manager.
>>> May 14 15:46:59 apache-up001 systemd: Starting Pacemaker High Availability Cluster Manager...
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: Switching to /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: Additional logging available in /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: Configured corosync to accept connections from group 189: OK (1)
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: Starting Pacemaker 1.1.13-10.el7_2.2 (Build: 44eb2dd): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc upstart systemd nagios corosync-native atomic-attrd acls
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: Quorum acquired
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: pcmk_quorum_notification: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: pcmk_quorum_notification: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:46:59 apache-up001 pacemakerd[18950]: notice: pcmk_quorum_notification: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 apache-up001 attrd[18954]: notice: Additional logging available in /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 attrd[18954]: notice: Connecting to cluster infrastructure: corosync
>>> May 14 15:46:59 apache-up001 crmd[18956]: notice: Additional logging available in /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 crmd[18956]: notice: CRM Git Version: 1.1.13-10.el7_2.2 (44eb2dd)
>>> May 14 15:46:59 apache-up001 cib[18951]: notice: Additional logging available in /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 pengine[18955]: notice: Additional logging available in /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 lrmd[18953]: notice: Additional logging available in /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 stonith-ng[18952]: notice: Additional logging available in /var/log/cluster/corosync.log
>>> May 14 15:46:59 apache-up001 stonith-ng[18952]: notice: Connecting to cluster infrastructure: corosync
>>> May 14 15:46:59 apache-up001 cib[18951]: notice: Connecting to cluster infrastructure: corosync
>>> May 14 15:46:59 apache-up001 attrd[18954]: notice: crm_update_peer_proc: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 apache-up001 stonith-ng[18952]: notice: crm_update_peer_proc: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 apache-up001 cib[18951]: notice: crm_update_peer_proc: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 apache-up001 cib[18951]: notice: crm_update_peer_proc: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:46:59 apache-up001 cib[18951]: notice: crm_update_peer_proc: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: Connecting to cluster infrastructure: corosync
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: Quorum acquired
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: pcmk_quorum_notification: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: pcmk_quorum_notification: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: pcmk_quorum_notification: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: Notifications disabled
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: The local CRM is operational
>>> May 14 15:47:00 apache-up001 crmd[18956]: notice: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
>>> May 14 15:47:00 apache-up001 attrd[18954]: notice: crm_update_peer_proc: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:47:00 apache-up001 stonith-ng[18952]: notice: Watching for stonith topology changes
>>> May 14 15:47:00 apache-up001 attrd[18954]: notice: crm_update_peer_proc: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:47:00 apache-up001 stonith-ng[18952]: notice: crm_update_peer_proc: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:47:00 apache-up001 stonith-ng[18952]: notice: crm_update_peer_proc: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:47:01 apache-up001 stonith-ng[18952]: notice: Added 'scsi' to the device list (1 active devices)
>>> May 14 15:47:21 apache-up001 crmd[18956]: notice: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
>>> May 14 15:47:22 apache-up001 stonith-ng[18952]: notice: scsi can fence (on) apache-up001.ring0: static-list
>>> May 14 15:47:22 apache-up001 stonith-ng[18952]: notice: scsi can fence (on) apache-up001.ring0: static-list
>>> May 14 15:47:22 apache-up001 kernel: sda: unknown partition table
>>> May 14 15:47:22 apache-up001 kernel: sdb: unknown partition table
>>> May 14 15:47:22 apache-up001 stonith-ng[18952]: notice: Operation on of apache-up003.ring0 by apache-up003.ring0 for crmd.15120 at apache-up002.ring0.44c5a0b6: OK
>>> May 14 15:47:22 apache-up001 crmd[18956]: notice: apache-up003.ring0 was successfully unfenced by apache-up003.ring0 (at the request of apache-up002.ring0)
>>> May 14 15:47:22 apache-up001 stonith-ng[18952]: notice: Operation on of apache-up002.ring0 by apache-up002.ring0 for crmd.15120 at apache-up002.ring0.e4b17672: OK
>>> May 14 15:47:22 apache-up001 crmd[18956]: notice: apache-up002.ring0 was successfully unfenced by apache-up002.ring0 (at the request of apache-up002.ring0)
>>> May 14 15:47:23 apache-up001 stonith-ng[18952]: notice: Operation 'on' [19052] (call 4 from crmd.15120) for host 'apache-up001.ring0' with device 'scsi' returned: 0 (OK)
>>> May 14 15:47:23 apache-up001 stonith-ng[18952]: notice: Operation on of apache-up001.ring0 by apache-up001.ring0 for crmd.15120 at apache-up002.ring0.a682d19f: OK
>>> May 14 15:47:23 apache-up001 crmd[18956]: notice: apache-up001.ring0 was successfully unfenced by apache-up001.ring0 (at the request of apache-up002.ring0)
>>> May 14 15:47:23 apache-up001 systemd: Device dev-disk-by\x2did-scsi\x2d36001405973e201b3fdb4a999175b942f.device appeared twice with different sysfs paths /sys/devices/platform/host3/session2/target3:0:0/3:0:0:1/block/sda and /sys/devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sdb
>>> May 14 15:47:23 apache-up001 systemd: Device dev-disk-by\x2did-wwn\x2d0x6001405973e201b3fdb4a999175b942f.device appeared twice with different sysfs paths /sys/devices/platform/host3/session2/target3:0:0/3:0:0:1/block/sda and /sys/devices/platform/host2/session1/target2:0:0/2:0:0:1/block/sdb
>>> May 14 15:47:25 apache-up001 crmd[18956]: notice: Operation scsi_monitor_0: not running (node=apache-up001.ring0, call=5, rc=7, cib-update=12, confirmed=true)
>>>
>>>
>>>
>>>
>>>
>>>
>>> cat /var/log/cluster/corosync.log
>>> [18934] apache-up001.itc4u.local corosyncnotice [MAIN ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
>>> [18934] apache-up001.itc4u.local corosyncinfo [MAIN ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] Initializing transport (UDP/IP Unicast).
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] Initializing transport (UDP/IP Unicast).
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] The network interface [192.168.15.9] is now up.
>>> [18934] apache-up001.itc4u.local corosyncnotice [SERV ] Service engine loaded: corosync configuration map access [0]
>>> [18934] apache-up001.itc4u.local corosyncinfo [QB ] server name: cmap
>>> [18934] apache-up001.itc4u.local corosyncnotice [SERV ] Service engine loaded: corosync configuration service [1]
>>> [18934] apache-up001.itc4u.local corosyncinfo [QB ] server name: cfg
>>> [18934] apache-up001.itc4u.local corosyncnotice [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
>>> [18934] apache-up001.itc4u.local corosyncinfo [QB ] server name: cpg
>>> [18934] apache-up001.itc4u.local corosyncnotice [SERV ] Service engine loaded: corosync profile loading service [4]
>>> [18934] apache-up001.itc4u.local corosyncnotice [QUORUM] Using quorum provider corosync_votequorum
>>> [18934] apache-up001.itc4u.local corosyncnotice [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
>>> [18934] apache-up001.itc4u.local corosyncinfo [QB ] server name: votequorum
>>> [18934] apache-up001.itc4u.local corosyncnotice [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
>>> [18934] apache-up001.itc4u.local corosyncinfo [QB ] server name: quorum
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] adding new UDPU member {192.168.15.9}
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] adding new UDPU member {192.168.15.8}
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] adding new UDPU member {192.168.15.7}
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] The network interface [192.168.16.9] is now up.
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] adding new UDPU member {192.168.16.9}
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] adding new UDPU member {192.168.16.8}
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] adding new UDPU member {192.168.16.7}
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] A new membership (192.168.15.9:824) was formed. Members joined: 1
>>> [18934] apache-up001.itc4u.local corosyncnotice [QUORUM] Members[1]: 1
>>> [18934] apache-up001.itc4u.local corosyncnotice [MAIN ] Completed service synchronization, ready to provide service.
>>> [18934] apache-up001.itc4u.local corosyncnotice [TOTEM ] A new membership (192.168.15.7:836) was formed. Members joined: 3 2
>>> [18934] apache-up001.itc4u.local corosyncnotice [QUORUM] This node is within the primary component and will provide service.
>>> [18934] apache-up001.itc4u.local corosyncnotice [QUORUM] Members[3]: 3 2 1
>>> [18934] apache-up001.itc4u.local corosyncnotice [MAIN ] Completed service synchronization, ready to provide service.
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: notice: mcp_read_config: Configured corosync to accept connections from group 189: OK (1)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: notice: main: Starting Pacemaker 1.1.13-10.el7_2.2 (Build: 44eb2dd): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc upstart systemd nagios corosync-native atomic-attrd acls
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: main: Maximum core file size is: 18446744073709551615
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Created entry 2ce0a451-fca7-407d-82d6-cf16b2d9059e/0x1213720 for node apache-up001.ring0/1 (1 total)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Node 1 is now known as apache-up001.ring0
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Node 1 has uuid 1
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_update_peer_proc: cluster_connect_cpg: Node apache-up001.ring0[1] - corosync-cpg is now online
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: notice: cluster_connect_quorum: Quorum acquired
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Created entry 9fc4b33e-ee75-4ebb-ab2e-e7ead18e083d/0x1214b80 for node apache-up002.ring0/2 (2 total)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Node 2 is now known as apache-up002.ring0
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Node 2 has uuid 2
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Created entry 54d08100-982e-42c3-b364-57017d8c2f14/0x1215070 for node apache-up003.ring0/3 (3 total)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Node 3 is now known as apache-up003.ring0
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_get_peer: Node 3 has uuid 3
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Using uid=189 and group=189 for process cib
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Forked child 18951 for process cib
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Forked child 18952 for process stonith-ng
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Forked child 18953 for process lrmd
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Using uid=189 and group=189 for process attrd
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Forked child 18954 for process attrd
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Using uid=189 and group=189 for process pengine
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Forked child 18955 for process pengine
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Using uid=189 and group=189 for process crmd
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: start_child: Forked child 18956 for process crmd
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: main: Starting mainloop
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_quorum_notification: Membership 836: quorum retained (3)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: notice: crm_update_peer_state_iter: pcmk_quorum_notification: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: notice: crm_update_peer_state_iter: pcmk_quorum_notification: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: notice: crm_update_peer_state_iter: pcmk_quorum_notification: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_cpg_membership: Node 1 joined group pacemakerd (counter=0.0)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_cpg_membership: Node 1 still member of group pacemakerd (peer=apache-up001.ring0, counter=0.0)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_cpg_membership: Node 3 still member of group pacemakerd (peer=apache-up003.ring0, counter=0.1)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up003.ring0[3] - corosync-cpg is now online
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_cpg_membership: Node 2 joined group pacemakerd (counter=1.0)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_cpg_membership: Node 1 still member of group pacemakerd (peer=apache-up001.ring0, counter=1.0)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_cpg_membership: Node 2 still member of group pacemakerd (peer=apache-up002.ring0, counter=1.1)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up002.ring0[2] - corosync-cpg is now online
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: pcmk_cpg_membership: Node 3 still member of group pacemakerd (peer=apache-up003.ring0, counter=1.2)
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_cpg_deliver: Ignoring process list sent by peer for local node
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: crm_log_init: Changed active directory to /var/lib/pacemaker/cores/hacluster
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: main: Starting up
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: get_cluster_type: Verifying cluster type: 'corosync'
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: get_cluster_type: Assuming an active 'corosync' cluster
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
>>> May 14 15:46:59 [18956] apache-up001.itc4u.local crmd: info: crm_log_init: Changed active directory to /var/lib/pacemaker/cores/hacluster
>>> May 14 15:46:59 [18956] apache-up001.itc4u.local crmd: notice: main: CRM Git Version: 1.1.13-10.el7_2.2 (44eb2dd)
>>> May 14 15:46:59 [18956] apache-up001.itc4u.local crmd: info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
>>> May 14 15:46:59 [18956] apache-up001.itc4u.local crmd: info: get_cluster_type: Verifying cluster type: 'corosync'
>>> May 14 15:46:59 [18956] apache-up001.itc4u.local crmd: info: get_cluster_type: Assuming an active 'corosync' cluster
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_log_init: Changed active directory to /var/lib/pacemaker/cores/hacluster
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: get_cluster_type: Verifying cluster type: 'corosync'
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: get_cluster_type: Assuming an active 'corosync' cluster
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: retrieveCib: Reading cluster configuration file /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: validate_with_relaxng: Creating RNG parser context
>>> May 14 15:46:59 [18955] apache-up001.itc4u.local pengine: info: crm_log_init: Changed active directory to /var/lib/pacemaker/cores/hacluster
>>> May 14 15:46:59 [18955] apache-up001.itc4u.local pengine: info: qb_ipcs_us_publish: server name: pengine
>>> May 14 15:46:59 [18955] apache-up001.itc4u.local pengine: info: main: Starting pengine
>>> May 14 15:46:59 [18953] apache-up001.itc4u.local lrmd: info: crm_log_init: Changed active directory to /var/lib/pacemaker/cores/root
>>> May 14 15:46:59 [18953] apache-up001.itc4u.local lrmd: info: qb_ipcs_us_publish: server name: lrmd
>>> May 14 15:46:59 [18953] apache-up001.itc4u.local lrmd: info: main: Starting
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: crm_log_init: Changed active directory to /var/lib/pacemaker/cores/root
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: get_cluster_type: Verifying cluster type: 'corosync'
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: get_cluster_type: Assuming an active 'corosync' cluster
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Created entry 2e6a7f6f-877e-4eba-93dc-7e2f13a48c31/0x8b1cd0 for node apache-up001.ring0/1 (1 total)
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Node 1 is now known as apache-up001.ring0
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: startCib: CIB Initialization completed successfully
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Node 1 has uuid 1
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: crm_update_peer_proc: cluster_connect_cpg: Node apache-up001.ring0[1] - corosync-cpg is now online
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: init_cs_connection_once: Connection to 'corosync': established
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Created entry ed676779-f16b-4ebe-8bf2-a80c08001e4b/0x22e71d0 for node apache-up001.ring0/1 (1 total)
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Node 1 is now known as apache-up001.ring0
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: main: Cluster connection active
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: qb_ipcs_us_publish: server name: attrd
>>> May 14 15:46:59 [18954] apache-up001.itc4u.local attrd: info: main: Accepting attribute updates
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Created entry 5ca16226-9bac-40aa-910f-b1825e1f505b/0x1828af0 for node apache-up001.ring0/1 (1 total)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Node 1 is now known as apache-up001.ring0
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Node 1 has uuid 1
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: crm_update_peer_proc: cluster_connect_cpg: Node apache-up001.ring0[1] - corosync-cpg is now online
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 [18952] apache-up001.itc4u.local stonith-ng: info: init_cs_connection_once: Connection to 'corosync': established
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Node 1 has uuid 1
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_update_peer_proc: cluster_connect_cpg: Node apache-up001.ring0[1] - corosync-cpg is now online
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: init_cs_connection_once: Connection to 'corosync': established
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: qb_ipcs_us_publish: server name: cib_ro
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: qb_ipcs_us_publish: server name: cib_rw
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: qb_ipcs_us_publish: server name: cib_shm
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: cib_init: Starting cib mainloop
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: pcmk_cpg_membership: Node 1 joined group cib (counter=0.0)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: pcmk_cpg_membership: Node 1 still member of group cib (peer=apache-up001.ring0, counter=0.0)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Created entry 32607e56-5e7f-42d6-91c9-3d9ee2fa152f/0x182b820 for node apache-up003.ring0/3 (2 total)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Node 3 is now known as apache-up003.ring0
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Node 3 has uuid 3
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: pcmk_cpg_membership: Node 3 still member of group cib (peer=apache-up003.ring0, counter=0.1)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up003.ring0[3] - corosync-cpg is now online
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: pcmk_cpg_membership: Node 2 joined group cib (counter=1.0)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: pcmk_cpg_membership: Node 1 still member of group cib (peer=apache-up001.ring0, counter=1.0)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Created entry 9d8bed61-2324-42b4-8010-5b2736c21534/0x182b910 for node apache-up002.ring0/2 (3 total)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Node 2 is now known as apache-up002.ring0
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_get_peer: Node 2 has uuid 2
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: pcmk_cpg_membership: Node 2 still member of group cib (peer=apache-up002.ring0, counter=1.1)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up002.ring0[2] - corosync-cpg is now online
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: pcmk_cpg_membership: Node 3 still member of group cib (peer=apache-up003.ring0, counter=1.2)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: cib_file_backup: Archived previous version as /var/lib/pacemaker/cib/cib-69.raw
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: cib_file_write_with_digest: Wrote version 0.98.0 of the CIB to disk (digest: 262eb42d23bff917f27a0914467d7218)
>>> May 14 15:46:59 [18951] apache-up001.itc4u.local cib: info: cib_file_write_with_digest: Reading cluster configuration file /var/lib/pacemaker/cib/cib.8fxTts (digest: /var/lib/pacemaker/cib/cib.Ls9hSP)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: do_cib_control: CIB connection established
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Created entry 0e0a8dc1-17df-42f7-83f7-55fbee944173/0x24e2c20 for node apache-up001.ring0/1 (1 total)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Node 1 is now known as apache-up001.ring0
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: apache-up001.ring0 is now in unknown state
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Node 1 has uuid 1
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_update_peer_proc: cluster_connect_cpg: Node apache-up001.ring0[1] - corosync-cpg is now online
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: Client apache-up001.ring0/peer now has status [online] (DC=<null>, changed=4000000)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: init_cs_connection_once: Connection to 'corosync': established
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: cluster_connect_quorum: Quorum acquired
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Created entry 5476fb11-1e94-4908-92e5-d27a3e5a29b2/0x24e5130 for node apache-up002.ring0/2 (2 total)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Node 2 is now known as apache-up002.ring0
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: apache-up002.ring0 is now in unknown state
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Node 2 has uuid 2
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Created entry c50be947-f189-4c64-a7a2-523593eafac8/0x24e5390 for node apache-up003.ring0/3 (3 total)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Node 3 is now known as apache-up003.ring0
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: apache-up003.ring0 is now in unknown state
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: crm_get_peer: Node 3 has uuid 3
>>> May 14 15:47:00 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=apache-up003.ring0/crmd/6, version=0.98.0)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: do_ha_control: Connected to the cluster
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: lrmd_ipc_connect: Connecting to lrmd
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: do_lrm_control: LRM connection established
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: do_started: Delaying start, no membership data (0000000000100000)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: do_started: Delaying start, no membership data (0000000000100000)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: pcmk_quorum_notification: Membership 836: quorum retained (3)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: crm_update_peer_state_iter: pcmk_quorum_notification: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: apache-up003.ring0 is now member (was in unknown state)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: crm_update_peer_state_iter: pcmk_quorum_notification: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: apache-up002.ring0 is now member (was in unknown state)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: crm_update_peer_state_iter: pcmk_quorum_notification: Node apache-up001.ring0[1] - state is now member (was (null))
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: apache-up001.ring0 is now member (was in unknown state)
>>> May 14 15:47:00 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section nodes to master (origin=local/crmd/6)
>>> May 14 15:47:00 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=apache-up001.ring0/crmd/6, version=0.98.0)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_connect: Connected to the CIB after 2 attempts
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: main: CIB connection active
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: pcmk_cpg_membership: Node 1 joined group attrd (counter=0.0)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: pcmk_cpg_membership: Node 1 still member of group attrd (peer=apache-up001.ring0, counter=0.0)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: do_started: Delaying start, Config not read (0000000000000040)
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: crmd_enable_notifications: Notifications disabled
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: qb_ipcs_us_publish: server name: crmd
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: do_started: The local CRM is operational
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
>>> May 14 15:47:00 [18956] apache-up001.itc4u.local crmd: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Created entry eb16028a-9e18-4b07-b9bf-d29dc04177bd/0x8b4450 for node apache-up003.ring0/3 (2 total)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Node 3 is now known as apache-up003.ring0
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Node 3 has uuid 3
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: pcmk_cpg_membership: Node 3 still member of group attrd (peer=apache-up003.ring0, counter=0.1)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up003.ring0[3] - corosync-cpg is now online
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: pcmk_cpg_membership: Node 2 joined group attrd (counter=1.0)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: pcmk_cpg_membership: Node 1 still member of group attrd (peer=apache-up001.ring0, counter=1.0)
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: notice: setup_cib: Watching for stonith topology changes
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: qb_ipcs_us_publish: server name: stonith-ng
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: main: Starting stonith-ng mainloop
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: pcmk_cpg_membership: Node 1 joined group stonith-ng (counter=0.0)
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: pcmk_cpg_membership: Node 1 still member of group stonith-ng (peer=apache-up001.ring0, counter=0.0)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Created entry 8e6e690d-b0f3-4894-8ae2-663543a34c55/0x8b4ec0 for node apache-up002.ring0/2 (3 total)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Node 2 is now known as apache-up002.ring0
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_get_peer: Node 2 has uuid 2
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: pcmk_cpg_membership: Node 2 still member of group attrd (peer=apache-up002.ring0, counter=1.1)
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up002.ring0[2] - corosync-cpg is now online
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:47:00 [18954] apache-up001.itc4u.local attrd: info: pcmk_cpg_membership: Node 3 still member of group attrd (peer=apache-up003.ring0, counter=1.2)
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Created entry 51a26c10-f43f-4b9b-a7a5-71049ebacdf0/0x22e88c0 for node apache-up002.ring0/2 (2 total)
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Node 2 is now known as apache-up002.ring0
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Node 2 has uuid 2
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: pcmk_cpg_membership: Node 2 still member of group stonith-ng (peer=apache-up002.ring0, counter=0.1)
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up002.ring0[2] - corosync-cpg is now online
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up002.ring0[2] - state is now member (was (null))
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Created entry 9fd39304-24fa-48bd-899e-8d33c3994ecf/0x22e8a10 for node apache-up003.ring0/3 (3 total)
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Node 3 is now known as apache-up003.ring0
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_get_peer: Node 3 has uuid 3
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: pcmk_cpg_membership: Node 3 still member of group stonith-ng (peer=apache-up003.ring0, counter=0.2)
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up003.ring0[3] - corosync-cpg is now online
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: notice: crm_update_peer_state_iter: crm_update_peer_proc: Node apache-up003.ring0[3] - state is now member (was (null))
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: init_cib_cache_cb: Updating device list from the cib: init
>>> May 14 15:47:00 [18952] apache-up001.itc4u.local stonith-ng: info: cib_devices_update: Updating devices to version 0.98.0
>>> May 14 15:47:00 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=apache-up002.ring0/crmd/6, version=0.98.0)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: pcmk_cpg_membership: Node 1 joined group crmd (counter=0.0)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: pcmk_cpg_membership: Node 1 still member of group crmd (peer=apache-up001.ring0, counter=0.0)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: pcmk_cpg_membership: Node 3 still member of group crmd (peer=apache-up003.ring0, counter=0.1)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up003.ring0[3] - corosync-cpg is now online
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: Client apache-up003.ring0/peer now has status [online] (DC=<null>, changed=4000000)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: pcmk_cpg_membership: Node 2 joined group crmd (counter=1.0)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: pcmk_cpg_membership: Node 1 still member of group crmd (peer=apache-up001.ring0, counter=1.0)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: pcmk_cpg_membership: Node 2 still member of group crmd (peer=apache-up002.ring0, counter=1.1)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node apache-up002.ring0[2] - corosync-cpg is now online
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: peer_update_callback: Client apache-up002.ring0/peer now has status [online] (DC=<null>, changed=4000000)
>>> May 14 15:47:01 [18956] apache-up001.itc4u.local crmd: info: pcmk_cpg_membership: Node 3 still member of group crmd (peer=apache-up003.ring0, counter=1.2)
>>> May 14 15:47:01 [18952] apache-up001.itc4u.local stonith-ng: info: build_device_from_xml: The fencing device 'scsi' requires unfencing
>>> May 14 15:47:01 [18952] apache-up001.itc4u.local stonith-ng: info: build_device_from_xml: The fencing device 'scsi' requires actions (on) to be executed on the target node
>>> May 14 15:47:01 [18952] apache-up001.itc4u.local stonith-ng: notice: stonith_device_register: Added 'scsi' to the device list (1 active devices)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: election_count_vote: Election 1 (owner: 3) lost: vote from apache-up003.ring0 (Uptime)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: election_count_vote: Election 1 (owner: 2) lost: vote from apache-up002.ring0 (Uptime)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=apache-up002.ring0/crmd/10, version=0.98.0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=apache-up002.ring0/crmd/12, version=0.98.0)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: update_dc: Set DC to apache-up002.ring0 (3.0.10)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: crm_update_peer_expected: update_dc: Node apache-up002.ring0[2] - expected state is now member (was (null))
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=apache-up002.ring0/crmd/14, version=0.98.0)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: election_count_vote: Election 2 (owner: 2) lost: vote from apache-up002.ring0 (Uptime)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: update_dc: Unset DC. Was apache-up002.ring0
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: do_log: FSA: Input I_PENDING from do_election_count_vote() received in state S_PENDING
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: update_dc: Set DC to apache-up002.ring0 (3.0.10)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=apache-up002.ring0/crmd/16, version=0.98.0)
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='apache-up001.ring0']/transient_attributes
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: update_attrd_helper: Connecting to attribute manager ... 5 retries remaining
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_replace: Digest matched on replace from apache-up002.ring0: 10bfa46e2d338e958e6864a0b202f034
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_replace: Replaced 0.98.0 with 0.98.0 from apache-up002.ring0
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_replace operation for section 'all': OK (rc=0, origin=apache-up002.ring0/crmd/20, version=0.98.0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_delete operation for section //node_state[@uname='apache-up001.ring0']/transient_attributes to master (origin=local/crmd/11)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='apache-up001.ring0']/transient_attributes: OK (rc=0, origin=apache-up001.ring0/crmd/11, version=0.98.0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='apache-up003.ring0']/transient_attributes: OK (rc=0, origin=apache-up003.ring0/crmd/12, version=0.98.0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_client_update: Starting an election to determine the writer
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: info: do_log: FSA: Input I_NOT_DC from do_cl_join_finalize_respond() received in state S_PENDING
>>> May 14 15:47:21 [18956] apache-up001.itc4u.local crmd: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_file_backup: Archived previous version as /var/lib/pacemaker/cib/cib-70.raw
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=apache-up002.ring0/crmd/21, version=0.98.0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=apache-up002.ring0/crmd/22, version=0.98.0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=apache-up002.ring0/crmd/23, version=0.98.0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: election_count_vote: Election 1 (owner: 2) pass: vote from apache-up002.ring0 (Uptime)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_peer_update: Setting shutdown[apache-up002.ring0]: (null) -> 0 from apache-up002.ring0
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: election_count_vote: Election 1 (owner: 3) pass: vote from apache-up003.ring0 (Uptime)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_peer_update: Setting shutdown[apache-up003.ring0]: (null) -> 0 from apache-up003.ring0
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: election_count_vote: Election 2 (owner: 3) pass: vote from apache-up003.ring0 (Uptime)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_client_refresh: Updating all attributes
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='apache-up002.ring0']/transient_attributes: OK (rc=0, origin=apache-up002.ring0/crmd/24, version=0.98.0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Sent update 2 with 2 changes for shutdown, id=<n/a>, set=(null)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Sent update 3 with 2 changes for terminate, id=<n/a>, set=(null)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_peer_update: Setting shutdown[apache-up001.ring0]: (null) -> 0 from apache-up001.ring0
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_file_write_with_digest: Wrote version 0.98.0 of the CIB to disk (digest: 088b40b257e579e23dcbd0047454c8a9)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='apache-up002.ring0']/lrm: OK (rc=0, origin=apache-up002.ring0/crmd/25, version=0.98.0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.0 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.1 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=1
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status: <node_state id="2" uname="apache-up002.ring0" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm id="2">
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm_resources/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </lrm>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </node_state>
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: election_complete: Election election-attrd complete
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Write out of 'shutdown' delayed: update 2 in progress
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Write out of 'terminate' delayed: update 3 in progress
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_file_write_with_digest: Reading cluster configuration file /var/lib/pacemaker/cib/cib.JoSONl (digest: /var/lib/pacemaker/cib/cib.tSlLmC)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up002.ring0/crmd/26, version=0.98.1)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='apache-up001.ring0']/lrm: OK (rc=0, origin=apache-up002.ring0/crmd/27, version=0.98.1)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.1 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.2 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status: <node_state id="1" uname="apache-up001.ring0" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm id="1">
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm_resources/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </lrm>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </node_state>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up002.ring0/crmd/28, version=0.98.2)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='apache-up003.ring0']/lrm: OK (rc=0, origin=apache-up002.ring0/crmd/29, version=0.98.2)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.2 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.3 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=3
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status: <node_state id="3" uname="apache-up003.ring0" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm id="3">
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm_resources/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </lrm>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </node_state>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up002.ring0/crmd/30, version=0.98.3)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/2)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/3)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.3 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.4 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=4
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='2']: <transient_attributes id="2"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <instance_attributes id="status-2">
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <nvpair id="status-2-shutdown" name="shutdown" value="0"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </instance_attributes>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </transient_attributes>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']: <transient_attributes id="1"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <instance_attributes id="status-1">
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <nvpair id="status-1-shutdown" name="shutdown" value="0"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </instance_attributes>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </transient_attributes>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3']: <transient_attributes id="3"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <instance_attributes id="status-3">
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <nvpair id="status-3-shutdown" name="shutdown" value="0"/>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </instance_attributes>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </transient_attributes>
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up002.ring0/attrd/2, version=0.98.4)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.4 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.5 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=5
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up002.ring0/attrd/3, version=0.98.5)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up001.ring0/attrd/2, version=0.98.5)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 2 for shutdown: OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 2 for shutdown[apache-up001.ring0]=(null): OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 2 for shutdown[apache-up002.ring0]=0: OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 2 for shutdown[apache-up003.ring0]=0: OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Sent update 4 with 3 changes for shutdown, id=<n/a>, set=(null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.5 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.6 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=6
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up001.ring0/attrd/3, version=0.98.6)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 3 for terminate: OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 3 for terminate[apache-up001.ring0]=(null): OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 3 for terminate[apache-up002.ring0]=(null): OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 3 for terminate[apache-up003.ring0]=(null): OK (0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=apache-up002.ring0/crmd/34, version=0.98.6)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.6 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.7 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=7
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib/status/node_state[@id='2']: @crm-debug-origin=do_state_transition
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib/status/node_state[@id='1']: @crm-debug-origin=do_state_transition
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib/status/node_state[@id='3']: @crm-debug-origin=do_state_transition
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up002.ring0/crmd/35, version=0.98.7)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/4)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up001.ring0/attrd/4, version=0.98.7)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 4 for shutdown: OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 4 for shutdown[apache-up001.ring0]=0: OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 4 for shutdown[apache-up002.ring0]=0: OK (0)
>>> May 14 15:47:21 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 4 for shutdown[apache-up003.ring0]=0: OK (0)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.7 2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.8 (null)
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=8, @dc-uuid=2
>>> May 14 15:47:21 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=apache-up002.ring0/crmd/36, version=0.98.8)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_peer_update: Setting probe_complete[apache-up003.ring0]: (null) -> true from apache-up003.ring0
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Sent update 5 with 1 changes for probe_complete, id=<n/a>, set=(null)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_peer_update: Setting probe_complete[apache-up002.ring0]: (null) -> true from apache-up002.ring0
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Write out of 'probe_complete' delayed: update 5 in progress
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/5)
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.8 2
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.9 (null)
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=9
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3']/transient_attributes[@id='3']/instance_attributes[@id='status-3']: <nvpair id="status-3-probe_complete" name="probe_complete" value="true"/>
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up001.ring0/attrd/5, version=0.98.9)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 5 for probe_complete: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 5 for probe_complete[apache-up002.ring0]=(null): OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 5 for probe_complete[apache-up003.ring0]=true: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Sent update 6 with 2 changes for probe_complete, id=<n/a>, set=(null)
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/6)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_peer_update: Setting probe_complete[apache-up001.ring0]: (null) -> true from apache-up001.ring0
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Write out of 'probe_complete' delayed: update 6 in progress
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.9 2
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.10 (null)
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=10
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2']: <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/>
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up001.ring0/attrd/6, version=0.98.10)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 6 for probe_complete: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 6 for probe_complete[apache-up001.ring0]=(null): OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 6 for probe_complete[apache-up002.ring0]=true: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 6 for probe_complete[apache-up003.ring0]=true: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: write_attribute: Sent update 7 with 3 changes for probe_complete, id=<n/a>, set=(null)
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/attrd/7)
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.10 2
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.11 (null)
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=11
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1']: <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/>
>>> May 14 15:47:22 [18952] apache-up001.itc4u.local stonith-ng: notice: can_fence_host_with_device: scsi can fence (on) apache-up001.ring0: static-list
>>> May 14 15:47:22 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up001.ring0/attrd/7, version=0.98.11)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 7 for probe_complete: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 7 for probe_complete[apache-up001.ring0]=true: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 7 for probe_complete[apache-up002.ring0]=true: OK (0)
>>> May 14 15:47:22 [18954] apache-up001.itc4u.local attrd: info: attrd_cib_callback: Update 7 for probe_complete[apache-up003.ring0]=true: OK (0)
>>> May 14 15:47:22 [18952] apache-up001.itc4u.local stonith-ng: notice: can_fence_host_with_device: scsi can fence (on) apache-up001.ring0: static-list
>>> May 14 15:47:22 [18952] apache-up001.itc4u.local stonith-ng: info: stonith_fence_get_devices_cb: Found 1 matching devices for 'apache-up001.ring0'
>>> May 14 15:47:22 [18952] apache-up001.itc4u.local stonith-ng: notice: remote_op_done: Operation on of apache-up003.ring0 by apache-up003.ring0 for crmd.15120 at apache-up002.ring0.44c5a0b6: OK
>>> May 14 15:47:22 [18956] apache-up001.itc4u.local crmd: notice: tengine_stonith_notify: apache-up003.ring0 was successfully unfenced by apache-up003.ring0 (at the request of apache-up002.ring0)
>>> May 14 15:47:22 [18952] apache-up001.itc4u.local stonith-ng: notice: remote_op_done: Operation on of apache-up002.ring0 by apache-up002.ring0 for crmd.15120 at apache-up002.ring0.e4b17672: OK
>>> May 14 15:47:22 [18956] apache-up001.itc4u.local crmd: notice: tengine_stonith_notify: apache-up002.ring0 was successfully unfenced by apache-up002.ring0 (at the request of apache-up002.ring0)
>>> May 14 15:47:22 [18952] apache-up001.itc4u.local stonith-ng: notice: log_operation: Operation 'on' [19052] (call 4 from crmd.15120) for host 'apache-up001.ring0' with device 'scsi' returned: 0 (OK)
>>> May 14 15:47:23 [18952] apache-up001.itc4u.local stonith-ng: notice: remote_op_done: Operation on of apache-up001.ring0 by apache-up001.ring0 for crmd.15120 at apache-up002.ring0.a682d19f: OK
>>> May 14 15:47:23 [18956] apache-up001.itc4u.local crmd: notice: tengine_stonith_notify: apache-up001.ring0 was successfully unfenced by apache-up001.ring0 (at the request of apache-up002.ring0)
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.11 2
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.12 (null)
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=12
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib/status/node_state[@id='2']: @crm-debug-origin=do_update_resource
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='2']/lrm[@id='2']/lrm_resources: <lrm_resource id="scsi" type="fence_scsi" class="stonith"/>
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm_rsc_op id="scsi_last_0" operation_key="scsi_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="6:0:7:6b7d5189-b033-453b-b1a3-a851c1bd46c2" transition-magic="0:7;6:0:7:6b7d5189-b033-453b-b1a3-a851c1bd46c2" on_node="apache-up002.ring0" call-id="5" rc-code="7" op-status="0" interval="0" last-run=
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </lrm_resource>
>>> May 14 15:47:24 [18953] apache-up001.itc4u.local lrmd: info: process_lrmd_get_rsc_info: Resource 'scsi' not found (0 active resources)
>>> May 14 15:47:24 [18953] apache-up001.itc4u.local lrmd: info: process_lrmd_rsc_register: Added 'scsi' to the rsc list (1 active resources)
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up002.ring0/crmd/39, version=0.98.12)
>>> May 14 15:47:24 [18956] apache-up001.itc4u.local crmd: info: do_lrm_rsc_op: Performing key=3:0:7:6b7d5189-b033-453b-b1a3-a851c1bd46c2 op=scsi_monitor_0
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.12 2
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.13 (null)
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=13
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib/status/node_state[@id='3']: @crm-debug-origin=do_update_resource
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='3']/lrm[@id='3']/lrm_resources: <lrm_resource id="scsi" type="fence_scsi" class="stonith"/>
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm_rsc_op id="scsi_last_0" operation_key="scsi_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="9:0:7:6b7d5189-b033-453b-b1a3-a851c1bd46c2" transition-magic="0:7;9:0:7:6b7d5189-b033-453b-b1a3-a851c1bd46c2" on_node="apache-up003.ring0" call-id="5" rc-code="7" op-status="0" interval="0" last-run=
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </lrm_resource>
>>> May 14 15:47:24 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up003.ring0/crmd/13, version=0.98.13)
>>> May 14 15:47:25 [18956] apache-up001.itc4u.local crmd: notice: process_lrm_event: Operation scsi_monitor_0: not running (node=apache-up001.ring0, call=5, rc=7, cib-update=12, confirmed=true)
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Forwarding cib_modify operation for section status to master (origin=local/crmd/12)
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: --- 0.98.13 2
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: Diff: +++ 0.98.14 (null)
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib: @num_updates=14
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: + /cib/status/node_state[@id='1']: @crm-debug-origin=do_update_resource
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ /cib/status/node_state[@id='1']/lrm[@id='1']/lrm_resources: <lrm_resource id="scsi" type="fence_scsi" class="stonith"/>
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ <lrm_rsc_op id="scsi_last_0" operation_key="scsi_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="3:0:7:6b7d5189-b033-453b-b1a3-a851c1bd46c2" transition-magic="0:7;3:0:7:6b7d5189-b033-453b-b1a3-a851c1bd46c2" on_node="apache-up001.ring0" call-id="5" rc-code="7" op-status="0" interval="0" last-run=
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_perform_op: ++ </lrm_resource>
>>> May 14 15:47:25 [18951] apache-up001.itc4u.local cib: info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=apache-up001.ring0/crmd/12, version=0.98.14)
>>> May 14 15:47:30 [18951] apache-up001.itc4u.local cib: info: cib_process_ping: Reporting our current digest to apache-up002.ring0: e1c4fabedccaa4621f5d737327d9a8d5 for 0.98.14 (0x18c3300 0)
>>> May 14 15:47:30 [18956] apache-up001.itc4u.local crmd: info: throttle_send_command: New throttle mode: 0000 (was ffffffff)
>>>
>>>
>>>
>>>
>>> cat /var/log/pacemaker.log
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: crm_log_init: Changed active directory to /var/lib/pacemaker/cores/root
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: get_cluster_type: Detected an active 'corosync' cluster
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: info: mcp_read_config: Reading configure for stack: corosync
>>> May 14 15:46:59 [18950] apache-up001.itc4u.local pacemakerd: notice: crm_add_logfile: Switching to /var/log/cluster/corosync.log
>>>
>>> Can anyone help me please? This is really driving me crazy
>>>
>>> Kind regards
>>>
>>> Marco
>>>
>>>
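The log above shows the 'on' (unfence) operations completing on all three nodes (the remote_op_done ... OK lines), so at that point the SCSI-3 registrations should actually exist on the shared LUN. A quick way to confirm, assuming sg3_utils is installed and using the multipath device configured in the stonith resource (shown here as a placeholder), is to read the registered keys straight from the device:

    sg_persist --in --read-keys --device=/dev/mapper/<wwid>

Each unfenced node registers its own key (fence_scsi derives a per-node key), so three keys should be listed; an empty key list after a successful 'on' would suggest the agent is operating on a different device than expected.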
>
>
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
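
When unfencing fails from within Pacemaker, it can also help to take the cluster out of the loop and exercise the agent by hand. A minimal sketch, with placeholder node name and device path (exact option letters can vary between fence-agents releases):

    # confirm the agent itself responds
    fence_scsi -o metadata

    # check the registration/reservation state directly against the shared LUN
    fence_scsi -o status -n <nodename> -d /dev/mapper/<wwid>

    # or let stonith-ng do it and list what it has registered
    stonith_admin --list-registered
    stonith_admin --unfence <nodename>

If the manual calls succeed but stonith-ng still cannot find a device willing to run the 'on' action for a node, a common cause is a mismatch between the names given in pcmk_host_list/pcmk_host_map and the node names the cluster itself uses.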