Added an appropriate subject line (was blank). Thanks...

Scott Greenlese ... IBM z/BX Solutions Test, Poughkeepsie, N.Y.
INTERNET: swgreenl@us.ibm.com
PHONE: 8/293-7301 (845-433-7301)  M/S: POK 42HA/P966

----- Forwarded by Scott Greenlese/Poughkeepsie/IBM on 08/30/2016 03:59 PM -----

From:    Scott Greenlese/Poughkeepsie/IBM@IBMUS
To:      Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Date:    08/29/2016 06:36 PM
Subject: [ClusterLabs] (no subject)

Hi folks,

I'm assigned to system test Pacemaker/Corosync on the KVM on System z platform,
with pacemaker-1.1.13-10 and corosync-2.3.4-7.

I have a cluster with 5 KVM hosts and a total of 200 ocf:heartbeat:VirtualDomain
resources defined to run across the 5 cluster nodes (symmetric-cluster=true for
this cluster).

The heartbeat network communicates over vlan1293, which hangs off network
device 0230.

In general, Pacemaker does a good job of distributing my virtual guest
resources evenly across the hypervisors in the cluster. These resources are a
mixed bag:

- "opaque" guests and remote "guest nodes" managed by the cluster
- allow-migrate=false and allow-migrate=true
- qcow2 (file-based) guests and LUN-based guests
- SLES and Ubuntu OS
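For context, each guest is defined to the cluster along these lines (a sketch
only; the XML path and option values here are illustrative, not my exact
configuration):

[root@zs95kj ]# pcs resource create zs95kjg110079_res VirtualDomain \
                    hypervisor="qemu:///system" \
                    config="/guestxml/zs95kjg110079.xml" \
                    migration_transport=ssh \
                    meta allow-migrate=true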
[root@zs95kj ]# pcs status |less
Cluster name: test_cluster_2
Last updated: Mon Aug 29 17:02:08 2016    Last change: Mon Aug 29 16:37:31 2016 by root via crm_resource on zs93kjpcs1
Stack: corosync
Current DC: zs95kjpcs1 (version 1.1.13-10.el7_2.ibm.1-44eb2dd) - partition with quorum
103 nodes and 300 resources configured

Node zs90kppcs1: standby
Online: [ zs93KLpcs1 zs93kjpcs1 zs95KLpcs1 zs95kjpcs1 ]

This morning, our system admin team performed a "non-disruptive" (concurrent)
microcode load on the OSA, which (to our surprise) dropped the network
connection on the S93 CEC for 13 seconds, from 11:18:34 to 11:18:47, to be
exact. This temporary outage caused the two cluster nodes on S93 (zs93kjpcs1
and zs93KLpcs1) to drop out of the cluster, as expected.

However, Pacemaker didn't handle this well. The end result was numerous
VirtualDomain resources in FAILED state:

[root@zs95kj log]# date;pcs status |grep VirtualD |grep zs93 |grep FAILED
Mon Aug 29 12:33:32 EDT 2016
zs95kjg110104_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110092_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93KLpcs1
zs95kjg110099_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110102_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110106_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93KLpcs1
zs95kjg110112_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110115_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110118_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93KLpcs1
zs95kjg110124_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110127_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110130_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93KLpcs1
zs95kjg110136_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110139_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110142_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93KLpcs1
zs95kjg110148_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110152_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110155_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110161_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110164_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110167_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110173_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110176_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110179_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg110185_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1
zs95kjg109106_res   (ocf::heartbeat:VirtualDomain):   FAILED zs93kjpcs1

As well as several VirtualDomain resources showing "Started" on two cluster
nodes:

zs95kjg110079_res   (ocf::heartbeat:VirtualDomain):   Started[ zs93kjpcs1 zs93KLpcs1 ]
zs95kjg110108_res   (ocf::heartbeat:VirtualDomain):   Started[ zs93kjpcs1 zs93KLpcs1 ]
zs95kjg110186_res   (ocf::heartbeat:VirtualDomain):   Started[ zs93kjpcs1 zs93KLpcs1 ]
zs95kjg110188_res   (ocf::heartbeat:VirtualDomain):   Started[ zs93kjpcs1 zs93KLpcs1 ]
zs95kjg110198_res   (ocf::heartbeat:VirtualDomain):   Started[ zs93kjpcs1 zs93KLpcs1 ]

The virtual machines themselves were, in fact, "running" on both hosts. For
example:

[root@zs93kl ~]# virsh list |grep zs95kjg110079
 70    zs95kjg110079    running

[root@zs93kj cli]# virsh list |grep zs95kjg110079
 18    zs95kjg110079    running

On this particular VM, there was file corruption of the file-based qcow2
guest's image, such that you could not ping or ssh to it, and if you opened a
virsh console you got an "initramfs" prompt.

To recover, we had to mount the volume on another VM and then run fsck to
repair it.
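In case it is useful to anyone, the image repair was along these lines. We
attached the corrupted volume to another (rescue) VM and ran fsck there, but
the same can be done directly on a host with qemu-nbd. A sketch, with an
illustrative image path and partition number, and only safe with the guest
shut down everywhere:

[root@zs93kl ~]# modprobe nbd max_part=8
[root@zs93kl ~]# qemu-nbd --connect=/dev/nbd0 /guestimg/zs95kjg110079.qcow2
[root@zs93kl ~]# fsck -y /dev/nbd0p1
[root@zs93kl ~]# qemu-nbd --disconnect /dev/nbd0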
I walked through the system log on the two S93 hosts to see how zs95kjg110079
ended up running on two cluster nodes (some entries omitted; I saved the logs
for future reference):

zs93kjpcs1 system log - (shows membership changes after the network failure at 11:18:34)

Aug 29 11:18:33 zs93kl kernel: qeth 0.0.0230: The qeth device driver failed to recover an error on the device
Aug 29 11:18:33 zs93kl kernel: qeth: irb 00000000: 00 c2 40 17 01 51 90 38 00 04 00 00 00 00 00 00  ..@..Q.8........
Aug 29 11:18:33 zs93kl kernel: qeth: irb 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Aug 29 11:18:33 zs93kl kernel: qeth: irb 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Aug 29 11:18:33 zs93kl kernel: qeth: irb 00000030: 00 00 00 00 00 00 00 00 00 00 00 34 00 1f 00 07  ...........4....
Aug 29 11:18:33 zs93kl kernel: qeth 0.0.0230: A recovery process has been started for the device
Aug 29 11:18:33 zs93kl corosync[19281]: [TOTEM ] The token was lost in the OPERATIONAL state.
Aug 29 11:18:33 zs93kl corosync[19281]: [TOTEM ] A processor failed, forming new configuration.
Aug 29 11:18:33 zs93kl corosync[19281]: [TOTEM ] entering GATHER state from 2(The token was lost in the OPERATIONAL state.).
Aug 29 11:18:34 zs93kl kernel: qeth 0.0.0230: The qeth device driver failed to recover an error on the device
Aug 29 11:18:34 zs93kl kernel: qeth: irb 00000000: 00 00 11 01 00 00 00 00 00 04 00 00 00 00 00 00  ................
Aug 29 11:18:34 zs93kl kernel: qeth: irb 00000010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Aug 29 11:18:34 zs93kl kernel: qeth: irb 00000020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Aug 29 11:18:34 zs93kl kernel: qeth: irb 00000030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................

Aug 29 11:18:37 zs93kj attrd[21400]: notice: crm_update_peer_proc: Node zs95kjpcs1[2] - state is now lost (was member)
Aug 29 11:18:37 zs93kj attrd[21400]: notice: Removing all zs95kjpcs1 attributes for attrd_peer_change_cb
Aug 29 11:18:37 zs93kj cib[21397]: notice: crm_update_peer_proc: Node zs95kjpcs1[2] - state is now lost (was member)
Aug 29 11:18:37 zs93kj cib[21397]: notice: Removing zs95kjpcs1/2 from the membership list
Aug 29 11:18:37 zs93kj cib[21397]: notice: Purged 1 peers with id=2 and/or uname=zs95kjpcs1 from the membership cache
Aug 29 11:18:37 zs93kj attrd[21400]: notice: Removing zs95kjpcs1/2 from the membership list
Aug 29 11:18:37 zs93kj cib[21397]: notice: crm_update_peer_proc: Node zs95KLpcs1[3] - state is now lost (was member)
Aug 29 11:18:37 zs93kj attrd[21400]: notice: Purged 1 peers with id=2 and/or uname=zs95kjpcs1 from the membership cache
Aug 29 11:18:37 zs93kj cib[21397]: notice: Removing zs95KLpcs1/3 from the membership list
Aug 29 11:18:37 zs93kj attrd[21400]: notice: crm_update_peer_proc: Node zs95KLpcs1[3] - state is now lost (was member)
Aug 29 11:18:37 zs93kj cib[21397]: notice: Purged 1 peers with id=3 and/or uname=zs95KLpcs1 from the membership cache
Aug 29 11:18:37 zs93kj cib[21397]: notice: crm_update_peer_proc: Node zs93KLpcs1[5] - state is now lost (was member)
Aug 29 11:18:37 zs93kj cib[21397]: notice: Removing zs93KLpcs1/5 from the membership list
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] entering GATHER state from 0(consensus timeout).
Aug 29 11:18:37 zs93kj cib[21397]: notice: Purged 1 peers with id=5 and/or uname=zs93KLpcs1 from the membership cache
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Creating commit token because I am the rep.
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Saving state aru 32 high seq received 32
Aug 29 11:18:37 zs93kj corosync[20562]: [MAIN  ] Storing new sequence id for ring 300
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] entering COMMIT state.
Aug 29 11:18:37 zs93kj crmd[21402]: notice: Membership 768: quorum lost (1)
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] got commit token
Aug 29 11:18:37 zs93kj attrd[21400]: notice: Removing all zs95KLpcs1 attributes for attrd_peer_change_cb
Aug 29 11:18:37 zs93kj attrd[21400]: notice: Removing zs95KLpcs1/3 from the membership list
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] entering RECOVERY state.
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] TRANS [0] member 10.20.93.11:
Aug 29 11:18:37 zs93kj pacemakerd[21143]: notice: Membership 768: quorum lost (1)
Aug 29 11:18:37 zs93kj stonith-ng[21398]: notice: crm_update_peer_proc: Node zs95kjpcs1[2] - state is now lost (was member)
Aug 29 11:18:37 zs93kj crmd[21402]: notice: crm_reap_unseen_nodes: Node zs95KLpcs1[3] - state is now lost (was member)
Aug 29 11:18:37 zs93kj crmd[21402]: warning: No match for shutdown action on 3
Aug 29 11:18:37 zs93kj attrd[21400]: notice: Purged 1 peers with id=3 and/or uname=zs95KLpcs1 from the membership cache
Aug 29 11:18:37 zs93kj stonith-ng[21398]: notice: Removing zs95kjpcs1/2 from the membership list
Aug 29 11:18:37 zs93kj crmd[21402]: notice: Stonith/shutdown of zs95KLpcs1 not matched
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] position [0] member 10.20.93.11:
Aug 29 11:18:37 zs93kj attrd[21400]: notice: crm_update_peer_proc: Node zs93KLpcs1[5] - state is now lost (was member)
Aug 29 11:18:37 zs93kj stonith-ng[21398]: notice: Purged 1 peers with id=2 and/or uname=zs95kjpcs1 from the membership cache
Aug 29 11:18:37 zs93kj crmd[21402]: notice: crm_reap_unseen_nodes: Node zs95kjpcs1[2] - state is now lost (was member)
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] previous ring seq 2fc rep 10.20.93.11
Aug 29 11:18:37 zs93kj attrd[21400]: notice: Removing all zs93KLpcs1 attributes for attrd_peer_change_cb
Aug 29 11:18:37 zs93kj stonith-ng[21398]: notice: crm_update_peer_proc: Node zs95KLpcs1[3] - state is now lost (was member)
Aug 29 11:18:37 zs93kj crmd[21402]: warning: No match for shutdown action on 2
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] aru 32 high delivered 32 received flag 1
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Did not need to originate any messages in recovery.
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] got commit token
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Sending initial ORF token
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] install seq 0 aru 0 high seq received 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] install seq 0 aru 0 high seq received 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] install seq 0 aru 0 high seq received 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] install seq 0 aru 0 high seq received 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Resetting old ring state
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] recovery to regular 1-0
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Marking UDPU member 10.20.93.12 inactive
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Marking UDPU member 10.20.93.13 inactive
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Marking UDPU member 10.20.93.14 inactive
Aug 29 11:18:37 zs93kj corosync[20562]: [MAIN  ] Member left: r(0) ip(10.20.93.12)
Aug 29 11:18:37 zs93kj corosync[20562]: [MAIN  ] Member left: r(0) ip(10.20.93.13)
Aug 29 11:18:37 zs93kj corosync[20562]: [MAIN  ] Member left: r(0) ip(10.20.93.14)
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] waiting_trans_ack changed to 1
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] entering OPERATIONAL state.
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] A new membership (10.20.93.11:768) was formed. Members left: 2 5 3
Aug 29 11:18:37 zs93kj corosync[20562]: [TOTEM ] Failed to receive the leave message. failed: 2 5 3
Aug 29 11:18:37 zs93kj corosync[20562]: [SYNC  ] Committing synchronization for corosync configuration map access
Aug 29 11:18:37 zs93kj corosync[20562]: [CMAP  ] Not first sync -> no action
Aug 29 11:18:37 zs93kj corosync[20562]: [CPG   ] comparing: sender r(0) ip(10.20.93.11) ; members(old:4 left:3)
Aug 29 11:18:37 zs93kj corosync[20562]: [CPG   ] chosen downlist: sender r(0) ip(10.20.93.11) ; members(old:4 left:3)

Aug 29 11:18:43 zs93kj corosync[20562]: [TOTEM ] Marking UDPU member 10.20.93.12 active
Aug 29 11:18:43 zs93kj corosync[20562]: [TOTEM ] Marking UDPU member 10.20.93.14 active
Aug 29 11:18:43 zs93kj corosync[20562]: [MAIN  ] Member joined: r(0) ip(10.20.93.12)
Aug 29 11:18:43 zs93kj corosync[20562]: [MAIN  ] Member joined: r(0) ip(10.20.93.14)
Aug 29 11:18:43 zs93kj corosync[20562]: [TOTEM ] entering OPERATIONAL state.
Aug 29 11:18:43 zs93kj corosync[20562]: [TOTEM ] A new membership (10.20.93.11:772) was formed. Members joined: 2 3
Aug 29 11:18:43 zs93kj corosync[20562]: [SYNC  ] Committing synchronization for corosync configuration map access
Aug 29 11:18:43 zs93kj corosync[20562]: [CMAP  ] Not first sync -> no action
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] got joinlist message from node 0x1
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] got joinlist message from node 0x2
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] comparing: sender r(0) ip(10.20.93.14) ; members(old:2 left:0)
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] comparing: sender r(0) ip(10.20.93.12) ; members(old:2 left:0)
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] comparing: sender r(0) ip(10.20.93.11) ; members(old:1 left:0)
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] chosen downlist: sender r(0) ip(10.20.93.12) ; members(old:2 left:0)
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] got joinlist message from node 0x3
Aug 29 11:18:43 zs93kj corosync[20562]: [SYNC  ] Committing synchronization for corosync cluster closed process group service v1.01
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[0] group:crmd\x00, ip:r(0) ip(10.20.93.14) , pid:21491
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[1] group:attrd\x00, ip:r(0) ip(10.20.93.14) , pid:21489
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[2] group:stonith-ng\x00, ip:r(0) ip(10.20.93.14) , pid:21487
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[3] group:cib\x00, ip:r(0) ip(10.20.93.14) , pid:21486
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[4] group:pacemakerd\x00, ip:r(0) ip(10.20.93.14) , pid:21485
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[5] group:crmd\x00, ip:r(0) ip(10.20.93.12) , pid:24499
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[6] group:attrd\x00, ip:r(0) ip(10.20.93.12) , pid:24497
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[7] group:stonith-ng\x00, ip:r(0) ip(10.20.93.12) , pid:24495
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[8] group:cib\x00, ip:r(0) ip(10.20.93.12) , pid:24494
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[9] group:pacemakerd\x00, ip:r(0) ip(10.20.93.12) , pid:24491
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[10] group:crmd\x00, ip:r(0) ip(10.20.93.11) , pid:21402
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[11] group:attrd\x00, ip:r(0) ip(10.20.93.11) , pid:21400
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[12] group:stonith-ng\x00, ip:r(0) ip(10.20.93.11) , pid:21398
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[13] group:cib\x00, ip:r(0) ip(10.20.93.11) , pid:21397
Aug 29 11:18:43 zs93kj corosync[20562]: [CPG   ] joinlist_messages[14] group:pacemakerd\x00, ip:r(0) ip(10.20.93.11) , pid:21143
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] flags: quorate: No Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Aug 29 11:18:43 zs93kj corosync[20562]: [QB    ] IPC credentials authenticated (20562-21400-28)
Aug 29 11:18:43 zs93kj corosync[20562]: [QB    ] connecting to client [21400]
Aug 29 11:18:43 zs93kj corosync[20562]: [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Aug 29 11:18:43 zs93kj corosync[20562]: [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Aug 29 11:18:43 zs93kj pacemakerd[21143]: notice: Membership 772: quorum acquired (3)
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] quorum regained, resuming activity
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] got nodeinfo message from cluster node 3
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Aug 29 11:18:43 zs93kj corosync[20562]: [SYNC  ] Committing synchronization for corosync vote quorum service v1.0
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] total_votes=3, expected_votes=5
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] node 1 state=1, votes=1, expected=5
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] node 2 state=1, votes=1, expected=5
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] node 3 state=1, votes=1, expected=5
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] node 4 state=2, votes=1, expected=5
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] node 5 state=2, votes=1, expected=5
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] lowest node id: 1 us: 1
Aug 29 11:18:43 zs93kj corosync[20562]: [VOTEQ ] highest node id: 3 us: 1
Aug 29 11:18:43 zs93kj corosync[20562]: [QUORUM] This node is within the primary component and will provide service.
Aug 29 11:18:43 zs93kj pacemakerd[21143]: notice: pcmk_quorum_notification: Node zs95KLpcs1[3] - state is now member (was lost)
Aug 29 11:18:43 zs93kj attrd[21400]: notice: crm_update_peer_proc: Node zs95KLpcs1[3] - state is now member (was (null))
Aug 29 11:18:43 zs93kj corosync[20562]: [QUORUM] Members[3]: 1 2 3
Aug 29 11:18:43 zs93kj stonith-ng[21398]: warning: Node names with capitals are discouraged, consider changing 'zs95KLpcs1' to something else
Aug 29 11:18:43 zs93kj corosync[20562]: [MAIN  ] Completed service synchronization, ready to provide service.
Aug 29 11:18:43 zs93kj stonith-ng[21398]: notice: crm_update_peer_proc: Node zs95KLpcs1[3] - state is now member (was (null))
Aug 29 11:18:43 zs93kj attrd[21400]: notice: crm_update_peer_proc: Node zs95kjpcs1[2] - state is now member (was (null))
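A 13-second network drop comfortably exceeds corosync's default totem timeouts,
so the membership changes themselves are expected. For reference, the relevant
knobs live in /etc/corosync/corosync.conf; the values below are the corosync
2.x upstream defaults, not necessarily what we are running:

totem {
    version: 2
    # ms without the token before token loss is declared
    token: 1000
    # retransmit attempts before a new configuration is formed
    token_retransmits_before_loss_const: 4
    # ms to wait for consensus before starting a new membership round
    consensus: 1200
}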
The story of zs95kjg110079 starts on ZS93KL, when it seemed to be already
running on ZS93KJ:

System log on zs93KLpcs1:

Aug 29 11:20:58 zs93kl pengine[19997]: notice: Start   zs95kjg110079_res#011(zs93KLpcs1)

Aug 29 11:21:56 zs93kl crmd[20001]: notice: Initiating action 520: start zs95kjg110079_res_start_0 on zs93KLpcs1 (local)

Aug 29 11:21:56 zs93kl systemd-machined: New machine qemu-70-zs95kjg110079.
Aug 29 11:21:56 zs93kl systemd: Started Virtual Machine qemu-70-zs95kjg110079.
Aug 29 11:21:56 zs93kl systemd: Starting Virtual Machine qemu-70-zs95kjg110079.

Aug 29 11:21:59 zs93kl crmd[20001]: notice: Operation zs95kjg110079_res_start_0: ok (node=zs93KLpcs1, call=1036, rc=0, cib-update=735, confirmed=true)

Aug 29 11:22:07 zs93kl crmd[20001]: warning: Action 238 (zs95kjg110079_res_monitor_0) on zs93kjpcs1 failed (target: 7 vs. rc: 0): Error
Aug 29 11:22:07 zs93kl crmd[20001]: notice: Transition aborted by zs95kjg110079_res_monitor_0 'create' on zs93kjpcs1: Event failed (magic=0:0;238:13:7:236d078a-9063-4092-9660-cfae048f3627, cib=0.2437.3212, source=match_graph_event:381, 0)

Aug 29 11:22:15 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:22:15 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:22:15 zs93kl pengine[19997]: notice: Restart zs95kjg110079_res#011(Started zs93kjpcs1)

Aug 29 11:22:23 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:22:23 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:22:23 zs93kl pengine[19997]: notice: Restart zs95kjg110079_res#011(Started zs93kjpcs1)

Aug 29 11:30:31 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:30:31 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:30:31 zs93kl pengine[19997]: error: Resource zs95kjg110108_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:30:31 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.

Aug 29 11:55:41 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:55:41 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:55:41 zs93kl pengine[19997]: error: Resource zs95kjg110108_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:55:41 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:55:41 zs93kl pengine[19997]: error: Resource zs95kjg110186_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:55:41 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.

Aug 29 11:58:53 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:58:53 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:58:53 zs93kl pengine[19997]: error: Resource zs95kjg110108_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:58:53 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:58:53 zs93kl pengine[19997]: error: Resource zs95kjg110186_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:58:53 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 11:58:53 zs93kl pengine[19997]: error: Resource zs95kjg110188_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 11:58:53 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.

Aug 29 12:00:00 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:00:00 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:00:00 zs93kl pengine[19997]: error: Resource zs95kjg110108_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:00:00 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:00:00 zs93kl pengine[19997]: error: Resource zs95kjg110186_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:00:00 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:00:00 zs93kl pengine[19997]: error: Resource zs95kjg110188_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:00:00 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:00:00 zs93kl pengine[19997]: error: Resource zs95kjg110198_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:00:00 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.

Aug 29 12:03:24 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:03:24 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:03:24 zs93kl pengine[19997]: error: Resource zs95kjg110108_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:03:24 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:03:24 zs93kl pengine[19997]: error: Resource zs95kjg110186_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:03:24 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:03:24 zs93kl pengine[19997]: error: Resource zs95kjg110188_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:03:24 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:03:24 zs93kl pengine[19997]: error: Resource zs95kjg110198_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Aug 29 12:03:24 zs93kl pengine[19997]: warning: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.
Aug 29 12:03:24 zs93kl pengine[19997]: notice: Restart zs95kjg110079_res#011(Started zs93kjpcs1)

Aug 29 12:36:27 zs93kl pengine[19997]: error: Resource zs95kjg110079_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery
href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active"><u><font size="4" color="#0000FF" face="Courier New">http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active</font></u></a><font size="4" face="Courier New"> for more information.<br>Aug 29 12:36:27 zs93kl pengine[19997]: error: Resource zs95kjg110108_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery<br>Aug 29 12:36:27 zs93kl pengine[19997]: warning: See </font><a href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active"><u><font size="4" color="#0000FF" face="Courier New">http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active</font></u></a><font size="4" face="Courier New"> for more information.<br>Aug 29 12:36:27 zs93kl pengine[19997]: error: Resource zs95kjg110186_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery<br>Aug 29 12:36:27 zs93kl pengine[19997]: warning: See </font><a href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active"><u><font size="4" color="#0000FF" face="Courier New">http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active</font></u></a><font size="4" face="Courier New"> for more information.<br>Aug 29 12:36:27 zs93kl pengine[19997]: error: Resource zs95kjg110188_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery<br>Aug 29 12:36:27 zs93kl pengine[19997]: warning: See </font><a href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active"><u><font size="4" color="#0000FF" face="Courier New">http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active</font></u></a><font size="4" face="Courier New"> for more information.<br>Aug 29 12:36:27 zs93kl pengine[19997]: error: Resource zs95kjg110198_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery<br>Aug 29 12:36:27 zs93kl pengine[19997]: warning: See </font><a href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active"><u><font size="4" color="#0000FF" face="Courier New">http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active</font></u></a><font size="4" face="Courier New"> for more information.<br>Aug 29 12:36:27 zs93kl pengine[19997]: error: Resource zs95kjg110210_res (ocf::VirtualDomain) is active on 2 nodes attempting recovery<br>Aug 29 12:36:27 zs93kl pengine[19997]: warning: See </font><a href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active"><u><font size="4" color="#0000FF" face="Courier New">http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active</font></u></a><font size="4" face="Courier New"> for more information.<br>Aug 29 12:36:27 zs93kl pengine[19997]: notice: Restart zs95kjg110079_res#011(Started zs93kjpcs1)</font><font size="4"><br><br></font><font size="4" face="Courier New"><br>Aug 29 12:44:41 zs93kl crmd[20001]: warning: Transition 84 (Complete=108, Pending=0, Fired=0, Skipped=0, Incomplete=77, Source=/var/lib/pacemaker/pengine/pe-error-106.bz2): Terminated<br>Aug 29 12:44:41 zs93kl crmd[20001]: warning: Transition failed: terminated<br>Aug 29 12:44:41 zs93kl crmd[20001]: notice: Graph 84 with 185 actions: batch-limit=185 jobs, network-delay=0ms<br>Aug 29 12:44:41 zs93kl crmd[20001]: notice: [Action 410]: Pending rsc op zs95kjg110079_res_monitor_30000 on zs93kjpcs1 (priority: 0, waiting: 409)<br>Aug 29 12:44:41 zs93kl crmd[20001]: notice: [Action 409]: Pending rsc op zs95kjg110079_res_start_0 on zs93kjpcs1 (priority: 0, waiting: 408)<br>Aug 29 12:44:41 zs93kl crmd[20001]: notice: [Action 408]: Pending pseudo op zs95kjg110079_res_stop_0 on N/A (priority: 0, waiting: 439 470 496 521 546)<br>Aug 29 12:44:41 zs93kl crmd[20001]: notice: [Action 407]: Completed pseudo op 
===================

Does this "active on two nodes" recovery process look right?

What is the recommended procedure to "undo" the resource failures and dual host
assignments? Short of stopping and starting the entire cluster, it took several
hours to recover them. Resource disable, cleanup, enable was the basis, but it
seemed that every time I fixed one resource, two more would fall out.
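For the archives, the per-resource sequence was essentially the loop below (a
sketch; the awk pattern assumes FAILED lines shaped like the pcs status output
above):

[root@zs95kj ]# for r in $(pcs status | awk '/VirtualDomain.*FAILED/ {print $1}'); do
>     pcs resource disable "$r"
>     pcs resource cleanup "$r"
>     pcs resource enable "$r"
> done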
This seems to be one of the pitfalls of configuring resources in symmetrical
mode.

I would appreciate any best-practice guidelines you have to offer. I saved the
system logs on all hosts in case anyone needs more detailed information; I also
have the pacemaker.log files.

Thanks in advance!

Scott Greenlese ... IBM z/BX Solutions Test, Poughkeepsie, N.Y.
INTERNET: swgreenl@us.ibm.com
PHONE: 8/293-7301 (845-433-7301)  M/S: POK 42HA/P966

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org