<div dir="ltr">So I've switched my cluster to be asymmetric...<div><br></div><div>The remote nodes don't appear (although the VMs start, and pacemaker_remote runs on them). This is a problem.</div><div><br></div><div>
If I temporarily switch to symmetric without restarting everything, the remote nodes appear in the cluster status. (But I'd have to do more reconfiguration to actually run that way.) It seems the "remote node" resource won't start on the VM in asymmetric mode without some location constraint to enable it, though I can't see which one?</div>
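A sketch of what I think the asymmetric setup would need, assuming standard pcs syntax; the resource and node names are from my cluster, the scores are just illustrative, and whether the connection resource needs its own constraint is exactly what I'm unsure about:

```shell
# Sketch only: opt-in (asymmetric) cluster; names from my configuration.
# Do the drastic mode change under maintenance mode:
pcs property set maintenance-mode=true
pcs property set symmetric-cluster=false

# In an opt-in cluster every resource needs an enabling location
# constraint, so presumably the VM resource that hosts the remote
# node needs one on each hypervisor:
pcs constraint location vm-db02 prefers cvmh01=0 cvmh02=0 cvmh03=0 cvmh04=0

# What is unclear is whether the implicit remote-node connection
# resource (named after the node, e.g. db02) also needs one:
pcs constraint location db02 prefers cvmh01=0

pcs property set maintenance-mode=false
```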
<div><br></div><div>Switching back to asymmetric, the remote nodes show as "offline" or "UNCLEAN", and pengine is dumping core. I didn't really expect that to work well (such a drastic change should probably be done in maintenance mode, at least), but in case it helps in diagnosing the original problem, "pcs status" starts like this:</div>
<div><br></div><div><div><font face="courier new, monospace"># pcs status</font></div><div><font face="courier new, monospace">Last updated: Thu Jul 11 12:53:00 2013</font></div><div><font face="courier new, monospace">Last change: Thu Jul 11 12:43:18 2013 via crmd on cvmh02</font></div>
<div><font face="courier new, monospace">Stack: cman</font></div><div><font face="courier new, monospace">Current DC: cvmh03 - partition with quorum</font></div><div><font face="courier new, monospace">Version: 1.1.10-3.el6.ccni-bead5ad</font></div>
<div><font face="courier new, monospace">11 Nodes configured, unknown expected votes</font></div><div><font face="courier new, monospace">69 Resources configured.</font></div><div><font face="courier new, monospace"><br></font></div>
<div><font face="courier new, monospace"><br></font></div><div><font face="courier new, monospace">Node db02:vm-db02: UNCLEAN (offline)</font></div><div><font face="courier new, monospace">Node ldap01: UNCLEAN (offline)</font></div>
<div><font face="courier new, monospace">Node ldap02: UNCLEAN (offline)</font></div><div><font face="courier new, monospace">Node swbuildsl6: UNCLEAN (offline)</font></div><div><font face="courier new, monospace">Online: [ cvmh01 cvmh02 cvmh03 cvmh04 db02 ]</font></div>
<div><font face="courier new, monospace">OFFLINE: [ ldap01:vm-ldap01 ldap02:vm-ldap02 swbuildsl6:vm-swbuildsl6 ]</font></div><div><br></div></div><div><br></div><div>Traceback:</div><div><br></div><div><div><font face="courier new, monospace"># gdb /usr/libexec/pacemaker/pengine /var/lib/heartbeat/cores/cor</font></div>
<div><font face="courier new, monospace">e.22873</font></div><div><font face="courier new, monospace">GNU gdb (GDB) Red Hat Enterprise Linux (7.2-60.el6_4.1)</font></div><div><font face="courier new, monospace">Copyright (C) 2010 Free Software Foundation, Inc.</font></div>
<div><font face="courier new, monospace">License GPLv3+: GNU GPL version 3 or later <<a href="http://gnu.org/licenses/gpl.html">http://gnu.org/licenses/gpl.html</a>></font></div><div><font face="courier new, monospace">This is free software: you are free to change and redistribute it.</font></div>
<div><font face="courier new, monospace">There is NO WARRANTY, to the extent permitted by law. Type "show copying"</font></div><div><font face="courier new, monospace">and "show warranty" for details.</font></div>
<div><font face="courier new, monospace">This GDB was configured as "x86_64-redhat-linux-gnu". </font></div><div><font face="courier new, monospace">For bug reporting instructions, please see:</font></div><div><font face="courier new, monospace"><<a href="http://www.gnu.org/software/gdb/bugs/">http://www.gnu.org/software/gdb/bugs/</a>>...</font></div>
<div><font face="courier new, monospace">Reading symbols from /usr/libexec/pacemaker/pengine...Reading symbols from /usr/</font></div><div><font face="courier new, monospace">lib/debug/usr/libexec/pacemaker/pengine.debug...done. </font></div>
<div><font face="courier new, monospace">done.</font></div><div><font face="courier new, monospace">[New Thread 22873]</font></div><div><font face="courier new, monospace">...</font></div></div><div><font face="courier new, monospace"><br>
</font></div><div><div><font face="courier new, monospace">(gdb) where</font></div><div><font face="courier new, monospace">#0 0x0000003f5b00d6ec in sort_rsc_process_order (a=<value optimized out>,</font></div><div>
<font face="courier new, monospace"> b=<value optimized out>, data=<value optimized out>) at allocate.c:1043</font></div><div><font face="courier new, monospace">#1 0x0000003b67a36979 in ?? () from /lib64/libglib-2.0.so.0</font></div>
<div><font face="courier new, monospace">#2 0x0000003b67a3691d in ?? () from /lib64/libglib-2.0.so.0</font></div><div><font face="courier new, monospace">#3 0x0000003b67a3691d in ?? () from /lib64/libglib-2.0.so.0</font></div>
<div><font face="courier new, monospace">#4 0x0000003b67a3691d in ?? () from /lib64/libglib-2.0.so.0</font></div><div><font face="courier new, monospace">#5 0x0000003b67a3692e in ?? () from /lib64/libglib-2.0.so.0</font></div>
<div><font face="courier new, monospace">#6 0x0000003f5b01267a in stage5 (data_set=0x7fff67a6b260) at allocate.c:1149</font></div><div><font face="courier new, monospace">#7 0x0000003f5b009b71 in do_calculations (data_set=0x7fff67a6b260,</font></div>
<div><font face="courier new, monospace"> xml_input=<value optimized out>, now=<value optimized out>)</font></div><div><font face="courier new, monospace"> at pengine.c:252</font></div><div><font face="courier new, monospace">#8 0x0000003f5b00a7b2 in process_pe_message (msg=0xbeb710, xml_data=0xbec0b0,</font></div>
<div><font face="courier new, monospace"> sender=0xbe2f10) at pengine.c:126</font></div></div><div><div><font face="courier new, monospace">#9 0x000000000040142f in pe_ipc_dispatch (qbc=<value optimized out>,</font></div>
<div><font face="courier new, monospace"> data=<value optimized out>, size=29292) at main.c:79</font></div><div><font face="courier new, monospace">#10 0x0000003b6ae0e874 in ?? () from /usr/lib64/libqb.so.0</font></div>
<div><font face="courier new, monospace">#11 0x0000003b6ae0ebc4 in qb_ipcs_dispatch_connection_request ()</font></div><div><font face="courier new, monospace"> from /usr/lib64/libqb.so.0</font></div><div><font face="courier new, monospace">#12 0x0000003f5982b0a0 in gio_read_socket (gio=<value optimized out>,</font></div>
<div><font face="courier new, monospace"> condition=G_IO_IN, data=0xbe8540) at mainloop.c:453</font></div><div><font face="courier new, monospace">#13 0x0000003b67a38f0e in g_main_context_dispatch () </font></div><div>
<font face="courier new, monospace"> from /lib64/libglib-2.0.so.0</font></div></div><div><div><font face="courier new, monospace">#14 0x0000003b67a3c938 in ?? () from /lib64/libglib-2.0.so.0</font></div><div><font face="courier new, monospace">#15 0x0000003b67a3cd55 in g_main_loop_run () from /lib64/libglib-2.0.so.0</font></div>
<div><font face="courier new, monospace">#16 0x0000000000401738 in main (argc=1, argv=0x7fff67a6b858) at main.c:182</font></div></div><div><br></div><div><br></div><div>After a reboot, I had to remove the nodes ldap01, ldap02, swbuildsl6. My cluster is again working in asymmetric mode, except the remote nodes are not appearing online:</div>
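(For reference, the node removal was done with crm_node, as the "Last change ... via crm_node on cvmh01" line below indicates; roughly the following, though I'm reconstructing the exact invocations:)

```shell
# Force-remove the stale remote-node entries from the membership
# cache / CIB after the reboot (reconstructed commands):
crm_node --force --remove ldap01
crm_node --force --remove ldap02
crm_node --force --remove swbuildsl6
```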
<div><br></div><div><div><font face="courier new, monospace"># pcs status</font></div><div><font face="courier new, monospace">Last updated: Thu Jul 11 13:08:22 2013</font></div><div><font face="courier new, monospace">Last change: Thu Jul 11 13:01:39 2013 via crm_node on cvmh01</font></div>
<div><font face="courier new, monospace">Stack: cman</font></div><div><font face="courier new, monospace">Current DC: cvmh02 - partition with quorum</font></div><div><font face="courier new, monospace">Version: 1.1.10-3.el6.ccni-bead5ad</font></div>
<div><font face="courier new, monospace">8 Nodes configured, unknown expected votes</font></div><div><font face="courier new, monospace">54 Resources configured.</font></div><div><font face="courier new, monospace"><br></font></div>
<div><font face="courier new, monospace"><br></font></div><div><font face="courier new, monospace">Online: [ cvmh01 cvmh02 cvmh03 cvmh04 ]</font></div><div><font face="courier new, monospace">OFFLINE: [ db02:vm-db02 ldap01:vm-ldap01 ldap02:vm-ldap02 swbuildsl6:vm-swbuildsl6 ]</font></div>
<div><font face="courier new, monospace"><br></font></div><div><font face="courier new, monospace">Full list of resources:</font></div><div><font face="courier new, monospace"><br></font></div><div><font face="courier new, monospace"> fence-cvmh01 (stonith:fence_ipmilan): Started cvmh04 </font></div>
<div><font face="courier new, monospace"> fence-cvmh02 (stonith:fence_ipmilan): Started cvmh03 </font></div><div><font face="courier new, monospace"> fence-cvmh03 (stonith:fence_ipmilan): Started cvmh04 </font></div>
<div><font face="courier new, monospace"> fence-cvmh04 (stonith:fence_ipmilan): Started cvmh01 </font></div><div><font face="courier new, monospace"> Clone Set: c-fs-libvirt-VM-xcm [fs-libvirt-VM-xcm]</font></div>
<div><font face="courier new, monospace"> Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]</font></div><div><font face="courier new, monospace"> Clone Set: c-p-libvirtd [p-libvirtd]</font></div><div><font face="courier new, monospace"> Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]</font></div>
<div><font face="courier new, monospace"> Clone Set: c-fs-bind-libvirt-VM-cvmh [fs-bind-libvirt-VM-cvmh]</font></div><div><font face="courier new, monospace"> Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]</font></div><div>
<font face="courier new, monospace"> Clone Set: c-watch-ib0 [p-watch-ib0]</font></div><div><font face="courier new, monospace"> Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]</font></div><div><font face="courier new, monospace"> Clone Set: c-fs-gpfs [p-fs-gpfs]</font></div>
<div><font face="courier new, monospace"> Started: [ cvmh01 cvmh02 cvmh03 cvmh04 ]</font></div><div><font face="courier new, monospace"> vm-compute-test (ocf::ccni:xcatVirtualDomain): Started cvmh01 </font></div>
<div><font face="courier new, monospace"> vm-swbuildsl6 (ocf::ccni:xcatVirtualDomain): Started cvmh01 </font></div><div><font face="courier new, monospace"> vm-db02 (ocf::ccni:xcatVirtualDomain): Started cvmh02 </font></div>
<div><font face="courier new, monospace"> vm-ldap01 (ocf::ccni:xcatVirtualDomain): Started cvmh03 </font></div><div><font face="courier new, monospace"> vm-ldap02 (ocf::ccni:xcatVirtualDomain): Started cvmh04 </font></div>
<div><font face="courier new, monospace"> DummyOnVM (ocf::pacemaker:Dummy): Stopped </font></div><div><br></div></div><div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 10, 2013 at 6:43 PM, Lindsay Todd <span dir="ltr"><<a href="mailto:rltodd.ml1@gmail.com" target="_blank">rltodd.ml1@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Yes, it avoids the crashes. Thanks! But I am still seeing spurious VM migrations/shutdowns when I stop/start a VM running pacemaker_remote (similar to my last update, only no core is dumped while fencing; indeed no fencing happens at all, even though I've now verified that fence_node works again).</div>
<div class="HOEnZb"><div class="h5">
<div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jul 10, 2013 at 2:12 PM, David Vossel <span dir="ltr"><<a href="mailto:dvossel@redhat.com" target="_blank">dvossel@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>----- Original Message -----<br>
> From: "Lindsay Todd" <<a href="mailto:rltodd.ml1@gmail.com" target="_blank">rltodd.ml1@gmail.com</a>><br>
> To: "The Pacemaker cluster resource manager" <<a href="mailto:pacemaker@oss.clusterlabs.org" target="_blank">pacemaker@oss.clusterlabs.org</a>><br>
</div><div>> Sent: Wednesday, July 10, 2013 12:11:00 PM<br>
> Subject: Re: [Pacemaker] Pacemaker remote nodes, naming, and attributes<br>
><br>
</div><div>> Hmm, I'll still submit the bug report, but it seems like crmd is dumping core<br>
> while attempting to fence a node. If I use fence_node to fence a real<br>
> cluster node, that also causes crmd to dump core. But apart from that, I<br>
> don't really see why pacemaker is trying to fence anything.<br>
<br>
</div>This should solve the crashes you are seeing.<br>
<br>
<a href="https://github.com/ClusterLabs/pacemaker/commit/97dd3b05db867c4674fa4780802bba54c63bd06d" target="_blank">https://github.com/ClusterLabs/pacemaker/commit/97dd3b05db867c4674fa4780802bba54c63bd06d</a><br>
<span><font color="#888888"><br>
-- Vossel<br>
</font></span><div><div><br>
><br>
><br>
> On Wed, Jul 10, 2013 at 12:42 PM, Lindsay Todd < <a href="mailto:rltodd.ml1@gmail.com" target="_blank">rltodd.ml1@gmail.com</a> ><br>
> wrote:<br>
><br>
><br>
><br>
> Thanks! But there is still a problem.<br>
><br>
> I am now working from the master branch and building RPMs (well, I have to<br>
> also rebuild from the srpm to change the build number, since the RPMs built<br>
> directly are always 1.1.10-1). The patch is in the git log, and indeed<br>
> things are better ... But I still see the spurious VMs shutting down. What<br>
> is much improved is that they do get restarted, and basically I end up in<br>
> the state I want to be in. I can almost live with this, and I was going to<br>
> start changing my cluster config to be asymmetric when I noticed that, in<br>
> the midst of the spurious transitions, crmd is dumping core.<br>
><br>
> So I'll append another crm_report to bug 5164, as well as a gdb traceback.<br>
><br>
><br>
> On Fri, Jul 5, 2013 at 5:06 PM, David Vossel < <a href="mailto:dvossel@redhat.com" target="_blank">dvossel@redhat.com</a> > wrote:<br>
><br>
><br>
><br>
> ----- Original Message -----<br>
> > From: "David Vossel" < <a href="mailto:dvossel@redhat.com" target="_blank">dvossel@redhat.com</a> ><br>
> > To: "The Pacemaker cluster resource manager" <<br>
> > <a href="mailto:pacemaker@oss.clusterlabs.org" target="_blank">pacemaker@oss.clusterlabs.org</a> ><br>
> > Sent: Wednesday, July 3, 2013 4:20:37 PM<br>
> > Subject: Re: [Pacemaker] Pacemaker remote nodes, naming, and attributes<br>
> ><br>
> > ----- Original Message -----<br>
> > > From: "Lindsay Todd" < <a href="mailto:rltodd.ml1@gmail.com" target="_blank">rltodd.ml1@gmail.com</a> ><br>
> > > To: "The Pacemaker cluster resource manager"<br>
> > > < <a href="mailto:pacemaker@oss.clusterlabs.org" target="_blank">pacemaker@oss.clusterlabs.org</a> ><br>
> > > Sent: Wednesday, July 3, 2013 2:12:05 PM<br>
> > > Subject: Re: [Pacemaker] Pacemaker remote nodes, naming, and attributes<br>
> > ><br>
> > > Well, I'm not getting failures right now simply with attributes, but I<br>
> > > can<br>
> > > induce a failure by stopping the vm-db02 (it puts db02 into an unclean<br>
> > > state, and attempts to migrate the unrelated vm-compute-test). I've<br>
> > > collected the commands from my latest interactions, a crm_report, and a<br>
> > > gdb<br>
> > > traceback from the core file that crmd dumped, into bug 5164.<br>
> ><br>
> ><br>
> > Thanks, hopefully I can start investigating this Friday<br>
> ><br>
> > -- Vossel<br>
><br>
> Yeah, this is a bad one. Adding the node attributes using crm_attribute for<br>
> the remote-node did some unexpected things to the crmd component. Somehow<br>
> the remote-node was getting entered into the cluster node cache... which<br>
> made it look like we had both a cluster-node and remote-node named the same<br>
> thing... not good.<br>
><br>
> I think I got that part worked out. Try this patch.<br>
><br>
> <a href="https://github.com/ClusterLabs/pacemaker/commit/67dfff76d632f1796c9ded8fd367aa49258c8c32" target="_blank">https://github.com/ClusterLabs/pacemaker/commit/67dfff76d632f1796c9ded8fd367aa49258c8c32</a><br>
><br>
> Rather than trying to patch RCs, it might be worth trying out the master<br>
> branch on github (which already has this patch). If you aren't already, use<br>
> rpms to make your life easier. Running 'make rpm' in the source directory<br>
> will generate them for you.<br>
><br>
> There was another bug fixed recently in pacemaker_remote involving the<br>
> directory created for resource agents to store their temporary data (stuff<br>
> like pid files). I believe the fix was not introduced until 1.1.10rc6.<br>
><br>
> -- Vossel<br>
><br>
><br>
> _______________________________________________<br>
> Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a><br>
> <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
><br>
> Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
> Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>