[ClusterLabs] ip clustering strange behaviour
Ken Gaillot
kgaillot at redhat.com
Wed Aug 31 15:25:05 UTC 2016
On 08/30/2016 01:52 AM, Gabriele Bulfon wrote:
> Sorry for reiterating, but my main question was:
>
> why does node 1 remove its own IP if I shut down node 2 abruptly?
> I understand that it does not take over the node 2 IP (because the
> ssh-fencing has no clue about what happened on the 2nd node), but I
> wouldn't expect it to shut down its own IP. This would kill any
> service on both nodes. What am I getting wrong?
Assuming you're using corosync 2, be sure you have "two_node: 1" in
corosync.conf. That will tell corosync to pretend there is always
quorum, so pacemaker doesn't need any special quorum settings. See the
votequorum(5) man page for details. Of course, you need fencing in this
setup to handle the case where communication between the nodes is broken
but both nodes are still up.
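
For example, a minimal quorum section for a two-node cluster might look
like this (a sketch only; merge it into your existing corosync.conf):

    quorum {
        provider: corosync_votequorum
        two_node: 1
    }

Note that two_node: 1 implicitly enables wait_for_all, so on a cold start
both nodes must be seen once before the cluster starts managing resources.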
>
> ------------------------------------------------------------------------
>
>
> *From:* Gabriele Bulfon <gbulfon at sonicle.com>
> *To:* kwenning at redhat.com; Cluster Labs - All topics related to
> open-source clustering welcomed <users at clusterlabs.org>
> *Date:* 29 August 2016 17:37:36 CEST
> *Subject:* Re: [ClusterLabs] ip clustering strange behaviour
>
>
> Ok, got it, I hadn't gracefully shut down pacemaker on node2.
> Now I restarted, everything came up, I stopped the pacemaker service
> on host2, and host1 ended up with both IPs configured. ;)
>
> But though I understand that if I halt host2 without a graceful
> shutdown of pacemaker it will not move IP2 to host1, I don't expect
> host1 to lose its own IP! Why?
>
> Gabriele
>
>
>
> ----------------------------------------------------------------------------------
>
> From: Klaus Wenninger <kwenning at redhat.com>
> To: users at clusterlabs.org
> Date: 29 August 2016 17:26:49 CEST
> Subject: Re: [ClusterLabs] ip clustering strange behaviour
>
> On 08/29/2016 05:18 PM, Gabriele Bulfon wrote:
> > Hi,
> >
> > now that I have IPaddr working, I have a strange behaviour on my
> > test setup of 2 nodes. Here is my configuration:
> >
> > ===STONITH/FENCING===
> >
> > primitive xstorage1-stonith stonith:external/ssh-sonicle op monitor interval="25" timeout="25" start-delay="25" params hostlist="xstorage1"
> >
> > primitive xstorage2-stonith stonith:external/ssh-sonicle op monitor interval="25" timeout="25" start-delay="25" params hostlist="xstorage2"
> >
> > location xstorage1-stonith-pref xstorage1-stonith -inf: xstorage1
> > location xstorage2-stonith-pref xstorage2-stonith -inf: xstorage2
> >
> > property stonith-action=poweroff
> >
> >
> >
> > ===IP RESOURCES===
> >
> >
> > primitive xstorage1_wan1_IP ocf:heartbeat:IPaddr params ip="1.2.3.4" cidr_netmask="255.255.255.0" nic="e1000g1"
> > primitive xstorage2_wan2_IP ocf:heartbeat:IPaddr params ip="1.2.3.5" cidr_netmask="255.255.255.0" nic="e1000g1"
> >
> > location xstorage1_wan1_IP_pref xstorage1_wan1_IP 100: xstorage1
> > location xstorage2_wan2_IP_pref xstorage2_wan2_IP 100: xstorage2
> >
> > ===================
> >
> > So I plumbed e1000g1 with no IP configured on both machines and
> > started corosync/pacemaker. After some time I had both nodes online
> > and started, with the IPs configured as virtual interfaces
> > (e1000g1:1 and e1000g1:2), one on host1 and one on host2.
> >
> > Then I halted host2, and I expected host1 to end up with both IPs
> > configured on it.
> > Instead, host1 ended up with its own IP stopped and removed (only
> > the unconfigured e1000g1 left), while the stopped host2 was reported
> > as having its IP started (!?).
> > Not exactly what I expected...
> > What's wrong?
>
> How did you stop host2? A graceful shutdown of pacemaker? If not ...
> Anyway, ssh-fencing only works if the machine is still running ...
> So the node will stay unclean, and thus pacemaker thinks that the IP
> might still be running on it. This is actually the expected behavior.
> You might add a watchdog via sbd if you don't have other fencing
> hardware at hand ...
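>
> For reference, a rough sketch of a diskless (watchdog-only) sbd setup;
> the device path and timeout values below are illustrative and depend
> on your platform:
>
> # /etc/sysconfig/sbd (or /etc/default/sbd, depending on the distro)
> SBD_WATCHDOG_DEV=/dev/watchdog
> SBD_WATCHDOG_TIMEOUT=5
>
> # and tell pacemaker to treat the watchdog as a fencing mechanism:
> # crm configure property stonith-watchdog-timeout=10s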
> >
> > Here is the crm status after I stopped host2:
> >
> > 2 nodes and 4 resources configured
> >
> > Node xstorage2: UNCLEAN (offline)
> > Online: [ xstorage1 ]
> >
> > Full list of resources:
> >
> > xstorage1-stonith (stonith:external/ssh-sonicle): Started xstorage2 (UNCLEAN)
> > xstorage2-stonith (stonith:external/ssh-sonicle): Stopped
> > xstorage1_wan1_IP (ocf::heartbeat:IPaddr): Stopped
> > xstorage2_wan2_IP (ocf::heartbeat:IPaddr): Started xstorage2 (UNCLEAN)
> >
> >
> > Gabriele
> >
> >
More information about the Users mailing list