[ClusterLabs] ip clustering strange behaviour
Klaus Wenninger
kwenning at redhat.com
Tue Aug 30 08:53:38 UTC 2016
Then it is probably the default for no-quorum-policy (=stop) kicking in:
when node 2 goes down, the two-node cluster loses quorum, so node 1
stops its own resources as well.
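On a two-node cluster, losing either node also loses quorum, so under the default policy the surviving node stops its own resources too. A minimal sketch of the usual remedy, assuming the crm shell is in use (and only safe when fencing actually works):

```shell
# Sketch: tell Pacemaker to keep resources running even without quorum.
# On two-node clusters this is the common setting; working fencing
# (STONITH) must then be what prevents split-brain.
crm configure property no-quorum-policy=ignore
```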
On 08/30/2016 08:52 AM, Gabriele Bulfon wrote:
> Sorry for reiterating, but my main question was:
>
> why does node 1 remove its own IP if I shut down node 2 abruptly?
> I understand that it does not take over the node 2 IP (because the
> ssh-fencing has no clue about what happened on the 2nd node), but I
> wouldn't expect it to shut down its own IP...this would kill any
> service on both nodes...where am I wrong?
>
> ----------------------------------------------------------------------------------------
> *Sonicle S.r.l. *: http://www.sonicle.com <http://www.sonicle.com/>
> *Music: *http://www.gabrielebulfon.com <http://www.gabrielebulfon.com/>
> *Quantum Mechanics : *http://www.cdbaby.com/cd/gabrielebulfon
>
> ------------------------------------------------------------------------
>
>
> *From:* Gabriele Bulfon <gbulfon at sonicle.com>
> *To:* kwenning at redhat.com, Cluster Labs - All topics related to
> open-source clustering welcomed <users at clusterlabs.org>
> *Date:* 29 August 2016 17:37:36 CEST
> *Subject:* Re: [ClusterLabs] ip clustering strange behaviour
>
>
> Ok, got it, I hadn't gracefully shut down pacemaker on node2.
> Now I restarted, everything came up, I stopped the pacemaker service
> on host2, and I got host1 with both IPs configured. ;)
>
> But, though I understand that if I halt host2 without gracefully
> shutting down pacemaker it will not move IP2 to host1, I don't
> expect host1 to lose its own IP! Why?
>
> Gabriele
>
>
>
>
> ----------------------------------------------------------------------------------
>
> Da: Klaus Wenninger <kwenning at redhat.com>
> A: users at clusterlabs.org
> Date: 29 August 2016 17:26:49 CEST
> Subject: Re: [ClusterLabs] ip clustering strange behaviour
>
> On 08/29/2016 05:18 PM, Gabriele Bulfon wrote:
> > Hi,
> >
> > now that I have IPaddr work, I have a strange behaviour on
> my test
> > setup of 2 nodes, here is my configuration:
> >
> > ===STONITH/FENCING===
> >
> > primitive xstorage1-stonith stonith:external/ssh-sonicle op monitor interval="25" timeout="25" start-delay="25" params hostlist="xstorage1"
> >
> > primitive xstorage2-stonith stonith:external/ssh-sonicle op monitor interval="25" timeout="25" start-delay="25" params hostlist="xstorage2"
> >
> > location xstorage1-stonith-pref xstorage1-stonith -inf: xstorage1
> > location xstorage2-stonith-pref xstorage2-stonith -inf: xstorage2
> >
> > property stonith-action=poweroff
> >
> >
> > ===IP RESOURCES===
> >
> > primitive xstorage1_wan1_IP ocf:heartbeat:IPaddr params ip="1.2.3.4" cidr_netmask="255.255.255.0" nic="e1000g1"
> > primitive xstorage2_wan2_IP ocf:heartbeat:IPaddr params ip="1.2.3.5" cidr_netmask="255.255.255.0" nic="e1000g1"
> >
> > location xstorage1_wan1_IP_pref xstorage1_wan1_IP 100: xstorage1
> > location xstorage2_wan2_IP_pref xstorage2_wan2_IP 100: xstorage2
> >
> > ===================
> >
> > So I plumbed e1000g1 with no IP configured on both machines and
> > started corosync/pacemaker, and after some time I had both nodes
> > online and started, with the IPs configured as virtual interfaces
> > (e1000g1:1 and e1000g1:2), one on host1 and one on host2.
> >
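For context, on an illumos/Solaris-style system the IPaddr agent's effect is roughly the following (a sketch with the thread's example address; these are assumed commands, not the agent's exact implementation):

```shell
# Roughly what ocf:heartbeat:IPaddr does on start (illumos/Solaris
# style): add the virtual IP as a logical interface, e.g. e1000g1:1.
ifconfig e1000g1 addif 1.2.3.4 netmask 255.255.255.0 up
# ...and on stop, remove the logical interface again:
ifconfig e1000g1 removeif 1.2.3.4
```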
> > Then I halted host2, and I expected host1 to end up with both
> > IPs configured.
> > Instead, host1 had its own IP stopped and removed (only e1000g1,
> > unconfigured), while host2 was reported stopped with its IP still
> > marked as Started (!?).
> > Not exactly what I expected...
> > What's wrong?
>
> How did you stop host2? A graceful shutdown of pacemaker? If not ...
> Anyway, ssh-fencing only works if the machine is still
> running ...
> So the node will stay unclean, and pacemaker thus thinks that
> the IP might still be running on it. So this is actually the
> expected behavior.
> You might add a watchdog via sbd if you don't have other fencing
> hardware at hand ...
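A minimal sketch of that watchdog route via sbd, assuming a kernel watchdog device is available (the path and timeout below are illustrative assumptions, not values from this thread):

```shell
# Sketch: configure sbd to use the kernel watchdog device so a node
# that loses contact reliably self-fences.
# In /etc/sysconfig/sbd (assumed path):
#   SBD_WATCHDOG_DEV=/dev/watchdog
# Then tell Pacemaker it may count on watchdog self-fencing:
crm configure property stonith-watchdog-timeout=10s
```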
> >
> > Here is the crm status after I stopped host 2:
> >
> > 2 nodes and 4 resources configured
> >
> > Node xstorage2: UNCLEAN (offline)
> > Online: [ xstorage1 ]
> >
> > Full list of resources:
> >
> > xstorage1-stonith (stonith:external/ssh-sonicle): Started xstorage2 (UNCLEAN)
> > xstorage2-stonith (stonith:external/ssh-sonicle): Stopped
> > xstorage1_wan1_IP (ocf::heartbeat:IPaddr): Stopped
> > xstorage2_wan2_IP (ocf::heartbeat:IPaddr): Started xstorage2 (UNCLEAN)
> >
> >
> > Gabriele
> >
> >
> >
> >
> > _______________________________________________
> > Users mailing list: Users at clusterlabs.org
> > http://clusterlabs.org/mailman/listinfo/users
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started:
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
>
>
>
>
>