[ClusterLabs] Users Digest, Vol 29, Issue 10
ashutosh tiwari
ashutosh.kvas at gmail.com
Fri Jun 9 10:06:43 EDT 2017
On 06/09/2017 07:43 AM, ashutosh tiwari wrote:
> Hi,
>
> We have a two-node cluster (ACTIVE/STANDBY).
> Recently we moved these nodes to KVM.
>
> When we create a private virtual network and use this vnet for the
> cluster interfaces, things work as expected and both nodes are able
> to form the cluster.
>
> The nodes are not able to form the cluster when we use macvtap
> (bridge) interfaces for the cluster links.
>
> We came across this:
>
> https://github.com/vagrant-libvirt/vagrant-libvirt/issues/650
>
Seems to deal with multicast issues...
Wouldn't using corosync with unicast be a possibility?
Regards,
Klaus
Hi Klaus,

Once we are sure that it cannot be achieved with multicast, we can
probably look at using unicast for corosync communication (a rough
sketch of such a configuration is below).
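
A minimal corosync.conf sketch for unicast (udpu) transport; the
cluster name, addresses, and node IDs below are placeholders, not our
actual configuration:

    totem {
        version: 2
        cluster_name: mycluster
        transport: udpu
    }

    nodelist {
        node {
            ring0_addr: 192.168.122.11
            nodeid: 1
        }
        node {
            ring0_addr: 192.168.122.12
            nodeid: 2
        }
    }

    quorum {
        provider: corosync_votequorum
        two_node: 1
    }
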
Regards,
Ashutosh
On Fri, Jun 9, 2017 at 3:30 PM, <users-request at clusterlabs.org> wrote:
> Send Users mailing list submissions to
> users at clusterlabs.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://lists.clusterlabs.org/mailman/listinfo/users
> or, via email, send a message with subject or body 'help' to
> users-request at clusterlabs.org
>
> You can reach the person managing the list at
> users-owner at clusterlabs.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Users digest..."
>
>
> Today's Topics:
>
> 1. Re: Node attribute disappears when pacemaker is started
> (Ken Gaillot)
> 2. Re: IPMI and APC switched PDUs fencing agents
> (Jean-Francois Malouin)
> 3. cluster setup for nodes at KVM guest (ashutosh tiwari)
> 4. Re: cluster setup for nodes at KVM guest (Klaus Wenninger)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Thu, 8 Jun 2017 09:42:53 -0500
> From: Ken Gaillot <kgaillot at redhat.com>
> To: users at clusterlabs.org
> Subject: Re: [ClusterLabs] Node attribute disappears when pacemaker is started
> Message-ID: <5db7ebbe-be2a-c626-932e-27b56969b045 at redhat.com>
> Content-Type: text/plain; charset=utf-8
>
> Hi,
>
> Looking at the incident around May 26 16:40:00, here is what happens:
>
> You are setting the attribute for rhel73-2 from rhel73-1, while rhel73-2
> is not part of the cluster from rhel73-1's point of view.
>
> The crm shell sets the node attribute for rhel73-2 with a CIB
> modification that starts like this:
>
> ++ /cib/configuration/nodes: <node uname="rhel73-2" id="rhel73-2"/>
>
> Note that the node ID is the same as its name. The CIB accepts the
> change (because you might be adding the proper node later). The crmd
> knows that this is not currently valid:
>
> May 26 16:39:39 rhel73-1 crmd[2908]: error: Invalid node id: rhel73-2
>
> When rhel73-2 joins the cluster, rhel73-1 learns its node ID, and it
> removes the existing (invalid) rhel73-2 entry, including its attributes,
> because it assumes that the entry is for an older node that has been
> removed.
>
> I believe attributes can be set for a node that's not in the cluster
> only if the node IDs are specified explicitly in corosync.conf.
>
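> For illustration, a corosync.conf nodelist with explicit node IDs might
> look like this (a sketch reusing the node names from this thread; it is
> not taken from the poster's actual configuration):
>
>     nodelist {
>         node {
>             ring0_addr: rhel73-1
>             nodeid: 1
>         }
>         node {
>             ring0_addr: rhel73-2
>             nodeid: 2
>         }
>     }
>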
> You may want to mention the issue to the crm shell developers. It should
> probably at least warn if the node isn't known.
>
>
> On 05/31/2017 09:35 PM, Kazunori INOUE wrote:
> > Hi Ken,
> >
> > I'm sorry, the attachment size was too large.
> > I attached it to GitHub instead; please take a look:
> > https://github.com/inouekazu/pcmk_report/blob/master/pcmk-Fri-26-May-2017.tar.bz2
> >
> >> -----Original Message-----
> >> From: Ken Gaillot [mailto:kgaillot at redhat.com]
> >> Sent: Thursday, June 01, 2017 8:43 AM
> >> To: users at clusterlabs.org
> >> Subject: Re: [ClusterLabs] Node attribute disappears when pacemaker is started
> >>
> >> On 05/26/2017 03:21 AM, Kazunori INOUE wrote:
> >>> Hi Ken,
> >>>
> >>> I got crm_report.
> >>>
> >>> Regards,
> >>> Kazunori INOUE
> >>
> >> I don't think it attached -- my mail client says it's 0 bytes.
> >>
> >>>> -----Original Message-----
> >>>> From: Ken Gaillot [mailto:kgaillot at redhat.com]
> >>>> Sent: Friday, May 26, 2017 4:23 AM
> >>>> To: users at clusterlabs.org
> >>>> Subject: Re: [ClusterLabs] Node attribute disappears when pacemaker is started
> >>>>
> >>>> On 05/24/2017 05:13 AM, Kazunori INOUE wrote:
> >>>>> Hi,
> >>>>>
> >>>>> After loading the node attribute, when I start pacemaker on that node, the attribute disappears.
> >>>>>
> >>>>> 1. Start pacemaker on node1.
> >>>>> 2. Load a configuration containing a node attribute for node2.
> >>>>> (I use multicast addresses in corosync, so I did not set "nodelist {nodeid: }" in corosync.conf.)
> >>>>> 3. Start pacemaker on node2; the node attribute that should have been loaded disappears.
> >>>>> Is this the intended behavior?
> >>>>
> >>>> Hi,
> >>>>
> >>>> No, this should not happen for a permanent node attribute.
> >>>>
> >>>> Transient node attributes (status-attr in crm shell) are erased when
> >>>> the node starts, so it would be expected in that case.
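> >>>>
> >>>> For example (a sketch, not from the original exchange), crm_attribute's
> >>>> --lifetime option selects between the two kinds:
> >>>>
> >>>> # permanent: stored in the configuration, survives node restart
> >>>> crm_attribute --node rhel73-2 --name attrname --update attr2 --lifetime forever
> >>>> # transient: stored in the status section, erased when the node starts
> >>>> crm_attribute --node rhel73-2 --name attrname --update attr2 --lifetime reboot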
> >>>>
> >>>> I haven't been able to reproduce this with a permanent node attribute.
> >>>> Can you attach logs from both nodes around the time node2 is started?
> >>>>
> >>>>>
> >>>>> 1.
> >>>>> [root at rhel73-1 ~]# systemctl start corosync;systemctl start pacemaker
> >>>>> [root at rhel73-1 ~]# crm configure show
> >>>>> node 3232261507: rhel73-1
> >>>>> property cib-bootstrap-options: \
> >>>>> have-watchdog=false \
> >>>>> dc-version=1.1.17-0.1.rc2.el7-524251c \
> >>>>> cluster-infrastructure=corosync
> >>>>>
> >>>>> 2.
> >>>>> [root at rhel73-1 ~]# cat rhel73-2.crm
> >>>>> node rhel73-2 \
> >>>>> utilization capacity="2" \
> >>>>> attributes attrname="attr2"
> >>>>>
> >>>>> [root at rhel73-1 ~]# crm configure load update rhel73-2.crm
> >>>>> [root at rhel73-1 ~]# crm configure show
> >>>>> node 3232261507: rhel73-1
> >>>>> node rhel73-2 \
> >>>>> utilization capacity=2 \
> >>>>> attributes attrname=attr2
> >>>>> property cib-bootstrap-options: \
> >>>>> have-watchdog=false \
> >>>>> dc-version=1.1.17-0.1.rc2.el7-524251c \
> >>>>> cluster-infrastructure=corosync
> >>>>>
> >>>>> 3.
> >>>>> [root at rhel73-1 ~]# ssh rhel73-2 'systemctl start corosync;systemctl start pacemaker'
> >>>>> [root at rhel73-1 ~]# crm configure show
> >>>>> node 3232261507: rhel73-1
> >>>>> node 3232261508: rhel73-2
> >>>>> property cib-bootstrap-options: \
> >>>>> have-watchdog=false \
> >>>>> dc-version=1.1.17-0.1.rc2.el7-524251c \
> >>>>> cluster-infrastructure=corosync
> >>>>>
> >>>>> Regards,
> >>>>> Kazunori INOUE
>
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 8 Jun 2017 16:40:59 -0400
> From: Jean-Francois Malouin <Jean-Francois.Malouin at bic.mni.mcgill.ca>
> To: Cluster Labs - All topics related to open-source clustering
> welcomed <users at clusterlabs.org>
> Subject: Re: [ClusterLabs] IPMI and APC switched PDUs fencing agents
> Message-ID: <20170608204059.GC11019 at bic.mni.mcgill.ca>
> Content-Type: text/plain; charset=us-ascii
>
> * Jean-Francois Malouin <Jean-Francois.Malouin at bic.mni.mcgill.ca>
> [20170607 13:09]:
> > Hi,
>
> ..snip...
>
> > I'm having some difficulty understanding the fencing_topology syntax.
> > Making the changes adapted to my local configuration, I get these
> > warnings when trying to add the fencing topology:
> >
> > ~# crm configure fencing_topology antonio: fence_antonio_ipmi fence_antonio_psu1_off,fence_antonio_psu2_off,fence_antonio_psu1_on,fence_antonio_psu2_on \
> > leonato: fence_leonato_ipmi fence_leonato_psu1_off,fence_leonato_psu2_off,fence_leonato_psu1_on,fence_leonato_psu2_on
> > WARNING: fencing_topology: target antonio not a node
> > WARNING: fencing_topology: target antonio not a node
> > WARNING: fencing_topology: target leonato not a node
> > WARNING: fencing_topology: target leonato not a node
>
> In retrospect I should have used the long (FQDN) names of the nodes
> rather than the short names, which, as reported, are not node names.
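>
> (One way to check the exact name the cluster uses for each node,
> sketched here assuming pacemaker's crm_node tool is available:)
>
> ~# crm_node -n    # print this node's name as the cluster sees it
> ~# crm_node -l    # list all known nodes with their IDs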
>
> > What should I use for the <node> in:
> >
> > fencing_topology <stonith_resources> [<stonith_resources> ...]
> > fencing_topology <fencing_order> [<fencing_order> ...]
> >
> > fencing_order :: <target> <stonith_resources> [<stonith_resources> ...]
> >
> > stonith_resources :: <rsc>[,<rsc>...]
> > target :: <node>: | attr:<node-attribute>=<value>
>
> I tried to use the node names (FQDN) when adding the fencing_topology
> resource to the CIB, but I always got an error from crm about an
> invalid DTD/schema. In the end I had to add a 'dummy' attribute to
> each node (dummy=<node_name>) and use those to create the fence
> levels, as in:
>
> crm configure fencing_topology \
> attr:dummy=node1 fence_node1_ipmi fence_node1_psu1_off,fence_node1_psu2_off,fence_node1_psu1_on,fence_node1_psu2_on \
> attr:dummy=node2 fence_node2_ipmi fence_node2_psu1_off,fence_node2_psu2_off,fence_node2_psu1_on,fence_node2_psu2_on
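>
> (For reference, a sketch of one way to set such a per-node attribute,
> using pacemaker's crm_attribute; the node names are the placeholders
> from the example above:)
>
> crm_attribute --type nodes --node node1 --name dummy --update node1
> crm_attribute --type nodes --node node2 --name dummy --update node2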
>
> Is this the expected behaviour?
>
> Thanks,
> jf
>
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 9 Jun 2017 11:13:36 +0530
> From: ashutosh tiwari <ashutosh.kvas at gmail.com>
> To: users at clusterlabs.org
> Subject: [ClusterLabs] cluster setup for nodes at KVM guest
> Message-ID:
> <CA+vEgjj1a7gKYReY-YcFootNHmkAzGJf_3XHxJehW2S0vMrgaA at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> We have a two-node cluster (ACTIVE/STANDBY).
> Recently we moved these nodes to KVM.
>
> When we create a private virtual network and use this vnet for the
> cluster interfaces, things work as expected and both nodes are able to
> form the cluster.
>
> The nodes are not able to form the cluster when we use macvtap (bridge)
> interfaces for the cluster links.
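>
> (Not part of the original report, but one quick way to verify whether
> multicast actually works between the guests is omping, assuming it is
> installed and run on both nodes at once; the group address and node
> names are placeholders:)
>
> ~# omping -c 10 -m 239.192.0.1 node1 node2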
>
> We came across this:
>
> https://github.com/vagrant-libvirt/vagrant-libvirt/issues/650
>
> We tried the workaround suggested in the thread
> (trustGuestRxFilters='yes'), but it did not help.
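>
> (For reference, that workaround puts trustGuestRxFilters on the
> interface element of the libvirt domain XML; a minimal sketch, with the
> host device name as a placeholder:)
>
> <interface type='direct' trustGuestRxFilters='yes'>
>   <source dev='eth0' mode='bridge'/>
>   <model type='virtio'/>
> </interface>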
>
>
> Thanks and Regards,
> Ashutosh
>
> ------------------------------
>
> Message: 4
> Date: Fri, 9 Jun 2017 07:49:48 +0200
> From: Klaus Wenninger <kwenning at redhat.com>
> To: users at clusterlabs.org
> Subject: Re: [ClusterLabs] cluster setup for nodes at KVM guest
> Message-ID: <bcaac851-2c1f-3204-2839-9145a1b2acae at redhat.com>
> Content-Type: text/plain; charset=utf-8
>
> On 06/09/2017 07:43 AM, ashutosh tiwari wrote:
> > Hi,
> >
> > We have a two-node cluster (ACTIVE/STANDBY).
> > Recently we moved these nodes to KVM.
> >
> > When we create a private virtual network and use this vnet for the
> > cluster interfaces, things work as expected and both nodes are able
> > to form the cluster.
> >
> > The nodes are not able to form the cluster when we use macvtap
> > (bridge) interfaces for the cluster links.
> >
> > We came across this:
> >
> > https://github.com/vagrant-libvirt/vagrant-libvirt/issues/650
> >
>
> Seems to deal with multicast issues...
> Wouldn't using corosync with unicast be a possibility?
>
> Regards,
> Klaus
>
> > We tried the workaround suggested in the thread
> > (trustGuestRxFilters='yes'), but it did not help.
> >
> >
> > Thanks and Regards,
> > Ashutosh
> >
> >
> >
> > _______________________________________________
> > Users mailing list: Users at clusterlabs.org
> > http://lists.clusterlabs.org/mailman/listinfo/users
> >
> > Project Home: http://www.clusterlabs.org
> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
>
> ------------------------------
>
> _______________________________________________
> Users mailing list
> Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
>
> End of Users Digest, Vol 29, Issue 10
> *************************************
>