[ClusterLabs] IPaddr2 RA and bonding

Tomer Azran tomer.azran at edp.co.il
Thu Aug 10 11:02:16 UTC 2017


That looks like exactly what I needed - it works.
I had to change the RA since I don't want to give an interface name as a parameter (it might change from server to server, and I want to create a cloned resource).
I changed the RA a little bit so it can guess the interface name based on an IP address parameter.
The new RA is published on my github repo: https://github.com/tomerazran/Pacemaker-Resource-Agents/blob/master/ipspeed 

Just to document the solution in case anyone else needs it, I ran the following setup:

 # pcs resource create vip ocf:heartbeat:IPaddr2 ip=192.168.1.3 op monitor interval=30
 # pcs resource create vip_speed ocf:heartbeat:ipspeed ip=192.168.1.3 name=vip_speed op monitor interval=5s --clone
 # pcs constraint location vip rule score=-INFINITY vip_speed lt 1 or not_defined vip_speed
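For anyone reproducing this, the attribute value the clone maintains can be checked per node with attrd_updater (illustrative only - it assumes the name=vip_speed parameter from the setup above and a running cluster; the -A flag for querying all nodes is only present in newer Pacemaker versions):

```shell
# Query the vip_speed node attribute on the local node:
attrd_updater -Q -n vip_speed
# Or, on newer Pacemaker versions, query it on all nodes at once:
attrd_updater -Q -n vip_speed -A
```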

Thank you for the support,
Tomer.


-----Original Message-----
From: Vladislav Bogdanov [mailto:bubble at hoster-ok.com] 
Sent: Monday, August 7, 2017 9:22 PM
To: users at clusterlabs.org
Subject: Re: [ClusterLabs] IPaddr2 RA and bonding

07.08.2017 20:39, Tomer Azran wrote:
> I don't want to use this approach since I don't want to depend on pinging another host or a couple of hosts.
> Is there any other solution?
> I'm thinking of writing a simple script that will take the bond down 
> using the ifdown command when no slaves are available, and putting it in 
> /sbin/ifdown-local
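For reference, such an /sbin/ifdown-local hook might look something like the sketch below (hypothetical - the bond name, the sysfs paths, and the decision to call ifdown are assumptions, and the ifspeed approach in the reply avoids needing it at all):

```shell
# Hypothetical sketch: take a bond down when none of its slaves has carrier.
# SYS is parameterized so the logic can be exercised outside /sys.
SYS=${SYS:-/sys/class/net}

bond_has_carrier() {
    local bond=$1 slave
    for slave in $(cat "$SYS/$bond/bonding/slaves" 2>/dev/null); do
        # /sys/class/net/<dev>/carrier reads 1 when the link is up
        [ "$(cat "$SYS/$slave/carrier" 2>/dev/null)" = "1" ] && return 0
    done
    return 1   # no slave has link
}

if ! bond_has_carrier bond1; then
    echo "no slave of bond1 has carrier; would run: ifdown bond1"
fi
```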

For a similar purpose I wrote, and use, this one - https://github.com/ClusterLabs/pacemaker/blob/master/extra/resources/ifspeed

It sets a node attribute on which other resources may depend via a location constraint - http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch08.html#ch-rules

It is not installed by default, and that should probably be fixed.

That RA supports bonds (and bridges), and even tries to guess the actual resulting bond speed based on the bond type. For a load-balancing bond like LACP (mode 4), it uses a coefficient of 0.8 (IIRC) to reflect the actual achievable load across multiple links.
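The speed-guessing logic described above can be sketched roughly like this (a simplified illustration only, not the actual ifspeed code - the real RA reads slave speeds from sysfs/ethtool and handles more bonding modes):

```shell
# Rough sketch of guessing an effective bond speed from slave speeds.
# Mode numbers follow the Linux bonding driver (1 = active-backup,
# 4 = 802.3ad/LACP); the 0.8 coefficient is the one mentioned above.
bond_effective_speed() {
    local mode=$1; shift
    local total=0 max=0 s
    for s in "$@"; do              # slave speeds in Mb/s
        total=$((total + s))
        [ "$s" -gt "$max" ] && max=$s
    done
    case $mode in
        4) echo $((total * 8 / 10)) ;;  # LACP: aggregate, discounted by 0.8
        1) echo "$max" ;;               # active-backup: one active slave
        *) echo "$total" ;;             # other balancing modes: plain sum
    esac
}

bond_effective_speed 4 1000 1000   # two 1 Gb/s slaves in LACP mode: prints 1600
```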

>
>
> -----Original Message-----
> From: Ken Gaillot [mailto:kgaillot at redhat.com]
> Sent: Monday, August 7, 2017 7:14 PM
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed <users at clusterlabs.org>
> Subject: Re: [ClusterLabs] IPaddr2 RA and bonding
>
> On Mon, 2017-08-07 at 10:02 +0000, Tomer Azran wrote:
>> Hello All,
>>
>>
>>
>> We are using CentOS 7.3 with pacemaker in order to create a cluster.
>>
>> Each cluster node has a bonding interface consisting of two NICs.
>>
>> The cluster has an IPAddr2 resource configured like that:
>>
>>
>>
>> # pcs resource show cluster_vip
>>
>> Resource: cluster_vip (class=ocf provider=heartbeat type=IPaddr2)
>>
>>   Attributes: ip=192.168.1.3
>>
>>   Operations: start interval=0s timeout=20s (cluster_vip
>> -start-interval-0s)
>>
>>               stop interval=0s timeout=20s (cluster_vip
>> -stop-interval-0s)
>>
>>               monitor interval=30s (cluster_vip 
>> -monitor-interval-30s)
>>
>>
>>
>>
>>
>> We are running tests and want to simulate a state when the network 
>> links are down.
>>
>> We are pulling both network cables from the server.
>>
>>
>>
>> The problem is that the resource is not marked as failed, and the 
>> faulted node keep holding it and does not fail it over to the other 
>> node.
>>
>> I think that the problem is with the bond interface. The bond 
>> interface is marked as UP by the OS. It can even ping itself:
>>
>>
>>
>> # ip link show
>>
>> 2: eno3: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq 
>> master bond1 state DOWN mode DEFAULT qlen 1000
>>
>>     link/ether 00:1e:67:f6:5a:8a brd ff:ff:ff:ff:ff:ff
>>
>> 3: eno4: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc mq 
>> master bond1 state DOWN mode DEFAULT qlen 1000
>>
>>     link/ether 00:1e:67:f6:5a:8a brd ff:ff:ff:ff:ff:ff
>>
>> 9: bond1: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc 
>> noqueue state DOWN mode DEFAULT qlen 1000
>>
>>     link/ether 00:1e:67:f6:5a:8a brd ff:ff:ff:ff:ff:ff
>>
>>
>>
>> As far as I understand, the IPaddr2 RA does not check the link state 
>> of the interface – what can be done?
>
> You are correct. The IP address itself *is* up, even if the link is down, and it can be used locally on that host.
>
> If you want to monitor connectivity to other hosts, you have to do that separately. The most common approach is to use the ocf:pacemaker:ping resource. See:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_moving_resources_due_to_connectivity_changes
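A minimal ocf:pacemaker:ping setup along those lines might look like this (the host_list IP and the resource names are placeholders to adapt; the pingd attribute name is the agent's default):

```shell
# Clone a ping resource on every node; it maintains a "pingd" node attribute.
pcs resource create ping ocf:pacemaker:ping host_list=192.168.1.1 \
    op monitor interval=10s --clone
# Keep the VIP off any node that cannot reach the ping target.
pcs constraint location cluster_vip rule score=-INFINITY \
    pingd lt 1 or not_defined pingd
```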
>
>> BTW, I tried to find a bonding configuration option that disables the 
>> bond when no link is up, but I didn't find one.
>>
>>
>>
>> Tomer.
>>
>>
>> _______________________________________________
>> Users mailing list: Users at clusterlabs.org 
>> http://lists.clusterlabs.org/mailman/listinfo/users
>>
>> Project Home: http://www.clusterlabs.org Getting started:
>> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>
> --
> Ken Gaillot <kgaillot at redhat.com>
>
>
>

