[Pacemaker] cloned IPaddr2 on 4 nodes

Vladimir Legeza vladimir.legeza at gmail.com
Thu Oct 28 14:00:39 UTC 2010


Hello folks.

I'm trying to set up four IP-load-balanced nodes, but I haven't found the
right way to keep the load balanced across the nodes when some of them have
failed.

Here is what I've done:

[root@node1 ~]# crm configure show
node node1
node node2
node node3
node node4
primitive ClusterIP ocf:heartbeat:IPaddr2 \
    params ip="10.138.10.252" cidr_netmask="32" clusterip_hash="sourceip-sourceport" \
    op monitor interval="30s"
clone StreamIP ClusterIP \
    meta globally-unique="true" clone-max="8" clone-node-max="2" \
        target-role="Started" notify="true" ordered="true" interleave="true"
property $id="cib-bootstrap-options" \
    dc-version="1.0.9-0a40fd0cb9f2fcedef9d1967115c912314c57438" \
    cluster-infrastructure="openais" \
    expected-quorum-votes="4" \
    no-quorum-policy="ignore" \
    stonith-enabled="false"
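
(For context, as far as I understand it, a globally-unique IPaddr2 clone
implements this with the iptables CLUSTERIP target: all nodes share the IP
and a multicast MAC, incoming connections are hashed into clone-max buckets,
and each clone instance answers for one bucket. Each instance ends up
installing roughly the rule below - illustrative only; the MAC and interface
are made up here, IPaddr2 derives the real values:

iptables -I INPUT -d 10.138.10.252 -i eth0 -j CLUSTERIP --new \
    --hashmode sourceip-sourceport --clustermac 01:00:5E:0A:8A:FC \
    --total-nodes 8 --local-node 1
)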

When all the nodes are up and running:

[root@node1 ~]# crm status
============
Last updated: Thu Oct 28 17:26:13 2010
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.9-0a40fd0cb9f2fcedef9d1967115c912314c57438
4 Nodes configured, 4 expected votes
2 Resources configured.
============

Online: [ node1 node2 node3 node4 ]

 Clone Set: StreamIP (unique)
     ClusterIP:0    (ocf::heartbeat:IPaddr2):    Started node1
     ClusterIP:1    (ocf::heartbeat:IPaddr2):    Started node1
     ClusterIP:2    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:3    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:4    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:5    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:6    (ocf::heartbeat:IPaddr2):    Started node4
     ClusterIP:7    (ocf::heartbeat:IPaddr2):    Started node4
Everything is OK and each node takes 1/4 of all traffic - wonderful.
But we suffer a 25% traffic loss if one of them goes down:
[root@node1 ~]# crm node standby node1
[root@node1 ~]# crm status
============
Last updated: Thu Oct 28 17:30:01 2010
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.9-0a40fd0cb9f2fcedef9d1967115c912314c57438
4 Nodes configured, 4 expected votes
2 Resources configured.
============

Node node1: standby
Online: [ node2 node3 node4 ]

 Clone Set: StreamIP (unique)
     ClusterIP:0    (ocf::heartbeat:IPaddr2):    Stopped
     ClusterIP:1    (ocf::heartbeat:IPaddr2):    Stopped
     ClusterIP:2    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:3    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:4    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:5    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:6    (ocf::heartbeat:IPaddr2):    Started node4
     ClusterIP:7    (ocf::heartbeat:IPaddr2):    Started node4
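
(If I read this right, with clone-node-max="2" each surviving node is
already running its two instances, so ClusterIP:0 and ClusterIP:1 have
nowhere to start, and the two hash buckets they owned simply go unanswered -
hence the 25% loss. Assuming the stock ipt_CLUSTERIP kernel module, which
buckets a host currently answers for can be checked on each node with:

[root@node2 ~]# cat /proc/net/ipt_CLUSTERIP/10.138.10.252
)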

I found a solution (to prevent the loss) by setting clone-node-max to 3:

[root@node1 ~]# crm resource meta StreamIP set clone-node-max 3
[root@node1 ~]# crm status
============
Last updated: Thu Oct 28 17:35:05 2010
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.9-0a40fd0cb9f2fcedef9d1967115c912314c57438
4 Nodes configured, 4 expected votes
2 Resources configured.
============

Node node1: standby
Online: [ node2 node3 node4 ]

 Clone Set: StreamIP (unique)
     ClusterIP:0    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:1    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:2    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:3    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:4    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:5    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:6    (ocf::heartbeat:IPaddr2):    Started node4
     ClusterIP:7    (ocf::heartbeat:IPaddr2):    Started node4

The problem is that nothing changes when node1 comes back online:

[root@node1 ~]# crm node online node1
[root@node1 ~]# crm status
============
Last updated: Thu Oct 28 17:37:43 2010
Stack: openais
Current DC: node2 - partition with quorum
Version: 1.0.9-0a40fd0cb9f2fcedef9d1967115c912314c57438
4 Nodes configured, 4 expected votes
2 Resources configured.
============

Online: [ node1 node2 node3 node4 ]

 Clone Set: StreamIP (unique)
     ClusterIP:0    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:1    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:2    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:3    (ocf::heartbeat:IPaddr2):    Started node2
     ClusterIP:4    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:5    (ocf::heartbeat:IPaddr2):    Started node3
     ClusterIP:6    (ocf::heartbeat:IPaddr2):    Started node4
     ClusterIP:7    (ocf::heartbeat:IPaddr2):    Started node4
There is NO traffic on node1.
If I set clone-node-max back to 2, all nodes revert to the original state.
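
That is, a single command immediately rebalances the instances back to two
per node:

[root@node1 ~]# crm resource meta StreamIP set clone-node-max 2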



So, my question is: how can I avoid such "hand-made" changes (or is it
possible to automate the clone-node-max adjustments, e.g. with something
like the sketch below)?
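
For illustration, this is the kind of automation I have in mind - an
untested sketch that recomputes clone-node-max from the number of nodes in
the current partition (it assumes crm_node -p prints the online members):

#!/bin/sh
# Untested sketch: resize clone-node-max so that all clone-max
# instances can still be placed on whatever nodes are online.
CLONE_MAX=8
RESOURCE=StreamIP

# crm_node -p prints the member nodes of the current partition
ONLINE=$(crm_node -p | wc -w)
[ "$ONLINE" -gt 0 ] || exit 1

# ceiling division: minimum instances per node to place all of them
PER_NODE=$(( (CLONE_MAX + ONLINE - 1) / ONLINE ))

crm resource meta $RESOURCE set clone-node-max $PER_NODE

Running it periodically (e.g. from cron) would cover both node failure and
node recovery.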

Thanks!