[ClusterLabs] Antw: Regarding IP tables and IP Address clone

Somanath Jeeva somanath.jeeva at ericsson.com
Mon Jan 4 14:15:28 UTC 2016


Hi,

I checked with the IT team.

No, the Multicast MAC is not getting added to the ARP table of the switch.
I will try adding the entry to the ARP table manually and check.

Regards
Somanath Thilak J

From: Michael Schwartzkopff [mailto:ms at sys4.de]
Sent: Thursday, December 31, 2015 00:49
To: Cluster Labs - All topics related to open-source clustering welcomed
Subject: Re: [ClusterLabs] Antw: Regarding IP tables and IP Address clone


On Wednesday, 30 December 2015 at 14:56:58, Somanath Jeeva wrote:

> >From: Michael Schwartzkopff [mailto:ms at sys4.de]
> >Sent: Wednesday, December 30, 2015 8:09 PM
> >To: Cluster Labs - All topics related to open-source clustering welcomed
> >Subject: Re: [ClusterLabs] Antw: Regarding IP tables and IP Address clone
>
> On Wednesday, 30 December 2015 at 13:54:40, Somanath Jeeva wrote:
> > >>>> Somanath Jeeva <somanath.jeeva at ericsson.com> wrote on
> > >>>> 30.12.2015 at 11:34 in message
> > >>>> <4F5E5141ED95FF45B3128F3C7B1B2A6721ABFE13 at eusaamb109.ericsson.se>:

> > >> On 12/22/2015 08:09 AM, Somanath Jeeva wrote:
> > >>> Hi
> > >>>
> > >>> I am trying to use IP load balancing with the cloning feature in
> > >>> pacemaker, but after 15 min the virtual IP becomes unreachable.
> > >>> Below is the pacemaker cluster config:
> > >>>
> > >>> # pcs status
> > >>> Cluster name: DES
> > >>> Last updated: Tue Dec 22 08:57:55 2015
> > >>> Last change: Tue Dec 22 08:10:22 2015
> > >>> Stack: cman
> > >>> Current DC: node-01 - partition with quorum
> > >>> Version: 1.1.11-97629de
> > >>> 2 Nodes configured
> > >>> 2 Resources configured
> > >>>
> > >>> Online: [ node-01 node-02 ]
> > >>>
> > >>> Full list of resources:
> > >>>  Clone Set: ClusterIP-clone [ClusterIP] (unique)
> > >>>      ClusterIP:0 (ocf::heartbeat:IPaddr2): Started node-01
> > >>>      ClusterIP:1 (ocf::heartbeat:IPaddr2): Started node-02
> > >>>
> > >>> # pcs config
> > >>> Cluster Name: DES
> > >>> Corosync Nodes:
> > >>>  node-01 node-02
> > >>> Pacemaker Nodes:
> > >>>  node-01 node-02
> > >>>
> > >>> Resources:
> > >>>  Clone: ClusterIP-clone
> > >>>   Meta Attrs: clone-max=2 clone-node-max=2 globally-unique=true
> > >>>   Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
> > >>>    Attributes: ip=10.61.150.55 cidr_netmask=23 clusterip_hash=sourceip
> > >>>    Operations: start interval=0s timeout=20s (ClusterIP-start-timeout-20s)
> > >>>                stop interval=0s timeout=20s (ClusterIP-stop-timeout-20s)
> > >>>                monitor interval=5s (ClusterIP-monitor-interval-5s)
> > >>>
> > >>> Stonith Devices:
> > >>> Fencing Levels:
> > >>>
> > >>> Location Constraints:
> > >>> Ordering Constraints:
> > >>> Colocation Constraints:
> > >>>
> > >>> Cluster Properties:
> > >>>  cluster-infrastructure: cman
> > >>>  cluster-recheck-interval: 0
> > >>>  dc-version: 1.1.11-97629de
> > >>>  stonith-enabled: false
> > >>>
> > >>> Pacemaker and Corosync version:
> > >>> Pacemaker - 1.1.12-4
> > >>> Corosync - 1.4.7

> > >>>

> > >>>
> > >>> Is the issue due to a configuration error or a firewall issue?
> > >>>
> > >>> With Regards
> > >>> Somanath Thilak J

> > >> Hi Somanath,
> > >>
> > >> The configuration looks fine (aside from fencing not being configured),
> > >> so I'd suspect a network issue.
> > >>
> > >> The IPaddr2 cloning relies on multicast MAC addresses (at the Ethernet
> > >> level, not multicast IP), and many switches have issues with that. Make
> > >> sure your switch supports multicast MAC (and if necessary, has it
> > >> enabled on the relevant ports).
> > >>
> > >> Some people have found it necessary to add a static ARP entry for the
> > >> cluster IP/MAC in their firewall and/or switch.

> > >> Hi,
> > >>
> > >> It seems that the switches have multicast support enabled. Any idea on
> > >> how to troubleshoot the issue? I also tried adding the multicast MAC
> > >> to the ip neigh tables. Still the virtual IP goes down in 15 min or so.
> > >
> > >Did you try a "watch arp -vn" on your nodes to watch for changes (if
> > >you only have a few connections)?

> >

> > I could not see my virtual IP in the arp -vn command output. Only if I
> > add the static ARP entry can I see the virtual IP in the command output.
> > I see the virtual IP and MAC only in the iptables, ip addr and ip maddr
> > command outputs.

> > # service iptables status
> > Table: filter
> > Chain INPUT (policy ACCEPT)
> > num  target     prot opt source     destination
> > 1    CLUSTERIP  all  --  0.0.0.0/0  10.61.150.55  CLUSTERIP
> >      hashmode=sourceip clustermac=51:33:83:16:0A:BF total_nodes=2
> >      local_node=2 hash_init=0
> > 2    ACCEPT     all  --  0.0.0.0/0  0.0.0.0/0
> >
> > Chain FORWARD (policy ACCEPT)
> > num  target     prot opt source     destination
> >
> > Chain OUTPUT (policy ACCEPT)
> > num  target     prot opt source     destination
> > 1    ACCEPT     all  --  0.0.0.0/0  0.0.0.0/0
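For context, the INPUT rule shown above is what the IPaddr2 agent programs through the xt_CLUSTERIP iptables target. Created by hand it would look roughly like this (a sketch only; `-i bond0` is an assumption, the other values are taken from the output above):

```shell
# Node 2 of 2: accept packets for the shared VIP whose source-IP hash
# maps to this node; everything else for the VIP is silently dropped.
iptables -I INPUT -d 10.61.150.55 -i bond0 -j CLUSTERIP --new \
    --hashmode sourceip --clustermac 51:33:83:16:0A:BF \
    --total-nodes 2 --local-node 2
```

Both nodes keep the VIP configured and the kernel filters by hash, which is why the VIP shows up in `iptables` and `ip addr` but behaves unlike a normal address at the ARP layer.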

> > # ip addr show bond0
> > 6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
> >     link/ether 00:0c:29:32:8d:b9 brd ff:ff:ff:ff:ff:ff
> >     inet 10.61.150.212/23 brd 10.61.151.255 scope global bond0
> >     inet 10.61.150.55/23 brd 10.61.151.255 scope global secondary bond0
> >     inet6 fe80::20c:29ff:fe32:8db9/64 scope link tentative dadfailed
> >        valid_lft forever preferred_lft forever
> >
> > # ip maddr show bond0
> > 6: bond0
> >     link 51:33:83:16:0a:bf
> >     link 01:00:5e:01:01:02
> >     link 33:33:ff:32:8d:b9
> >     link 33:33:00:00:00:01
> >     link 33:33:00:00:02:02
> >     link 33:33:00:75:00:75
> >     link 01:00:5e:00:00:01
> >     inet 224.1.1.2
> >     inet 224.0.0.1
> >     inet6 ff02::1:ff32:8db9
> >     inet6 ff0e::75:75
> >     inet6 ff02::202
> >     inet6 ff02::1
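The first `link` entry above is the CLUSTERIP MAC. Whether a MAC address is multicast is decided by the I/G bit, the least-significant bit of its first octet; a quick standalone check (plain POSIX shell, nothing cluster-specific assumed):

```shell
# mac_type: print "multicast" or "unicast" for a colon-separated MAC address.
mac_type() {
    first=$(printf '%s' "$1" | cut -d: -f1)   # first octet, e.g. "51"
    if [ $(( 0x$first & 1 )) -eq 1 ]; then    # I/G bit set => group address
        echo multicast
    else
        echo unicast
    fi
}

mac_type 51:33:83:16:0a:bf   # the clustermac -> multicast
mac_type 00:0c:29:32:8d:b9   # bond0's burned-in MAC -> unicast
```

That is why the VIP's MAC appears under `ip maddr` (the interface's multicast filter list) rather than in ordinary `arp -vn` output.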

> > >>
> > >> Regards
> > >> Somanath Thilak J

> >Hi,
> >
> >Instead of wild guessing, you should do some more systematic research.
> >
> >- If your VIP becomes inaccessible, what are the ARP requests on the
> >network? tcpdump is your friend ;-)

> I already took tcpdump captures both when the VIP was reachable and when
> it was not.
>
> Here is the output I got.
>
> When reachable:
>
> 08:39:24.639101 IP (tos 0x0, ttl 64, id 11469, offset 0, flags [DF], proto
> TCP (6), length 100) 10.61.150.55.ssh > 136.225.198.11.41071: Flags [P.],
> cksum 0xefb7 (incorrect -> 0x7599), seq 3590:3638, ack 2486, win 175,
> options [nop,nop,TS val 2343972672 ecr 2807901513], length 48
> 08:39:24.639692 IP (tos 0x10, ttl 61, id 4485, offset 0, flags [DF], proto
> TCP (6), length 500) 136.225.198.11.41071 > 10.61.150.55.ssh: Flags [P.],
> cksum 0xd1c3 (correct), seq 2486:2934, ack 3638, win 175, options
> [nop,nop,TS val 2807901735 ecr 2343972672], length 448
> 08:39:24.639728 IP (tos 0x0, ttl 64, id 11470, offset 0, flags [DF],
> proto TCP (6), length 52)
>
> When not reachable:
>
> 08:46:53.936447 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has
> 10.61.150.55 tell 10.61.150.2, length 46
> 08:46:53.936474 ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.61.150.55
> is-at 51:33:83:16:0a:bf (oui Unknown), length 28
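The capture above shows the node does answer the ARP request. Extracting the advertised hardware address from that reply line makes the likely problem visible: the node advertises the CLUSTERIP multicast MAC, which many routers refuse to accept into their ARP cache (a plain text-processing sketch, no network access needed):

```shell
# Pull the "is-at" hardware address out of a tcpdump ARP reply line.
reply='08:46:53.936474 ARP, Ethernet (len 6), IPv4 (len 4), Reply 10.61.150.55 is-at 51:33:83:16:0a:bf (oui Unknown), length 28'
mac=$(printf '%s\n' "$reply" | sed -n 's/.*is-at \([0-9a-f:]\{17\}\).*/\1/p')
echo "$mac"   # 51:33:83:16:0a:bf
```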

> >- As I told you before, check the MAC address tables of your switch. Is it
> >OK?
> I checked with the systems team. They said MAC multicasting is enabled.
> Also, do I have to add anything on the switch side manually?



You have to answer the questions if you want help from the community.

But again, in other words: does the MAC address for your VIP 10.61.150.55 appear in the MAC address table of your switch, or does it not? Please answer this question with "yes" or "no".







> >- Check the ARP tables of the sending host / router. Is there an entry for
> >the VIP? With the correct (multicast!) MAC?

> Another point: I am trying this configuration in a clustered environment,
> and the said virtual IP is always reachable within the cluster.





As above: answer the questions. But from the tcpdump I learn that the router apparently did not learn the MAC address. Could you please send a tcpdump from the router showing whether the ARP replies reach it?
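A capture of the kind requested here might look like this on the router (the interface name is an assumption; `-e` prints the link-level headers so the multicast destination MAC of any replies is visible):

```shell
# On the router: show ARP traffic for the VIP together with Ethernet headers.
tcpdump -e -n -i eth0 arp and host 10.61.150.55
```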



> But when I ping from outside the cluster, it is not reachable after some
> time. If I restart iptables, it becomes reachable for a while.



This seems to be irrelevant here.





Kind regards,



Michael Schwartzkopff



--

[*] sys4 AG



http://sys4.de, +49 (89) 30 90 46 64, +49 (162) 165 0044

Franziskanerstraße 15, 81669 München



Registered office: München, Amtsgericht München: HRB 199263
Management board (Vorstand): Patrick Ben Koetter, Marc Schiffbauer
Chairman of the supervisory board: Florian Kirstein