[Pacemaker] Corosync's UDPu transport for public IP addresses?

Dmitry Koterov <dmitry.koterov@gmail.com>
Mon Dec 29 03:11:49 UTC 2014


Hello.

I have a geographically distributed cluster; all machines have public IP
addresses. There is no shared private subnet, so multicast is not available.

I thought the UDPu transport was supposed to work in such an environment;
is that right?

To test everything in advance, I've set up corosync + pacemaker on Ubuntu
14.04 with the following corosync.conf:

totem {
  transport: udpu
  interface {
    ringnumber: 0
    bindnetaddr: ip-address-of-the-current-machine
    mcastport: 5405
  }
}
nodelist {
  node {
    ring0_addr: node1
  }
  node {
    ring0_addr: node2
  }
}
...

(Here node1 and node2 are hostnames defined in /etc/hosts on both machines.)
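
In case the hostnames themselves are suspect: as far as I understand,
ring0_addr can also be given as an explicit IP address in a udpu nodelist,
e.g. like this, where 198.51.100.10 and 198.51.100.20 are placeholders for
the real public addresses:

nodelist {
  node {
    ring0_addr: 198.51.100.10
    nodeid: 1
  }
  node {
    ring0_addr: 198.51.100.20
    nodeid: 2
  }
}
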
After running "service corosync start; service pacemaker start", the logs
show no problems, but both nodes are always reported offline:

root@node1:/etc/corosync# crm status | grep node
OFFLINE: [ node1 node2 ]

and "crm node online" (like every other attempt to make crm do anything)
times out with a "communication error".
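
For reference, these are the corosync 2.x commands I know of for checking
ring status and membership as corosync itself sees it; I can post their
output if that helps:

corosync-cfgtool -s                # ring status of the local node
corosync-cmapctl | grep members    # membership as corosync sees it
corosync-quorumtool -s             # quorum state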

No iptables, SELinux, or AppArmor rules are active: these are plain
virtual machines, each with a single public IP address. tcpdump also
shows UDP packets on port 5405 going in and out, and if I, for example,
stop corosync on node1, the tcpdump output on node2 changes
significantly. So the nodes definitely see each other.
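
For reference, a capture along these lines shows that traffic (eth0
standing for whatever the public interface actually is):

tcpdump -n -i eth0 udp port 5405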

And if I attach a gvpe adapter to these two machines, giving them a
private subnet, and switch the transport back to the default (multicast)
one, corosync + pacemaker start working.
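
For reference, a minimal multicast config over the gvpe interface would
look roughly like this (10.0.0.0 standing for the private subnet gvpe
provides; the mcastaddr is just the stock example value):

totem {
  transport: udp
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
    mcastaddr: 239.255.1.1
    mcastport: 5405
  }
}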

So my question is: what am I doing wrong? Or is UDPu simply not suitable
for communication between machines that only have public IP addresses?