[ClusterLabs] Antw: [EXT] What's wrong with IPsrcaddr?
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Thu Mar 17 03:42:11 EDT 2022
>>> ZZ Wave <zzwave at gmail.com> wrote on 16.03.2022 at 10:24 in message
<CAM1SSGaictUhCJWwvO=aTpsj4Lmt9AFHW+hbu1uJEzsuL4qiQw at mail.gmail.com>:
> Hello. I'm trying to implement a floating IP with Pacemaker, but I can't
> get IPsrcaddr to work correctly. I want the following: the floating
> IP and its route src are started on node1. If node1 loses network
> connectivity to node2, node1 should instantly remove the floating IP and
> restore its default route,
> and node2 brings these things up. And vice versa when node1 returns.
> The static IPs should remain intact in any case.
I would define a ping resource (ocf:pacemaker:ping) to check the connectivity, and then make the IP (and the other resources) depend on the node attribute that the ping resource maintains.
For example we have (not actually used yet, displayed by "crm_mon -1Arfj"):
Node Attributes:
  * Node: h16:
    * val_net_gw1      : 1000
  * Node: h18:
    * val_net_gw1      : 1000
  * Node: h19:
    * val_net_gw1      : 1000
So all three nodes can read gateway1 (gw1).
The configuration (crm shell syntax):
primitive prm_ping_gw1 ocf:pacemaker:ping \
        params name=val_net_gw1 dampen=120s multiplier=1000 host_list=... \
        op start timeout=60 interval=0 \
        op stop timeout=60 interval=0 \
        op monitor interval=60 timeout=60 \
        meta priority=4500
clone cln_ping_gw1 prm_ping_gw1 \
        meta interleave=true priority=4500
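The IP (and anything colocated with it) can then be tied to that attribute with a location rule. A minimal sketch in crm shell syntax, assuming your floating-IP resource is still called virtip (the constraint name loc_virtip_connected is just an example; with multiplier=1000 the attribute is 1000 when the gateway is reachable):

location loc_virtip_connected virtip \
        rule -inf: not_defined val_net_gw1 or val_net_gw1 lt 1

Roughly the same rule in pcs syntax, if you prefer to stay with pcs, should look like:

pcs constraint location virtip rule score=-INFINITY not_defined val_net_gw1 or val_net_gw1 lt 1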
Regards,
Ulrich
>
> What I've done:
>
> pcs host auth node1 node2
> pcs cluster setup my_cluster node1 node2 --force
> pcs cluster enable node1 node2
> pcs cluster start node1 node2
> pcs property set stonith-enabled=false
> pcs property set no-quorum-policy=ignore
> pcs resource create virtip ocf:heartbeat:IPaddr2 ip=192.168.80.23
> cidr_netmask=24 op monitor interval=30s
> pcs resource create virtsrc ocf:heartbeat:IPsrcaddr
> ipaddress=192.168.80.23 cidr_netmask=24 op monitor interval=30
> pcs constraint colocation add virtip with virtsrc
> pcs constraint order virtip then virtsrc
>
> It sets the IP and src correctly on node1 once after this setup, but
> in case of failover to node2, havoc occurs -
> https://pastebin.com/GZMtG480
>
> What's wrong? Help me please :)