[ClusterLabs] Two-Node Failover IP-Address and Gateway

brainheadz brainheadz at gmail.com
Mon Jan 22 12:54:43 EST 2018


I've got 2 public IPs and 2 hosts.

Each IP is assigned to one host. The interfaces are not configured by the
system; I am using Pacemaker to do this.



I am trying to set up some form of active/passive cluster. fw-managed-01 is
the active node. If it fails, fw-managed-02 has to take over the VIP and
change its IPsrcaddr. This works so far. But when fw-managed-01 comes back
online, the default gateway isn't set again on the node fw-managed-02.

I'm quite new to this topic. The cluster would work that way, but the
passive node can never reach the internet because of the missing default
gateway.

Can anyone explain what I am missing or doing wrong here?


# crm configure show
node 1: fw-managed-01
node 2: fw-managed-02
primitive default_gw Route \
        op monitor interval=10s \
        params destination=default device=bad gateway=
primitive src_address IPsrcaddr \
        op monitor interval=10s \
        params ipaddress=
primitive vip_bad IPaddr2 \
        op monitor interval=10s \
        params nic=bad ip= cidr_netmask=29
primitive vip_bad_2 IPaddr2 \
        op monitor interval=10s \
        params nic=bad ip= cidr_netmask=29
primitive vip_managed IPaddr2 \
        op monitor interval=10s \
        params ip= cidr_netmask=24
clone default_gw_clone default_gw \
        meta clone-max=2 target-role=Started
location cli-prefer-default_gw default_gw_clone role=Started inf:
location src_address_location src_address inf: fw-managed-01
location vip_bad_2_location vip_bad_2 inf: fw-managed-02
location vip_bad_location vip_bad inf: fw-managed-01
order vip_before_default_gw inf: vip_bad:start src_address:start
location vip_managed_location vip_managed inf: fw-managed-01
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.14-70404b0 \
        cluster-infrastructure=corosync \
        cluster-name=debian \
        stonith-enabled=false \
        no-quorum-policy=ignore \
        last-lrm-refresh=1516362207
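
One thing I notice while re-reading my own config: the order constraint
vip_before_default_gw actually orders src_address after vip_bad, not
default_gw. I am also wondering whether, instead of cloning default_gw, the
gateway should simply move together with the VIPs and the source address so
they always fail over as a unit. An untested sketch (the group name
g_active is made up by me):

# crm configure group g_active vip_managed vip_bad src_address default_gw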

# crm status
Last updated: Mon Jan 22 18:47:12 2018          Last change: Fri Jan 19
17:04:12 2018 by root via cibadmin on fw-managed-01
Stack: corosync
Current DC: fw-managed-01 (version 1.1.14-70404b0) - partition with quorum
2 nodes and 6 resources configured

Online: [ fw-managed-01 fw-managed-02 ]

Full list of resources:

 vip_managed    (ocf::heartbeat:IPaddr2):       Started fw-managed-01
 vip_bad        (ocf::heartbeat:IPaddr2):       Started fw-managed-01
 Clone Set: default_gw_clone [default_gw]
     default_gw (ocf::heartbeat:Route): FAILED fw-managed-02 (unmanaged)
     Started: [ fw-managed-01 ]
 src_address    (ocf::heartbeat:IPsrcaddr):     Started fw-managed-01
 vip_bad_2      (ocf::heartbeat:IPaddr2):       Started fw-managed-02

Failed Actions:
* default_gw_stop_0 on fw-managed-02 'not installed' (5): call=26,
status=complete, exitreason='Gateway address is
    last-rc-change='Fri Jan 19 17:10:43 2018', queued=0ms, exec=31ms
* src_address_monitor_0 on fw-managed-02 'unknown error' (1): call=18,
status=complete, exitreason='[/usr/lib/heartbeat/findif -C] failed',
    last-rc-change='Fri Jan 19 17:10:43 2018', queued=0ms, exec=75ms
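
If I understand the documentation correctly, a failed clone instance stays
down until its failure is cleared, so as a manual workaround something like
the following should let the gateway restart on fw-managed-02 (untested):

# crm resource cleanup default_gw
# crm resource start default_gw_clone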

best regards,