[ClusterLabs] resource agent Route active on multiple nodes

Ken Gaillot kgaillot at redhat.com
Wed Jun 6 13:00:42 EDT 2018


On Wed, 2018-06-06 at 12:38 +0300, Andrei Borzenkov wrote:
> On Wed, Jun 6, 2018 at 12:28 PM, Florent Barra
> <florent.barra at cirpack.com> wrote:
> > Hi,
> > I want to create a simple cluster where only the second interface
> > is
> > managed.
> > I create my resources in the following ways:
> > 
> > pcs resource create ping-gateway ocf:pacemaker:ping name=ping-counter host_list=10.22.5.254 --clone
> > pcs resource create eth1 ocf:heartbeat:IPaddr2 ip=10.22.5.160 nic=eth1 cidr_netmask=24
> > pcs resource create route1 ocf:heartbeat:Route destination="10.22.5.0/24" device="eth1" gateway="10.22.5.254" table="trust"
> > pcs constraint location eth1 rule id=eth1_rule constraint-id=eth1_location ping-counter eq 0

The constraint above would keep eth1 *on* nodes that cannot ping the
gateway. (I'm not sure why it starts in your example below.) You
probably want:

pcs constraint location eth1 rule score=-INFINITY ping-counter lt 1 or not_defined ping-counter

which will keep eth1 *off* nodes that cannot ping the gateway.
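
Note that just adding the corrected rule would leave the old one in
place, so it's worth removing the existing constraint first, e.g.
(using the constraint id from your command above; "pcs constraint
--full" lists the ids if in doubt):

pcs constraint remove eth1_location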

> > pcs constraint ticket set eth1 route1 sequential=false setoptions ticket=interfaces id=interfaces loss-policy=stop
> > crm_ticket --ticket interfaces --grant --force

Tickets are needed when a cluster is combined with another cluster
elsewhere using booth. Is that your eventual goal, to connect two
clusters?
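
If not, the ticket pieces can simply be dropped again; something
along these lines should do it (the ids below are the ones from your
commands above):

pcs constraint remove interfaces
crm_ticket --ticket interfaces --revoke --force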

> > pcs constraint colocation set eth1 route1 sequential=true setoptions id=coloc score=INFINITY
> > pcs constraint order set eth1 route1 sequential=true setoptions id=launch_order kind=mandatory

Constraint sets can make sense if you plan on adding more resources to
the set later. Otherwise, ordinary colocation and ordering constraints
would be a bit simpler here (though the above should be equivalent, so
it's not a problem).
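
For reference, the plain versions would look roughly like this
(illustrative syntax, adjust to your pcs version):

pcs constraint colocation add route1 with eth1 INFINITY
pcs constraint order start eth1 then start route1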

> > when I start the cluster, I end up with a route error:
> > 
> > [root@nodeA ~]# pcs resource
> >  Clone Set: ping-gateway-clone [ping-gateway]
> >      Started: [ nodeA nodeB ]
> >  eth1   (ocf::heartbeat:IPaddr2):  Started nodeA
> >  eth10  (ocf::heartbeat:IPaddr2):  Started nodeA
> >  eth11  (ocf::heartbeat:IPaddr2):  Started nodeA
> >  eth12  (ocf::heartbeat:IPaddr2):  Started nodeA
> >  route1 (ocf::heartbeat:Route):    FAILED (blocked) [ nodeA nodeB ]
> > 
> > I don't know why my resource 'route1' is active on multiple nodes
> > instead of only being active on the same node as eth1.
> > 
> 
> Your resource is *not* active. The attempt to start it failed on both
> nodes. You need to investigate why that happened. The most obvious
> reason would be a missing "trust" routing table.

Do you have fencing configured? The cluster will not normally attempt
to recover a resource elsewhere unless the resource can be confirmed
stopped on the original node or the original node has been fenced.
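
A quick way to check is something like the following (exact
subcommands vary a bit between pcs versions):

pcs stonith show
pcs property show stonith-enabled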

-- 
Ken Gaillot <kgaillot at redhat.com>


