[ClusterLabs] Resource-stickiness is not working

Confidential Company sgurovosa at gmail.com
Tue Jun 5 19:47:48 EDT 2018


On Sat, 2018-06-02 at 22:14 +0800, Confidential Company wrote:
> On Fri, 2018-06-01 at 22:58 +0800, Confidential Company wrote:
> > Hi,
> >
> > I have two-node active/passive setup. My goal is to failover a
> > resource once a Node goes down with minimal downtime as possible.
> > Based on my testing, when Node1 goes down it failover to Node2. If
> > Node1 goes up after link reconnection (reconnect physical cable),
> > resource failback to Node1 even though I configured resource-
> > stickiness. Is there something wrong with configuration below?
> >
> > #service firewalld stop
> > #vi /etc/hosts --> 192.168.10.121 (Node1) / 192.168.10.122 (Node2)
> --
> > ----------- Private Network (Direct connect)
> > #systemctl start pcsd.service
> > #systemctl enable pcsd.service
> > #passwd hacluster --> define pw
> > #pcs cluster auth Node1 Node2
> > #pcs cluster setup --name Cluster Node1 Node2
> > #pcs cluster start --all
> > #pcs property set stonith-enabled=false
> > #pcs resource create ClusterIP ocf:heartbeat:IPaddr2
> > ip=192.168.10.123 cidr_netmask=32 op monitor interval=30s
> > #pcs resource defaults resource-stickiness=100
> >
> > Regards,
> > imnotarobot
>
> Your configuration is correct, but keep in mind scores of all kinds
> will be added together to determine where the final placement is.
>
> In this case, I'd check that you don't have any constraints with a
> higher score preferring the other node. For example, if you previously
> did a "move" or "ban" from the command line, that adds a constraint
> that has to be removed manually if you no longer want it.
> --
> Ken Gaillot <kgaillot at redhat.com>
>
>
> >>>>>>>>>>
> I'm confused. A constraint, as I understand it, means there's a
> preferred node. But if I want my resources not to have a preferred
> node, is that possible?
>
> Regards,
> imnotarobot

Yes, that's one type of constraint -- but you may not have realized you
added one if you ran something like "pcs resource move", which is a way
of saying there's a preferred node.
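
As an illustration, here is roughly how that happens and how to undo it.
This is a sketch using the ClusterIP resource from the configuration
above; exact subcommand names can vary between pcs versions:

```shell
# "move" works by adding a location constraint that pins the resource
# to the target node:
pcs resource move ClusterIP Node2

# That constraint outlives the move. List location constraints with
# their ids (move/ban typically leave ids like "cli-prefer-ClusterIP"
# or "cli-ban-ClusterIP-on-Node1"):
pcs constraint location show --full

# Clear the constraints that move/ban created for this resource, so
# stickiness alone decides placement again:
pcs resource clear ClusterIP
```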

There are a variety of other constraints. For example, as you add more
resources, you might say that resource A can't run on the same node as
resource B, and if that constraint's score is higher than the
stickiness, A might move if B starts on its node.
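
As a sketch of that scenario (resources A and B are hypothetical, not
part of the original configuration), a negative colocation constraint
with a finite score competes with stickiness:

```shell
# Discourage A from running on the same node as B. With a score of
# -200 against a resource-stickiness of 100, the colocation preference
# outweighs the stickiness, so A may be moved away when B starts on
# its node. A score of -INFINITY would make the separation absolute.
pcs constraint colocation add A with B -200
```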

To see your existing constraints using pcs, run "pcs constraint show".
If there are any you don't want, you can remove them with various pcs
commands.
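
For example (the constraint id shown is the one pcs typically
generates for a "move"; check the actual id in your own output):

```shell
# List all constraints with their ids and scores:
pcs constraint show --full

# Remove an unwanted constraint by its id:
pcs constraint remove cli-prefer-ClusterIP
```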
-- 
Ken Gaillot <kgaillot at redhat.com>


>>>>>>>>>>
Correct me if I'm wrong. So the resource-stickiness policy cannot be
used alone; a constraint configuration should be set up in order to
make it work, but the outcome will also depend on the relative scores
configured between the two. Can you suggest what type of constraint
configuration I should set to achieve the simple goal above?

Regards,
imnotarobot