[ClusterLabs] resource-stickiness

Rakovec Jost Jost.Rakovec at snt.si
Wed Sep 2 13:11:27 UTC 2015


Hi

Can I ask something else in this thread, or should I open a new one?

questions:

1. What is the purpose of "meta target-role=Started" in

primitive apache apache \
        params configfile="/etc/apache2/httpd.conf" \
        op monitor timeout=20s interval=10 \
        op stop timeout=60s interval=0 \
        op start timeout=40s interval=0 \
        meta target-role=Started

I've found that if I try to "start Parent:", it doesn't start any resource from the group. But if I remove "meta target-role=Started", then it starts all resources.
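
(Side note: a sketch of removing or changing that meta attribute in place, assuming this crmsh version has the "resource meta" subcommand:)

    # show, change or remove the attribute without editing the primitive
    crm resource meta apache show target-role
    crm resource meta apache set target-role Stopped
    crm resource meta apache delete target-role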

2. How can I change something via the crm CLI? For example:

I have this in my configuration:

primitive stonith_sbd stonith:external/sbd

but I would like to add this:

crm(live)configure# stonith_sbd stonith:external/sbd \
   > params pcmk_delay_max="30"
ERROR: configure.stonith_sbd: No such command

I know that I can delete the resource and then add a new one, but I don't like that solution.
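
(A sketch of changing the parameter in place instead, assuming this crmsh version provides the "resource param" and "configure edit" subcommands:)

    # update the parameter on the existing primitive
    crm resource param stonith_sbd set pcmk_delay_max 30

    # or open just that definition in $EDITOR
    crm configure edit stonith_sbd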

3. Do I need to add colocation and order constraints:

colocation apache-with-fs-ip inf: fs myip apache

and 

order apache-after-fs-ip Mandatory: fs myip apache


if I'm using a group like this:

group web fs myip apache \
        meta target-role=Started is-managed=true resource-stickiness=1000
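
(For context, as far as I know a group already implies both ordering and colocation of its members, in the listed order, so the explicit constraints above should be redundant. Roughly, with illustrative ids, "group web fs myip apache" stands for:)

    order web-implied-order Mandatory: fs myip apache
    colocation web-implied-coloc-1 inf: myip fs
    colocation web-implied-coloc-2 inf: apache myip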



Thanks


Jost

________________________________________
From: Ken Gaillot <kgaillot at redhat.com>
Sent: Friday, August 28, 2015 4:12 PM
To: Rakovec Jost; users at clusterlabs.org
Subject: Re: [ClusterLabs] resource-stickiness

On 08/28/2015 03:39 AM, Rakovec Jost wrote:
> Hi
>
> OK, thanks. I found this in your howto:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-plugin/html/Clusters_from_Scratch/ch06s08.html
>
> so basically I just removed the temporary constraint by using
>
> crm resource unmove aapche
>
> and the cluster works as I want.
>
> 1. Can you please explain why this temporary constraint is necessary? I don't see any benefit, just more work for the sysadmin.

It is created when you do "crm resource move".

The cluster itself has no concept of "moving" resources; it figures out
the best place to put each resource, adjusting continuously for
configuration changes, failures, etc.

So tools like crm implement "move" by changing the configuration:
they add the temporary constraint, which tells the cluster "this
resource should be on that node". The cluster adjusts its idea of
"best" and moves the resource to match.
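
For example, with the group name from this thread:

    crm resource move aapche sles2
    # crm adds on your behalf:
    #   location cli-prefer-aapche aapche role=Started inf: sles2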

> 2. Is it possible to disable this somehow?

Sure, "crm resource unmove" :)

The constraint can't be removed automatically because neither the
cluster nor the tool knows when you no longer prefer the resource to be
at the new location. You have to tell it.

If you have resource-stickiness, you can "unmove" as soon as the move is
done, and the resource will stay where it is (unless some other
configuration is stronger than the stickiness). If you don't have
resource-stickiness, then once you "unmove", the resource may move to
some other node, as the cluster adjusts its idea of "best".
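
So a typical sequence is:

    crm resource move aapche sles2    # adds cli-prefer-aapche (score inf)
    # ... wait for the move to complete ...
    crm resource unmove aapche        # removes the constraint; with
                                      # stickiness=1000 the group stays put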

> Thanks
>
> Jost
>
>
>
>
> ________________________________________
> From: Ken Gaillot <kgaillot at redhat.com>
> Sent: Thursday, August 27, 2015 4:00 PM
> To: users at clusterlabs.org
> Subject: Re: [ClusterLabs] resource-stickiness
>
> On 08/27/2015 02:42 AM, Rakovec Jost wrote:
>> Hi
>>
>>
>> It doesn't work as I expected. I changed the name to:
>>
>> location loc-aapche-sles1 aapche role=Started 10: sles1
>>
>>
>> But after I manually move the resource via HAWK to the other node, it automatically adds this line:
>>
>> location cli-prefer-aapche aapche role=Started inf: sles1
>>
>>
>> so now I have both lines:
>>
>> location cli-prefer-aapche aapche role=Started inf: sles1
>> location loc-aapche-sles1 aapche role=Started 10: sles1
>
> When you manually move a resource using a command-line tool (or HAWK),
> the tool accomplishes the move by adding a constraint, like the one you
> see added above.
>
> Such tools generally provide another option to clear any constraints
> they added, which you can manually run after you are satisfied with the
> state of things. Until you do so, the added constraint will remain, and
> will affect resource placement.
>
>>
>> and resource-stickiness doesn't work: after node1 is fenced, the resource moves back to node1 once node1 comes back, which is what I don't want. I know that I can remove the line that was added by the cluster, but this is not the proper solution. Please tell me what is wrong. Thanks. My config:
>
> Resource placement depends on many factors. "Scores" decide the outcome:
> stickiness has a score, each constraint has a score, and the node with
> the highest combined score wins.
>
> In your config, resource-stickiness has a score of 1000, but
> cli-prefer-aapche has a score of "inf" (infinity), so sles1 wins when it
> comes back online (infinity > 1000). By contrast, loc-aapche-sles1 has a
> score of 10, so by itself it would not cause the resource to move back
> (10 < 1000).
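>
> One way to inspect those scores (assuming the crm_simulate tool that
> ships with this Pacemaker version) is:
>
>     crm_simulate -sL
>
> which prints the allocation score each node gets for every resource, so
> you can compare the inf, 1000 and 10 values directly.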
>
> To achieve what you want, clear the temporary constraint added by HAWK
> before sles1 comes back.
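> For example:
>
>     crm resource unmove aapche
>     # or remove the constraint directly:
>     crm configure delete cli-prefer-aapche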
>
>> node sles1
>> node sles2
>> primitive filesystem Filesystem \
>>         params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \
>>         op start interval=0 timeout=60 \
>>         op stop interval=0 timeout=60 \
>>         op monitor interval=20 timeout=40
>> primitive myip IPaddr2 \
>>         params ip=10.9.131.86 \
>>         op start interval=0 timeout=20s \
>>         op stop interval=0 timeout=20s \
>>         op monitor interval=10s timeout=20s
>> primitive stonith_sbd stonith:external/sbd \
>>         params pcmk_delay_max=30
>> primitive web apache \
>>         params configfile="/etc/apache2/httpd.conf" \
>>         op start interval=0 timeout=40s \
>>         op stop interval=0 timeout=60s \
>>         op monitor interval=10 timeout=20s
>> group aapche filesystem myip web \
>>         meta target-role=Started is-managed=true resource-stickiness=1000
>> location cli-prefer-aapche aapche role=Started inf: sles1
>> location loc-aapche-sles1 aapche role=Started 10: sles1
>> property cib-bootstrap-options: \
>>         stonith-enabled=true \
>>         no-quorum-policy=ignore \
>>         placement-strategy=balanced \
>>         expected-quorum-votes=2 \
>>         dc-version=1.1.12-f47ea56 \
>>         cluster-infrastructure="classic openais (with plugin)" \
>>         last-lrm-refresh=1440502955 \
>>         stonith-timeout=40s
>> rsc_defaults rsc-options: \
>>         resource-stickiness=1000 \
>>         migration-threshold=3
>> op_defaults op-options: \
>>         timeout=600 \
>>         record-pending=true
>>
>>
>> BR
>>
>> Jost
>>
>>
>>
>> ________________________________________
>> From: Andrew Beekhof <andrew at beekhof.net>
>> Sent: Thursday, August 27, 2015 12:20 AM
>> To: Cluster Labs - All topics related to open-source clustering welcomed
>> Subject: Re: [ClusterLabs] resource-stickiness
>>
>>> On 26 Aug 2015, at 10:09 pm, Rakovec Jost <Jost.Rakovec at snt.si> wrote:
>>>
>>> Sorry, one typo; the problem is the same:
>>>
>>>
>>> location cli-prefer-aapche aapche role=Started 10: sles2
>>
>> Change the name of your constraint.
>> The 'cli-prefer-' prefix is reserved for "temporary" constraints created by the command-line tools (which therefore feel entitled to delete them as necessary).
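>>
>> For example, a sketch (the new id is up to you) that replaces it with a
>> name the tools won't touch:
>>
>>     crm configure delete cli-prefer-aapche
>>     crm configure location loc-aapche-sles2 aapche role=Started 10: sles2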
>>
>>>
>>> to:
>>>
>>> location cli-prefer-aapche aapche role=Started inf: sles2
>>>
>>>
>>> It keeps changing to infinity.
>>>
>>>
>>>
>>> my configuration is:
>>>
>>> node sles1
>>> node sles2
>>> primitive filesystem Filesystem \
>>>        params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \
>>>        op start interval=0 timeout=60 \
>>>        op stop interval=0 timeout=60 \
>>>        op monitor interval=20 timeout=40
>>> primitive myip IPaddr2 \
>>>        params ip=x.x.x.x \
>>>        op start interval=0 timeout=20s \
>>>        op stop interval=0 timeout=20s \
>>>        op monitor interval=10s timeout=20s
>>> primitive stonith_sbd stonith:external/sbd \
>>>        params pcmk_delay_max=30
>>> primitive web apache \
>>>        params configfile="/etc/apache2/httpd.conf" \
>>>        op start interval=0 timeout=40s \
>>>        op stop interval=0 timeout=60s \
>>>        op monitor interval=10 timeout=20s
>>> group aapche filesystem myip web \
>>>        meta target-role=Started is-managed=true resource-stickiness=1000
>>> location cli-prefer-aapche aapche role=Started 10: sles2
>>> property cib-bootstrap-options: \
>>>        stonith-enabled=true \
>>>        no-quorum-policy=ignore \
>>>        placement-strategy=balanced \
>>>        expected-quorum-votes=2 \
>>>        dc-version=1.1.12-f47ea56 \
>>>        cluster-infrastructure="classic openais (with plugin)" \
>>>        last-lrm-refresh=1440502955 \
>>>        stonith-timeout=40s
>>> rsc_defaults rsc-options: \
>>>        resource-stickiness=1000 \
>>>        migration-threshold=3
>>> op_defaults op-options: \
>>>        timeout=600 \
>>>        record-pending=true
>>>
>>>
>>>
>>> and after migration:
>>>
>>>
>>> node sles1
>>> node sles2
>>> primitive filesystem Filesystem \
>>>        params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \
>>>        op start interval=0 timeout=60 \
>>>        op stop interval=0 timeout=60 \
>>>        op monitor interval=20 timeout=40
>>> primitive myip IPaddr2 \
>>>        params ip=10.9.131.86 \
>>>        op start interval=0 timeout=20s \
>>>        op stop interval=0 timeout=20s \
>>>        op monitor interval=10s timeout=20s
>>> primitive stonith_sbd stonith:external/sbd \
>>>        params pcmk_delay_max=30
>>> primitive web apache \
>>>        params configfile="/etc/apache2/httpd.conf" \
>>>        op start interval=0 timeout=40s \
>>>        op stop interval=0 timeout=60s \
>>>        op monitor interval=10 timeout=20s
>>> group aapche filesystem myip web \
>>>        meta target-role=Started is-managed=true resource-stickiness=1000
>>> location cli-prefer-aapche aapche role=Started inf: sles2
>>> property cib-bootstrap-options: \
>>>        stonith-enabled=true \
>>>        no-quorum-policy=ignore \
>>>        placement-strategy=balanced \
>>>        expected-quorum-votes=2 \
>>>        dc-version=1.1.12-f47ea56 \
>>>        cluster-infrastructure="classic openais (with plugin)" \
>>>        last-lrm-refresh=1440502955 \
>>>        stonith-timeout=40s
>>> rsc_defaults rsc-options: \
>>>        resource-stickiness=1000 \
>>>        migration-threshold=3
>>> op_defaults op-options: \
>>>        timeout=600 \
>>>        record-pending=true
>>>
>>>
>>> From: Rakovec Jost
>>> Sent: Wednesday, August 26, 2015 1:33 PM
>>> To: users at clusterlabs.org
>>> Subject: resource-stickiness
>>>
>>> Hi list,
>>>
>>>
>>> I have configured a simple cluster on SLES 11 SP4 and have a problem with "auto_failover off". The problem is that whenever I migrate the resource group via HAWK, my configuration changes from:
>>>
>>> location cli-prefer-aapche aapche role=Started 10: sles2
>>>
>>> to:
>>>
>>> location cli-ban-aapche-on-sles1 aapche role=Started -inf: sles1
>>>
>>>
>>> It keeps changing to inf.
>>>
>>>
>>> and then after the node is fenced, the resource moves back to the original node, which I don't want. How can I avoid this situation?
>>>
>>> my configuration is:
>>>
>>> node sles1
>>> node sles2
>>> primitive filesystem Filesystem \
>>>        params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \
>>>        op start interval=0 timeout=60 \
>>>        op stop interval=0 timeout=60 \
>>>        op monitor interval=20 timeout=40
>>> primitive myip IPaddr2 \
>>>        params ip=x.x.x.x \
>>>        op start interval=0 timeout=20s \
>>>        op stop interval=0 timeout=20s \
>>>        op monitor interval=10s timeout=20s
>>> primitive stonith_sbd stonith:external/sbd \
>>>        params pcmk_delay_max=30
>>> primitive web apache \
>>>        params configfile="/etc/apache2/httpd.conf" \
>>>        op start interval=0 timeout=40s \
>>>        op stop interval=0 timeout=60s \
>>>        op monitor interval=10 timeout=20s
>>> group aapche filesystem myip web \
>>>        meta target-role=Started is-managed=true resource-stickiness=1000
>>> location cli-prefer-aapche aapche role=Started 10: sles2
>>> property cib-bootstrap-options: \
>>>        stonith-enabled=true \
>>>        no-quorum-policy=ignore \
>>>        placement-strategy=balanced \
>>>        expected-quorum-votes=2 \
>>>        dc-version=1.1.12-f47ea56 \
>>>        cluster-infrastructure="classic openais (with plugin)" \
>>>        last-lrm-refresh=1440502955 \
>>>        stonith-timeout=40s
>>> rsc_defaults rsc-options: \
>>>        resource-stickiness=1000 \
>>>        migration-threshold=3
>>> op_defaults op-options: \
>>>        timeout=600 \
>>>        record-pending=true
>>>
>>>
>>>
>>> and after migration:
>>>
>>> node sles1
>>> node sles2
>>> primitive filesystem Filesystem \
>>>        params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \
>>>        op start interval=0 timeout=60 \
>>>        op stop interval=0 timeout=60 \
>>>        op monitor interval=20 timeout=40
>>> primitive myip IPaddr2 \
>>>        params ip=10.9.131.86 \
>>>        op start interval=0 timeout=20s \
>>>        op stop interval=0 timeout=20s \
>>>        op monitor interval=10s timeout=20s
>>> primitive stonith_sbd stonith:external/sbd \
>>>        params pcmk_delay_max=30
>>> primitive web apache \
>>>        params configfile="/etc/apache2/httpd.conf" \
>>>        op start interval=0 timeout=40s \
>>>        op stop interval=0 timeout=60s \
>>>        op monitor interval=10 timeout=20s
>>> group aapche filesystem myip web \
>>>        meta target-role=Started is-managed=true resource-stickiness=1000
>>> location cli-ban-aapche-on-sles1 aapche role=Started -inf: sles1
>>> location cli-prefer-aapche aapche role=Started 10: sles2
>>> property cib-bootstrap-options: \
>>>        stonith-enabled=true \
>>>        no-quorum-policy=ignore \
>>>        placement-strategy=balanced \
>>>        expected-quorum-votes=2 \
>>>        dc-version=1.1.12-f47ea56 \
>>>        cluster-infrastructure="classic openais (with plugin)" \
>>>        last-lrm-refresh=1440502955 \
>>>        stonith-timeout=40s
>>> rsc_defaults rsc-options: \
>>>        resource-stickiness=1000 \
>>>        migration-threshold=3
>>> op_defaults op-options: \
>>>        timeout=600 \
>>>        record-pending=true
>>>
>>>
>>>
>>>
>>> thanks
>>>
>>> Best Regards
>>>
>>> Jost
>
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>



