[ClusterLabs] Antw: Re: resource-stickiness

Andrew Beekhof andrew at beekhof.net
Fri Aug 28 02:25:59 UTC 2015


> On 27 Aug 2015, at 4:12 pm, Ulrich Windl <Ulrich.Windl at rz.uni-regensburg.de> wrote:
> 
>>>> Andrew Beekhof <andrew at beekhof.net> schrieb am 27.08.2015 um 00:20 in
> Nachricht
> <C0FD93F4-EA88-4C76-B47C-EF45AD4A80CA at beekhof.net>:
> 
>>> On 26 Aug 2015, at 10:09 pm, Rakovec Jost <Jost.Rakovec at snt.si> wrote:
>>> 
>>> Sorry, one typo: the problem is the same....
>>> 
>>> 
>>> location cli-prefer-aapche aapche role=Started 10: sles2
>> 
>> Change the name of your constraint.
>> The 'cli-prefer-' prefix is reserved for "temporary" constraints created by
>> the command line tools (which therefore feel entitled to delete them as
>> necessary).
> 
> In which ways is "cli-prefer-" handled specially, if I may ask…

We delete them when you use the CLI tools to move the resource somewhere else (crm_resource --ban, --move, --clear)
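A sketch of the rename, using the same crm shell syntax as the configs quoted below; the replacement constraint name `prefer-aapche` is only an example:

```
# Drop the tool-managed constraint and re-create it under a name that
# does not start with "cli-", so the command line tools leave it alone:
crm configure delete cli-prefer-aapche
crm configure location prefer-aapche aapche role=Started 10: sles2
```

Temporary constraints left behind by a move/ban can then be removed with `crm_resource --clear --resource aapche` without touching the renamed one.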

> 
>> 
>>> 
>>> to:
>>> 
>>> location cli-prefer-aapche aapche role=Started inf: sles2 
>>> 
>>> 
>>> It keeps changing to infinity. 
>>> 
>>> 
>>> 
>>> my configuration is:
>>> 
>>> node sles1 
>>> node sles2 
>>> primitive filesystem Filesystem \ 
>>>       params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \ 
>>>       op start interval=0 timeout=60 \ 
>>>       op stop interval=0 timeout=60 \ 
>>>       op monitor interval=20 timeout=40 
>>> primitive myip IPaddr2 \ 
>>>       params ip=x.x.x.x \ 
>>>       op start interval=0 timeout=20s \ 
>>>       op stop interval=0 timeout=20s \ 
>>>       op monitor interval=10s timeout=20s 
>>> primitive stonith_sbd stonith:external/sbd \ 
>>>       params pcmk_delay_max=30 
>>> primitive web apache \ 
>>>       params configfile="/etc/apache2/httpd.conf" \ 
>>>       op start interval=0 timeout=40s \ 
>>>       op stop interval=0 timeout=60s \ 
>>>       op monitor interval=10 timeout=20s 
>>> group aapche filesystem myip web \ 
>>>       meta target-role=Started is-managed=true resource-stickiness=1000 
>>> location cli-prefer-aapche aapche role=Started 10: sles2 
>>> property cib-bootstrap-options: \ 
>>>       stonith-enabled=true \ 
>>>       no-quorum-policy=ignore \ 
>>>       placement-strategy=balanced \ 
>>>       expected-quorum-votes=2 \ 
>>>       dc-version=1.1.12-f47ea56 \ 
>>>       cluster-infrastructure="classic openais (with plugin)" \ 
>>>       last-lrm-refresh=1440502955 \ 
>>>       stonith-timeout=40s 
>>> rsc_defaults rsc-options: \ 
>>>       resource-stickiness=1000 \ 
>>>       migration-threshold=3 
>>> op_defaults op-options: \ 
>>>       timeout=600 \ 
>>>       record-pending=true 
>>> 
>>> 
>>> 
>>> and after migration:
>>> 
>>> 
>>> node sles1 
>>> node sles2 
>>> primitive filesystem Filesystem \ 
>>>       params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \ 
>>>       op start interval=0 timeout=60 \ 
>>>       op stop interval=0 timeout=60 \ 
>>>       op monitor interval=20 timeout=40 
>>> primitive myip IPaddr2 \ 
>>>       params ip=10.9.131.86 \ 
>>>       op start interval=0 timeout=20s \ 
>>>       op stop interval=0 timeout=20s \ 
>>>       op monitor interval=10s timeout=20s 
>>> primitive stonith_sbd stonith:external/sbd \ 
>>>       params pcmk_delay_max=30 
>>> primitive web apache \ 
>>>       params configfile="/etc/apache2/httpd.conf" \ 
>>>       op start interval=0 timeout=40s \ 
>>>       op stop interval=0 timeout=60s \ 
>>>       op monitor interval=10 timeout=20s 
>>> group aapche filesystem myip web \ 
>>>       meta target-role=Started is-managed=true resource-stickiness=1000 
>>> location cli-prefer-aapche aapche role=Started inf: sles2 
>>> property cib-bootstrap-options: \ 
>>>       stonith-enabled=true \ 
>>>       no-quorum-policy=ignore \ 
>>>       placement-strategy=balanced \ 
>>>       expected-quorum-votes=2 \ 
>>>       dc-version=1.1.12-f47ea56 \ 
>>>       cluster-infrastructure="classic openais (with plugin)" \ 
>>>       last-lrm-refresh=1440502955 \ 
>>>       stonith-timeout=40s 
>>> rsc_defaults rsc-options: \ 
>>>       resource-stickiness=1000 \ 
>>>       migration-threshold=3 
>>> op_defaults op-options: \ 
>>>       timeout=600 \ 
>>>       record-pending=true
>>> 
>>> 
>>> From: Rakovec Jost
>>> Sent: Wednesday, August 26, 2015 1:33 PM
>>> To: users at clusterlabs.org 
>>> Subject: resource-stickiness
>>> 
>>> Hi list,
>>> 
>>> 
>>> I have configured a simple cluster on SLES 11 SP4 and have a problem with
>>> "auto_failover off". The problem is that whenever I migrate the resource
>>> group via HAWK, my configuration changes from:
>>> 
>>> location cli-prefer-aapche aapche role=Started 10: sles2
>>> 
>>> to:
>>> 
>>> location cli-ban-aapche-on-sles1 aapche role=Started -inf: sles1
>>> 
>>> 
>>> It keeps changing to inf. 
>>> 
>>> 
>>> and then after fencing the node, the resource moves back to the original
>>> node, which I don't want. How can I avoid this situation?
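The score flip can be sketched with a toy model (a simplification for illustration, not Pacemaker's actual scheduler): a resource runs on the node with the highest total score, and resource-stickiness is added to the node it currently occupies.

```python
# Toy model of Pacemaker node scoring (simplified, illustrative only):
# the resource is placed on the node with the highest total score.
INF = float("inf")

def best_node(scores):
    """Return the node with the highest total score."""
    return max(scores, key=scores.get)

# Original constraint "10: sles2", resource running on sles1 after a
# failover: stickiness (1000) on sles1 outweighs the preference (10).
after_failover = {"sles1": 1000, "sles2": 10}
assert best_node(after_failover) == "sles1"   # stays put

# Once the tools rewrite the constraint to "inf: sles2", infinity beats
# any finite stickiness and the resource moves back.
rewritten = {"sles1": 1000, "sles2": INF}
assert best_node(rewritten) == "sles2"        # moves back
```

This is why a finite preference score plus stickiness gives "no auto-failback", while an INFINITY constraint always wins.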
>>> 
>>> my configuration is:
>>> 
>>> node sles1 
>>> node sles2 
>>> primitive filesystem Filesystem \ 
>>>       params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \ 
>>>       op start interval=0 timeout=60 \ 
>>>       op stop interval=0 timeout=60 \ 
>>>       op monitor interval=20 timeout=40 
>>> primitive myip IPaddr2 \ 
>>>       params ip=x.x.x.x \ 
>>>       op start interval=0 timeout=20s \ 
>>>       op stop interval=0 timeout=20s \ 
>>>       op monitor interval=10s timeout=20s 
>>> primitive stonith_sbd stonith:external/sbd \ 
>>>       params pcmk_delay_max=30 
>>> primitive web apache \ 
>>>       params configfile="/etc/apache2/httpd.conf" \ 
>>>       op start interval=0 timeout=40s \ 
>>>       op stop interval=0 timeout=60s \ 
>>>       op monitor interval=10 timeout=20s 
>>> group aapche filesystem myip web \ 
>>>       meta target-role=Started is-managed=true resource-stickiness=1000 
>>> location cli-prefer-aapche aapche role=Started 10: sles2 
>>> property cib-bootstrap-options: \ 
>>>       stonith-enabled=true \ 
>>>       no-quorum-policy=ignore \ 
>>>       placement-strategy=balanced \ 
>>>       expected-quorum-votes=2 \ 
>>>       dc-version=1.1.12-f47ea56 \ 
>>>       cluster-infrastructure="classic openais (with plugin)" \ 
>>>       last-lrm-refresh=1440502955 \ 
>>>       stonith-timeout=40s 
>>> rsc_defaults rsc-options: \ 
>>>       resource-stickiness=1000 \ 
>>>       migration-threshold=3 
>>> op_defaults op-options: \ 
>>>       timeout=600 \ 
>>>       record-pending=true 
>>> 
>>> 
>>> 
>>> and after migration:
>>> 
>>> node sles1 
>>> node sles2 
>>> primitive filesystem Filesystem \ 
>>>       params fstype=ext3 directory="/srv/www/vhosts" device="/dev/xvdd1" \ 
>>>       op start interval=0 timeout=60 \ 
>>>       op stop interval=0 timeout=60 \ 
>>>       op monitor interval=20 timeout=40 
>>> primitive myip IPaddr2 \ 
>>>       params ip=10.9.131.86 \ 
>>>       op start interval=0 timeout=20s \ 
>>>       op stop interval=0 timeout=20s \ 
>>>       op monitor interval=10s timeout=20s 
>>> primitive stonith_sbd stonith:external/sbd \ 
>>>       params pcmk_delay_max=30 
>>> primitive web apache \ 
>>>       params configfile="/etc/apache2/httpd.conf" \ 
>>>       op start interval=0 timeout=40s \ 
>>>       op stop interval=0 timeout=60s \ 
>>>       op monitor interval=10 timeout=20s 
>>> group aapche filesystem myip web \ 
>>>       meta target-role=Started is-managed=true resource-stickiness=1000 
>>> location cli-ban-aapche-on-sles1 aapche role=Started -inf: sles1 
>>> location cli-prefer-aapche aapche role=Started 10: sles2 
>>> property cib-bootstrap-options: \ 
>>>       stonith-enabled=true \ 
>>>       no-quorum-policy=ignore \ 
>>>       placement-strategy=balanced \ 
>>>       expected-quorum-votes=2 \ 
>>>       dc-version=1.1.12-f47ea56 \ 
>>>       cluster-infrastructure="classic openais (with plugin)" \ 
>>>       last-lrm-refresh=1440502955 \ 
>>>       stonith-timeout=40s 
>>> rsc_defaults rsc-options: \ 
>>>       resource-stickiness=1000 \ 
>>>       migration-threshold=3 
>>> op_defaults op-options: \ 
>>>       timeout=600 \ 
>>>       record-pending=true
>>> 
>>> 
>>> 
>>> 
>>> thanks
>>> 
>>> Best Regards
>>> 
>>> Jost
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> _______________________________________________
>>> Users mailing list: Users at clusterlabs.org 
>>> http://clusterlabs.org/mailman/listinfo/users 
>>> 
>>> Project Home: http://www.clusterlabs.org 
>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf 
>>> Bugs: http://bugs.clusterlabs.org 
>> 
>> 
> 
> 
> 
> 




