[ClusterLabs] [Question:pacemaker_remote] About limitation of the placement of the resource to remote node.

renayama19661014 at ybb.ne.jp
Tue Aug 18 23:55:26 UTC 2015


Hi Andrew,

>> Potentially.  I’d need a crm_report to confirm though.
> 
> 
> Okay!
> 
> I will send crm_report tomorrow.
> If the file is too big, I will register it with Bugzilla along with these details.


I have registered this issue in Bugzilla and attached the crm_report archive there.

 * http://bugs.clusterlabs.org/show_bug.cgi?id=5249
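
For reference, such a crm_report archive can be generated along these lines; the time window and output path below are illustrative:

  # Collect the cluster logs and configuration for the test window
  crm_report -f "2015-08-13 08:00:00" -t "2015-08-13 09:00:00" /tmp/remote-placement
  # The resulting tarball can then be attached to the Bugzilla entry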

Best Regards,
Hideo Yamauchi.



----- Original Message -----
> From: "renayama19661014 at ybb.ne.jp" <renayama19661014 at ybb.ne.jp>
> To: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
> Cc: 
> Date: 2015/8/18, Tue 12:52
> Subject: Re: [ClusterLabs] [Question:pacemaker_remote] About limitation of the placement of the resource to remote node.
> 
> Hi Andrew,
> 
> Thank you for comments.
> 
>>>   Hi All,
>>> 
>>>   We confirmed the behavior of pacemaker_remote
>>>   (version: pacemaker-ad1f397a8228a63949f86c96597da5cecc3ed977).
>>> 
>>>   The cluster configuration is as follows:
>>>    * sl7-01(KVM host)
>>>    * snmp1(Guest on the sl7-01 host)
>>>    * snmp2(Guest on the sl7-01 host)
>>> 
>>>   We prepared the following CLI file to confirm resource placement on
>>>   the remote nodes.
>>> 
>>>   ------------------------------
>>>   property no-quorum-policy="ignore" \
>>>     stonith-enabled="false" \
>>>     startup-fencing="false"
>>> 
>>>   rsc_defaults resource-stickiness="INFINITY" \
>>>     migration-threshold="1"
>>> 
>>>   primitive remote-vm2 ocf:pacemaker:remote \
>>>     params server="snmp1" \
>>>     op monitor interval=3 timeout=15
>>> 
>>>   primitive remote-vm3 ocf:pacemaker:remote \
>>>     params server="snmp2" \
>>>     op monitor interval=3 timeout=15
>>> 
>>>   primitive dummy-remote-A Dummy \
>>>     op start interval=0s timeout=60s \
>>>     op monitor interval=30s timeout=60s \
>>>     op stop interval=0s timeout=60s
>>> 
>>>   primitive dummy-remote-B Dummy \
>>>     op start interval=0s timeout=60s \
>>>     op monitor interval=30s timeout=60s \
>>>     op stop interval=0s timeout=60s
>>> 
>>>   location loc1 dummy-remote-A \
>>>     rule 200: #uname eq remote-vm3 \
>>>     rule 100: #uname eq remote-vm2 \
>>>     rule -inf: #uname eq sl7-01
>>>   location loc2 dummy-remote-B \
>>>     rule 200: #uname eq remote-vm3 \
>>>     rule 100: #uname eq remote-vm2 \
>>>     rule -inf: #uname eq sl7-01
>>>   ------------------------------
>>> 
>>>   Case 1) When we apply the CLI file we prepared, the resources are
>>>   placed as follows.
>>>    However, the placement of the dummy-remote resources does not match
>>>    the constraints:
>>>    dummy-remote-A starts on remote-vm2.
>>> 
>>>   [root@sl7-01 ~]# crm_mon -1 -Af
>>>   Last updated: Thu Aug 13 08:49:09 2015          Last change: Thu Aug 13 08:41:14 2015 by root via cibadmin on sl7-01
>>>   Stack: corosync
>>>   Current DC: sl7-01 (version 1.1.13-ad1f397) - partition WITHOUT quorum
>>>   3 nodes and 4 resources configured
>>> 
>>>   Online: [ sl7-01 ]
>>>   RemoteOnline: [ remote-vm2 remote-vm3 ]
>>> 
>>>    dummy-remote-A (ocf::heartbeat:Dummy): Started remote-vm2
>>>    dummy-remote-B (ocf::heartbeat:Dummy): Started remote-vm3
>>>    remote-vm2     (ocf::pacemaker:remote):        Started sl7-01
>>>    remote-vm3     (ocf::pacemaker:remote):        Started sl7-01
>> 
>>  It is possible that there was a time when only remote-vm2 was available
>>  (so we put dummy-remote-A there) and then, before we could start
>>  dummy-remote-B there too, remote-vm3 showed up; but due to
>>  resource-stickiness="INFINITY", we didn't move dummy-remote-A.
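
As a side note, the allocation scores behind such a decision (including the stickiness contribution) can be inspected from the live CIB with crm_simulate; a minimal sketch:

  # Show the scores the scheduler assigns to each resource/node pair
  crm_simulate --live-check --show-scores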
> 
> We also think that it is caused by the timing of node recognition.
> 
> 
>> 
>>> 
>>>   (snip)
>>> 
>>>   Case 2) When we change the CLI file and apply it,
>> 
>>  You lost me here :-)
>>  Can you rephrase please?
> 
> We changed the location constraints as follows.
> Then I restarted the cluster and applied the file again.
> 
> With this change the placement becomes correct.
> 
> 
> (snip)
> location loc1 dummy-remote-A \
>   rule 200: #uname eq remote-vm3 \
>   rule 100: #uname eq remote-vm2 \
>   rule -inf: #uname ne remote-vm2 and #uname ne remote-vm3 \
>   rule -inf: #uname eq sl7-01
> location loc2 dummy-remote-B \
>   rule 200: #uname eq remote-vm3 \
>   rule 100: #uname eq remote-vm2 \
>   rule -inf: #uname ne remote-vm2 and #uname ne remote-vm3 \
>   rule -inf: #uname eq sl7-01
> (snip)
> 
> 
> 
>> 
>>>   the resources are placed as follows.
>>>    This time the resources are placed correctly:
>>>    dummy-remote-A starts on remote-vm3.
>>>    dummy-remote-B starts on remote-vm3.
>>> 
>>> 
>>>   (snip)
>>>   location loc1 dummy-remote-A \
>>>     rule 200: #uname eq remote-vm3 \
>>>     rule 100: #uname eq remote-vm2 \
>>>     rule -inf: #uname ne remote-vm2 and #uname ne remote-vm3 \
>>>     rule -inf: #uname eq sl7-01
>>>   location loc2 dummy-remote-B \
>>>     rule 200: #uname eq remote-vm3 \
>>>     rule 100: #uname eq remote-vm2 \
>>>     rule -inf: #uname ne remote-vm2 and #uname ne remote-vm3 \
>>>     rule -inf: #uname eq sl7-01
>>>   (snip)
>>> 
>>> 
>>>   [root@sl7-01 ~]# crm_mon -1 -Af
>>>   Last updated: Thu Aug 13 08:55:28 2015          Last change: Thu Aug 13 08:55:22 2015 by root via cibadmin on sl7-01
>>>   Stack: corosync
>>>   Current DC: sl7-01 (version 1.1.13-ad1f397) - partition WITHOUT quorum
>>>   3 nodes and 4 resources configured
>>> 
>>>   Online: [ sl7-01 ]
>>>   RemoteOnline: [ remote-vm2 remote-vm3 ]
>>> 
>>>    dummy-remote-A (ocf::heartbeat:Dummy): Started remote-vm3
>>>    dummy-remote-B (ocf::heartbeat:Dummy): Started remote-vm3
>>>    remote-vm2     (ocf::pacemaker:remote):        Started sl7-01
>>>    remote-vm3     (ocf::pacemaker:remote):        Started sl7-01
>>> 
>>>   (snip)
>>> 
>>>   As for the resource placement being wrong with the first CLI file,
>>>   it looks as if a location constraint that refers to a remote node is
>>>   not evaluated until the corresponding remote resource has started.
>>> 
>>>   The placement becomes correct with the revised CLI file, but writing
>>>   the constraints this way becomes very cumbersome when we compose a
>>>   cluster with more nodes.
>>> 
>>>   Shouldn't the evaluation of placement constraints for a remote node
>>>   be delayed until that remote node has started?
>> 
>>  Potentially.  I’d need a crm_report to confirm though.
> 
> 
> Okay!
> 
> I will send crm_report tomorrow.
> If the file is too big, I will register it with Bugzilla along with these details.
> 
> Best Regards,
> Hideo Yamauchi.
> 
>> 
>>> 
>>>   Is there an easier way to describe such placement constraints for
>>>   resources on remote nodes?
>>> 
>>>    * As one workaround, we know that the placement works correctly if
>>>      we divide the first CLI file into two (see the sketch below).
>>>      * After applying the CLI that starts the remote nodes, we apply
>>>        the CLI that starts the resources.
>>>    * However, we would prefer not to divide the CLI file into two if
>>>      possible.
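
A minimal sketch of that two-step approach, assuming crmsh; the file names are illustrative (the first file would hold only the remote-vm2/remote-vm3 primitives, the second the Dummy resources and location rules):

  # Step 1: define the remote node connections and let them start
  crm configure load update remote-nodes.cli
  # Wait until crm_mon reports remote-vm2 and remote-vm3 as RemoteOnline
  crm_mon -1
  # Step 2: add the Dummy resources and their location constraints
  crm configure load update dummy-resources.cli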
>>> 
>>>   Best Regards,
>>>   Hideo Yamauchi.
>>> 
>>> 
>>>   _______________________________________________
>>>   Users mailing list: Users at clusterlabs.org
>>>   http://clusterlabs.org/mailman/listinfo/users
>>> 
>>>   Project Home: http://www.clusterlabs.org
>>>   Getting started: 
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>>   Bugs: http://bugs.clusterlabs.org
>> 
> 
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 



