[Pacemaker] node score question?

Andreas Kurz andreas.kurz at linbit.com
Thu Oct 15 21:06:50 UTC 2009


Hello,

On Wed, October 14, 2009 18:21, Mihai Vintila wrote:
> Hello,
> I have a config like this:
> Online: [ int3.test.tst wor1.test.tst wor2.test.tst wor3.test.tst
> int1.test.tst int2.test.tst ]
>
> ip1     (ocf::heartbeat:IPaddr):        Started int1.test.tst
> ip2     (ocf::heartbeat:IPaddr):        Started int2.test.tst
> ip3     (ocf::heartbeat:IPaddr):        Started int3.test.tst
> ip4     (ocf::heartbeat:IPaddr):        Started wor2.test.tst
> ip5     (ocf::heartbeat:IPaddr):        Started wor1.test.tst
> ip6     (ocf::heartbeat:IPaddr):        Started wor3.test.tst
> Clone Set: clone_sql
>     Started: [ int2.test.tst int1.test.tst ]
> Clone Set: clone_sip
>     Started: [ int3.test.tst int1.test.tst int2.test.tst ]
> Clone Set: clone_http
>     Started: [ int1.test.tst int2.test.tst ]
> Clone Set: clone_pbx
>     Started: [ wor2.test.tst wor3.test.tst wor1.test.tst ]
> Clone Set: clone_cache
>     Started: [ wor2.test.tst wor1.test.tst wor3.test.tst ]
>
>
>
> I want to keep ip1 always tied to int1, ip2 to int2, and so on. My
> problem is that even if I put a location constraint with INFINITY on an
> ip resource for some node, there are cases like the above where the ip
> resources are reversed. For example, in this situation ip5 should be on
> wor2 and ip4 on wor1.
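
A pinning rule of that kind is presumably a plain rsc_location
constraint, something along these lines (the id here is made up):

<rsc_location id="ip5-on-wor2" rsc="ip5" node="wor2.test.tst" score="INFINITY"/>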
>
> The reason for this is that in the log two nodes have INFINITY, and it
> probably chooses the one with the lower id in openais or something.
>
> Oct 14 17:50:33 int1 pengine: [8648]: WARN: native_choose_node: 2 nodes
> with equal score (INFINITY) for running ip4 resources.  Chose wor2.test.tst.
> Oct 14 17:50:33 int1 pengine: [8648]: WARN: native_choose_node: 2 nodes
> with equal score (INFINITY) for running ip5 resources.  Chose wor1.test.tst.
> Oct 14 17:50:33 int1 pengine: [8648]: WARN: native_choose_node: 2 nodes
> with equal score (INFINITY) for running ip6 resources.  Chose wor3.test.tst.
>
> My question is: how is this score calculated, since in the rules only
> one node has INFINITY for location? And second, where can I view the
> current score for each resource/node?
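
For the second question: the ptest -sL output quoted further down is the
right tool for this -- run against the live CIB, it prints the allocation
score of every resource on every node.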
>
> Apart from the location rules, I also set colocation constraints so that
> ip1, for example, can start only on nodes with clone_sql, clone_sip and
> clone_http; the same for ip2. And two order constraints so that
> clone_mysql is started before sip and before pbx.
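
Constraints of that shape would look roughly like this in the CIB (the
ids are invented; the resource names are taken from the status output
above):

<rsc_colocation id="ip1-with-sql" rsc="ip1" score="INFINITY" with-rsc="clone_sql"/>
<rsc_order id="sql-before-sip" first="clone_sql" then="clone_sip"/>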
>
> The behavior I want to obtain is that ip1 can move to int2 if some
> resource fails on int1, but not to have ip1 on int2 while ip2 is on int1.
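
A score of INFINITY cannot express a mere preference. One way to get
"prefer int1, but allow failover to int2" would be a finite positive
score on the home node, for example (id and value only illustrative):

<rsc_location id="ip1-prefers-int1" rsc="ip1" node="int1.test.tst" score="1000"/>

ip1 then stays on int1 whenever int1 is eligible, but may still run
elsewhere when it is not.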
>
>
> Also, what role does <nvpair name="default-resource-stickiness"
> id="cib-bootstrap-options-default-resource-stickiness" value="0"/> play
> in this? Which node is the one it forces the resource to stick to? I've
> tried it set to 2 and to 0 and I get the same result. Also,
> symmetric-cluster is set to false, since I have -INFINITY location rules
> to constrain where the resources start.
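
default-resource-stickiness does not pick a particular node: its value is
added to the score of whichever node the resource is currently running
on, at every placement decision. A non-zero default would be set like
this (value only illustrative):

<nvpair id="cib-bootstrap-options-default-resource-stickiness"
        name="default-resource-stickiness" value="100"/>

Against scores that are already INFINITY, a stickiness of 2 changes
nothing, which is presumably why 0 and 2 behave the same here.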
>
>
> If someone could explain this to me I'd really appreciate it, since I
> can't find anything useful in the docs related to this.
>
> An output from ptest -sL shows:
> native_color: ip4 allocation score on wor2.test.tst: 1000000
> native_color: ip4 allocation score on int2.test.tst: -1000000
> native_color: ip4 allocation score on wor3.test.tst: 1000000
> native_color: ip4 allocation score on int3.test.tst: -1000000
> native_color: ip4 allocation score on wor1.test.tst: 1000000
> native_color: ip4 allocation score on int1.test.tst: -1000000
> native_color: ip5 allocation score on wor1.test.tst: 1000000
> native_color: ip5 allocation score on int1.test.tst: -1000000
> native_color: ip5 allocation score on wor2.test.tst: 1000000
> native_color: ip5 allocation score on int2.test.tst: -1000000
> native_color: ip5 allocation score on wor3.test.tst: 1000000
> native_color: ip5 allocation score on int3.test.tst: -1000000
> native_color: ip6 allocation score on int3.test.tst: -1000000
> native_color: ip6 allocation score on wor1.test.tst: 1000000
> native_color: ip6 allocation score on int1.test.tst: -1000000
> native_color: ip6 allocation score on wor2.test.tst: 1000000
> native_color: ip6 allocation score on int2.test.tst: -1000000
> native_color: ip6 allocation score on wor3.test.tst: 1000000
>
>
>
> While the only rules for ip6, for example, are:
> <rsc_location id="infrastructure6.3" rsc="ip6" node="int3.test.tst" score="-INFINITY"/>
> <rsc_location id="infrastructure6.4" rsc="ip6" node="wor1.test.tst" score="0"/>
> <rsc_location id="infrastructure6.1" rsc="ip6" node="int1.test.tst" score="-INFINITY"/>
> <rsc_location id="infrastructure6.5" rsc="ip6" node="wor2.test.tst" score="0"/>
> <rsc_location id="infrastructure6.2" rsc="ip6" node="int2.test.tst" score="-INFINITY"/>
> <rsc_location id="infrastructure6.6" rsc="ip6" node="wor3.test.tst" score="INFINITY"/>

first score of INF for wor3

> <rsc_colocation id="ip-colo-6" rsc="ip6" score="INFINITY" with-rsc="clone_pbx"/>

second score of INF for every node where clone_pbx is located (wor1 ||
wor2 || wor3)

> <rsc_colocation id="ip-colo-10" rsc="ip6" score="INFINITY" with-rsc="clone_cache"/>

third score of INF for every node where clone_cache is located (wor1 ||
wor2 || wor3)
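
So all three worker nodes end up with the same total: wor3 gets INFINITY
from its location rule plus INFINITY from each colocation, while wor1 and
wor2 get 0 from their location rules plus INFINITY from each colocation.
Pacemaker represents INFINITY internally as 1000000, and score addition
saturates there, so:

  wor3: INFINITY + INFINITY + INFINITY = 1000000
  wor1:        0 + INFINITY + INFINITY = 1000000
  wor2:        0 + INFINITY + INFINITY = 1000000

That is exactly the 1000000 you see for ip6 on all three worker nodes in
the ptest output, and with equal scores the policy engine has to pick a
node itself -- hence the native_choose_node warnings.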

Regards,
Andreas

>
> Why do I get 3 nodes with the same score of 1000000 (which I understand
> is INFINITY)?
>
>
>
> With respect,
> Vintila Mihai Alexandru
>
>
> Best regards,
> Mihai Vintila
> 4PSA - Providing Server Solutions
> Technical Support Engineer


-- 
: Andreas Kurz
: LINBIT | Your Way to High Availability
: Tel +43-1-8178292-64, Fax +43-1-8178292-82
:
: http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
