[ClusterLabs] Antw: Re: Antw: [EXT] Move a resource only where another has Started
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Fri Oct 8 04:05:54 EDT 2021
>>> martin doc <db1280 at hotmail.com> wrote on 08.10.2021 at 09:24 in message
<PS2P216MB0546195EFC2E0A1730BCF825C2B29 at PS2P216MB0546.KORP216.PROD.OUTLOOK.COM>:
> Hi,
>
> Yes, the suggestion to use a rule helped some. I had tried that, but what I
> got wrong was that the attribute name for the score stored by ping is not "ping"
> but "pingd" (yay, backwards compatibility). Thanks, Ken, for the pointer and for
> getting me to go back to that.
Actually the RA has a "name" parameter; I used params name=val_net_gw1, so I get:
Node Attributes:
  * Node: h16:
    * val_net_gw1                       : 1000
  * Node: h18:
    * val_net_gw1                       : 1000
  * Node: h19:
    * val_net_gw1                       : 1000
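For reference, a minimal sketch of such a ping clone in crm shell syntax (the
host_list, multiplier and monitor values below are just assumptions, not taken
from this thread):

    # hypothetical example; adjust host_list to your gateway(s)
    primitive p_ping ocf:pacemaker:ping \
        params name=val_net_gw1 host_list=192.168.0.1 multiplier=1000 \
        op monitor interval=10s timeout=60s
    clone cl_ping p_ping

With one reachable host and multiplier=1000 this yields the attribute value 1000
shown above.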
>
> Now I'm stuck with the problem of getting resources to rebalance when all of
> the clones are available. If I arbitrarily set the node utilization of cpu to
> 8 and memory to 10000 and then assign cpu=5 and memory=5000 to each resource,
> it does not rebalance once all of the pingd resources have a value > 0. A
Utilization basically does not "rebalance"; it limits load, and its effect also depends on the placement strategy.
The other thing is whether you really want stickiness=0 for your resources: then the cluster will reshuffle your resources frequently.
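Just as an illustration, the default stickiness is set like this in crm shell
syntax (the value is only an example; 0 lets the cluster move resources freely,
a positive value makes them stay where they are):

    # example value only
    rsc_defaults resource-stickiness=0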
> "crm_simulate" shows one node with 10 cpu & 10000 memory free, one with 0
> cpu/memory free and one half used. The utilization will prevent over
> allocation but doesn't balance out resources.
Did you try "placement-strategy=balanced"?
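For example, in crm shell syntax (the resource name and agent below are
placeholders; the cpu/memory numbers are the ones you described):

    property placement-strategy=balanced
    # node capacities as described above
    node h16 utilization cpu=8 memory=10000
    # p_my_service / Dummy are placeholders for your real resource
    primitive p_my_service ocf:heartbeat:Dummy \
        utilization cpu=5 memory=5000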
>
> A change in the state of pingd's value does cause the policy engine to do
> something but it just decides to keep all of the resources where they are.
You need a constraint rule, I guess.
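Something along these lines, in crm shell syntax (rsc_dummy is a placeholder for
your resource; the second rule prefers the node with the best ping score):

    # keep the resource off nodes without connectivity
    location loc_only_on_connected rsc_dummy \
        rule -inf: not_defined val_net_gw1 or val_net_gw1 lte 0
    # additionally prefer the node with the highest ping attribute
    location loc_prefer_best_ping rsc_dummy \
        rule val_net_gw1: defined val_net_gw1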
>
> I don't know anything about pcs colors.
>
> I will keep trying variations.