[ClusterLabs] Antw: [EXT] Re: Resource balancing and "ptest scores"

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Thu Feb 25 05:15:16 EST 2021


>>> Ken Gaillot <kgaillot at redhat.com> wrote on 24.02.2021 at 23:45 in message
<6373352fd18e819bada715a7d610499a658eda29.camel at redhat.com>:
> On Wed, 2021-02-24 at 11:16 +0100, Ulrich Windl wrote:
>> Hi!
>> 
>> Using a utilization-based placement strategy (placement-strategy=balanced),
>> I wonder why Pacemaker chose node h16 to place a new resource.
>> 
>> The situation before placement looks like this:
>> Remaining: h16 capacity: utl_ram=207124 utl_cpu=340
>> Remaining: h18 capacity: utl_ram=209172 utl_cpu=360
>> Remaining: h19 capacity: utl_ram=180500 utl_cpu=360
>> 
>> So h18 has the most capacity left.
>> 
>> A new resource prm_xen_v16 (utilization utl_cpu=40 utl_ram=16384)
>> will be placed on h16, however, and I don't understand why:
>> 
>> Transition Summary:
>>  * Start      prm_xen_v16           ( h16 )
>>  * Start      prm_cron_snap_v16     ( h16 )
>> 
>> (the "snap" resource depends on the xen resource)
>> 
>> The cluster actually placed the resource as indicated, leaving:
>> Remaining: h16 capacity: utl_ram=190740 utl_cpu=300
>> Remaining: h18 capacity: utl_ram=209172 utl_cpu=360
>> Remaining: h19 capacity: utl_ram=180500 utl_cpu=360
>> 
>> So h18 still has the most capacity left.
>> 
>> I have 5 VMs on h16, 3 VMs on h18, and 2 VMs on h19...
>> 
>> pacemaker-2.0.4+20200616.2deceaa3a-3.3.1.x86_64 on SLES15 SP2.
>> 
>> Regards,
>> Ulrich
> 
> Pacemaker checks the resource's node scores first (highest score wins
> the resource -- assuming the node has sufficient capacity, of course).
> Only if node scores are equal will it choose based on free capacity.

Hi Ken!

Thanks again. Unfortunately I don't know what might have influenced the node
score.
I don't think we had failures or location constraints active.
I was surprised, because _usually_ the resource balancing works quite nicely.
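
Next time this happens I'll try to capture the allocation scores directly; if I
read the tools right, something like this against the live CIB should show both
the scores and the remaining capacity (crm_simulate being the successor of the
old ptest):

  # Show allocation scores (-s) and utilization/remaining capacity (-U)
  # for the live cluster (-L); run on the DC or any node with the CIB
  crm_simulate -L -s -U

That should make it visible whether h16 really had a higher score for
prm_xen_v16, or whether the capacity comparison itself went a different way
than I expected.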

> 
> For example, if a location constraint gives a particular node a high
> enough preference, that will be considered more important than free
> capacity. ("High enough" being relative to the rest of the
> configuration -- other constraint scores, etc.)

The thing is that I have no location constraints.
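
To be sure, I'll double-check the CIB for anything that could score h16 higher,
e.g. with something along these lines (hoping I have the options right):

  # Dump all constraints currently in the CIB (should contain no
  # location constraints for prm_xen_v16 in my case)
  cibadmin -Q -o constraints

  # Confirm the utilization values actually configured for the resource
  crm_resource -r prm_xen_v16 -z -g utl_ram
  crm_resource -r prm_xen_v16 -z -g utl_cpu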

Maybe we need some "cluster Alexa" that says: "I started resource X on node Y
because ..." (where "..." is a good explanation) ;-)
Currently I feel that only true voodoo priests know how the internals work.

Regards,
Ulrich

> -- 
> Ken Gaillot <kgaillot at redhat.com>




