[ClusterLabs] Antw: Re: Utilization zones

Ferenc Wágner wferi at niif.hu
Tue Apr 19 12:36:58 EDT 2016


"Ulrich Windl" <Ulrich.Windl at rz.uni-regensburg.de> writes:

> Ferenc Wágner <wferi at niif.hu> wrote on 19.04.2016 at 13:42 in message
>
>> "Ulrich Windl" <Ulrich.Windl at rz.uni-regensburg.de> writes:
>> 
>>> Ferenc Wágner <wferi at niif.hu> wrote on 18.04.2016 at 17:07 in message
>>> 
>>>> I'm using the "balanced" placement strategy with good success.  It
>>>> distributes our VM resources according to memory size perfectly.
>>>> However, I'd like to take the NUMA topology into account.  That means
>>>> each host should have several capacity pools (of each capacity type) to
>>>> arrange the resources in.  Can Pacemaker do something like this?
>>>
>>> I think you can, but depending on VM technology, the hypervisor may
>>> not care much about NUMA. More details?
>> 
>> The NUMA placement itself would be handled by the resource agent, if
>> Pacemaker told it which utilization zone to use on its host.  I just
>> need the policy engine to do more granular resource placement and to
>> communicate the selected zone to the resource agents on the hosts.
>> 
>> I'm pretty sure there's no direct support for this, but there might be
>> different approaches I missed.  Thus I'm looking for ideas here.
>
> My initial idea was this: Define a memory capacity for every NUMA pool
> on each host, then assign your resources to NUMA pools (utilization):
> The resources will pick some host, but when one pool is full, your
> resources cannot go to another pool. Is something like this what you
> wanted?

Yes, and you also correctly see why this solution is unsatisfactory: I
don't want to tie each of my resources to a fixed fraction of the host
capacities (for example, to the first NUMA node of every host).
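
Just to make sure we mean the same thing, I read the suggestion roughly
as the following crm shell sketch (node names, attribute names and
sizes are only placeholders, and I'm assuming VirtualDomain resources):

  property placement-strategy=balanced

  # one capacity attribute per NUMA pool instead of a single "memory"
  # (the attribute names are arbitrary)
  node ha1 utilization numa0_memory=65536 numa1_memory=65536
  node ha2 utilization numa0_memory=65536 numa1_memory=65536

  # hypothetical VM: it consumes capacity from exactly one pool
  primitive vm1 ocf:heartbeat:VirtualDomain \
      params config=/etc/libvirt/qemu/vm1.xml \
      utilization numa0_memory=8192

So vm1 could never use a host's numa1 pool, however empty it is.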

If nothing better comes up, I'll probably interleave all my VM memory
and forget about the NUMA topology until I find the time to implement a
new placement strategy.  That would be an unfortunate pessimization,
though.
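
(By interleaving I mean something along the lines of the following
libvirt domain snippet, assuming libvirt-managed guests and two NUMA
nodes per host:

  <numatune>
    <!-- nodeset assumes exactly two NUMA nodes per host -->
    <memory mode='interleave' nodeset='0-1'/>
  </numatune>

which spreads each guest's memory evenly over all nodes instead of
keeping it local.)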
-- 
Thanks,
Feri



