[ClusterLabs] Policy engine: Limit clones processing to location constraints

Ken Gaillot kgaillot at redhat.com
Fri Mar 1 16:53:22 EST 2019


On Wed, 2019-02-27 at 17:25 +0200, Michael Kolomiets wrote:
> Hi
> I have some cloned resources (dlm, clvmd, conntrackd, etc.) whose
> location is limited to the hardware nodes only (9 servers). But I see
> in the log that the PE is trying to calculate the state for a larger
> number of these clones, in an amount that, I suspect, corresponds to
> the total number of nodes, hardware and Pacemaker Remote alike.
> I think these calculations slow down the cluster's PE operation, and
> I would like to reduce them.
> Can I use the clone-max attribute, or something else, to achieve this
> reduction?
> 
> Here is an example log for one of the cloned resource groups.
> 
> Feb 26 15:34:14 [48029] lwb01-n05.XXX    pengine:     info:
> rsc_merge_weights:  dlm:9: Rolling back scores from clvmd:9
> Feb 26 15:34:14 [48029] lwb01-n05.XXX    pengine:     info:
> native_color:       Resource dlm:9 cannot run anywhere
> Feb 26 15:34:14 [48029] lwb01-n05.XXX    pengine:     info:
> native_color:       Resource clvmd:9 cannot run anywhere
> ...
> Feb 26 15:34:14 [48029] lwb01-n05.XXX    pengine:     info:
> rsc_merge_weights:  dlm:84: Rolling back scores from clvmd:84
> Feb 26 15:34:14 [48029] lwb01-n05.XXX    pengine:     info:
> native_color:       Resource dlm:84 cannot run anywhere
> Feb 26 15:34:14 [48029] lwb01-n05.XXX    pengine:     info:
> native_color:       Resource clvmd:84 cannot run anywhere

Correct: you can set clone-max to the number of full cluster nodes to
get rid of the log messages. You'll have to remember to change it
manually if you add or remove cluster nodes.
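For example, assuming the clones are named dlm-clone and clvmd-clone
(hypothetical IDs; substitute the actual clone resource names from your
configuration) and a 9-node cluster, the meta-attribute could be set
like this:

```shell
# Set clone-max on each clone to the number of full cluster nodes (9 here).
# Resource IDs below are placeholders; use your real clone IDs.

# With pcs:
pcs resource meta dlm-clone clone-max=9
pcs resource meta clvmd-clone clone-max=9

# Or with crm_resource directly:
crm_resource --resource dlm-clone --meta \
    --set-parameter clone-max --parameter-value 9
```

If the node count later changes, the same commands can be rerun with
the new value.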

I'm not sure it'll have much of a performance impact. It does give me
the idea that it could be worthwhile to put the scheduler execution
time in the "Calculated transition" message.
-- 
Ken Gaillot <kgaillot at redhat.com>