[ClusterLabs] [EXT] Limit the number of resources starting/stopping in parallel possible?

Windl, Ulrich u.windl at ukr.de
Wed Oct 11 04:28:40 EDT 2023


Hi!



I wonder (thinking of "utilization"): the types of resources are quite different (a VM vs. an IP address, say), so wouldn't it make sense to be able to define classes of resources whose parallel execution can be limited (or not)?



The current utilization mechanism cannot be used, because it does "static allocation": the accounting is based on where resources are running (or will be placed). For limiting concurrency, the accounting would have to be "dynamic", i.e. based on the resource operations currently in progress (or planned to run).
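For reference, the existing static mechanism is configured roughly like this (node names, resource names and capacity values are purely illustrative; pcs syntax assumed):

    # declare how much capacity each node offers
    pcs node utilization node1 cpu=8
    pcs node utilization node2 cpu=8
    # declare how much capacity each resource consumes
    pcs resource utilization big-vm cpu=4
    pcs resource utilization small-ip cpu=1
    # make the scheduler honor utilization when placing resources
    pcs property set placement-strategy=utilization

These values only constrain where a resource may be *placed*; they say nothing about how many start/stop operations run at the same time.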



So "resource heavy (slow)" resources could be assigned a high dynamic utilization value, while "light resources" could be assigned a small value. Depending on the nodes a basic value could be assigned to limit the number of resource operations *per node*, but not globally.



Maybe some clever mind wants to suggest something very smart... 😉



Kind regards,

Ulrich



-----Original Message-----
From: Users <users-bounces at clusterlabs.org> On Behalf Of Knauf Steffen
Sent: Monday, September 18, 2023 4:24 PM
To: users at clusterlabs.org
Subject: [EXT] [ClusterLabs] Limit the number of resources starting/stopping in parallel possible?



Hi,



we have multiple clusters (2-node + quorum setup) with more than 100 resources (10 x VIP + 90 microservices) per node.

If the resources are stopped/started at the same time, the server comes under heavy load, which may result in timeouts and an unresponsive server.

We configured some ordering constraints (VIP --> microservice). Is there a way to limit the number of resources starting/stopping in parallel?

Perhaps you have some other tips to handle such a situation.



Thanks & greets



Steffen