[ClusterLabs] Limit the number of resources starting/stopping in parallel possible?

Knauf Steffen knauf at patronas.com
Tue Sep 19 04:11:27 EDT 2023


Hi Ken,

that sounds good. I'll test the option. Perhaps we'll also change something about the resource type. Our systemd resources do some things with Docker containers (start, stop, ...). We need the real status of the dockerized application (perhaps via a REST endpoint); "up & running" does not correspond to the actual state of the dockerized application. But that's another topic 😉
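For what it's worth, the monitor idea above could be sketched roughly like this inside a resource agent: instead of trusting the systemd unit state alone, query the application's health endpoint. This is only a sketch under assumptions; the endpoint URL, port, and return codes (0 = OCF_SUCCESS, 7 = OCF_NOT_RUNNING) are hypothetical placeholders, not anything from the actual setup:

```shell
# Hypothetical monitor helper for a resource agent.
# HEALTH_URL is an assumed example endpoint, not a real one.
HEALTH_URL="http://localhost:8080/health"

monitor() {
    # -f: fail on HTTP errors, -sS: quiet but show errors,
    # --max-time: don't hang past the monitor timeout
    if curl -fsS --max-time 5 "$HEALTH_URL" >/dev/null; then
        return 0   # OCF_SUCCESS: application answered its health check
    else
        return 7   # OCF_NOT_RUNNING: unit may be "active" but app is not serving
    fi
}
```

The point is just that the monitor action returns based on the application's own health check rather than on systemd's view of the unit.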

Thanks and greets

Steffen
________________________________
From: Users <users-bounces at clusterlabs.org> on behalf of Ken Gaillot <kgaillot at redhat.com>
Sent: Monday, 18 September 2023 16:36
To: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
Subject: Re: [ClusterLabs] Limit the number of resources starting/stopping in parallel possible?

On Mon, 2023-09-18 at 14:24 +0000, Knauf Steffen wrote:
> Hi,
>
> we have multiple clusters (2 node + quorum setup) with more than 100
> resources (10 x VIP + 90 microservices) per node.
> If the Resources are stopped/started at the same time the Server is
> under heavy load, which may result into timeouts and an unresponsive
> server.
> We configured some ordering constraints (VIP --> Microservice). Is
> there a way to limit the number of resources starting/stopping in
> parallel?
> Perhaps you have some other tips for handling such a situation.
>
> Thanks & greets
>
> Steffen
>

Hi,

Yes, see the batch-limit cluster option:

https://clusterlabs.org/pacemaker/doc/2.1/Pacemaker_Explained/html/options.html#cluster-options
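(For reference, batch-limit is set like any other cluster property. A minimal example with pcs, assuming pcs is the management tool in use; the value 10 is just an illustration to be tuned to the hardware, and crmsh users would use `crm configure property batch-limit=10` instead:)

```shell
# Allow Pacemaker to run at most 10 actions in flight at once
pcs property set batch-limit=10

# Verify the setting (on older pcs versions: pcs property show batch-limit)
pcs property config batch-limit
```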

--
Ken Gaillot <kgaillot at redhat.com>

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

