[ClusterLabs] Antw: Re: corosync/pacemaker on ~100 nodes cluster

Radoslaw Garbacz radoslaw.garbacz at xtremedatainc.com
Fri Sep 2 14:26:54 UTC 2016


Indeed, the cluster is quite sluggish when responding to events, but that
is still acceptable for me, since the priority is to have it running with
many nodes. In my case the network is used quite heavily, but the shared
storage was limited. The settings that worked for the 55 nodes I tested
were chosen just to get it running; they are not reasonable as a long-term
solution (hence my post). For me, "pacemaker-remote" seems to be the way
to go beyond the 16-ish "corosync" limit.
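
In case it is useful to anyone following the thread, here is a minimal
sketch of what that pacemaker-remote setup looks like with pcs (the host
name "remote1.example.com" is just a placeholder; it assumes the
pacemaker-remote package is installed and /etc/pacemaker/authkey is shared
with the cluster nodes):

  # on the prospective remote node: start the remote daemon (TCP 3121)
  systemctl enable --now pacemaker_remote

  # on any full cluster node: integrate the host as a remote node
  pcs resource create remote1 ocf:pacemaker:remote server=remote1.example.com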


On Thu, Aug 25, 2016 at 1:19 AM, Ulrich Windl <Ulrich.Windl at rz.uni-regensburg.de> wrote:

> Hi!
>
> I have two questions:
> 1) TOTEM, being a ring protocol, has to pass each message to every
> node, one after the other, right? Wouldn't that introduce a significant
> delay in message processing?
> 2) If you use some shared storage (shared disks), how do you provide
> sufficient bandwidth? I'm assuming that 99 of the 100 nodes don't have an
> idle/standby role in the cluster.
>
> Regards,
> Ulrich
>
> >>> Radoslaw Garbacz <radoslaw.garbacz at xtremedatainc.com> wrote on
> 24.08.2016 at 19:49 in message
> <CAHBw7oQoQt_9BXGDuFH5N0-bZr8_eK3h3eGidWaDpxYkoVOn5A at mail.gmail.com>:
> > Hi,
> >
> > Thank you for the advice. Indeed, it seems like Pacemaker Remote will
> > solve my big cluster problem.
> >
> > With regard to your questions about my current solution: I scale the
> > corosync parameters based on the number of nodes, and additionally modify
> > some of the kernel network parameters. The tests I ran let me select
> > corosync settings which work, but which are possibly not the best (the
> > cluster is quite slow when reacting to some quorum-related events).
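> >
> > (As a side note on the scaling: if I read corosync.conf(5) correctly,
> > corosync 2.x with a nodelist already scales the token timeout on its own
> > as token + (number_of_nodes - 2) * token_coefficient, where
> > token_coefficient defaults to 650 ms. With token: 12000 and 55 nodes
> > that gives 12000 + 53 * 650 = 46450 ms as the effective token timeout.)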
> >
> > The problem seems to be related only to cluster start; once running,
> > operations such as node loss/reconnect and agent creation/start/stop work
> > well. On the hardware side, memory and network seem to be what matters.
> >
> > Below are the settings I used for my latest test (the largest working
> > cluster I have tried):
> > * latest pacemaker/corosync
> > * 55 c3.4xlarge nodes (Amazon cloud)
> > * 55 active nodes, 552 resources in the cluster
> > * kernel settings:
> > net.core.wmem_max = 12582912
> > net.core.rmem_max = 12582912
> > net.ipv4.tcp_rmem = 10240 87380 12582912
> > net.ipv4.tcp_wmem = 10240 87380 12582912
> > net.ipv4.tcp_window_scaling = 1
> > net.ipv4.tcp_timestamps = 1
> > net.ipv4.tcp_sack = 1
> > net.ipv4.tcp_no_metrics_save = 1
> > net.core.netdev_max_backlog = 5000
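> >
> > (For reference, one way to apply these: at runtime with sysctl -w, or
> > persistently via a drop-in file; the file name below is just an example.)
> >
> >   sysctl -w net.core.rmem_max=12582912   # apply one setting now
> >   # or put all of the above in /etc/sysctl.d/99-cluster-net.conf and run:
> >   sysctl --system                        # reload all sysctl configuration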
> >
> > * corosync settings:
> > token: 12000
> > consensus: 16000
> > join: 1500
> > send_join: 80
> > merge: 2000
> > downcheck: 2000
> > max_network_delay: 150 # for Azure
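> >
> > (These live in the totem section of /etc/corosync/corosync.conf; trimmed
> > to just the tuned values, the section looks something like this:)
> >
> >   totem {
> >       version: 2
> >       token: 12000
> >       consensus: 16000
> >       join: 1500
> >       send_join: 80
> >       merge: 2000
> >       downcheck: 2000
> >       max_network_delay: 150
> >   }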
> >
> > Best regards,
> >
> >
> > On Tue, Aug 23, 2016 at 12:00 PM, Ken Gaillot <kgaillot at redhat.com> wrote:
> >
> >> On 08/23/2016 11:46 AM, Klaus Wenninger wrote:
> >> > On 08/23/2016 06:26 PM, Radoslaw Garbacz wrote:
> >> >> Hi,
> >> >>
> >> >> I would like to ask for settings (and hardware requirements) to have
> >> >> corosync/pacemaker running on a cluster of about 100 nodes.
> >> > Actually, I had thought that 16 was the limit for full
> >> > Pacemaker cluster nodes.
> >> > For larger deployments, pacemaker-remote should be the way to go. Were
> >> > you speaking of a cluster with remote nodes?
> >> >
> >> > Regards,
> >> > Klaus
> >> >>
> >> >> For now some nodes get totally frozen (high CPU, high network usage),
> >> >> so that even logging in is not possible. By manipulating
> >> >> corosync/pacemaker/kernel parameters I managed to run it on a ~40-node
> >> >> cluster, but I am not sure which parameters are critical, how to make
> >> >> it more responsive, or how to push the number of nodes even higher.
> >>
> >> 16 is a practical limit without special hardware and tuning, so that's
> >> often what companies that offer support for clusters will accept.
> >>
> >> I know people have gone well beyond 16 with a lot of optimization, but
> >> I think somewhere between 32 and 64 nodes corosync can't keep up with
> >> the messages. Your 40 nodes sounds about right. I'd be curious to hear what
> >> you had to do (with hardware, OS tuning, and corosync tuning) to get
> >> that far.
> >>
> >> As Klaus mentioned, Pacemaker Remote is the preferred way to go beyond
> >> that currently:
> >>
> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Remote/index.html
> >>
> >> >> Thanks,
> >> >>
> >> >> --
> >> >> Best Regards,
> >> >>
> >> >> Radoslaw Garbacz
> >> >> XtremeData Incorporation
> >>
> >
> > --
> > Best Regards,
> >
> > Radoslaw Garbacz
> > XtremeData Incorporation
>



-- 
Best Regards,

Radoslaw Garbacz
XtremeData Incorporation