[ClusterLabs] About Corosync up to 16 nodes limit

Ken Gaillot kgaillot at redhat.com
Thu Jul 6 10:00:00 EDT 2017

On 07/06/2017 03:51 AM, mlb_1 wrote:
> Thanks for your solution.
> Can anybody officially reply to this topic?

Digimer is correct: the Red Hat and SUSE limits are support limits each
vendor chose for itself, not something enforced by the code. There are
no hard limits in the code, but practically speaking it is very
difficult to go beyond 32 corosync nodes.

Pacemaker Remote is the currently recommended way to scale a cluster
larger. With Pacemaker Remote, a small number of nodes run the full
cluster stack including corosync and all pacemaker daemons, while the
Pacemaker Remote nodes run only a single pacemaker daemon (the local
resource manager). This allows the number of nodes to scale much higher.
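For illustration, integrating a remote node typically looks something
like the sketch below, using pcs. The resource name, address, and paths
here are made up for the example, and the exact commands vary by pcs
version, so treat this as an outline rather than a recipe:

```shell
# Hypothetical sketch: add a Pacemaker Remote node to an existing cluster.
# Assumes the pacemaker-remote package is installed on the remote host and
# that /etc/pacemaker/authkey is identical on all nodes (full and remote).

# On the remote host, start the remote daemon:
#   systemctl enable --now pacemaker_remote

# On a full cluster node, create the ocf:pacemaker:remote resource that
# represents the remote node; "remote1" and the address are assumptions:
pcs resource create remote1 ocf:pacemaker:remote server=192.168.122.101

# The remote node should then show up alongside the full cluster nodes:
pcs status nodes
```

Because the remote host never joins the corosync membership, it adds no
token-passing overhead, which is why remote node counts can scale so
much higher than corosync node counts.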

As I understand it, the corosync limit is mainly a function of needing
to pass the token around to all nodes in a small amount of time, to
guarantee that each message has been received by every node, and in
order. Therefore the speed and reliability of the network, the nodes'
network interfaces, and the nodes' ability to process network traffic
are the main bottlenecks to larger clusters.
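To make the token-timing concern concrete: in corosync 2.x the effective
token timeout grows with the node count. Per the corosync.conf(5)
defaults (token = 1000 ms, token_coefficient = 650 ms; check your own
man page, since these can differ by version), the runtime timeout is
token + (nodes - 2) * token_coefficient:

```shell
# Sketch of corosync 2.x's effective token timeout, assuming the
# documented defaults (token=1000ms, token_coefficient=650ms).
nodes=16
token=1000             # ms, default "token" in the totem section
token_coefficient=650  # ms, default "token_coefficient"
echo $(( token + (nodes - 2) * token_coefficient ))  # prints 10100 (ms)
```

So a 16-node cluster already tolerates a roughly 10-second token loss
before declaring a membership change, and the window keeps widening as
nodes are added, which is one reason recovery gets slower on large
clusters.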

In Pacemaker, the bottlenecks I'm aware of are:

- the size of the CIB, which must be passed frequently between nodes
  over the network (and compressed when it is large);
- the time it takes the policy engine to calculate necessary actions in
  a complex cluster (lots of nodes, resources, and constraints);
- the time it takes to complete a DC election when a node leaves or
  rejoins the cluster;
- and, to a lesser extent, some daemon communication that is less
  efficient than it could be, due to the need to support rolling
  upgrades from older versions.
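As a rough way to gauge the CIB-size bottleneck on a live cluster, you
can dump the CIB and compare its raw and compressed sizes. This is a
sketch: it assumes cibadmin is in PATH on a cluster node, /tmp/cib.xml
is just a scratch path, and gzip is used only as a stand-in to estimate
compressibility, not as the exact algorithm Pacemaker uses internally:

```shell
# Sketch: estimate how much data the cluster ships around per CIB sync.
# Requires a running Pacemaker cluster.
cibadmin --query > /tmp/cib.xml
echo "raw bytes:        $(wc -c < /tmp/cib.xml)"
echo "compressed bytes: $(gzip -c /tmp/cib.xml | wc -c)"
```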

Scalability is a major area of interest for future corosync and
pacemaker development.

> At 2017-07-06 11:45:05, "Digimer" <lists at alteeve.ca> wrote:
>>I'm not employed by Red Hat, so I can't speak authoritatively.
>>My understanding, however, is that they do not distinguish, as corosync
>>on its own doesn't do much. The complexity starts with corosync traffic,
>>but it becomes more of a concern when you add in pacemaker traffic
>>and/or the CIB grows large.
>>Again, there is no hard code limit here, just what is practical. Can I
>>ask how large of a cluster you are planning to build, and what it will
>>be used for?
>>Note also: this is not related to Pacemaker Remote. You can have very
>>large counts of remote nodes.
>>On 2017-07-05 11:27 PM, mlb_1 wrote:
>>> Is it Red Hat that limits the node count, or corosync's code?
>>> At 2017-07-06 11:11:39, "Digimer" <lists at alteeve.ca> wrote:
>>>>On 2017-07-05 09:03 PM, mlb_1 wrote:
>>>>> Hi:
>>>>>       I heard the corosync node count is limited to 16? Is that true? And why?
>>>>>      Thanks for anyone's answer.
>>>>>  https://specs.openstack.org/openstack/fuel-specs/specs/6.0/pacemaker-improvements.html 
>>>>>   * Corosync 2.0 has a lot of improvements that allow having up to 100
>>>>>     Controllers. Corosync 1.0 scales up to 10-16 nodes.
>>>>There is no hard limit on how many nodes can be in a cluster, but Red
>>>>Hat supports up to 16. SUSE supports up to 32, iirc. The problem is that
>>>>it gets harder and harder to keep things stable as the number of nodes
>>>>grows. There is a lot of coordination that has to happen between the
>>>>nodes, and it gets ever more complex.
>>>>Generally speaking, you don't want large clusters. It is always advised
>>>>to break things up into separate smaller clusters whenever possible.
>>Papers and Projects: https://alteeve.com/w/
>>"I am, somehow, less interested in the weight and convolutions of
>>Einstein’s brain than in the near certainty that people of equal talent
>>have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
