[ClusterLabs] PCMK_ipc_buffer recommendation

Ken Gaillot kgaillot at redhat.com
Mon Jan 21 17:38:03 EST 2019


On Sat, 2019-01-19 at 00:46 +0200, Michael Kolomiets wrote:
> Hi
> Ken, what does "active client" mean - cluster nodes, concurrent jobs,
> or something like that?

A local IPC client on the same node. The daemons use IPC to
communicate with each other and with command-line tools. (Note this is
unrelated to communication between different nodes, which uses
corosync.)

Some daemons are clients of other daemons; e.g., the controller is a
client of the CIB manager, the scheduler, the attribute manager, and
the fencer. This accounts for 0-3 (relatively permanent) clients,
depending on the daemon.

Then certain command-line tools are clients (relatively briefly). For
example, crm_attribute might be a client of the attribute manager or
the CIB manager. Some resource agents call crm_attribute in their
monitor action, so the number of crm_attribute clients at any given
moment depends on how many monitors using those agents are running.
stonith_admin might be a client of the fencer, and so on.
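
As a rough illustration (this snippet is my own sketch, not from any
particular agent; the attribute name and health-check helper are made
up), a monitor action that sets a node attribute looks something like:

    # Hypothetical monitor fragment: each crm_attribute invocation is a
    # short-lived IPC client of the attribute manager (or CIB manager).
    node=$(crm_node -n)            # local node name
    score=$(my_app_health_check)   # hypothetical helper
    crm_attribute --node "$node" --name my-health-score \
        --update "$score" --lifetime reboot

If 100 such monitors fire at the same time, that's 100 brief clients
of the attribute manager at once.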

> We had an issue with the buffer size, and when we increased it to
> 4MB the problem went away. But that value wasn't based on anything,
> so I'd like to know how to calculate the right IPC buffer size.
> Our cluster has nine nodes and about 80 pacemaker_remote members, so
> what IPC buffer size should I set?

I wish there were a convenient formula, but all we have now are the
log messages that say when it's too small. It's generally correlated
with the size of the CIB.
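
A rough sketch of what I'd do (the numbers here are examples, not an
official formula): check the uncompressed CIB size and set the buffer
comfortably above it, then restart pacemaker.

    # Size of the live CIB in bytes, run on any cluster node:
    cibadmin --query | wc -c

    # In /etc/sysconfig/pacemaker (/etc/default/pacemaker on
    # Debian-based systems), value in bytes; 4MB shown since that
    # worked in your case:
    PCMK_ipc_buffer=4194304

The setting needs to be applied on every node, since each daemon reads
it locally.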

> On Fri, 18 Jan 2019 at 18:24, Ken Gaillot <kgaillot at redhat.com> wrote:
> > Each daemon will need 10MB per active client. The number of clients
> > is unlikely to grow large in normal operation (maybe a dozen or
> > so?), though one could imagine a runaway loop in some script
> > spawning a bunch of commands that need client connections, or 100
> > resource monitors all setting node attributes at the same time.
-- 
Ken Gaillot <kgaillot at redhat.com>
