[ClusterLabs] Antw: Re: Antw: Re: Using different folder for /var/lib/pacemaker and usage of /dev/shm files

Ken Gaillot kgaillot at redhat.com
Wed May 18 20:27:11 UTC 2016


On 05/18/2016 01:15 AM, Ulrich Windl wrote:
>>>> Ken Gaillot <kgaillot at redhat.com> wrote on 17.05.2016 at 16:53 in message
> <573B3074.1040305 at redhat.com>:
>> On 05/17/2016 04:07 AM, Nikhil Utane wrote:
>>> What I would like to understand is approximately how much total shared
>>> memory Pacemaker would need, so that I can size the partition
>>> accordingly. Currently it is 300 MB in our system. I recently ran into
>>> an insufficient-shared-memory issue because of improper clean-up, so I
>>> would like to understand how much Pacemaker would need for a 6-node
>>> cluster and increase the partition accordingly.
>>
>> I have no idea :-)
> 
> A related question would be: What's in those segments? "strings" indicates that there is a lot of XML in those segments, and as a programmer whose first computer had 400 bytes of RAM I wonder whether that is really needed... Aren't there more efficient representations for information exchange?

That design choice was way before my time, so I can't speak to the
reasons. I'm guessing it was an easy way to ensure compatibility across
nodes with different OSes, software versions and machine endianness.

>>
>> I don't think there's any way to pre-calculate it. The libqb library is
>> the part of the software stack that actually manages the shared memory,
>> but it's used by everything -- corosync (including its cpg and
>> votequorum components) and each pacemaker daemon.
>>
>> The size depends directly on the amount of communication activity in the
>> cluster, which is only indirectly related to the number of
>> nodes/resources/etc., the size of the CIB, etc. A cluster with nodes
>> joining/leaving frequently and resources moving around a lot will use
>> more shared memory than a cluster of the same size that's quiet. Cluster
>> options such as cluster-recheck-interval would also matter.
>>
>> Practically, I think all you can do is simulate expected cluster
>> configurations and loads, and see what it comes out to be.
> [...]
> 
> 
> Regards,
> Ulrich
> 
> 
