[ClusterLabs] Upgrade corosync problem

Christine Caulfield ccaulfie at redhat.com
Tue Jul 3 09:42:45 UTC 2018


On 03/07/18 07:53, Jan Pokorný wrote:
> On 02/07/18 17:19 +0200, Salvatore D'angelo wrote:
>> Today I tested the two suggestions you gave me. Here is what I did
>> in the script where I create my 5-machine cluster (I use three nodes
>> for the pacemaker PostgreSQL cluster and two nodes for glusterfs,
>> which we use for database backups and WAL files).
>>
>> FIRST TEST
>> ——————————
>> I added --shm-size=512m to the “docker create” command. I noticed
>> that as soon as I start the container the shm size is 512m, so I
>> didn’t need to add the entry in /etc/fstab. However, I did it anyway:
>>
>> tmpfs      /dev/shm      tmpfs   defaults,size=512m   0   0
>>
>> and then
>> mount -o remount /dev/shm
>>
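>> For reference, the relevant part of the “docker create” command looked
>> roughly like this (the container name and image name here are just
>> placeholders, not the exact ones from my script):
>>
>>   docker create --shm-size=512m --name pg-node1 my-cluster-image
>>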
>> Then I uninstalled all the pieces of software (crmsh, resource agents,
>> corosync and pacemaker) and installed the new versions.
>> Started corosync and pacemaker, but the same problem occurred.
>>
>> SECOND TEST
>> ———————————
>> stopped corosync and pacemaker
>> uninstalled corosync
>> built corosync with --enable-small-memory-footprint and installed it
>> started corosync and pacemaker
>>
>> IT WORKED.
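>>
>> For completeness, the rebuild was essentially the usual autotools
>> sequence with that single extra configure flag (a rough sketch; the
>> exact steps may differ depending on the source tree):
>>
>>   ./autogen.sh
>>   ./configure --enable-small-memory-footprint
>>   make && make install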
>>
>> I would like to understand now why it didn’t work in the first test
>> and why it did in the second. Which kind of memory is being used too
>> much here? /dev/shm does not seem to be the problem: I allocated 512m
>> on all three docker images (obviously on my single Mac) and enabled
>> the container option as you suggested. Am I missing something here?
> 
> My suspicion then fully shifts towards the "maximum number of bytes of
> memory that may be locked into RAM" per-process resource limit, as
> raised in one of the most recent messages ...
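> 
> To see what the container currently permits, something like this can
> help (the first command prints the soft limit in kB for the current
> shell, the second the limits of the running corosync process):
> 
>   ulimit -l
>   grep "locked memory" /proc/$(pidof corosync)/limits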
> 
>> Now I want to use Docker only for test purposes for the moment, so it
>> could be OK to use --enable-small-memory-footprint, but is there
>> something I can do to have corosync working even without this option?
> 
> ... so try running the container the already suggested way:
> 
>   docker run ... --ulimit memlock=33554432 ...
> 
> or possibly higher (as a rule of thumb, keep doubling the value until
> some unreasonable amount is reached, like the equivalent of the 512 MiB
> already used for /dev/shm).
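> 
> A quick way to confirm the limit actually reaches the container is to
> start a throwaway instance of your image and print it; note that
> "ulimit -l" reports kB, so 64 MiB shows up as 65536:
> 
>   docker run --rm --ulimit memlock=67108864 <your-image> sh -c 'ulimit -l'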
> 
> Hope this helps.

This makes a lot of sense to me. As Poki pointed out earlier, in
corosync 2.4.3 (I think) we fixed a regression that caused corosync
NOT to be locked in RAM after it forked, which was causing potential
performance issues. So if you replace an earlier corosync with 2.4.3 or
later, it will use more locked memory than before.
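
If you want to see how much memory the running corosync actually has
locked, something along these lines on one of the nodes should show it
(VmLck in /proc/<pid>/status is the amount of locked memory):

  grep VmLck /proc/$(pidof corosync)/status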

Chrissie


> 
>> The reason I am asking this is that, in the future, we may deploy
>> our cluster in production in a containerised way (for the moment it
>> is just an idea). This would save a lot of time in developing,
>> maintaining and deploying our patch system. All prerequisites and
>> dependencies would be enclosed in the container, and if the IT team
>> does some maintenance on bare metal (i.e. installs new dependencies)
>> it will not affect our containers. I do not see a lot of performance
>> drawbacks in using containers. The point is to understand whether a
>> containerised approach could save us a lot of maintenance headache
>> for this cluster without affecting performance too much. I notice
>> this approach being used in a lot of contexts in Cloud environments.
> 
> 
> 


