[ClusterLabs] Upgrade corosync problem
Jan Pokorný
jpokorny at redhat.com
Tue Jul 3 06:53:36 UTC 2018
On 02/07/18 17:19 +0200, Salvatore D'angelo wrote:
> Today I tested the two suggestions you gave me. Here is what I did.
> The changes went into the script where I create my 5-machine cluster
> (three nodes for the pacemaker PostgreSQL cluster and two nodes for
> glusterfs, which we use for database backups and WAL files).
>
> FIRST TEST
> ——————————
> I added --shm-size=512m to the "docker create" command. I noticed
> that as soon as I start the container the shm size is 512m, so I
> didn't need to add the entry in /etc/fstab. However, I did it anyway:
>
> tmpfs /dev/shm tmpfs defaults,size=512m 0 0
>
> and then
> mount -o remount /dev/shm
>
> Then I uninstalled all the pieces of software (crmsh, resource
> agents, corosync and pacemaker) and installed the new ones.
> I started corosync and pacemaker, but the same problem occurred.
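[Editor's note: as a quick sketch of how the first test's /dev/shm
setting can be cross-checked; the shm_bytes helper is a made-up name,
and the docker commands are shown as comments since they assume a
created container.]

```shell
# Hypothetical helper: convert a docker --shm-size value such as "512m"
# into bytes, for comparing against what df/mount report in the container.
shm_bytes() {
  v=$1
  case $v in
    *m) echo $(( ${v%m} * 1024 * 1024 )) ;;
    *g) echo $(( ${v%g} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$v" ;;
  esac
}

# With the container created via "docker create --shm-size=512m ...",
# the size can be confirmed from inside without touching /etc/fstab:
#   docker exec <container> df -h /dev/shm
shm_bytes 512m   # 536870912
```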
>
> SECOND TEST
> ———————————
> stopped corosync and pacemaker
> uninstalled corosync
> built corosync with --enable-small-memory-footprint and installed it
> started corosync and pacemaker
>
> IT WORKED.
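[Editor's note: for reference, the rebuild in the second test amounts
to something like the following build fragment; a sketch run from a
corosync source checkout, with exact steps depending on the version.]

```shell
# Build and install corosync with the reduced memory footprint (sketch;
# run after stopping pacemaker/corosync and uninstalling the old build):
./autogen.sh
./configure --enable-small-memory-footprint
make
make install
```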
>
> I would now like to understand why it didn't work in the first test
> and why it worked in the second. Which kind of memory is being used
> too much here? /dev/shm does not seem to be the problem: I allocated
> 512m on all three docker images (obviously on my single Mac) and
> enabled the container option as you suggested. Am I missing
> something here?
My suspicion then fully shifts towards the "maximum number of bytes of
memory that may be locked into RAM" per-process resource limit, as
raised in one of the most recent messages ...
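[Editor's note: that limit can be inspected from a shell inside the
container; a minimal sketch.]

```shell
# Print the soft limit on locked-in-RAM memory for the current shell.
# ulimit -l reports it in KiB, or "unlimited":
ulimit -l

# util-linux's prlimit shows soft and hard limits in bytes, if available:
#   prlimit --memlock --pid $$
```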
> Now, for the moment I want to use Docker only for test purposes, so
> it could be OK to use --enable-small-memory-footprint, but is there
> something I can do to get corosync working even without this option?
... so try running the container the already suggested way:
docker run ... --ulimit memlock=33554432 ...
or possibly higher (as a rule of thumb, keep doubling the value until
some unreasonable amount is reached, like the equivalent of the
512 MiB already in use).
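[Editor's note: the doubling rule of thumb can be sketched as a loop
that just prints the candidate memlock values, from the suggested
33554432 (32 MiB) up to the 512 MiB bound.]

```shell
# Enumerate candidate --ulimit memlock values by repeated doubling
# (the 32 MiB start and 512 MiB cap are the numbers from the advice above):
limit=$((32 * 1024 * 1024))   # 33554432 bytes, the suggested starting point
max=$((512 * 1024 * 1024))
while [ "$limit" -lt "$max" ]; do
  echo "docker run ... --ulimit memlock=$limit ..."
  limit=$((limit * 2))
done
```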
Hope this helps.
> The reason I am asking is that, in the future, we may deploy our
> cluster in production in a containerised way (for the moment it is
> just an idea). This would save a lot of time in developing,
> maintaining and deploying our patch system. All prerequisites and
> dependencies would be enclosed in the container, and if the IT team
> does some maintenance on bare metal (e.g. installing new
> dependencies) it would not affect our containers. I do not see a lot
> of performance drawbacks in using containers. The point is to
> understand whether a containerised approach could save us a lot of
> maintenance headaches for this cluster without affecting performance
> too much. I have noticed this approach in a lot of contexts in Cloud
> environments.
--
Jan (Poki)