[ClusterLabs] Queries about a Cluster setup inside a docker

Narayanan, Devarajan Devarajan.Narayanan at dell.com
Tue Mar 25 14:37:15 UTC 2025


Hi,

I have a setup with multiple Docker instances running on a base Linux host (Node1).
Inside each Docker instance runs a cluster node, which pairs with a similar setup on another host (Node2) to form a two-node cluster.
See Pic1 below.

In this setup, the cluster state presumably resides in the Docker overlay file system.
<Query1> Is there a definitive list of the files that hold the cluster state (essentially the data of the corosync, pacemaker, sbd and crmsh processes)?
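For reference, a sketch of the locations that typically hold corosync/pacemaker/sbd state on common distributions — the exact paths vary by distro and packaging, so treat this list as an assumption to verify inside your container image, not an authoritative answer:

```shell
# Paths that commonly hold cluster configuration and state
# (distro-dependent; common SUSE/RHEL defaults shown).
cluster_state_paths='
/etc/corosync/corosync.conf
/etc/corosync/authkey
/var/lib/pacemaker/cib/cib.xml
/var/lib/pacemaker/pengine
/var/lib/corosync
/etc/sysconfig/sbd
'
# Print the list; on a live node you could check each with: test -e "$p"
printf '%s' "$cluster_state_paths"
```

The CIB (`/var/lib/pacemaker/cib/cib.xml`) carries the cluster configuration; transient node status is normally rebuilt at runtime rather than read from disk.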

<Query2> If I want the cluster data to persist across a "remove and re-run" of the Docker instance, what should I do?

Presuming the cluster data lives under /var, /etc and /usr, I tried the following:
I created volumes for var, etc and usr, and during docker run used options like "-v var_vol:/var -v etc_vol:/etc -v usr_vol".
With this, some things worked, but I also saw some weird behaviour.
<Query3> Is this the correct way to make the cluster data persistent? Have I missed mapping any folder?
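One alternative worth considering: mounting all of /etc and /usr also masks the packaged binaries and configuration in the image, which could explain the weird behaviour. A narrower approach is to persist only the cluster-state directories. The sketch below is a dry run (it prints the command instead of executing it, since docker may not be present where this is read); the volume and image names are illustrative assumptions, not from the original setup:

```shell
# Dry-run sketch: persist only the directories that hold cluster state,
# leaving the rest of the image's /etc, /usr and /var untouched.
# Volume names (app1_*) and image name (my-cluster-image) are hypothetical.
run_cmd="docker run -d --name app-1-on-node-1 \
  -v app1_corosync_etc:/etc/corosync \
  -v app1_pacemaker_lib:/var/lib/pacemaker \
  -v app1_corosync_lib:/var/lib/corosync \
  my-cluster-image"
printf '%s\n' "$run_cmd"   # print rather than execute the command
```

The named volumes would be created once with `docker volume create` and then survive a "remove and re-run" of the container, which is the persistence behaviour asked about in Query2.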

FYI, details of the experiment I ran to verify whether the cluster data persists are given below (Experiment).
Let me know if this makes sense.


Pic1
[image002.png — see attachment URL at the end of this message]


Experiment
I tried the following experiment:
1) In a properly working cluster, stopped the app-1-on-node-2 container on Node2; crm status in app-1-on-node-1 then showed:
Node List:
  * Online: [ app-1-on-node-1 ]
  * OFFLINE: [ app-1-on-node-2 ]
2) Stopped and started the app-1-on-node-1 container and checked the crm status; it remained the same as before:
Node List:
  * Online: [ app-1-on-node-1 ]
  * OFFLINE: [ app-1-on-node-2 ]
3) Removed the container app-1-on-node-1, ran it afresh, and checked the crm status.
  The status had changed: app-1-on-node-2 was no longer shown (presumably because the old cluster data was no longer available):
Node List:
  * Online: [ app-1-on-node-1 ]
4) Repeated step 1 and observed the crm status (this time I used "-v var_vol:/var -v etc_vol:/etc -v usr_vol" during docker run):
Node List:
  * Online: [ app-1-on-node-1 ]
  * OFFLINE: [ app-1-on-node-2 ]
5) Removed the container app-1-on-node-1 and ran it afresh (again with "-v var_vol:/var -v etc_vol:/etc -v usr_vol" during docker run).

6) Checked the crm status:
Node List:
  * Online: [ app-1-on-node-1 ]
  * OFFLINE: [ app-1-on-node-2 ]

Regards,
Deva.


-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.png
Type: image/png
Size: 171551 bytes
Desc: image002.png
URL: <https://lists.clusterlabs.org/pipermail/users/attachments/20250325/d0b5e5c0/attachment-0001.png>

