[ClusterLabs] Off-line build-time cluster configuration

Strahil Nikolov hunter86_bg at yahoo.com
Thu Apr 16 05:39:36 EDT 2020


On April 15, 2020 8:10:09 PM GMT+03:00, Craig Johnston <agspoon at gmail.com> wrote:
>Yes, you could think of our use case as an appliance.  I'm being
>intentionally vague about the actual implementation, but the underlying
>requirement to build an arbitrary cluster configuration off-line, or
>pre-boot, is the key thing we're looking for help with.
>
>Craig
>
>On Tue, Apr 14, 2020 at 2:05 PM Strahil Nikolov <hunter86_bg at yahoo.com>
>wrote:
>
>> On April 14, 2020 9:46:14 PM GMT+03:00, Craig Johnston <agspoon at gmail.com> wrote:
>> >Hello,
>> >
>> >Sorry if this has already been covered, but a perusal of recent mail
>> >archives didn't turn up anything for me.
>> >
>> >We are looking for help in configuring a pacemaker/corosync cluster
>> >at the time the Linux root file system is built, or perhaps as part
>> >of a "pre-pivot" process in the initramfs of a live-CD environment.
>> >
>> >We are using the RHEL versions of the cluster products.  Current
>> >production
>> >is RHEL7 based, and we are trying to move to RHEL8.
>> >
>> >The issues we have stem from the configuration tools' expectation
>> >that they are operating on a live system, with all cluster nodes
>> >available on the network.  This is obviously not the case during a
>> >"kickstart" install and configuration process.  It's also not true
>> >in an embedded environment where all nodes are powered on
>> >simultaneously and expected to become operational without any human
>> >intervention.
>> >
>> >We create the cluster configuration from a "system model" that
>> >describes the available nodes, cluster-managed services, fencing
>> >agents, etc.  This model is different for each deployment, and is
>> >used as input to create a customized Linux distribution that is
>> >deployed to a set of physical hardware, virtual machines, or
>> >containers.  Each node, and its root file system, is required to be
>> >configured and ready to go the very first time it is ever booted.
>> >The on-media Linux file system is also immutable, and thus each
>> >boot is exactly like the previous one.
>> >
>> >Under RHEL7, we were able to use the "pcs" command to create the
>> >corosync.conf/cib.xml files for each node, e.g.:
>> >
>> >      pcs cluster setup --local --enable --force --name mycluster node1 node2 node3
>> >      pcs -f ${CIB} property set startup-fencing=false
>> >      pcs -f ${CIB} resource create tftp ocf:heartbeat:Xinetd service=tftp --group grp_tftp
>> >      etc...
>> >
>> >Plus a little "awk" and "sed" on the corosync.conf file, and we
>> >were able to create a configuration that worked out of the box.
>> >It's not pretty, but it works, in spite of the fact that we feel
>> >like we're swimming upstream.
>> >
>> >Under RHEL8, however, the "pcs cluster" command no longer has a
>> >"--local" option, and we can't find any tool to replace its
>> >functionality.  We can use "cibadmin --empty" to create a starting
>> >cib.xml file, but there is no way to add nodes to it (or to create
>> >the corosync.conf file with nodes).
>> >
>> >Granted, we could write our own tools to create template
>> >corosync.conf/cib.xml files, and "pcs -f" still works.  However,
>> >that leaves us in the unenviable position where the cluster
>> >configuration schema could change and our tools would be none the
>> >wiser.  We'd much prefer to use a standard, maintained interface
>> >for configuring the cluster.
>> >
>> >Any suggestions would be very welcome.  While we have a
>> >non-standard use case, we don't believe it is unrealistic given the
>> >current environment of cloud services and automated deployment.
>> >
>> >Thanks,
>> >Craig
>>
>> I guess I'm a little bit narrow-minded, but I don't get the whole
>> concept.
>>
>> Why would you need to use a "pre-pivot" process in the initramfs of
>> a live-CD environment?
>>
>> Can't you just pull the configuration from a central location (like
>> Salt, Puppet, or just a plain HTTP server)?
>>
>> Are you trying to build an appliance that self-deploys without human
>> intervention?
>>
>> Best Regards,
>> Strahil Nikolov
>>

Have you tried executing the same pcs commands on all nodes?
Theoretically, they should end up with the same configs.

For example, run the same command on each node:

    pcs cluster setup Clustername node1 node2
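
A minimal sketch of that idea, assuming the RHEL8 pcs (0.10) syntax,
nodes named node1..node3, and that pcsd is running and reachable; the
node names and the password file are hypothetical:

    # Authenticate the hosts to each other first (requires pcsd).
    pcs host auth node1 node2 node3 -u hacluster -p "$(cat /etc/cluster.pw)"
    # In RHEL8, "pcs cluster setup" pushes /etc/corosync/corosync.conf
    # to the listed nodes via pcsd, so they must be reachable.
    pcs cluster setup mycluster node1 node2 node3
    pcs cluster enable --all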

If it doesn't work, you might be able to create a single-node cluster
(on only one appliance) and, once connectivity between the nodes is
established, have that same node add the rest.
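
A rough sketch of that bootstrap approach, with the same assumptions
(RHEL8 pcs, hypothetical node names and password file, the local node
already authenticated to its own pcsd):

    # First boot: bring up a one-node cluster on the first appliance.
    pcs cluster setup mycluster node1
    pcs cluster start
    # Later, once the other nodes are reachable, grow the cluster.
    pcs host auth node2 node3 -u hacluster -p "$(cat /etc/cluster.pw)"
    pcs cluster node add node2
    pcs cluster node add node3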

Best Regards,
Strahil Nikolov

