[ClusterLabs] Pacemaker (remote) component relations
Ken Gaillot
kgaillot at redhat.com
Mon Feb 8 15:40:35 UTC 2016
On 02/08/2016 07:55 AM, Ferenc Wágner wrote:
> Hi,
>
> I'm looking for information about the component interdependencies,
> because I'd like to split the Pacemaker packages in Debian properly.
> The current idea is to create two daemon packages, pacemaker and
> pacemaker-remote, which exclude each other, as they contain daemons
> listening on the same sockets.
>
> 1. Are the socket names configurable? Are there reasonable use cases
> requiring both daemons running concurrently?
No, the socket names are not configurable, and the two daemons are
mutually exclusive: Pacemaker Remote simulates the cluster services,
so it listens on the same sockets. Both can be installed on the same
machine, just not running at the same time.
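If the Debian maintainer scripts want to enforce that at start-up
time, a minimal guard could look like this (just a sketch in Python;
it relies only on pidof and the daemon names):

    import subprocess
    import sys

    # Refuse to start pacemaker_remoted while pacemakerd is up, since
    # both daemons listen on the same IPC sockets.
    def is_running(daemon):
        # pidof exits 0 if at least one process with that name exists
        return subprocess.run(["pidof", daemon],
                              stdout=subprocess.DEVNULL).returncode == 0

    if is_running("pacemakerd"):
        sys.exit("pacemakerd is running; not starting pacemaker_remoted")

(Packaging-wise, a Conflicts: between the two daemon packages covers
the install-time side of the same problem.)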
> These two daemon packages would depend on a package providing the common
> hacluster user, the haclient group, the sysconfig and logrotate config.
>
> What else should go here?
>
> 2. Are the various RNG and XSL files under /usr/share/pacemaker used
> equally by pacemakerd and pacemaker_remoted? Or by the CLI utils?
Yes, the RNG and XSL files should be considered common.
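To illustrate what "common" means in practice: any tool that
validates a CIB needs those schemas. A rough Python equivalent of
that validation step (assuming lxml is available, and guessing at one
of the shipped schema file names):

    from lxml import etree

    # The exact .rng file name depends on the validate-with schema
    # version; adjust to whatever ships under /usr/share/pacemaker.
    schema = etree.RelaxNG(
        etree.parse("/usr/share/pacemaker/pacemaker-1.2.rng"))
    cib = etree.parse("/var/lib/pacemaker/cib/cib.xml")
    print("valid" if schema.validate(cib) else schema.error_log)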
> 3. Maybe the ocf:pacemaker RAs and their man pages? Or are they better
> in a separate pacemaker-resource-agents package?
If I were starting from scratch, I would consider a separate package.
Upstream, we recently moved most of the agents to the CLI package, which
is also a good alternative.
Note that a few agents can't run on remote nodes, and we've left those
in the pacemaker package upstream. These are remote (obviously),
controld and o2cb (I believe because the services they start require
direct access to fencing).
> 4. Is /usr/share/snmp/mibs/PCMK-MIB.txt used by any component, or is it
> only for deciphering SNMP traps at their destination? I guess it can
> go anywhere, it will be copied by the admin anyway.
The ClusterMon resource uses it, and user scripts can use it. I'd vote
for common, as it's architecture-neutral and theoretically usable by
anything.
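As an example of "usable by anything", a user script could shell out
to net-snmp to dump the Pacemaker OID tree (the module name here is
my guess; check the header of PCMK-MIB.txt for the real one):

    import subprocess

    # Dump the Pacemaker OID tree via net-snmp's snmptranslate.
    subprocess.run(["snmptranslate", "-M", "+/usr/share/snmp/mibs",
                    "-m", "+PACEMAKER-MIB", "-Tp"])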
> There's also a separate package for the various command line utilities,
> which would depend on pacemaker OR pacemaker-remote.
>
> 5. crm_attribute is in the pacemaker RPM package, unlike the other
> utilities, which are in pacemaker-cli. What's the reason for this?
>
> 6. crm_node connects to corosync directly, so it won't work with
> pacemaker_remote. crm_master uses crm_node, thus it won't either (at
> least without -N). But section 4.3.5 of the Remote book explicitly mentions
> crm_master amongst the tools usable with pacemaker_remote. Where's
> the mistake?
crm_attribute and crm_node both depend on the cluster-layer libraries,
which won't necessarily be available on a remote node. We hope to remove
that dependency at some point.
It is possible to install the cluster libraries on a remote node, and
some of the crm_attribute/crm_node functionality will work, though not all.
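If you want to see which tools actually work on a given node, the
simplest test is to run them and check the exit codes, e.g.:

    import subprocess

    # Illustrative probes only; rc == 0 means both the daemon
    # connection and the query itself succeeded.
    probes = {
        "crm_node":      ["crm_node", "--name"],
        "crm_attribute": ["crm_attribute", "--query", "--name", "standby"],
    }
    for tool, cmd in probes.items():
        rc = subprocess.run(cmd, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL).returncode
        print("%-14s rc=%d" % (tool, rc))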
> 7. According to its man page, crm_master should be invoked from an OCF
> resource agent, which could happen under pacemaker_remote. This is
> again problematic due to the previous point.
I'd have to look into this to be sure, but it's possible that if the
cluster libs are installed, the particular options that crm_master uses
are functional.
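From memory (so verify against the actual script), crm_master is a
thin wrapper that stores the promotion preference as a
reboot-lifetime node attribute, roughly equivalent to:

    import os
    import subprocess

    # Approximately what crm_master does inside an agent: set a
    # transient node attribute named after the resource instance.
    resource = os.environ.get("OCF_RESOURCE_INSTANCE", "dummy")
    subprocess.run(["crm_attribute", "--lifetime", "reboot",
                    "--name", "master-" + resource,
                    "--update", "100"])

If those particular crm_attribute code paths work over the proxied
connection, crm_master should too.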
> 8. Do fence_legacy, fence_pcmk and stonith_admin make any sense outside
> the pacemaker package? Even if they can run on top of
> pacemaker_remote, the cluster would never use them there, right?
> And what about attrd_updater? To be honest, I don't know what that's
> for anyway...
No fence agents run on remote nodes.
I'd expect stonith_admin and attrd_updater to work, as the remote node
will proxy socket connections to stonithd and attrd. attrd_updater is an
alternative interface similar to crm_attribute.
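As a rough comparison (check the man pages for the details), these
two set a transient node attribute by different routes:

    import subprocess

    # attrd_updater talks to attrd directly; crm_attribute goes
    # through the CIB. Attribute name and value are arbitrary here.
    subprocess.run(["attrd_updater", "--name", "my_attr",
                    "--update", "42"])
    subprocess.run(["crm_attribute", "--lifetime", "reboot",
                    "--name", "my_attr", "--update", "42"])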
> 9. What's the purpose of chgrp haclient /etc/pacemaker if
> pacemaker_remoted is running as root? What other haclients may need
> access to the authkey file?
The crmd (which runs as hacluster:haclient) manages the cluster side of
a remote node connection, so on the cluster nodes, the key has to be
readable by that process. In the how-to's, I kept it consistent on the
remote nodes to reduce confusion and the chance for errors.
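For completeness, generating the key with those permissions is just
(a sketch; it assumes the haclient group already exists):

    import grp
    import os

    # Create /etc/pacemaker/authkey readable by root and the haclient
    # group, so crmd (hacluster:haclient) can read it on cluster nodes.
    os.makedirs("/etc/pacemaker", exist_ok=True)
    with open("/etc/pacemaker/authkey", "wb") as f:
        f.write(os.urandom(4096))
    os.chown("/etc/pacemaker/authkey", 0,
             grp.getgrnam("haclient").gr_gid)
    os.chmod("/etc/pacemaker/authkey", 0o640)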