[ClusterLabs] Possible idea for 2.0.0: renaming the Pacemaker daemons

Jan Pokorný jpokorny at redhat.com
Fri Apr 6 06:24:14 EDT 2018


On 06/04/18 09:09 +0200, Kristoffer Grönlund wrote:
> Ken Gaillot <kgaillot at redhat.com> writes:
>> On Tue, 2018-04-03 at 08:33 +0200, Kristoffer Grönlund wrote:
>>> Ken Gaillot <kgaillot at redhat.com> writes:
>>> 
>>>>> I would vote against PREFIX-configd: compared to other cluster
>>>>> software, I would expect that daemon name to refer to a more
>>>>> generic cluster configuration key/value store, and that is
>>>>> something that I have some hope of adding in the future ;) So
>>>>> I'd like to keep "config" or "database" for such a possible
>>>>> future component...
>>>> 
>>>> What's the benefit of another layer over the CIB?
>>> 
>>> The idea is to provide a more generalized key-value store that
>>> other applications built on top of pacemaker can use. Something
>>> like an HTTP REST API to a key-value store with transactional
>>> semantics provided by the cluster. My understanding so far is that
>>> the CIB is too heavyweight to support that kind of functionality
>>> well, and besides, the interface is not convenient for non-cluster
>>> applications.
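
For concreteness, such a store might look roughly like the following
from a client's perspective; a minimal sketch, where the /v1/kv
endpoint and its JSON payloads are invented purely for illustration:

    # Hypothetical client of a cluster key-value store with
    # compare-and-swap ("transactional") semantics; the REST endpoint
    # and payload shape are assumptions, nothing like it exists yet.
    import requests

    BASE = "http://localhost:8080/v1/kv"  # assumed local API endpoint

    # Plain read/write of a key under an application-owned prefix.
    requests.put(f"{BASE}/myapp/leader", json={"value": "node1"})
    current = requests.get(f"{BASE}/myapp/leader").json()

    # Conditional update: succeeds only if the stored version still
    # matches, so concurrent writers on other nodes cannot clobber us.
    resp = requests.put(
        f"{BASE}/myapp/leader",
        json={"value": "node2", "if_version": current["version"]},
    )
    if resp.status_code == 409:  # conflict: another node won the race
        print("lost the transaction, retry with fresh state")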

First, from the bigger-picture perspective, let's figure out whether
this is envisioned as something mandatory for each and every
pacemaker-based stack, or actually as an optional part helping just
the particular use cases you have in mind.

* Do typical cluster-aware agents need to synchronize state well
  beyond what they can do now with node attributes and the various
  run-time indicators passed directly by the cluster resource manager?

  - Wouldn't using the corosync infrastructure directly serve better
    in that case (as mentioned by Klaus)?

* Or is there a shift from "pacemaker, the executor of jobs within
  the cluster to achieve configured goals, a high-level servant of
  its users" to "pacemaker, the distributed-systems enabler for
  otherwise single-host software, primarily a low-level servant of
  the applications stacked on top"?
  
  - Isn't this rather an opportunity for a new "addon" type of in-CIB
    resource?  Such a resource would have a much more intimate contact
    with pacemaker and would act rather as a sibling of the other
    pacemaker daemons (which we can effectively understand as default
    clones with unlimited restarts upon crash), but would be
    started/plugged in after all these native ones.  It could possibly
    live beyond pacemaker's own lifetime (in which case it would use
    a backup communication channel, perhaps limited just to the
    bootstrapping procedure), and could be maintained on its own,
    given that the "addon" API would be well-defined.  Importantly,
    it would be completely opt-in for those happy with the original
    pacemaker use case (i.e., more akin to the UNIX philosophy).
    A rough sketch of such an API follows right after this list.
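
To make the "addon" notion more tangible, here is a purely
hypothetical sketch of what such a well-defined API could look like
(every identifier below is made up for illustration; nothing like it
exists in pacemaker):

    # Hypothetical "addon" lifecycle interface; all names here are
    # illustrative assumptions, not an existing pacemaker API.
    from abc import ABC, abstractmethod

    class PacemakerAddon(ABC):
        """An opt-in daemon plugged in after the native pacemaker ones."""

        @abstractmethod
        def start(self, ipc_channel):
            """Attach to pacemaker's IPC once the native daemons are up."""

        @abstractmethod
        def on_cluster_down(self, fallback_channel):
            """Optionally keep running past pacemaker's own lifetime,
            degraded to a backup channel (perhaps limited just to
            the bootstrapping procedure)."""

        @abstractmethod
        def stop(self):
            """Clean shutdown; a crash would be handled like the
            restart of a cloned resource."""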

>> My first impression is that it sounds like a good extension to attrd,
>> cluster-wide attributes instead of node attributes. (I would envision a
>> REST API daemon sitting in front of all the daemons without providing
>> any actual functionality itself.)

A REST API daemon could be just another opt-in "addon" type of
resource, if need be.

[Or, considering shiny new things, perhaps the Varlink protocol:
https://github.com/varlink/documentation/wiki
might be appealing as well, together with its HTTP proxy counterpart:
https://github.com/varlink/org.varlink.http
which can serve JSON remotely.]

>> The advantage to extending attrd is that it already has code to
>> synchronize attributes at start-up, DC election, partition healing,
>> etc., as well as features such as write dampening.
> 
> Yes, I've considered that as well and yes, I think it could make
> sense. I need to gain a better understanding of the current attrd
> implementation to see how to make it do what I want. The configd
> name/part comes into play when syncing data beyond the key-value
> store enters the picture (see below).
>
> [...]
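
[As an aside, "write dampening" means coalescing rapid attribute
updates within a configurable delay so that only the final value gets
written out; schematically, as a mere simplification rather than
attrd's actual code:

    # Simplified illustration of write dampening: updates arriving
    # within the dampening window are coalesced, and only the last
    # value is flushed with a single write.
    import threading

    class DampenedAttribute:
        def __init__(self, flush, delay=5.0):
            self.flush = flush      # callback performing the real write
            self.delay = delay      # dampening interval in seconds
            self.value = None
            self.timer = None

        def update(self, value):
            self.value = value      # remember only the newest value
            if self.timer is None:  # first update opens the window
                self.timer = threading.Timer(self.delay, self._fire)
                self.timer.start()

        def _fire(self):
            self.timer = None
            self.flush(self.value)  # one write per window, not per update
]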
> 
>>> My most immediate applications for that would be to build file
>>> syncing into the cluster and to avoid having to have an extra
>>> communication layer for the UI.

How does this relate to csync2, which I frequently see used together
with the cluster stack proper?  Would it be deprecated under your
intended long-term vision, or just switched to a modified back-end
that somehow utilizes such a key-value store?

>> How would file syncing via a key-value store work?
>> 
>> One of the key hurdles in any cluster-based sync is
>> authentication/authorization. Authorization to use a cluster UI is
>> not necessarily equivalent to authorization to transfer arbitrary
>> files as root.
> 
> Yeah, the key-value store wouldn't be enough to implement file
> syncing, but it could potentially be the mechanism by which the file
> syncing implementation maintains its state. I'm somewhat conflating
> two things that I want, both related to syncing configuration across
> the cluster beyond the cluster daemons themselves.
> 
> I don't see authentication/authorization as a hurdle or blocker, but
> it's certainly something that needs to be considered. Clearly a
> less-privileged user shouldn't be able to configure syncing of
> root-owned files across the cluster.

* * *

On 06/04/18 09:14 +0200, Kristoffer Grönlund wrote:
> Klaus Wenninger <kwenning at redhat.com> writes:
>> One thing I thought over as well is some kind of
>> a chicken & egg issue arising when you want to
>> use the syncing mechanism to set up (bootstrap)
>> the cluster.
>> So something like the SSH mechanism pcsd is
>> using might still be needed.
>> The file-syncing approach would have the data
>> easily available locally prior to starting the
>> actual cluster-wide syncing.
>> 
>> Well ... no solutions or anything ... just
>> a few thoughts I had on that issue ... 25ct max ;-)
>> 
> 
> Bootstrapping is a problem I've thought about quite a bit... It's
> possible to implement in a number of ways, and it's not clear which
> approach is best. But I see a cluster-wide configuration database

Hmm, and then the generalization to an inter-cluster configuration
database kicks in when booth is considered :-)

> as an enabler for better bootstrapping rather than a hurdle. If a
> new node doesn't need a local copy of the database but can access
> the database from an existing node, it would be possible for the new
> node to bootstrap itself into the cluster with nothing more than
> remote access to that database, so a single port to open and a
> single authentication mechanism - this could certainly be handled
> over SSH just like pcsd and crmsh implement it today.
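
In that model, the join step could be as thin as the sketch below;
note that the "cluster-db" command is invented here, only the shape
of the flow matters:

    # Hypothetical bootstrap flow for a joining node: one SSH hop to
    # a seed node is the only channel and authentication needed.
    import subprocess

    SEED = "node1.example.com"

    # Fetch a snapshot of the cluster configuration database from the
    # seed node; "cluster-db export" is a made-up stand-in for
    # whatever interface the database would actually expose.
    snapshot = subprocess.run(
        ["ssh", SEED, "cluster-db", "export"],
        check=True, capture_output=True,
    ).stdout

    # Feed the snapshot to the (equally hypothetical) local join logic.
    subprocess.run(["cluster-db", "join", "--from-stdin"],
                   input=snapshot, check=True)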

Btw. a viable approach towards a full within-domain bootstrap ("I, as
a destined-to-be node of a cluster unknown to me a priori, need to
figure out everything, incl. where the configuration database is
located") could also be arranged in an out-of-band manner, using DNS
(or perhaps DHCP + DNS); see the sketch after this list:

- the location(s) of the remote configuration database are encoded
  in an SRV record for the home domain

- authenticity is verified via a public key encoded in some other
  record type (yep, with the requirement that the DNS server is
  strictly another machine), using standard public key infrastructure
  methods (also for confidentiality of the exchanged data)
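
Schematically, the discovery step (using the dnspython library; the
_cluster-db._tcp service name is invented for illustration):

    # Out-of-band discovery of the configuration database via an SRV
    # record; the service label below is a made-up example.
    import dns.resolver

    answers = dns.resolver.resolve("_cluster-db._tcp.example.com", "SRV")
    for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
        print(f"configuration database candidate: {rr.target}:{rr.port}")

    # The corresponding zone data would look something like:
    #   _cluster-db._tcp.example.com. 3600 IN SRV 10 60 5432 db1.example.com.
    # with the verification public key published in some other record
    # type served from a machine other than the cluster nodes.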

> But yes, at some point a communication channel needs to be opened...

Granted, there's a LOT to wrap one's head around...

-- 
Jan (Poki)