[ClusterLabs] Best way to create a floating identity file
Tony Stocker
akostocker at gmail.com
Thu Jan 7 11:55:44 EST 2021
On Thu, Jan 7, 2021 at 2:14 AM Reid Wahl <nwahl at redhat.com> wrote:
>
> If there will only ever be one value (e.g., "httpserver") in the file,
> then yet another possibility is to use an ocf:heartbeat:symlink
> resource. You could set up a file on each node (say,
> /var/local/project/cluster-node-real) with "httpserver" as its
> contents. Then create a symlink resource with options
> `link=/var/local/project/cluster-node
> target=/var/local/project/cluster-node-real`, and add that symlink
> resource to your resource group. When it starts, the
> /var/local/project/cluster-node file gets created as a symlink to
> /var/local/project/cluster-node-real; when the resource stops, the
> symlink gets removed.
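>
> A minimal sketch of that setup (the group name "httpgroup" and the
> resource name "cluster-node-link" are placeholders; adjust to your
> configuration):
>
>     # On each node, create the real file with that node's value:
>     echo "httpserver" > /var/local/project/cluster-node-real
>
>     # Create the symlink resource and add it to the existing group:
>     pcs resource create cluster-node-link ocf:heartbeat:symlink \
>         link=/var/local/project/cluster-node \
>         target=/var/local/project/cluster-node-real \
>         --group httpgroup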
>
> If I read your email correctly, this would make the cron script work as-is.
>
Thanks very much! This does indeed make things simpler and requires
few (if any) changes in the cron entries! Plus, it lets users who
aren't allowed to run 'pcs' check whether the system they're on is
indeed the primary/in-use one. I greatly appreciate this!
> On Wed, Dec 16, 2020 at 10:43 AM Ken Gaillot <kgaillot at redhat.com> wrote:
> >
> > On Wed, 2020-12-16 at 04:46 -0500, Tony Stocker wrote:
> > > On Tue, Dec 15, 2020 at 12:29 PM Ken Gaillot <kgaillot at redhat.com> wrote:
> > > >
> > > > Just for fun, some other possibilities:
> > > >
> > > > You could write your script/cron as an OCF RA itself, with an
> > > > OCF_CHECK_LEVEL=20 monitor doing the actual work, scheduled to run
> > > > at whatever interval you want (or using time-based rules, enabling
> > > > it to run at a particular time). Then you can colocate it with the
> > > > workload resources.
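> > > >
> > > > For instance, a sketch of that approach (the agent name
> > > > ocf:local:cronjob and the group name "workload-group" are
> > > > hypothetical; you'd write and install the agent yourself):
> > > >
> > > >     pcs resource create cronjob ocf:local:cronjob \
> > > >         op monitor interval=15min OCF_CHECK_LEVEL=20
> > > >     pcs constraint colocation add cronjob with workload-group INFINITY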
> > > >
> > > > Or you could write a systemd timer unit to call your script when
> > > > desired, and colocate that with the workload as a systemd resource
> > > > in the cluster.
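> > > >
> > > > A rough sketch of the timer route (unit and script names are
> > > > hypothetical, and managing a .timer unit as a systemd resource
> > > > assumes your Pacemaker version supports non-service units):
> > > >
> > > >     # /etc/systemd/system/project-job.service
> > > >     [Unit]
> > > >     Description=Run the project job once
> > > >
> > > >     [Service]
> > > >     Type=oneshot
> > > >     ExecStart=/usr/local/bin/project-job.sh
> > > >
> > > >     # /etc/systemd/system/project-job.timer
> > > >     [Unit]
> > > >     Description=Schedule the project job
> > > >
> > > >     [Timer]
> > > >     OnCalendar=hourly
> > > >
> > > >     [Install]
> > > >     WantedBy=timers.target
> > > >
> > > >     # Let the cluster start/stop the timer alongside the workload:
> > > >     pcs resource create project-timer systemd:project-job.timer
> > > >     pcs constraint colocation add project-timer with workload-group INFINITY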
> > > >
> > > > Or similar to the crm_resource method, you could colocate an
> > > > ocf:pacemaker:attribute resource with the workload, and have your
> > > > script check the value of the node attribute (with attrd_updater
> > > > -Q) to know whether to do stuff or not.
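> > > >
> > > > For example (resource, group, and attribute names are placeholders):
> > > >
> > > >     pcs resource create node-active ocf:pacemaker:attribute \
> > > >         name=attrname active_value=1 inactive_value=0
> > > >     pcs constraint colocation add node-active with workload-group INFINITY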
> > >
> > > All three options look interesting, but the last one seems the
> > > simplest. Looking at the description, I'm curious what happens with
> > > the 'inactive_value' string. Is that put in the 'state' file
> > > location whenever a node is not the active one? For example, when I
> > > first set up the attribute and the node currently running the
> > > resource group gets the 'active_value' string, will the current
> > > backup node automatically get the same 'state' file created with
> > > the 'inactive_value'? Or does that only happen when the resource
> > > group is moved?
> > >
> > > Secondly, does this actually create a file with a plaintext entry
> > > matching one of the *_value strings? Or is it simply an empty file
> > > with the information stored somewhere in the depths of the PM config?
> >
> > The state file is just an empty file used to determine whether the
> > resource is "running" or not (since there's no actual daemon process
> > kept around for it).
> >
> > > Finally (for the moment), what does the output of 'attrd_updater -Q'
> > > look like? I need to figure out how to utilize the output for a cron
> > > 'if' statement similar to the previous one:
> > >
> > > if [ -f /var/local/project/cluster-node ] && \
> > >    [ "`cat /var/local/project/cluster-node`" = "distroserver" ]; then ...
> >
> > First you need to know the node attribute name. By default this is
> > "opa-" plus the resource ID, but you can configure it as a resource
> > parameter (name="whatever") if you want something more obvious.
> >
> > Then you can query the value on the local node with:
> >
> > attrd_updater -Q -n <attribute-name>
> >
> > It's possible the attribute has not been set at all (the node has never
> > run the resource). In that case there will be an error return and a
> > message on stderr.
> >
> > If the attribute has been set, the output will look like
> >
> > name="attrname" host="nodename" value="1"
> >
> > Looking at it now, I realize there should be a --quiet option to print
> > just the value by itself, but that doesn't exist currently. :) Also, we
> > are moving toward having the option of XML output for all tools, which
> > is more reliable for parsing by scripts than textual output that can at
> > least theoretically change from release to release, but attrd_updater
> > hasn't gained that capability yet.
> >
> > That means a (somewhat uglier) one-liner test would be something like:
> >
> > [ "$(attrd_updater -Q -n attrname 2>/dev/null | sed -n -e 's/.* value="\(.*\)".*/\1/p')" = "1" ]
> >
> > That relies on the fact that the value will be "1" (or whatever you set
> > as active_value) only if the attribute resource is currently active on
> > the local node. Otherwise it will be "0" (if the resource previously
> > ran on the local node but no longer is) or empty (if the resource never
> > ran on the local node).
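> >
> > So the cron script's test could become something like this (the
> > attribute name "attrname" and the job script path are placeholders):
> >
> >     if [ "$(attrd_updater -Q -n attrname 2>/dev/null \
> >             | sed -n -e 's/.* value="\(.*\)".*/\1/p')" = "1" ]; then
> >         # This node currently runs the resource group, so do the work.
> >         /usr/local/bin/project-job.sh
> >     fi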
> >
> > > Since the cron script is run on both nodes, I need to know how the
> > > output can be used to determine which node will run the necessary
> > > commands. If the return values are the same regardless of which
> > > node I run attrd_updater on, what do I use to differentiate?
> > >
> > > Unfortunately, right now I don't have a test cluster to experiment
> > > on, only a 'live' one that we had to rush into service with a bare
> > > minimum of testing, so I'm loath to play with things on it.
> > >
> > > Thanks!
> > --
> > Ken Gaillot <kgaillot at redhat.com>
> >
>
> --
> Regards,
>
> Reid Wahl, RHCA
> Senior Software Maintenance Engineer, Red Hat
> CEE - Platform Support Delivery - ClusterHA
>
--
Tony Stocker
-------------------------------------------------------------------
"There are no wrong turnings.
Only paths you had not known
you were meant to walk."
-------------------------------------------------------------------