[ClusterLabs] Command to show location constraints?
jpokorny at redhat.com
Thu Aug 29 12:01:36 EDT 2019
On 28/08/19 10:27 +0200, Ulrich Windl wrote:
>>>> Jan Pokorný <jpokorny at redhat.com> wrote on 28.08.2019 at 10:03
>>>> in message <20190828080347.GA9493 at redhat.com>:
>> On 27/08/19 09:24 -0600, Casey & Gina wrote:
>>> Hi, I'm looking for a way to show just location constraints, if they
>>> exist, for a cluster. I'm looking for the same data shown in the
>>> output of `pcs config` under the "Location Constraints:" header, but
>>> without all the rest, so that I can write a script that checks if
>>> there are any set.
>>> The situation is that sometimes people will perform a failover with
>>> `pcs resource move --master <resource>`, but then forget to follow
>>> it up with `pcs resource clear <resource>`, and then it causes
>>> unnecessary failbacks later. As we never want to have any
>>> particular node in the cluster preferred for this resource, I'd like
>>> to write a script that can automatically check for any location
>>> constraints being set and either alert or clear them automatically.
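>> A check like the one described could be sketched roughly as below:
>> a small helper (hypothetical, not an official pcs/pacemaker tool)
>> that scans a CIB dump for `rsc_location` elements. On a live cluster
>> the XML would come from e.g. `pcs cluster cib > cib.xml` or
>> `cibadmin --query`; the embedded sample here stands in for that.
>>
>> ```python
>> # Sketch: flag location constraints found in a CIB XML dump.
>> # SAMPLE_CIB is a minimal illustrative stand-in for real CIB output.
>> import xml.etree.ElementTree as ET
>>
>> SAMPLE_CIB = """\
>> <cib>
>>   <configuration>
>>     <constraints>
>>       <rsc_location id="cli-prefer-pgsql" rsc="pgsql"
>>                     node="node1" score="INFINITY"/>
>>     </constraints>
>>   </configuration>
>> </cib>
>> """
>>
>> def location_constraints(cib_xml):
>>     """Return (id, rsc, node) for every rsc_location in the CIB."""
>>     root = ET.fromstring(cib_xml)
>>     return [(c.get("id"), c.get("rsc"), c.get("node"))
>>             for c in root.iter("rsc_location")]
>>
>> found = location_constraints(SAMPLE_CIB)
>> for cid, rsc, node in found:
>>     print(f"location constraint {cid}: {rsc} -> {node}")
>> if found:
>>     print("WARNING: location constraints present")
>> ```
>>
>> Constraints left behind by `pcs resource move` / `crm_resource --move`
>> typically carry ids beginning with "cli-" (e.g. cli-prefer-*,
>> cli-ban-*), so filtering on that prefix could narrow the check to
>> operator-created leftovers specifically.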
>> One could also think of a "crm_resource --clear --dry-run" that doesn't
>> exist yet. Please file an RFE at https://bugs.clusterlabs.org if
>> this would be useful.
> Thinking about it: Location constraints can have an expiry
> timestamp, but they don't have a creation timestamp (AFAIK). Thus
> the first step in automatically cleaning up location constraints
> that don't have an expiration time set would be adding a creation
> timestamp. The next step would be defining a "maximum location
> constraint lifetime", and finally actual code that removes
> location constraints that have exceeded their lifetime.
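> The cleanup step might look roughly like this, assuming a
> *hypothetical* "created" attribute (epoch seconds) were recorded on
> each rsc_location element -- pacemaker does not store one today,
> which is exactly the missing first step noted above:
>
> ```python
> # Sketch of lifetime-based cleanup over a (hypothetical) "created"
> # attribute on rsc_location elements; SAMPLE is illustrative only.
> import xml.etree.ElementTree as ET
>
> MAX_LIFETIME = 24 * 3600  # a hypothetical maximum constraint lifetime
>
> SAMPLE = """\
> <constraints>
>   <rsc_location id="cli-prefer-pgsql" rsc="pgsql" node="node1"
>                 score="INFINITY" created="1000"/>
>   <rsc_location id="loc-web" rsc="web" node="node2"
>                 score="50" created="95000"/>
> </constraints>
> """
>
> def expired(constraints_xml, now):
>     """Ids of constraints whose (hypothetical) age exceeds the limit."""
>     root = ET.fromstring(constraints_xml)
>     return [c.get("id") for c in root.iter("rsc_location")
>             if now - int(c.get("created", now)) > MAX_LIFETIME]
>
> print(expired(SAMPLE, now=100000))  # only the old cli-prefer-pgsql
> ```
>
> The ids returned could then be fed to e.g. `pcs constraint remove <id>`,
> or merely reported, per the alert-or-clear policy discussed earlier.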
It feels that any such automatism (even if driven by user
configuration) would, moreover, need to prove (with an internal
what-if simulation) that no change in resource assignment would
immediately occur as a result, since you likely don't want spurious,
unnecessary shuffles in your cluster (they are only "positively
deterministic" if you really review the plans periodically and are
fine with them).
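Such a guard might be sketched as follows: run the policy engine in
what-if mode against a modified CIB (e.g. `crm_simulate -Sx
modified-cib.xml`) and only proceed with the removal when no resource
actions are planned. The sample output below is purely illustrative;
the exact crm_simulate format varies between versions.

```python
# Sketch: refuse to drop a constraint if a what-if run plans any actions.
# SAMPLE_OUTPUT is an illustrative stand-in for crm_simulate output.
SAMPLE_OUTPUT = """\
Transition Summary:
 * Move    pgsql    ( node2 -> node1 )
"""

def planned_actions(simulate_output):
    """Extract the action lines following 'Transition Summary:'."""
    lines = simulate_output.splitlines()
    try:
        start = lines.index("Transition Summary:") + 1
    except ValueError:
        return []
    return [ln.strip(" *") for ln in lines[start:]
            if ln.strip().startswith("*")]

actions = planned_actions(SAMPLE_OUTPUT)
if actions:
    print("NOT safe to drop constraint; cluster would react:", actions)
```

With an empty transition summary the removal would be a no-op for
placement and could be applied silently; anything else should be
surfaced for review instead.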
> Thinking about it: There may be a more pleasing solution for the
> current code base: Define a maximum lifetime for any migration
> constraint, and use that if the user did not specify one. (The
> constraints would still linger around, but would no longer have
> any effect.)
All in all, I think the impact of cancelling should be factored into
such decisions as mentioned, to pass the "least surprise" test.