[ClusterLabs] Doing reload right

Andrew Beekhof abeekhof at redhat.com
Tue Jul 26 04:40:10 UTC 2016


On Sat, Jul 23, 2016 at 7:10 AM, Ken Gaillot <kgaillot at redhat.com> wrote:
> On 07/21/2016 07:46 PM, Andrew Beekhof wrote:
>>>> What do you mean by native restart action? Systemd restart?
>>
>> Whatever the agent supports.
>
> Are you suggesting that pacemaker start checking whether the agent
> metadata advertises a "restart" action? Or just assume that certain
> resource classes support restart (e.g. systemd) and others don't (e.g. ocf)?

No, I'm suggesting that the crm_resource CLI start checking... not the same thing.
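
For illustration only, a sketch of what that check might look like
(ocf:heartbeat:Dummy is just a stand-in agent):

    # Ask the agent for its metadata and look for an advertised
    # "restart" action before falling back to stop+start.
    crm_resource --show-metadata ocf:heartbeat:Dummy \
        | grep -q '<action name="restart"' \
        && echo "agent advertises a native restart" \
        || echo "no native restart action"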

>
>>>>
>>>>> 3. re-enables the recurring monitor operations regardless of whether
>>>>> the reload succeeds, fails, or times out, etc
>>>>>
>>>>> No maintenance mode required, and whatever state the resource ends up
>>>>> in is re-detected by the cluster in step 3.
>>>>
>>>> If you're lucky :-)
>>>>
>>>> The cluster may still mess with the resource even without monitors, e.g.
>>>> a dependency fails or a preferred node comes online.
>>
>> Can you explain how neither of those results in a restart of the service?
>
> Unless the resource is unmanaged, the cluster could do something like
> move it to a different node, disrupting the local force-restart.

But the next time it starts there, it will come up with the new
configuration, achieving the desired effect.

This is no different from using maintenance-mode and the cluster moving
or stopping the resource immediately after maintenance-mode is disabled
again. Either way, the resource is no longer running with the old
configuration at the end of the call.
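
For comparison, the maintenance-mode version would be something like
this (a sketch only; "myrsc" is a stand-in resource name):

    # Stop the cluster from reacting, restart the resource in place,
    # then hand control back.  The cluster may still move or stop the
    # resource as soon as maintenance-mode is disabled again.
    crm_attribute --name maintenance-mode --update true
    crm_resource --resource myrsc --force-stop
    crm_resource --resource myrsc --force-start
    crm_attribute --name maintenance-mode --delete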

>
> Ideally, we'd be able to disable monitors and unmanage the resource for
> the duration of the force-restart, but only on the local node.
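
The closest approximation today is per-resource rather than per-node,
e.g. (a sketch; "myrsc" is again a stand-in):

    # Unmanage just this one resource cluster-wide (its recurring
    # monitors keep running unless disabled separately), restart it
    # in place, then hand it back to the cluster.
    crm_resource --resource myrsc --meta --set-parameter is-managed --parameter-value false
    crm_resource --resource myrsc --force-stop
    crm_resource --resource myrsc --force-start
    crm_resource --resource myrsc --meta --set-parameter is-managed --parameter-value true
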
>
>>>> Maintenance
>>>> mode/unmanaging would still be safer (though no --force-* option is
>>>> completely safe, besides check).
>>>
>>> I'm happy with whatever you gurus come up with ;-)  I'm just hoping
>>> that it can be made possible to pinpoint an individual resource on an
>>> individual node, rather than having to toggle maintenance flag(s)
>>> across a whole set of clones, or a whole node.
>>
>> Yep.



