[ClusterLabs] Antw: [EXT] Re: resource management of standby node
Ken Gaillot
kgaillot at redhat.com
Mon Nov 30 13:52:19 EST 2020
On Mon, 2020-11-30 at 18:17 +0300, Andrei Borzenkov wrote:
> 30.11.2020 17:05, Ulrich Windl wrote:
> > >>> Andrei Borzenkov <arvidjaar at gmail.com> wrote on 30.11.2020 at
> > 14:18 in message
> > <CAA91j0XLfztbSmCkDGM0Ofb2FBKCquwyhiEU8LV5WgiUU3H=iA at mail.gmail.com>:
> > > On Mon, Nov 30, 2020 at 3:11 PM Ulrich Windl
> > > <Ulrich.Windl at rz.uni-regensburg.de> wrote:
> > > >
> > > > Hi!
> > > >
> > > > In SLES15 I'm surprised by what a standby node does: My guess was
> > > > that a standby node would stop all resources and then just "shut
> > > > up", but it seems it still tries to place resources and calls
> > > > monitor operations.
The variety of maintenance-related options can be confusing. Standby
should cause any active resources to stop on the affected node(s), and
no new resources will be placed on the node(s), but probes (one-time
monitors to determine current state) can still run if state is unknown
(e.g. a new resource is added, or history is cleaned up).
Standby doesn't prevent the scheduler from running, it just tells the
scheduler what should (not) be done.
Standby nodes continue to contribute to quorum at the cluster level
(corosync), and can vote for or be elected DC.
Some detail is available at:
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#s-monitoring-unmanaged
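For reference, standby is just a node attribute, so it can be toggled
with the usual low-level tools; a minimal sketch using crm_attribute
(crmsh and pcs have their own wrappers), with the node name h18 taken
from the logs quoted further down:

    # put node h18 in standby; active resources on it are stopped or moved away
    crm_attribute --type nodes --node h18 --name standby --update on
    # take it out of standby again
    crm_attribute --type nodes --node h18 --name standby --update off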
> > > Standby nodes are ineligible for running resources. It does not stop
> > > pacemaker from trying to place resources somewhere in the cluster.
> >
> > But it's somewhat ridiculous if all nodes are in standby.
> >
>
> If you do not want the cluster to manage resources, put the cluster in
> maintenance mode.
Keep in mind that will leave resources in their current state, so any
active resources will not be stopped.
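As a minimal sketch (using the low-level crm_attribute tool; crmsh and
pcs have equivalent commands), cluster-wide maintenance mode is just a
cluster property:

    # stop managing (starting/stopping/monitoring) resources cluster-wide
    crm_attribute --type crm_config --name maintenance-mode --update true
    # resume normal management; pacemaker re-probes resource state first
    crm_attribute --type crm_config --name maintenance-mode --delete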
> > > > Like this after a configuration change:
> > > > pacemaker-controld[49413]: notice: Result of probe operation for
> > > > prm_test_raid_md1 on h18: not running
> > > >
> > >
> > > A probe is not a monitor. Normally it happens once when pacemaker is
> > > started. It should not really be affected by putting a node in
> > > standby.
> >
> > A configuration change triggered it. Again: It makes little sense
> > if all nodes
> > are in standby: What action would be performed depending on the
> > result of the
> > probe? None, I guess; so why probe?
Pacemaker wants to know the current state at all times, so as soon as
circumstances change (e.g. standby mode is lifted), it can start things
right away rather than having to wait for probes on all nodes to come
back.
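That is also why cleaning up resource history triggers new probes, as
mentioned above; e.g. with crm_resource (resource name taken from the
logs quoted above):

    # discard the stored state/history of one resource; pacemaker re-probes it
    crm_resource --cleanup --resource prm_test_raid_md1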
> I guess because you are using the standby node in a different way than
> it was designed for. It is really intended for gracefully isolating a
> single node, not for stopping resource management throughout the whole
> cluster.
Though there's nothing wrong with putting all nodes in standby. Another
alternative would be to set the stop-all-resources cluster property.
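As a sketch, that property is set the same way as maintenance-mode
(again via crm_attribute; crmsh/pcs equivalents exist):

    # keep all resources stopped cluster-wide; probes still run so state stays known
    crm_attribute --type crm_config --name stop-all-resources --update true
    # allow resources to start again
    crm_attribute --type crm_config --name stop-all-resources --delete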
> > > > Or this (on the DC node):
> > > > pacemaker-schedulerd[69599]: notice: Cannot pair prm_test_raid_md1:0
> > > > with instance of cln_DLM
> > > >
> > >
> > > So? As mentioned, pacemaker still attempts to manage resources,
> > > it
> > > just excludes standby nodes from the list of possible candidates.
> > > If
> >
> > But there are no (zero) candidates!
> >
>
> If there is a workflow that can only be implemented by putting all nodes
> in standby, then I guess this optimization could be implemented. So far
> it is not clear: are you complaining about log entries, or did you
> experience a real problem? I have a feeling that everything worked for
> you but you are annoyed by seeing log entries that you did not expect.
>
> > > all nodes are in standby mode, no resource can run anywhere, but
> > > pacemaker still needs to try placing resources to see that. Maybe
> > > you really want cluster maintenance mode instead.
> >
> > I thought about that:
> > First put all nodes in standby to stop resources, then put all nodes
> > in maintenance mode, then edit the configuration.
>
> There is no maintenance mode for a single node.
Actually there is ... you can set the "maintenance" node attribute to
true:
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#_special_node_attributes
However I don't recommend it, because unlike cluster-wide maintenance
mode, restarting Pacemaker on a node in single-node maintenance mode
will cause problems.
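For completeness, a sketch of that attribute (again via crm_attribute;
node name h18 from the logs above):

    # stop managing resources on node h18 only; they stay running, unmonitored
    crm_attribute --type nodes --node h18 --name maintenance --update true
    # return that node to normal management
    crm_attribute --type nodes --node h18 --name maintenance --delete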
> > Then turn off maintenance mode for all nodes, then put them online
> > again.
> >
> > Sounds somewhat complicated.
> >
>
> Put the cluster in maintenance mode, edit the configuration, exit
> maintenance mode.
> You do not need to stop resources as long as the new configuration has
> the same resources under different names. They can remain active;
> pacemaker will probe and discover them when exiting maintenance mode.
Currently, Pacemaker can't detect renames as such. It will consider the
old name as an orphan resource that must be stopped, and the new name
as a new resource to be started.
> > > > Maybe I should have done it differently, but after a test setup I
> > > > noticed that I had named my primitives in an inconsistent way, and
> > > > wanted to mass-rename resources.
> > > > As renaming running resources had issues in the past, I wanted to
> > > > stop all resources before changing the configuration.
> > > > So I was expecting the cluster to be silent until I put at least
> > > > one node online again.
> > > >
> > > > Expectation failed. Is there a better way to do it?
> > > >
> > > > Regards,
> > > > Ulrich
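Given that a rename is treated as "stop the orphan, start the new
resource", one workable pattern for the mass rename, sketched here with
the low-level tools (crmsh/pcs users would do the equivalent through
their shell), is to keep everything stopped while editing:

    # 1. keep all resources stopped while the configuration is edited
    crm_attribute --type crm_config --name stop-all-resources --update true
    # 2. dump the configuration, rename the primitives offline, push it back
    cibadmin --query --scope configuration > /tmp/cib.xml
    #    ... edit the resource ids in /tmp/cib.xml ...
    cibadmin --replace --scope configuration --xml-file /tmp/cib.xml
    # 3. let resources start again under their new names
    crm_attribute --type crm_config --name stop-all-resources --delete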
--
Ken Gaillot <kgaillot at redhat.com>