[ClusterLabs] Configure a resource to only run a single instance at all times
jm2109384 at gmail.com
jm2109384 at gmail.com
Sat Nov 3 12:14:35 EDT 2018
Thanks for pointing this out. This solution worked for me.
On Wed., Oct. 31, 2018, 09:43 Andrei Borzenkov <arvidjaar at gmail.com> wrote:
> On Wed, Oct 31, 2018 at 3:59 PM jm2109384 at gmail.com <jm2109384 at gmail.com> wrote:
> >
> > Thanks for responding Andrei.
> >
> > How would I enable monitors on inactive nodes?
>
> Quoting documentation:
>
> By default, a monitor operation will ensure that the resource is
> running where it is supposed to. The target-role property can be used
> for further checking.
>
> For example, if a resource has one monitor operation with interval=10
> role=Started and a second monitor operation with interval=11
> role=Stopped, the cluster will run the first monitor on any nodes it
> thinks should be running the resource, and the second monitor on any
> nodes that it thinks should not be running the resource (for the truly
> paranoid, who want to know when an administrator manually starts a
> service by mistake).
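>
> With pcs, adding that second monitor could look roughly like this (a
> sketch, untested, using the resource name from your configuration below):
>
>     pcs resource op add app1_service monitor interval=11s role=Stopped timeout=120s
>
> Afterwards "pcs resource show app1_service" (older pcs) or
> "pcs resource config app1_service" (newer pcs) should list both monitor
> operations, one for the Started role and one for the Stopped role.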
>
> > I thought monitors run on all nodes that the resource is on. Would you
> > be able to provide a configuration sample that I can refer to?
> > Or would it be possible to configure the cluster to perform a probe at a
> > given interval? I would appreciate some guidance on this. Thanks.
> >
> >
> > On Mon., Oct. 29, 2018, 13:20 Andrei Borzenkov <arvidjaar at gmail.com> wrote:
> >>
> >> On 29.10.2018 20:04, jm2109384 at gmail.com wrote:
> >> > Hi Guys,
> >> >
> >> > I'm a new user of the Pacemaker clustering software and I've just
> >> > configured a cluster with a single systemd resource. I have the
> >> > cluster and resource configurations below. Failover works perfectly
> >> > between the two nodes; however, I wanted a constraint/rule or a config
> >> > that will ensure that my resource has only a single instance running
> >> > on the cluster at all times. I'd like to avoid the situation where the
> >> > resource gets started manually and ends up running on both cluster
> >> > nodes. Hoping to get your advice on how to achieve this. Thanks in
> >> > advance.
> >>
> >> pacemaker does a one-time probe on each node when pacemaker is started.
> >> This covers the case where the resource was started manually before
> >> pacemaker. You can enable a monitor on inactive nodes, which should also
> >> detect whether the resource was started outside of pacemaker. But note
> >> that this leaves a window (up to the monitoring interval) during which
> >> multiple instances may be up on different nodes before pacemaker becomes
> >> aware of it.
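> >>
> >> If you don't want to wait for the next monitor interval, you can also
> >> ask pacemaker to re-run its probes on demand. On pacemaker 1.1 that is
> >> roughly (a sketch; check crm_resource(8) for your version):
> >>
> >>     crm_resource --reprobe               # recheck all resources on all nodes
> >>     crm_resource --reprobe --node node2  # limit the recheck to one node
> >>
> >> This makes the cluster look again for resources started outside of its
> >> control, the same check it performs once at startup.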
> >>
> >> >
> >> > ----
> >> > Cluster Name: cluster1
> >> > Corosync Nodes:
> >> > node1 node2
> >> > Pacemaker Nodes:
> >> > node1 node2
> >> >
> >> > Resources:
> >> > Resource: app1_service (class=systemd type=app1-server)
> >> > Operations: monitor interval=10s (app1_service-monitor-interval-10s)
> >> >             start interval=0s timeout=120s (app1_service-start-interval-0s)
> >> >             stop interval=0s timeout=120s (app1_service-stop-interval-0s)
> >> >             failure interval=0s timeout=120s (app1_service-failure-interval-0s)
> >> >
> >> > Stonith Devices:
> >> > Fencing Levels:
> >> >
> >> > Location Constraints:
> >> > Ordering Constraints:
> >> > Colocation Constraints:
> >> > Ticket Constraints:
> >> >
> >> > Alerts:
> >> > No alerts defined
> >> >
> >> > Resources Defaults:
> >> > resource-stickiness: 100
> >> > migration-threshold: 1
> >> > failure-timeout: 120s
> >> > Operations Defaults:
> >> > No defaults set
> >> >
> >> > Cluster Properties:
> >> > cluster-infrastructure: corosync
> >> > dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9
> >> > have-watchdog: false
> >> > last-lrm-refresh: 1540829641
> >> > no-quorum-policy: ignore
> >> > stonith-enabled: false
> >> > symmetric-cluster: true
> >> >
> >> > Quorum:
> >> > Options:
> >> >
> >> >
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>