<div dir="auto">Thanks for pointing this out. This solution worked for me.</div><br><div class="gmail_quote"><div dir="ltr">On Wed., Oct. 31, 2018, 09:43 Andrei Borzenkov <<a href="mailto:arvidjaar@gmail.com">arvidjaar@gmail.com</a> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Wed, Oct 31, 2018 at 3:59 PM <a href="mailto:jm2109384@gmail.com" target="_blank" rel="noreferrer">jm2109384@gmail.com</a> <<a href="mailto:jm2109384@gmail.com" target="_blank" rel="noreferrer">jm2109384@gmail.com</a>> wrote:<br>
><br>
> Thanks for responding, Andrei.<br>
><br>
> How would I enable monitors on inactive nodes?<br>
<br>
Quoting the documentation:<br>
<br>
By default, a monitor operation will ensure that the resource is running where it is supposed to. The target-role property can be used for further checking.<br>
<br>
For example, if a resource has one monitor operation with interval=10 role=Started and a second monitor operation with interval=11 role=Stopped, the cluster will run the first monitor on any nodes it thinks should be running the resource, and the second monitor on any nodes that it thinks should not be running the resource (for the truly paranoid, who want to know when an administrator manually starts a service by mistake).<br>
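<br>
For your app1_service resource, a minimal sketch of that second monitor, assuming pcs is used to manage the cluster (the 11s interval is only an example; it just has to differ from the existing 10s monitor), would be:<br>
<br>
pcs resource op add app1_service monitor interval=11s role=Stopped<br>
<br>
With that in place, nodes that are not supposed to run app1_service still monitor it, so a manually started copy should be detected and stopped within roughly one monitor interval.<br>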
<br>
> I thought monitors run on all the nodes that the resource is on. Would you be able to provide a sample configuration that I can refer to?<br>
> Or is it possible to configure the cluster to perform a probe at a given interval? I would appreciate some guidance on this. Thanks.<br>
><br>
><br>
> On Mon., Oct. 29, 2018, 13:20 Andrei Borzenkov, <<a href="mailto:arvidjaar@gmail.com" target="_blank" rel="noreferrer">arvidjaar@gmail.com</a>> wrote:<br>
>><br>
>> 29.10.2018 20:04, <a href="mailto:jm2109384@gmail.com" target="_blank" rel="noreferrer">jm2109384@gmail.com</a> wrote:<br>
>> > Hi Guys,<br>
>> ><br>
>> > I'm a new user of the Pacemaker clustering software, and I've just configured<br>
>> > a cluster with a single systemd resource. I have the cluster and resource<br>
>> > configuration below. Failover works perfectly between the two nodes; however,<br>
>> > I want a constraint/rule or a config that will ensure that my resource has a<br>
>> > single instance running on the cluster at all times. I'd like to avoid the<br>
>> > situation where the resource gets started manually and ends up running on<br>
>> > both cluster nodes. I'm hoping to get your advice on how to achieve this.<br>
>> > Thanks in advance.<br>
>><br>
>> Pacemaker does a one-time probe on each node when it starts. This<br>
>> covers the case where the resource was manually started before<br>
>> pacemaker. You can enable a monitor on inactive nodes, which should<br>
>> also detect whether the resource was started outside of pacemaker. But<br>
>> note that this leaves a window (up to the monitoring interval) during<br>
>> which multiple instances may be up on different nodes before pacemaker<br>
>> becomes aware of it.<br>
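>> <br>
>> As a rough sketch: if you do not want to wait for a monitor interval,<br>
>> clearing the resource's operation history should make pacemaker<br>
>> re-probe its state on all nodes. Assuming pcs, that would be something<br>
>> like<br>
>> <br>
>> pcs resource cleanup app1_service<br>
>> <br>
>> (on newer pcs/pacemaker versions, "pcs resource refresh" is the<br>
>> equivalent for forcing a full re-probe).<br>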
>><br>
>> ><br>
>> > ----<br>
>> > Cluster Name: cluster1<br>
>> > Corosync Nodes:<br>
>> > node1 node2<br>
>> > Pacemaker Nodes:<br>
>> > node1 node2<br>
>> ><br>
>> > Resources:<br>
>> > Resource: app1_service (class=systemd type=app1-server)<br>
>> > Operations: monitor interval=10s (app1_service-monitor-interval-10s)<br>
>> > start interval=0s timeout=120s<br>
>> > (app1_service-start-interval-0s)<br>
>> > stop interval=0s timeout=120s (app1_service-stop-interval-0s)<br>
>> > failure interval=0s timeout=120s<br>
>> > (app1_service-failure-interval-0s)<br>
>> ><br>
>> > Stonith Devices:<br>
>> > Fencing Levels:<br>
>> ><br>
>> > Location Constraints:<br>
>> > Ordering Constraints:<br>
>> > Colocation Constraints:<br>
>> > Ticket Constraints:<br>
>> ><br>
>> > Alerts:<br>
>> > No alerts defined<br>
>> ><br>
>> > Resources Defaults:<br>
>> > resource-stickiness: 100<br>
>> > migration-threshold: 1<br>
>> > failure-timeout: 120s<br>
>> > Operations Defaults:<br>
>> > No defaults set<br>
>> ><br>
>> > Cluster Properties:<br>
>> > cluster-infrastructure: corosync<br>
>> > dc-version: 1.1.18-11.el7_5.3-2b07d5c5a9<br>
>> > have-watchdog: false<br>
>> > last-lrm-refresh: 1540829641<br>
>> > no-quorum-policy: ignore<br>
>> > stonith-enabled: false<br>
>> > symmetric-cluster: true<br>
>> ><br>
>> > Quorum:<br>
>> > Options:<br>
>> ><br>
>> ><br>
>><br>
><br>
_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org" target="_blank" rel="noreferrer">Users@clusterlabs.org</a><br>
<a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote></div>