<html><head></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space; "><br><div><div>On Oct 6, 2010, at 3:43 AM, Jayakrishnan wrote:</div><br class="Apple-interchange-newline"><blockquote type="cite"><div><br>Hello,</div>
<div> </div>
<div>My guess at the change:&nbsp;</div>
<div>location loc_pingd g_cluster_services rule -inf: not_defined pingd or pingd<font style="BACKGROUND-COLOR: #ffff66"> number:lte 0</font><br><br>should work</div>
<div class="gmail_quote"> </div>
<div class="gmail_quote"><br clear="all"><br></div></blockquote><div><br></div><div><br></div><div>ocf:pacemaker:ping is recommended as a replacement for the pingd RA.</div><div><br></div><div>Both RAs define the node attribute "pingd" by default. I think this question comes up a lot, because the crm ra meta output for both agents is misleading:</div><div><br></div><div># crm ra meta ocf:pacemaker:ping</div><div><div><br></div><div>name (string, [undef]): Attribute name</div><div> The name of the attributes to set. This is the name to be used in the constraints.</div><div><br></div><div>I think it should say "pingd" instead of "undef".</div><div><br></div><div>Obviously, you can define any name you like and use that instead, but, unfortunately, "pingd" is the only attribute name that crm_mon -f will display; the name is hardcoded in crm_mon.c:</div><div><br></div><div><div> if(safe_str_eq("pingd", g_hash_table_lookup(rsc->meta, "type"))) {</div></div><div><br></div><div>This is an inconvenience for multi-homed clusters, where you need to define a separate ping clone for each network, so maybe crm_mon should display all attributes starting with "ping". Just a thought.</div><div><br></div><div><br></div><div>Vadym</div><div><br></div></div><div><br></div><blockquote type="cite"><div class="gmail_quote">-- <br>Regards,<br><br>Jayakrishnan. L<br><br>Visit: <br><a href="http://www.foralllinux.blogspot.com/" target="_blank">www.foralllinux.blogspot.com</a><br><a href="http://www.jayakrishnan.bravehost.com/" target="_blank">www.jayakrishnan.bravehost.com</a><br>
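</div></blockquote><div><br></div><div>For example, a dual-homed setup might be configured along these lines (the ping_lan/ping_san names and the ping target addresses are only illustrative, untested):</div><div><br></div><div>primitive p_ping_lan ocf:pacemaker:ping \<br>&nbsp;&nbsp;&nbsp;&nbsp;params name="ping_lan" host_list="192.168.0.1" \<br>&nbsp;&nbsp;&nbsp;&nbsp;op monitor interval="15s" timeout="60s"<br>primitive p_ping_san ocf:pacemaker:ping \<br>&nbsp;&nbsp;&nbsp;&nbsp;params name="ping_san" host_list="10.0.0.1" \<br>&nbsp;&nbsp;&nbsp;&nbsp;op monitor interval="15s" timeout="60s"<br>clone c_ping_lan p_ping_lan meta globally-unique="false"<br>clone c_ping_san p_ping_san meta globally-unique="false"<br>location loc_need_lan g_cluster_services rule -inf: not_defined ping_lan or ping_lan number:lte 0<br>location loc_need_san g_cluster_services rule -inf: not_defined ping_san or ping_san number:lte 0</div><div><br></div><div>With names like these, crm_mon -f shows neither attribute, because neither one is called "pingd".</div><div><br></div><blockquote type="cite"><div class="gmail_quote">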
</div>
<div class="gmail_quote"> </div>
<div class="gmail_quote"> </div>
<div class="gmail_quote">On Wed, Oct 6, 2010 at 11:56 AM, Claus Denk <span dir="ltr"><<a href="mailto:denk@us.es">denk@us.es</a>></span> wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">I am having a similar problem, so let's wait for the experts. But in the meantime, try changing<br><br>
<br>location loc_pingd g_cluster_services rule -inf: not_defined p_pingd<br>or p_pingd lte 0<br><br>to<br><br>location loc_pingd g_cluster_services rule -inf: not_defined pingd<br>or pingd number:lte 0<br><br>and see what happens. As far as I have read, it is also recommended to use the "ping"<br>
resource agent instead of "pingd"...<br><br>kind regards, Claus<br><br><br>On 10/06/2010 05:45 AM, Craig Hurley wrote:<br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">Hello,<br><br>I have a 2 node cluster, running DRBD, heartbeat and pacemaker in<br>active/passive mode. On both nodes, eth0 is connected to the main<br>
network, eth1 is used to connect the nodes directly to each other.<br>The nodes share a virtual IP address on eth0. Pacemaker is also<br>controlling a custom service with an LSB compliant script in<br>/etc/init.d/. All of this is working fine and I'm happy with it.<br>
<br>I'd like to configure the nodes so that they fail over if eth0 goes<br>down (or if they cannot access a particular gateway), so I tried<br>adding the following (as per<br><a href="http://www.clusterlabs.org/wiki/Example_configurations#Set_up_pingd" target="_blank">http://www.clusterlabs.org/wiki/Example_configurations#Set_up_pingd</a>)<br>
<br>primitive p_pingd ocf:pacemaker:pingd params host_list=172.20.0.254 op<br>monitor interval=15s timeout=5s<br>clone c_pingd p_pingd meta globally-unique=false<br>location loc_pingd g_cluster_services rule -inf: not_defined p_pingd<br>
or p_pingd lte 0<br><br>... but when I do add that, all resources are stopped and they don't<br>come back up on either node. Am I making a basic mistake or do you<br>need more info from me?<br><br>All help is appreciated,<br>
Craig.<br><br><br>pacemaker<br>Version: 1.0.8+hg15494-2ubuntu2<br><br>heartbeat<br>Version: 1:3.0.3-1ubuntu1<br><br>drbd8-utils<br>Version: 2:8.3.7-1ubuntu2.1<br><br><br>rp@rpalpha:~$ sudo crm configure show<br>node $id="32482293-7b0f-466e-b405-c64bcfa2747d" rpalpha<br>
node $id="3f2aac12-05aa-4ac7-b91f-c47fa28efb44" rpbravo<br>primitive p_drbd_data ocf:linbit:drbd \<br> params drbd_resource="data" \<br> op monitor interval="30s"<br>primitive p_fs_data ocf:heartbeat:Filesystem \<br>
params device="/dev/drbd/by-res/data" directory="/mnt/data"<br>fstype="ext4"<br>primitive p_ip ocf:heartbeat:IPaddr2 \<br> params ip="172.20.50.3" cidr_netmask="255.255.0.0" nic="eth0" \<br>
op monitor interval="30s"<br>primitive p_rp lsb:rp \<br> op monitor interval="30s" \<br> meta target-role="Started"<br>group g_cluster_services p_ip p_fs_data p_rp<br>ms ms_drbd p_drbd_data \<br>
meta master-max="1" master-node-max="1" clone-max="2"<br>clone-node-max="1" notify="true"<br>location loc_preferred_master g_cluster_services inf: rpalpha<br>colocation colo_mnt_on_master inf: g_cluster_services ms_drbd:Master<br>
order ord_mount_after_drbd inf: ms_drbd:promote g_cluster_services:start<br>property $id="cib-bootstrap-options" \<br> dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \<br> cluster-infrastructure="Heartbeat" \<br>
no-quorum-policy="ignore" \<br> stonith-enabled="false" \<br> expected-quorum-votes="2" \<br><br><br>rp@rpalpha:~$ sudo cat /etc/ha.d/<a href="http://ha.cf/" target="_blank">ha.cf</a><br>
node rpalpha<br>node rpbravo<br><br>keepalive 2<br>warntime 5<br>deadtime 15<br>initdead 60<br><br>mcast eth0 239.0.0.43 694 1 0<br>bcast eth1<br><br>use_logd yes<br>autojoin none<br>crm respawn<br><br><br>rp@rpalpha:~$ sudo cat /etc/drbd.conf<br>
global {<br> usage-count no;<br>}<br>common {<br> protocol C;<br><br> handlers {}<br><br> startup {}<br><br> disk {}<br><br> net {<br> cram-hmac-alg sha1;<br> shared-secret "foobar";<br>
}<br><br> syncer {<br> verify-alg sha1;<br> rate 100M;<br> }<br>}<br>resource data {<br> device /dev/drbd0;<br> meta-disk internal;<br> on rpalpha {<br>
disk /dev/mapper/rpalpha-data;<br> address <a href="http://192.168.1.1:7789/" target="_blank">192.168.1.1:7789</a>;<br> }<br> on rpbravo {<br> disk /dev/mapper/rpbravo-data;<br>
address <a href="http://192.168.1.2:7789/" target="_blank">192.168.1.2:7789</a>;<br> }<br>}<br><br>_______________________________________________<br>Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br><br>Project Home: <a href="http://www.clusterlabs.org/" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>Bugs: <a href="http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker" target="_blank">http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker</a><br>
<br></blockquote><br></blockquote></div><br><br><div> <br class="webkit-block-placeholder"></div>
</blockquote></div><br></body></html>