<div class="gmail_quote">On Tue, May 11, 2010 at 11:58 AM, Dejan Muhamedagic <span dir="ltr"><<a href="mailto:dejanmm@fastmail.fm">dejanmm@fastmail.fm</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0.8ex; border-left-width: 1px; border-left-color: rgb(204, 204, 204); border-left-style: solid; padding-left: 1ex; ">
<div><div class="h5">Do you see the attribute set in the status section (cibadmin -Ql</div></div>| grep -w pingd)? If not, then the problem is with the resource.</blockquote><div><br></div><div> [root@ha1 ~]# cibadmin -Ql | grep -w pingd</div>
 <expression attribute="pingd" id="nfs-group-with-pinggw-expression" operation="not_defined"/>
 <expression attribute="pingd" id="nfs-group-with-pinggw-expression-0" operation="lte" value="0"/>
 <nvpair id="status-ha1-pingd" name="pingd" value="100"/>
 <nvpair id="status-ha2-pingd" name="pingd" value="100"/>
I tried to switch from the pacemaker:ping RA to the pacemaker:pingd RA
(even though I read that the former should be preferred), while the
iptables rule is still in place and prevents ha1 from reaching the gw.
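For reference, the equivalent definition with the ocf:pacemaker:ping RA would look roughly like this (a sketch, not my exact config; the monitor interval, dampen value, and timeouts are guesses of mine, and note that ping relies on a recurring monitor op to keep the attribute updated):

[root@ha1 ~]# crm configure primitive pinggw ocf:pacemaker:ping \
> params host_list="192.168.101.1" multiplier="100" dampen="5s" \
> op monitor interval="10" timeout="60" \
> op start interval="0" timeout="60" \
> op stop interval="0" timeout="20"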
[root@ha1 ~]# crm resource stop cl-pinggw
--> services go down (OK, expected)

[root@ha1 ~]# crm configure delete nfs-group-with-pinggw
[root@ha1 ~]# crm configure delete cl-pinggw
[root@ha1 ~]# crm resource delete nfs-group-with-pinggw
--> services restart

[root@ha1 ~]# crm resource stop pinggw
[root@ha1 ~]# crm configure delete pinggw
[root@ha1 ~]# crm configure primitive pinggw ocf:pacemaker:pingd \
> params host_list="192.168.101.1" multiplier="100" \
> op start interval="0" timeout="90" \
> op stop interval="0" timeout="100"
[root@ha1 ~]# crm configure clone cl-pinggw pinggw meta globally-unique="false"

Now I correctly have:

Migration summary:
* Node ha1: pingd=0
* Node ha2: pingd=100

[root@ha1 ~]# crm configure location nfs-group-with-pinggw nfs-group rule -inf: not_defined pinggw or pinggw lte 0
--> everything stops
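Re-reading this, one thing worth checking (just a guess on my side): the rule tests an attribute named "pinggw", while the attribute the RA actually sets is "pingd" (see the Migration summary above and the old CIB expressions; pingd/ping use name="pingd" unless the name parameter is overridden), so "not_defined pinggw" would be true on both nodes and give -inf everywhere. If so, the rule would need to reference the right attribute, e.g.:

[root@ha1 ~]# crm configure location nfs-group-with-pinggw nfs-group \
> rule -inf: not_defined pingd or pingd lte 0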
But this is another problem I'm trying to solve (it seems to me that with a group ordered so that an IPaddr2 resource comes before a linbit:drbd resource, I don't get the failover, in the sense that node ha1 remains the drbd primary and there is no demote/promote... I will eventually post about it in a separate e-mail).

From this test it seems that the pacemaker:ping RA doesn't work for me... I will stay with pingd for the moment.
<div class="im"><br>
> > Probably I didn't understand correctly what is described at the link:
> > http://www.clusterlabs.org/wiki/Pingd_with_resources_on_different_networks [1]
> > or it is outdated now... and instead of defining two clones it is better
> > (aka works) to populate the host_list parameter as described here, in case
> > more networks are connected:
> >
> > http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch09s03s03.html [2]
>
> The former is when you need to test connectivity on different
> networks. I don't know if you need that.

Ok. [1] above makes sense if I have different resources bound to different networks and I want to prevent the loss of one network from causing an unnecessary failover of the other resource...
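If I understood the wiki page correctly, the two-clone layout of [1] would look roughly like the sketch below; the resource names, the second gateway address, and the per-network attribute names are all placeholders of mine. Each clone feeds its own attribute via the name parameter, so each resource can get its own location rule:

[root@ha1 ~]# crm configure primitive ping-net1 ocf:pacemaker:pingd \
> params host_list="192.168.101.1" name="pingd-net1" multiplier="100"
[root@ha1 ~]# crm configure primitive ping-net2 ocf:pacemaker:pingd \
> params host_list="192.168.102.1" name="pingd-net2" multiplier="100"
[root@ha1 ~]# crm configure clone cl-ping-net1 ping-net1 meta globally-unique="false"
[root@ha1 ~]# crm configure clone cl-ping-net2 ping-net2 meta globally-unique="false"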
In the case where, for some reason, I have a single resource that depends on two networks, I can instead simply use [2] with only one clone resource and an extended host_list...
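Something like this, I suppose (the second address is again only a placeholder for the other network's gateway):

[root@ha1 ~]# crm configure primitive pinggw ocf:pacemaker:pingd \
> params host_list="192.168.101.1 192.168.102.1" multiplier="100" \
> op start interval="0" timeout="90" \
> op stop interval="0" timeout="100"

The attribute then scores multiplier points per reachable host (0, 100 or 200 here), so a rule with "lte 0" means both gateways are unreachable, while "lte 100" would already fail over when one of them is lost.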