Hello Andrew,

I have created the report.

I do not know how to access your Bugzilla, though. Could you please let me know where to file it and what to do?

Thanks in advance.

On Wed, Mar 24, 2010 at 12:49 AM, Andrew Beekhof <andrew@beekhof.net> wrote:
> On Wed, Mar 24, 2010 at 12:43 AM, Travis Dolan <travis@mylasso.com> wrote:
>> I would like to know if it is possible to configure more than two resources
>> within a colocation group.
>>
>> Simply put, I have 10 virtual IPs that need to migrate from Node A to
>> Node B in the event of any failure. I also need all of these IPs to start on
>> the same node in the event that both nodes are knocked out and then come back
>> online (i.e. my cage goes dark).
>>
>> My existing config is below...
>>
>> node wsa \
>>     attributes standby="off"
>> node wsb \
>>     attributes standby="off"
>> primitive ip1 ocf:heartbeat:IPaddr2 \
>>     params ip="10.0.1.10" cidr_netmask="24" nic="eth0:1" \
>>     op monitor interval="30"
>> primitive ip2 ocf:heartbeat:IPaddr2 \
>>     params ip="10.0.1.11" cidr_netmask="24" nic="eth0:2" \
>>     op monitor interval="30"
>> primitive ip3 ocf:heartbeat:IPaddr2 \
>>     params ip="10.0.1.12" cidr_netmask="24" nic="eth0:3" \
>>     op monitor interval="30"
>> primitive ip4 ocf:heartbeat:IPaddr2 \
>>     params ip="10.0.1.13" cidr_netmask="24" nic="eth0:4" \
>>     op monitor interval="30"
>> colocation all-ips inf: ip1 ip2 ip3 ip4
>> property $id="cib-bootstrap-options" \
>>     dc-version="1.0.8-2a76c6ac04bcccf42b89a08e55bfbd90da2fb49a" \
>>     cluster-infrastructure="openais" \
>>     expected-quorum-votes="2" \
>>     stonith-enabled="false" \
>>     no-quorum-policy="ignore"
>> rsc_defaults $id="rsc-options" \
>>     resource-stickiness="100"
>>
>> If I stop corosync on either node, the IPs are migrated without
>> issue. The problem occurs when both nodes come up at, or close to, the same
>> time. An example of this would be...
>>
>> Node A
>> /etc/init.d/corosync stop
>>
>> Node B
>> /etc/init.d/corosync stop
>>
>> Node A
>> /etc/init.d/corosync start
>>
>> Node B
>> /etc/init.d/corosync start
>>
>> Result
>> ------
>>
>> Online: [ wsa wsb ]
>>
>> ip1   (ocf::heartbeat:IPaddr2):   Started wsa
>> ip2   (ocf::heartbeat:IPaddr2):   Started wsb
>> ip3   (ocf::heartbeat:IPaddr2):   Started wsa
>> ip4   (ocf::heartbeat:IPaddr2):   Started wsb
>>
>
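> As an aside: for the behaviour you describe (all IPs together, starting
> on one node), a group is the more usual construct than a colocation set.
> A minimal sketch, untested against your config (the name is arbitrary):
>
>   group ip-group ip1 ip2 ip3 ip4
>
> in place of the colocation constraint. Note that a group also implies
> ordering (ip1 before ip2, and so on), which should be harmless for
> plain IPaddr2 resources.
>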
> Either way, this looks like a bug somewhere.
> Could you create an hb_report archive covering the test scenario
> described above and attach it to a new Bugzilla entry, please?
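>
> Something like this should capture the window (the timestamps are
> placeholders; adjust them to bracket your actual test run):
>
>   hb_report -f "2010-03-24 00:40" -t "2010-03-24 01:00" /tmp/ip-colocation-bug
>
> and then attach the resulting tarball to the bug.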
>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker@oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker