<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Hi,<br>
<br>
Vadym Chepkov wrote:
<blockquote cite="mid:EFC137ED-DA86-49BA-BD0B-021A9668584C@gmail.com"
type="cite">
<pre wrap="">On Oct 28, 2010, at 2:53 AM, Dan Frincu wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hi,
Andreas Ntaflos wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Hi,
first time poster, short time Pacemaker user. I don't think this is a
very difficult question to answer but I seem to be feeding Google the
wrong search terms. I am using Pacemaker 1.0.8 and Corosync 1.2.0 on
Ubuntu 10.04.1 Server.
Short version: How do I configure multiple independent two-node clusters
where the nodes are all on the same subnet? Only the two nodes that form
the cluster should see that cluster's resources and not any other.
Is this possible? Where should I look for more and detailed information?
</pre>
</blockquote>
<pre wrap="">You need to specify different multicast sockets for this to work. Under the /etc/corosync/corosync.conf you have the interface statements. Even if all servers are in the same subnet, you can "split them apart" by defining unique multicast sockets.
An example should be useful. Let's say that you have only one interface statement in the corosync file.
interface {
ringnumber: 0
bindnetaddr: 192.168.1.0
mcastaddr: 239.192.168.1
mcastport: 5405
}
The multicast socket in this case is 239.192.168.1:5405. All nodes that should be in the same cluster should use the same multicast socket. In your case, the first two nodes should use the same multicast socket. How about the other two nodes? Use another unique multicast socket.
interface {
ringnumber: 0
bindnetaddr: 192.168.1.0
mcastaddr: 239.192.168.112
mcastport: 5405
}
Now the multicast socket is 239.192.168.112:5405. It's unique, the network address is the same, but you add this config (edit according to your environment, this is just an example) to your other two nodes. So you have cluster1 formed out of node1 and node2 linked to 239.192.168.1:5405 and cluster2 formed out of node3 and node4 linked to 239.192.168.112:5405.
This way, the clusters don't _see_ each other, so you can reuse the resource ID's and see only two nodes per cluster.
</pre>
</blockquote>
<pre wrap="">
Out of curiosity, RFC2365 defines "local scope" multicast space 239.255.0.0/16 and "organizational local scope" 239.192.0.0/14.
It seems most examples for Pacemaker clusters use the latter. But since most clusters are not spread across different subnets, wouldn't it be more appropriate to use the former?
Thanks,
Vadym
</pre>
</blockquote>
You do realize that 239.0.0.0/8 serves the same general purpose as
RFC1918, only for multicast addresses instead of unicast. General
guidelines do suggest using the locally scoped 239.255.0.0/16 range
when all nodes are in the same general location, such as a building,
but that is much like saying use 192.168.0.0/16 instead of
10.0.0.0/8. It really boils down to the network engineer's choice of
addressing; either solution works. An elaborate multicast addressing
scheme of this kind really only pays off with a large number of nodes
in many locations, all under the same general administration. <br>
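<br>
For instance, if you prefer the locally scoped range, the same kind of interface statement works unchanged with a 239.255.0.0/16 address (the address below is just an illustration, adjust it to your environment):<br>

```
interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.255.1.1
        mcastport: 5405
}
```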
<br>
Considering that each two-node cluster with one communication channel
needs just one multicast address, and that you can put many nodes in
the same cluster (where the need arises), the number of multicast
addresses in use is usually small. It therefore makes little difference
whether you choose from a 2^16 range (239.255.0.0/16) or a 2^18 range
(239.192.0.0/14).<br>
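<br>
To put numbers on it, a quick sketch using Python's ipaddress module (this just illustrates the sizes of the two RFC2365 ranges, nothing corosync-specific):<br>

```python
import ipaddress

# RFC2365 "local scope" multicast range
local_scope = ipaddress.ip_network("239.255.0.0/16")
# RFC2365 "organization local scope" multicast range
org_scope = ipaddress.ip_network("239.192.0.0/14")

print(local_scope.num_addresses)  # 65536 addresses (2^16)
print(org_scope.num_addresses)    # 262144 addresses (2^18)
```

Either way, both ranges dwarf the handful of addresses a few clusters actually consume.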
<br>
Taking this a step further, imagine you're using VLANs for each
cluster; all of a sudden you can reuse the same multicast address :)<br>
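<br>
A sketch of what that could look like (the VLANs and subnets are made up for the example): cluster1 and cluster2 keep the exact same multicast socket, because the VLANs isolate the traffic anyway:<br>

```
# cluster1 nodes (VLAN 10, subnet 192.168.10.0/24)
interface {
        ringnumber: 0
        bindnetaddr: 192.168.10.0
        mcastaddr: 239.192.168.1
        mcastport: 5405
}

# cluster2 nodes (VLAN 20, subnet 192.168.20.0/24)
interface {
        ringnumber: 0
        bindnetaddr: 192.168.20.0
        mcastaddr: 239.192.168.1
        mcastport: 5405
}
```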
<br>
The main concern in this case should pertain less to the addressing
scheme and more to the interconnecting devices' support for multicast.<br>
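<br>
One way to check that part before blaming corosync is omping (hostnames below are placeholders), and once corosync is running, corosync-cfgtool; this is just a sketch of the commands, run against your own nodes:<br>

```
# On each node, verify multicast actually flows on the cluster's socket:
omping -m 239.192.168.1 -p 5405 node1 node2

# With corosync up, confirm the ring reports no faults:
corosync-cfgtool -s
```

If omping only shows unicast replies, look at IGMP snooping / querier settings on the switches.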
<br>
Just my 2 cents.<br>
<br>
Regards,<br>
<br>
Dan<br>
<blockquote cite="mid:EFC137ED-DA86-49BA-BD0B-021A9668584C@gmail.com"
type="cite">
<pre wrap="">
_______________________________________________
Pacemaker mailing list: <a class="moz-txt-link-abbreviated" href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a>
<a class="moz-txt-link-freetext" href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a>
Project Home: <a class="moz-txt-link-freetext" href="http://www.clusterlabs.org">http://www.clusterlabs.org</a>
Getting started: <a class="moz-txt-link-freetext" href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a>
Bugs: <a class="moz-txt-link-freetext" href="http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker">http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker</a>
</pre>
</blockquote>
<br>
<pre class="moz-signature" cols="72">--
Dan FRINCU
Systems Engineer
CCNA, RHCE
Streamwide Romania
</pre>
</body>
</html>