BTW: the order of the resources in the colocation rule matters. When I configure:<br>colocation colo-master_worker -1: master worker<br>then "failback" is blocked by the stickiness. In my opinion this is a bug, but others may have an explanation.<br>
This is the default version that installs on FC12 via the GUI package-management tools.<br>Alan<br><br><div class="gmail_quote">On Tue, Mar 23, 2010 at 3:47 PM, Alan Jones <span dir="ltr"><<a href="mailto:falancluster@gmail.com">falancluster@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">The following rules give me the behavior I was looking for:<br><br>primitive master ocf:pacemaker:Dummy meta resource-stickiness="INFINITY" is-managed="true"<br>
location l-master_a master 1: fc12-a<br>location l-master_b master 1: fc12-b<br>
primitive worker ocf:pacemaker:Dummy<br>location l-worker_a worker 1: fc12-a<br>location l-worker_b worker 1: fc12-b<br>colocation colo-master_worker -1: worker master<br><br>To recap, the goal is an active-active two-node cluster where "master" is sticky, and "master" and "worker" anti-colocate when possible for performance.<br>
Note that I had to give each resource a point on each node to overcome the negative colocation score, so that both resources are allowed to run on one node after a failure.<br>If there is a more elegant solution, let me know.<br><font color="#888888">Alan</font><div>
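For anyone following along, here is my reading of the score arithmetic with the rules above (corrections welcome):<br>

```
# worker on the node already running master:
#   location score       +1
#   colocation penalty   -1
#   total                 0   (>= 0, so worker may share the node on failover)
# worker on the other node:
#   location score       +1
#   colocation penalty    0
#   total                +1   (preferred, so the resources split when both nodes are up)
```
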
<div></div><div class="h5"><br><br><div class="gmail_quote">
On Tue, Mar 23, 2010 at 8:24 AM, Andrew Beekhof <span dir="ltr"><<a href="mailto:andrew@beekhof.net" target="_blank">andrew@beekhof.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div>On Mon, Mar 22, 2010 at 9:18 PM, Alan Jones <<a href="mailto:falancluster@gmail.com" target="_blank">falancluster@gmail.com</a>> wrote:<br>
> Well, I guess my configuration is not as common.<br>
> In my case, one of these resources, say resource A, suffers greater<br>
> disruption if it is moved.<br>
> So, after a failover I would prefer that resource B move, reversing the node<br>
> placement.<br>
> Is this possible to express?<br>
<br>
</div>Make A stickier than B.<br>
<br>
Please google for the following keywords:<br>
site:<a href="http://clusterlabs.org" target="_blank">clusterlabs.org</a> resource-stickiness<br>
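As an untested sketch of that suggestion (resource names are placeholders, and the exact values only matter relative to each other): give A a higher stickiness than B, and after a failover the cluster should prefer to move B back rather than disturb A:<br>

```
primitive A ocf:pacemaker:Dummy meta resource-stickiness="200"
primitive B ocf:pacemaker:Dummy meta resource-stickiness="0"
```
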
<div><div></div><div><br>
> Alan<br>
><br>
> On Mon, Mar 22, 2010 at 11:10 AM, Dejan Muhamedagic <<a href="mailto:dejanmm@fastmail.fm" target="_blank">dejanmm@fastmail.fm</a>><br>
> wrote:<br>
>><br>
>> Hi,<br>
>><br>
>> On Mon, Mar 22, 2010 at 09:29:50AM -0700, Alan Jones wrote:<br>
>> > Friends,<br>
>> > I have what should be a simple goal. Two resources to run on two nodes.<br>
>> > I'd like to configure them to run on separate nodes when available, i.e.<br>
>> > active-active,<br>
>> > and provide for them to run together on either node when one fails, i.e.<br>
>> > failover.<br>
>> > Up until this point I have assumed that this would be a base use case<br>
>> > for<br>
>> > Pacemaker, however, it seems from the discussion on:<br>
>> > <a href="http://wiki.lustre.org/index.php/Using_Pacemaker_with_Lustre" target="_blank">http://wiki.lustre.org/index.php/Using_Pacemaker_with_Lustre</a><br>
>> > ... that it is not (see below). Any ideas?<br>
>><br>
>> Why not just two location constraints (aka node preferences):<br>
>><br>
>> location l1 rsc1 100: node1<br>
>> location l2 rsc2 100: node2<br>
>><br>
>> Thanks,<br>
>><br>
>> Dejan<br>
>><br>
>> > Alan<br>
>> ><br>
>> > *Note:* Use care when setting up your point system. You can use the<br>
>> > point system if your cluster has at least three nodes or if the resource<br>
>> > can acquire points from other constraints. However, in a system with<br>
>> > only two nodes and no way to acquire points, the constraint in the<br>
>> > example above will result in an inability to migrate a resource from a<br>
>> > failed node.<br>
>> ><br>
>> > The example they refer to is similar to yours:<br>
>> ><br>
>> > # crm configure colocation colresOST1resOST2 -100: resOST1 resOST2<br>
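>> ><br>
To spell out why that fails on two nodes, as I understand the note above: with no location scores to offset the colocation penalty, the surviving node ends up with a negative score for the second resource, and a negative score means the resource cannot run there:<br>

```
# node1 fails; only node2 remains, already running resOST1
#   resOST2 score on node2: 0 (no location points) - 100 (colocation) = -100
#   negative score => resOST2 cannot start on node2 => no failover
```
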
>><br>
>> > _______________________________________________<br>
>> > Pacemaker mailing list<br>
>> > <a href="mailto:Pacemaker@oss.clusterlabs.org" target="_blank">Pacemaker@oss.clusterlabs.org</a><br>
>> > <a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
>><br>
>><br>
><br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br>
</div></div></blockquote></div><br>