[Pacemaker] order constraint based on any one of many

Patrick Irvine pirv at cybersites.ca
Tue Aug 24 02:03:58 UTC 2010


Hi Vishal & list,

Thanks for the info.  Unfortunately that won't do, since this clone 
(glfs) is the actual mount of the users' home directories and needs to 
be mounted whether or not the local glfsd (server) is running.  I do 
think I have a solution, though it's somewhat of a hack.

If I turn my glfsd-x (servers) into a single master with multiple 
slaves (a master/slave cloned resource), then I could order the glfs 
(client) clone to start after the master is promoted

i.e.

order glfs-after-glfsd-ORDER    inf: clone-glfsd:promote clone-glfs:start

This would achieve what I want, I think.
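
A rough sketch of what that master/slave declaration might look like 
in crm syntax (this assumes the four glfsd-x primitives were collapsed 
into a single glfsd primitive for the ms statement to clone, and that 
the glusterfsd agent supported promote/demote; the names and meta 
values here are illustrative only):

# hypothetical master/slave wrapper around a single glfsd primitive
ms clone-glfsd glfsd \
      meta master-max="1" master-node-max="1" \
      clone-max="4" clone-node-max="1" notify="true"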

I would of course have to ensure that even if only one of the glfsd-x 
servers is running, it would be promoted to master.
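
A role-based location rule might do for biasing the promotion (the 
constraint name and score below are made up; the resource agent would 
also need to call crm_master on every active node, so that any 
surviving node stays promotable when the others are down):

# hypothetical: mildly prefer test1 as master, but allow any node
location prefer-glfsd-master clone-glfsd \
      rule $role="Master" 100: #uname eq test1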

Any comments?  (And was I understandable?)

Pat.

On 23/08/2010 12:49 AM, Vishal wrote:
> From what I have read and understood, a clone will run simultaneously 
> on all the nodes specified in the config.  I do not see how a clone 
> can be made to run once a resource group is started.  You could 
> alternatively add the clone to each group and ensure that when either 
> group starts, the clone runs with it.
>
> On Aug 23, 2010, at 11:50 AM, Patrick Irvine <pirv at cybersites.ca> wrote:
>
>> Hi,
>>
>> I am setting up a Pacemaker/Corosync/GlusterFS HA cluster.
>>
>> Pacemaker ver. 1.0.9.1
>>
>> With GlusterFS I have 4 nodes serving replicated (RAID1) storage 
>> back-ends and up to 5 servers mounting the store.  Without getting 
>> into the specifics of how Gluster works: simply put, as long as any 
>> one of the 4 back-end nodes is running, all 5 servers will be able 
>> to mount the store.
>>
>> I have started setting up a test cluster and have the following 
>> (crm configure show output):
>>
>> node test1
>> node test2
>> node test3
>> node test4
>> primitive glfs ocf:cybersites:glusterfs \
>>       params volfile="repstore.vol" mount_dir="/home" \
>>       op monitor interval="10s" timeout="30"
>> primitive glfsd-1 ocf:cybersites:glusterfsd \
>>       params volfile="glfs.vol" \
>>       op monitor interval="10s" timeout="30" \
>>       meta target-role="Started"
>> primitive glfsd-1-IP ocf:heartbeat:IPaddr2 \
>>       params ip="192.168.5.221" nic="eth1" cidr_netmask="24" \
>>       op monitor interval="5s"
>> primitive glfsd-2 ocf:cybersites:glusterfsd \
>>       params volfile="glfs.vol" \
>>       op monitor interval="10s" timeout="30" \
>>       meta target-role="Started"
>> primitive glfsd-2-IP ocf:heartbeat:IPaddr2 \
>>       params ip="192.168.5.222" nic="eth1" cidr_netmask="24" \
>>       op monitor interval="5s" \
>>       meta target-role="Started"
>> primitive glfsd-3 ocf:cybersites:glusterfsd \
>>       params volfile="glfs.vol" \
>>       op monitor interval="10s" timeout="30" \
>>       meta target-role="Started"
>> primitive glfsd-3-IP ocf:heartbeat:IPaddr2 \
>>       params ip="192.168.5.223" nic="eth1" cidr_netmask="24" \
>>       op monitor interval="5s"
>> primitive glfsd-4 ocf:cybersites:glusterfsd \
>>       params volfile="glfs.vol" \
>>       op monitor interval="10s" timeout="30" \
>>       meta target-role="Started"
>> primitive glfsd-4-IP ocf:heartbeat:IPaddr2 \
>>       params ip="192.168.5.224" nic="eth1" cidr_netmask="24" \
>>       op monitor interval="5s"
>> group glfsd-1-GROUP glfsd-1-IP glfsd-1
>> group glfsd-2-GROUP glfsd-2-IP glfsd-2
>> group glfsd-3-GROUP glfsd-3-IP glfsd-3
>> group glfsd-4-GROUP glfsd-4-IP glfsd-4
>> clone clone-glfs glfs \
>>       meta clone-max="4" clone-node-max="1" target-role="Started"
>> location block-glfsd-1-GROUP-test2 glfsd-1-GROUP -inf: test2
>> location block-glfsd-1-GROUP-test3 glfsd-1-GROUP -inf: test3
>> location block-glfsd-1-GROUP-test4 glfsd-1-GROUP -inf: test4
>> location block-glfsd-2-GROUP-test1 glfsd-2-GROUP -inf: test1
>> location block-glfsd-2-GROUP-test3 glfsd-2-GROUP -inf: test3
>> location block-glfsd-2-GROUP-test4 glfsd-2-GROUP -inf: test4
>> location block-glfsd-3-GROUP-test1 glfsd-3-GROUP -inf: test1
>> location block-glfsd-3-GROUP-test2 glfsd-3-GROUP -inf: test2
>> location block-glfsd-3-GROUP-test4 glfsd-3-GROUP -inf: test4
>> location block-glfsd-4-GROUP-test1 glfsd-4-GROUP -inf: test1
>> location block-glfsd-4-GROUP-test2 glfsd-4-GROUP -inf: test2
>> location block-glfsd-4-GROUP-test3 glfsd-4-GROUP -inf: test3
>>
>>
>> Now I need a way of saying that clone-glfs can start once any one of 
>> glfsd-1, glfsd-2, glfsd-3, or glfsd-4 has started.
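>>
>> To illustrate: a mandatory order against a single back-end, for 
>> example (the constraint name here is made up)
>>
>> order glfs-after-glfsd-1    inf: glfsd-1-GROUP clone-glfs
>>
>> would keep clone-glfs stopped whenever that one particular server 
>> is down, which is exactly what I want to avoid.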
>>
>> Any ideas?  I have read the crm CLI document, as well as many 
>> iterations of Clusters from Scratch, etc.
>>
>> I just can't seem to find an answer.  Can it be done?
>>
>> Pat.
>>
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: 
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
>
