[Pacemaker] Multiple DRBD resources how best to configure resources on top.

Frank Lazzarini flazzarini at gmail.com
Wed Oct 5 06:05:30 EDT 2011


Hi there,

I was actually in the same situation, and this thread helped me a lot with 
the colocation of master/slave resources (collocating one ms resource's 
Master role with the other's). Anyway, I wanted to share my configuration 
with all of you out there who are trying to build a failover system with 
two DRBD resources that should always stay on the same host. The one part 
of my config that doesn't really work well is the order in which the ms 
resources should start; maybe someone has an idea how this could be done 
better. ms_drbd01 should start before ms_drbd02.

Don't be confused by the hostnames; I played around a bit with 
XtreemFS on these hosts before :)


node $id="acf811bd-ccde-43c1-a6d5-99da002203ff" xtreemfs02 \
         attributes standby="off"
node $id="caf5e799-4e95-4878-80a9-838705318bf9" xtreemfs01 \
         attributes standby="off"

primitive p_drbd01 ocf:linbit:drbd \
         params drbd_resource="r0" \
         op start interval="0" timeout="240" \
         op stop interval="0" timeout="240" \
         op monitor interval="20" role="Master" \
         op monitor interval="30" role="Slave"

primitive p_drbd02 ocf:linbit:drbd \
         params drbd_resource="r1" \
         op start interval="0" timeout="240" \
         op stop interval="0" timeout="240" \
         op monitor interval="20" role="Master" \
         op monitor interval="30" role="Slave"

primitive p_fs01 ocf:heartbeat:Filesystem \
         params device="/dev/drbd0" fstype="ext2" directory="/data1" \
         op start interval="0" timeout="60" \
         op stop interval="0" timeout="60" \
         op monitor interval="10" timeout="40"

primitive p_fs02 ocf:heartbeat:Filesystem \
         params device="/dev/drbd1" fstype="ext2" directory="/data2" \
         op start interval="0" timeout="60" \
         op stop interval="0" timeout="60" \
         op monitor interval="10" timeout="40"

primitive p_ip ocf:heartbeat:IPaddr2 \
         params ip="10.90.91.60" nic="eth0" cidr_netmask="24" \
         broadcast="10.90.91.255" \
         op start interval="0" timeout="30" \
         op stop interval="0" timeout="40" \
         op monitor interval="10" timeout="20"

group cluster p_ip p_fs01 p_fs02 \
         meta target-role="Started"

ms ms_drbd01 p_drbd01 \
         meta notify="true"
ms ms_drbd02 p_drbd02 \
         meta notify="true"

colocation cluster_with_drbdmaster inf: cluster ms_drbd01:Master
colocation drbd01_with_drbd02 inf: ms_drbd01:Master ms_drbd02:Master
order drbd01_before_drbd02 inf: ms_drbd02:promote ms_drbd01:start
order drbd_before_cluster inf: ms_drbd01:promote cluster:start
order fs01_before_fs02 inf: p_fs01:start p_fs02:start
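
A note on the order constraints above: as written, drbd01_before_drbd02 
actually promotes ms_drbd02 before starting ms_drbd01, which is the 
opposite of its name and of what I want (ms_drbd01 first). And since 
p_fs02 mounts /dev/drbd1, the group probably also needs to wait for 
ms_drbd02's promote, not only ms_drbd01's. I suspect something like the 
following is what's needed (just a sketch, untested):

order drbd01_before_drbd02 inf: ms_drbd01:start ms_drbd02:start
order drbd02_before_cluster inf: ms_drbd02:promote cluster:start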

property $id="cib-bootstrap-options" \
         dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
         cluster-infrastructure="Heartbeat" \
         no-quorum-policy="ignore" \
         stonith-enabled="false" \
         last-lrm-refresh="1317807669"


Hope this helps someone out there.


On 02/23/2011 03:10 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Mon, Feb 21, 2011 at 05:44:34PM +0000, Brett Delle Grazie wrote:
>> Hi,
>>
>> If I have two DRBD resources:
>> resourceA
>> resourceB
>>
>> What's the best way to configure the constraints in Pacemaker so that
>> a tomcat resource, which serves both of them once they are mounted,
>> will not be started unless both are present? My current
>> configuration (below) fails to migrate correctly when drbd_B fails due
>> to a network glitch: tomcat fails to stop, the demote on drbd_B then
>> fails because the directory is still mounted, and I end up with a
>> split-brain scenario. Basically I want to force drbd_A and drbd_B to
>> be master on the same system.
>>
>> Currently I have:
>> node nodeA
>> node nodeB
>>
>> primitive drbd_A ocf:linbit:drbd \
>>          params drbd_resource="A" \
>>          op start interval="0" timeout="240" \
>>          op stop interval="0" timeout="100" \
>>          op monitor interval="30"
>> primitive drbd_B ocf:linbit:drbd \
>>          params drbd_resource="B" \
>>          op start interval="0" timeout="240" \
>>          op stop interval="0" timeout="100" \
>>          op monitor interval="30"
>>
>> primitive fs_A ocf:heartbeat:Filesystem \
>>          params device="/dev/drbd/by-res/A" directory="/mnt/A" \
>>          fstype="ext3" options="defaults,noatime" \
>>          op start interval="0" timeout="60" \
>>          op stop interval="0" timeout="60" \
>>          op monitor interval="60" timeout="40" depth="0"
>> primitive fs_B ocf:heartbeat:Filesystem \
>>          params device="/dev/drbd/by-res/B" directory="/mnt/B" \
>>          fstype="ext3" options="defaults,noatime" \
>>          op start interval="0" timeout="60" \
>>          op stop interval="0" timeout="60" \
>>          op monitor interval="60" timeout="40" depth="0"
>>
>> primitive tomcat_tc1 ocf:heartbeat:tomcat \
>>          params tomcat_user="tomcat" catalina_home="/opt/tomcat6" \
>>          catalina_base="/home/tomcat/tc1" \
>>          catalina_pid="/home/tomcat/tc1/temp/tomcat.pid" \
>>          catalina_rotate_log="NO" script_log="/home/tomcat/tc1/logs/tc1.log" \
>>          statusurl="http://localhost/manager/serverinfo" \
>>          java_home="/usr/lib/jvm/java" \
>>          op start interval="0" timeout="60" \
>>          op stop interval="0" timeout="20" \
>>          op monitor interval="60" timeout="30" start-delay="60"
>> primitive vip_1 ocf:heartbeat:IPaddr2 \
>>          params ip="10.xx.xx.xx" nic="bond0" iflabel="3" \
>>          op monitor interval="10" timeout="20"
>> group grp_1 fs_A fs_B vip_1 tomcat_tc1
>>
>> ms ms_drbd_A drbd_A \
>>          meta master-max="1" master-node-max="1" clone-max="2" \
>>          clone-node-max="1" notify="true"
>> ms ms_drbd_B drbd_B \
>>          meta master-max="1" master-node-max="1" clone-max="2" \
>>          clone-node-max="1" notify="true"
>>
>> location loc_ms_drbd_A ms_drbd_A 100: nodeA
>> location loc_ms_drbd_B ms_drbd_B 100: nodeA
> You'd want to add $role=Master in these two. You could also
> collocate them:
>
> colocation two_drbd inf: ms_drbd_A:Master ms_drbd_B:Master
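>
> For example (just a sketch using your node names; untested):
>
> location loc_ms_drbd_A ms_drbd_A rule $role="Master" 100: #uname eq nodeA
> location loc_ms_drbd_B ms_drbd_B rule $role="Master" 100: #uname eq nodeA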
>
>> colocation A_on_drbdA inf: grp_1 ms_drbd_A:Master
>> colocation B_on_drbdB inf: grp_1 ms_drbd_B:Master
>>
>> order A_after_drbdA inf: ms_drbd_A:promote grp_1:start
> Is the order for _B missing?
>
>> Is there a better way to configure this?
> Otherwise, it seems fine to me.
>
> One alternative is to put the two drbd resources in a group and then
> make an m/s resource of that group:
>
> group drbd_AB drbd_A drbd_B
> ms ms_drbd drbd_AB
> colocation tomcat_on_drbd inf: grp_1 ms_drbd:Master
> order tomcat_after_drbd inf: ms_drbd:promote grp_1
>
> But in that case the two DRBD resources will start sequentially.
>
> Thanks,
>
> Dejan
>
>> Thanks for any help  /pointers.
>>
>> -- 
>> Best Regards,
>>
>> Brett Delle Grazie
>>
>> _______________________________________________
>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker


-- 

Best Regards,
Frank Lazzarini

<http://www.gefoo.org/>