[Pacemaker] ManageRAID Constraints

Dejan Muhamedagic dejanmm at fastmail.fm
Wed Nov 24 08:34:13 EST 2010


Hi,

On Mon, Nov 22, 2010 at 12:25:51PM +0100, eugenio wrote:
> 
> Hi, 
>   I'm trying to figure out how to configure constraints for ocf:heartbeat:ManageRAID (RAID5) with iSCSI devices. 
> 
> This is a visual representation of my resources: 
> 
> A: Initiator_7 |
> B: Initiator_8 | --> D: ManageRAID_MD2 --> E: other stuff relying on the mounted FS
> C: Initiator_9 |
> 
> These are the definitions: 
> primitive Initiator_7 ocf:heartbeat:iscsi \
>         params portal="172.20.4.12" target="iqn.2010-10.com.example:storage.drbd.iscsi040501" \
>         op monitor interval="120" timeout="30" depth="0" \
> primitive Initiator_8 ocf:heartbeat:iscsi \
>         params portal="172.20.5.12" target="iqn.2010-10.com.example:storage.drbd.iscsi060701" \
>         op monitor interval="120" timeout="30" depth="0" \
> primitive Initiator_9 ocf:heartbeat:iscsi \
>         params portal="172.20.6.12" target="iqn.2010-10.com.example:storage.drbd.iscsi080901" \
>         op monitor interval="120" timeout="30" depth="0"
> primitive ManageRAID_MD2 ocf:heartbeat:ManageRAID \
>         params raidname="MD2" \
>         op monitor interval="10" timeout="0" depth="0"
> 
> And here my /etc/HB-ManageRAID
> MD2_UUID="dec8980a:3a7eba1a:d36d14f2:9c1f2fea"
> MD2_DEV="md2"
> MD2_MOUNTPOINT="/mnt/md2"
> MD2_MOUNTOPTIONS="noatime"
> MD2_LOCALDISKS="/dev/disk/iscsi/d1e1e429899fbd06796f8c9d-part1 /dev/disk/iscsi/38275716e86870dd6c85286a-part1 /dev/disk/iscsi/e101097db6842e0814b1e537-part1"
> 
> And the constraints: 
>       <rsc_colocation id="RAID_MD2" score="INFINITY">
>         <resource_set id="RAID_MD2-0" sequential="false">
>           <resource_ref id="Initiator_7"/>
>           <resource_ref id="Initiator_8"/>
>           <resource_ref id="Initiator_9"/>
>         </resource_set>
>         <resource_set id="RAID_MD2-1" sequential="true">
>           <resource_ref id="ManageRAID_MD2"/>
>         </resource_set>
>       </rsc_colocation>
> 
> When I manually start the Initiators on the same node and then
> ManageRAID, everything works fine as soon as one of the targets
> goes offline.

You mean until one goes offline?

> So, here my problems: 
> 1) how can I define the constraints so that the resources are automatically started in the correct order (group is not a good idea... see 2) )?
> 2) how can I define the constraints so that when one initiator
> goes offline ManageRAID stays active (and working)?

I think that right now it is impossible to express such a
configuration, i.e. that one resource depends on any two out of
three other resources.
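
For question 1, though, the start order on its own can be expressed
with an ordering constraint that mirrors your colocation sets. A
minimal sketch (the constraint IDs are just placeholders):

      <rsc_order id="order_RAID_MD2" score="INFINITY">
        <resource_set id="order_RAID_MD2-0" sequential="false">
          <resource_ref id="Initiator_7"/>
          <resource_ref id="Initiator_8"/>
          <resource_ref id="Initiator_9"/>
        </resource_set>
        <resource_set id="order_RAID_MD2-1" sequential="true">
          <resource_ref id="ManageRAID_MD2"/>
        </resource_set>
      </rsc_order>

In the crm shell that would be roughly:

# crm configure order order_RAID_MD2 inf: ( Initiator_7 Initiator_8 Initiator_9 ) ManageRAID_MD2

As far as I can tell this is still an all-three dependency: with a
mandatory order (and the INFINITY colocation) the loss of a single
initiator will still take ManageRAID_MD2 down with it, so it does
not answer question 2.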

> 3) is it possible to start ManageRAID with just 2 Initiators online?

I guess so. Isn't that a property of RAID5? Or do you mean
something else?
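
Outside of the cluster you can at least verify that the array
assembles degraded with only two members, e.g. with two of the
component devices from your HB-ManageRAID file (--run forces a
start with missing members):

# mdadm --assemble --run /dev/md2 \
    /dev/disk/iscsi/d1e1e429899fbd06796f8c9d-part1 \
    /dev/disk/iscsi/38275716e86870dd6c85286a-part1
# cat /proc/mdstat

Whether ocf:heartbeat:ManageRAID is then willing to start a
degraded array is a separate question; check the agent's start
action.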

> 4) finally, how can I recover from an Initiator failure when it
> comes again online?

If I understood correctly, you'd have to clean up the resource:

# crm resource cleanup rscid
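
For example, once Initiator_7's portal is reachable again (the node
name here is only an example):

# crm resource cleanup Initiator_7 node1
# crm_mon -1

The cleanup clears the failcount and the failed operation from the
status section, after which the cluster should start the initiator
again.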

Thanks,

Dejan

> Thank you.
> 
> Bye
> 
> Eugene
