[ClusterLabs] SAN with drbd and pacemaker

Marco Marino marino.mrc at gmail.com
Thu Sep 17 03:44:13 EDT 2015


Hi, I have two Supermicro servers with LSI 2108 RAID controllers and many disks
(80 TB), and I'm trying to build a SAN with DRBD and Pacemaker. I'm still
studying, and I have no experience with large disk arrays under DRBD and
Pacemaker, so I have some questions:

I'm using MegaRAID Storage Manager to create virtual drives. Each virtual
drive appears as a block device on Linux (e.g. /dev/sdb, /dev/sdc, ...), so my
first question is: is it a good idea to cap each virtual drive at 8 TB? I'm
thinking of the array rebuild time in case of a disk failure (about one day
for 8 TB).

This is my cluster "infrastructure" (a rough configuration sketch follows the list):
1) Linux device (e.g. /dev/sdb), created as a virtual drive in MegaRAID
Storage Manager
2) ext4 partition (so /dev/sdb1, using all of the available 8 TB) --
Question: is ext4 a good idea here? Is a filesystem mandatory at this level?
3) DRBD resource (rX, disk = /dev/sdb)
4) Physical Volume with /dev/drbdX
5) Volume Group vgiscsiXX
6) Logical Volume /dev/vgiscsiXX/lunY
7) Pacemaker resource DRBD (Master/slave)
8) Pacemaker resource LVM (ocf:heartbeat:LVM)
9) Pacemaker resource ISCSITarget
10) Pacemaker resource ISCSILogicalUnit
11) Pacemaker resource VIP
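
To make the layers concrete, here is a rough sketch of what I have in mind.
All hostnames, IPs, IQNs and resource names below are placeholders (not my
real values), and I'm using pcs-style commands only as an example:

    # DRBD resource (item 3), backed by one MegaRAID virtual drive
    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb;
        meta-disk internal;
        on san1 { address 10.0.0.1:7788; }
        on san2 { address 10.0.0.2:7788; }
    }

    # Pacemaker resources (items 7-11)
    pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 op monitor interval=30s
    pcs resource master ms_drbd_r0 drbd_r0 master-max=1 master-node-max=1 \
        clone-max=2 clone-node-max=1 notify=true
    pcs resource create lvm_vgiscsi01 ocf:heartbeat:LVM volgrpname=vgiscsi01
    pcs resource create tgt_san ocf:heartbeat:iSCSITarget iqn=iqn.2015-09.local.san:storage
    pcs resource create lun1 ocf:heartbeat:iSCSILogicalUnit \
        target_iqn=iqn.2015-09.local.san:storage lun=1 path=/dev/vgiscsi01/lun1
    pcs resource create vip ocf:heartbeat:IPaddr2 ip=10.0.0.100 cidr_netmask=24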

Questions:
Is it a good idea to create multiple DRBD resources, or should I create only
one resource and add all the "megaraid devices" to it from the start? At this
link there is an image of the cluster in LCMC -> http://pasteboard.co/HAinx8g.png .
How should I manage multiple DRBD resources in Pacemaker? In particular, I
don't know if my colocation and ordering constraints with respect to LVM are
correct (a sketch of what I have now is below). Please give me some advice.
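
This is roughly what I have now (pcs syntax, same placeholder resource names
as in the sketch above, just to illustrate the order I think is correct):

    # LVM only on the node where DRBD is Master, and only after promotion
    pcs constraint colocation add lvm_vgiscsi01 with master ms_drbd_r0 INFINITY
    pcs constraint order promote ms_drbd_r0 then start lvm_vgiscsi01
    # iSCSI target and LUN on top of the activated volume group, then the VIP
    pcs constraint colocation add tgt_san with lvm_vgiscsi01 INFINITY
    pcs constraint order lvm_vgiscsi01 then tgt_san
    pcs constraint colocation add lun1 with tgt_san INFINITY
    pcs constraint order tgt_san then lun1
    pcs constraint colocation add vip with lun1 INFINITY
    pcs constraint order lun1 then vip
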
Is it a good idea to create one volume group for all LUNs?
Following this example, if I need to resize a LUN, I can add a DRBD
resource, create a PV on it, vgextend the volume group, and then resize the
logical volume associated with the LUN, as sketched below.
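
For example (device, VG and LV names are placeholders), assuming the new DRBD
resource shows up as /dev/drbd1:

    pvcreate /dev/drbd1                    # new PV on the new DRBD device
    vgextend vgiscsi01 /dev/drbd1          # grow the volume group
    lvextend -L +2T /dev/vgiscsi01/lun1    # grow the LV behind the LUN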

I'm sorry if this is off-topic or if I wrote anything stupid...
Thank you,
MM