<div dir="ltr"><div><div><div>"With 20 disks of 4TB you have a total capacity of 80TB. If you run all of<br>
them as RAID6 then you have a total of 72TB."<br><br></div>And that's exactly my point! I'm trying to understand whether I can create more RAID6 arrays and how my controller handles disk failures in that case. First, I think we need to clarify the terminology used by MegaRAID Storage Manager, so I'm attaching two screenshots (physical drives -> <a href="http://pasteboard.co/NC3O60x.png">http://pasteboard.co/NC3O60x.png</a> and logical drives -> <a href="http://pasteboard.co/NC8DLcM.png">http://pasteboard.co/NC8DLcM.png</a>)<br></div>So, reading this -> <a href="http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/3rd-party/lsi/mrsas/userguide/LSI_MR_SAS_SW_UG.pdf">http://www.cisco.com/c/dam/en/us/td/docs/unified_computing/ucs/3rd-party/lsi/mrsas/userguide/LSI_MR_SAS_SW_UG.pdf</a> (page 41), I believe a RAID array (RAID6 in my case) corresponds to a drive group and a volume corresponds to a Virtual Drive. If that's right, I need to find out how many RAID arrays my controller supports. At the moment I have 20 disks, but I can add more. However, my goal is to reduce the array rebuild time, so I think creating one virtual drive per drive group is the right approach. Please give me some advice.<br></div>Thanks<br><div><br><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-18 13:02 GMT+02:00 Kai Dupke <span dir="ltr"><<a href="mailto:kdupke@suse.com" target="_blank">kdupke@suse.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 09/18/2015 09:28 AM, Marco Marino wrote:<br>
> Can you explain me this? 16 volumes?<br>
<br>
<br>
</span>With 20 disks of 4TB you have a total capacity of 80TB. If you run all of<br>
them as RAID6 then you have a total of 72TB.<br>
<br>
If you ask your controller to create an 8TB volume, this volume is spread<br>
across all 20 disks. As 2 stripes are used for parity, you have<br>
20-2=18 data stripes per volume. This makes each disk's stripe 444G big,<br>
leaving about 3500G free on each disk for other volumes.<br>
<br>
If you fill up the remaining 3500G with volumes the same way, you get 8<br>
additional volumes (OK, the last volume is smaller than 8TB then).<br>
<br>
In total you then have 9 volumes, and each disk holds data/parity for all of<br>
these volumes.<br>
<br>
9x8=72, voila!<br>
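The arithmetic above can be replayed in a short Python sketch (all numbers come from the thread: 20 disks of 4TB, RAID6, 8TB volumes):

```python
# RAID6 capacity arithmetic for 20 x 4TB disks carved into 8TB volumes.
DISKS = 20
DISK_GB = 4000
PARITY = 2          # RAID6 sacrifices two disks' worth of capacity
VOLUME_GB = 8000

usable_gb = (DISKS - PARITY) * DISK_GB    # 72 TB usable out of 80 TB raw
slice_gb = VOLUME_GB / (DISKS - PARITY)   # ~444 GB of each disk per 8TB volume
n_volumes = usable_gb // VOLUME_GB        # 9 volumes of 8 TB

print(usable_gb, round(slice_gb), n_volumes)  # 72000 444 9
```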
<br>
If a disk error appears and the controller marks the disk as dead, then all 9<br>
volumes are affected.<br>
<br>
With 20 6TB or 8TB drives, you just get more 8TB volumes this way.<br>
<br>
What would of course reduce the risk is to always use fewer than 20 disks in one<br>
RAID6 volume, so that not every disk serves all volumes.<br>
<br>
Another issue is performance: not every RAID controller performs<br>
best with 20 drives. Adaptec recommends an odd number of drives, with 7<br>
or 9 drives performing best, AFAIK.<br>
<br>
So you could make volume 1 on disks 1-9, volume 2 on disks 2-10, volume 3<br>
on disks 3-11, and so on.<br>
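The staggered layout described here (volume 1 on disks 1-9, volume 2 on disks 2-10, ...) can be sketched in Python to see how a single disk failure now degrades only a subset of volumes, not all of them (an illustration of the idea, not controller syntax):

```python
# Staggered 9-disk RAID6 groups over 20 disks: volume v uses the 9
# consecutive disks starting at disk v (volume 1 -> disks 1-9, etc.).
GROUP = 9
DISKS = 20

volumes = {v: set(range(v, v + GROUP)) for v in range(1, DISKS - GROUP + 2)}

def affected_volumes(dead_disk):
    """Volumes that must rebuild when one disk dies."""
    return sorted(v for v, members in volumes.items() if dead_disk in members)

# A failure of disk 5 degrades only volumes 1-5, unlike a single
# 20-disk RAID6 group where every volume would be affected.
print(affected_volumes(5))
```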
<br>
Or consider using some combination of RAID6 and RAID1, but this gives<br>
you far less available disk space (and no, I don't have a calculation handy for<br>
the failure probability of RAID6 vs. RAID15 vs. RAID16).<br>
<br>
greetings kai<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
><br>
> Thank you<br>
><br>
><br>
><br>
> 2015-09-17 15:54 GMT+02:00 Kai Dupke <<a href="mailto:kdupke@suse.com">kdupke@suse.com</a>>:<br>
><br>
>> On 09/17/2015 09:44 AM, Marco Marino wrote:<br>
>>> Hi, I have 2 Supermicro servers with LSI 2108 controllers and many disks (80TB) and I'm<br>
>>> trying to build a SAN with DRBD and Pacemaker. I'm still studying, but I have no<br>
>>> experience with large disk arrays combined with DRBD and Pacemaker, so I have some<br>
>>> questions:<br>
>>><br>
>>> I'm using MegaRAID Storage Manager to create virtual drives. Each virtual<br>
>>> drive is a device on Linux (e.g. /dev/sdb, /dev/sdc, ...), so my first<br>
>>> question is: is it a good idea to create virtual drives of 8 TB (max)? I'm<br>
>>> thinking about array rebuild time in case of disk failure (about 1 day for 8<br>
>><br>
>> It depends on your disks and RAID level. If one disk fails, its content<br>
>> has to be recreated either by copying (all RAID levels that include<br>
>> some RAID 1) or by recalculating (all levels without RAID 1); in<br>
>> the latter case all disks get heavily stressed.<br>
>><br>
>> If you run 20x4TB disks as RAID6, then an 8TB volume is only ~500G per<br>
>> disk. However, if one disk fails, then all the other 15 volumes this<br>
>> disk handles are broken, too. (BTW, most RAID controllers can handle<br>
>> multiple stripes per disk, but usually only a handful.) In that case the<br>
>> complete 4TB of the broken disk has to be recovered, affecting all 16<br>
>> volumes.<br>
>><br>
>> On the other hand, if you use 4x5x4TB as 4x 12TB RAID6, a broken disk<br>
>> only affects one of the 4 volumes - but at the cost of more disks going to parity.<br>
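The trade-off in the paragraph above can be checked with the same arithmetic (sizes taken from the thread; a sketch, not a recommendation):

```python
# One 20-disk RAID6 group vs. four independent 5-disk RAID6 groups (4TB disks).
DISK_TB = 4

one_big_group = (20 - 2) * DISK_TB         # 72 TB usable; a dead disk
                                           # degrades every volume
four_small_groups = 4 * (5 - 2) * DISK_TB  # 48 TB usable; a dead disk only
                                           # degrades its own 12 TB group

print(one_big_group, four_small_groups)  # 72 48
```

The smaller failure domain costs 24 TB of capacity, since each extra RAID6 group spends two more disks on parity.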
>><br>
>> You can do a similar calculation for RAID16/RAID15.<br>
>><br>
>> The only reason I see to create small slices is to make them fit on<br>
>> smaller replacement disks, which might be more easily available/affordable<br>
>> when the error occurs (but now we are entering a low-cost area where<br>
>> SAN and DRBD usually don't play a role).<br>
>><br>
>> greetings<br>
>> Kai Dupke<br>
>> Senior Product Manager<br>
>> Server Product Line<br>
>> --<br>
>> Sell not virtue to purchase wealth, nor liberty to purchase power.<br>
>> Phone: +49-(0)5102-9310828 Mail: <a href="mailto:kdupke@suse.com">kdupke@suse.com</a><br>
>> Mobile: <a href="tel:%2B49-%280%29173-5876766" value="+491735876766">+49-(0)173-5876766</a> WWW: <a href="http://www.suse.com" rel="noreferrer" target="_blank">www.suse.com</a><br>
>><br>
>> SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)<br>
>> GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)<br>
>><br>
>> _______________________________________________<br>
>> Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
>> <a href="http://clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br>
>><br>
>> Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
>> Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
>> Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
>><br>
><br>
><br>
><br>
<br>
<br>
<br>
Kai Dupke<br>
Senior Product Manager<br>
Server Product Line<br>
--<br>
Sell not virtue to purchase wealth, nor liberty to purchase power.<br>
Phone: <a href="tel:%2B49-%280%295102-9310828" value="+4951029310828">+49-(0)5102-9310828</a> Mail: <a href="mailto:kdupke@suse.com">kdupke@suse.com</a><br>
Mobile: <a href="tel:%2B49-%280%29173-5876766" value="+491735876766">+49-(0)173-5876766</a> WWW: <a href="http://www.suse.com" rel="noreferrer" target="_blank">www.suse.com</a><br>
<br>
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)<br>
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)<br>
<br>
_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="http://clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</div></div></blockquote></div><br></div>