<div dir="ltr"><div><div><div><div><div>OK, first of all, thank you for your answer. This is a complicated task and I could not find many guides (if you have any, they are welcome).<br></div>I'm using RAID6 and I have 20 disks of 4TB each.<br></div>In RAID6 the space efficiency is 1-2/n, so a solution for small virtual drives could be 4 or 5 disks. If I use 4 disks I will have (4*4) * (1-2/4) = 8TB of effective space; if I use 5 disks instead, I will have (5*4) * (1-2/5) = 12TB of effective space.<br></div>Space efficiency is not a primary goal for me; I'm trying to reduce the rebuild time when a disk fails (and to improve performance!).<br><br>"If you run 20x4TB disks as RAID6, then an 8TB volume is only ~500G per<br>
disk. However, if one disk fails, then all the other 15 volumes this<br>
disk handles are broken, too. (BTW, most RAID controllers can handle<br>
multiple stripes per disk, but usually only a handful.) In such a case the<br>
complete 4TB of the broken disk has to be recovered, affecting all 16<br>
volumes."<br><br></div>Can you explain this to me? Where does the figure of 16 volumes come from?<br><br></div>Thank you<br><div><div><div><div><br><div><div><div><br></div></div></div></div></div></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">2015-09-17 15:54 GMT+02:00 Kai Dupke <span dir="ltr"><<a href="mailto:kdupke@suse.com" target="_blank">kdupke@suse.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 09/17/2015 09:44 AM, Marco Marino wrote:<br>
> Hi, I have 2 Supermicro servers with LSI 2108 controllers and many disks (80TB), and<br>
> I'm trying to build a SAN with DRBD and Pacemaker. I'm studying, but I have no<br>
> experience with large disk arrays, DRBD, or Pacemaker, so I have some<br>
> questions:<br>
><br>
> I'm using MegaRAID Storage Manager to create virtual drives. Each virtual<br>
> drive is a device on Linux (e.g. /dev/sdb, /dev/sdc, ...), so my first<br>
> question is: is it a good idea to create virtual drives of 8TB (max)? I'm<br>
> thinking of array rebuild time in case of disk failure (about 1 day for 8<br>
<br>
</span>It depends on your disks and RAID level. If one disk fails, the content<br>
of this disk has to be recreated either by copying (all RAID levels that<br>
include a RAID 1 component) or by calculating from parity (all those<br>
without RAID 1); in the latter case all disks get heavily stressed.<br>
<br>
If you run 20x4TB disks as RAID6, then an 8TB volume is only ~500G per<br>
disk. However, if one disk fails, then all the other 15 volumes this<br>
disk handles are broken, too. (BTW, most raid controller can handle<br>
multiple stripes per disk, but usually only a handful) In such case the<br>
complete 4TB of the broken disk has to be recovered, affecting all 16<br>
volumes.<br>
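The slicing arithmetic in the paragraph above can be sketched roughly as follows (my own back-of-the-envelope in Python, not the poster's calculation, assuming each volume is striped evenly across all 20 disks of one RAID6 group; the controller's actual slice layout, and therefore the exact volume count, may differ):<br>

```python
# Back-of-the-envelope: how much of each disk an 8TB RAID6 volume
# occupies, and how many such volume slices end up on one 4TB disk
# (all of them run degraded when that disk fails).
# Assumption: volumes are striped evenly over all 20 disks.

DISKS = 20          # disks in the single RAID6 group
DISK_TB = 4.0       # raw capacity per disk
VOLUME_TB = 8.0     # usable size of one virtual drive

raw_tb = VOLUME_TB / (1 - 2 / DISKS)   # volume plus its parity overhead
slice_tb = raw_tb / DISKS              # the volume's share on each disk
volumes_per_disk = round(DISK_TB / slice_tb)

print(f"slice per disk: {slice_tb * 1000:.0f} GB")   # same ballpark as the ~500G above
print(f"volume slices on one 4TB disk: {volumes_per_disk}")
```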
<br>
On the other hand, if you use 4x5x4TB as 4x 12TB RAID6 groups, a broken<br>
disk only affects one of the 4 volumes - but at the cost of more disks<br>
lost to parity.<br>
<br>
You can do a similar calculation for RAID 16/15.<br>
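For reference, the RAID6 capacity arithmetic used in this thread (the 1-2/n efficiency from the first mail) can be written out as a small sketch; RAID 16/15 would substitute their own overhead factors:<br>

```python
# RAID6 capacity sketch: usable space of a group, and the raw share
# one volume takes on each member disk. Uses the 1 - 2/n efficiency
# of RAID6 (two disks' worth of parity per group).

def raid6_usable_tb(disks: int, disk_tb: float) -> float:
    """Usable capacity of a RAID6 group with `disks` drives."""
    return disks * disk_tb * (1 - 2 / disks)

def per_disk_share_tb(volume_tb: float, disks: int) -> float:
    """Raw space one volume occupies on each disk of the group."""
    raw = volume_tb / (1 - 2 / disks)   # add back the parity overhead
    return raw / disks

# The two small-group options discussed earlier in the thread:
print(raid6_usable_tb(4, 4.0))     # 8.0 TB usable
print(raid6_usable_tb(5, 4.0))     # 12.0 TB usable

# An 8TB volume carved from one big 20-disk group:
print(per_disk_share_tb(8.0, 20))  # ~0.44 TB (~450 GB) per disk
```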
<br>
The only reason I see to create small slices is to make them fit on<br>
smaller replacement disks, which might be more easily available/affordable<br>
at the time of a failure (but now we are entering a low-cost area where<br>
SAN and DRBD usually do not come into play).<br>
<br>
greetings<br>
Kai Dupke<br>
Senior Product Manager<br>
Server Product Line<br>
--<br>
Sell not virtue to purchase wealth, nor liberty to purchase power.<br>
Phone: +49-(0)5102-9310828 Mail: <a href="mailto:kdupke@suse.com">kdupke@suse.com</a><br>
Mobile: <a href="tel:%2B49-%280%29173-5876766" value="+491735876766">+49-(0)173-5876766</a> WWW: <a href="http://www.suse.com" rel="noreferrer" target="_blank">www.suse.com</a><br>
<br>
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)<br>
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)<br>
<br>
_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="http://clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote></div><br></div>