[ClusterLabs] Re: [EXT] DRBD Dual Primary Write Speed Extremely Slow

Tyler Phillippe tylerphillippe at tutamail.com
Mon Nov 14 10:10:21 EST 2022


Actually, I think I just figured it out - it's Microsoft's Continuous Availability feature. It slows everything down to a crawl - it sounds like that's almost by design.

Thanks!

Respectfully,
 Tyler



Nov 14, 2022, 9:48 AM by tylerphillippe at tutamail.com:

> Hi Vladislav,
>
> If I don't use the Scale-Out File Server, I don't have any issues with iSCSI speeds: if I directly connect the LUN(s) to the individual servers, I get 'full' speed - it just seems the Scale-Out File Server is causing the issue. Very strange of Microsoft (though not totally unexpected!). I can't find too much online about Scale-Out File Server, other than generic setup information.
>
> Thanks!
>
> Respectfully,
>  Tyler
>
>
>
> Nov 14, 2022, 9:20 AM by bubble at hoster-ok.com:
>
>> Hi
>>
>> On Mon, 2022-11-14 at 15:00 +0100, Tyler Phillippe via Users wrote:
>>
>>> Good idea! I set up a RAM disk on both of those systems, let them sync, and added it to the cluster.
>>>
>>> One thing I left out (which didn't hit me until yesterday as a possibility) is that I have the iSCSI LUN attached to two Windows servers that are acting as a Scale-Out File Server. When I copy a file over to the new RAM disk LUN via the Scale-Out File Server, I still get 10-20MB/s; however, when I create a large file directly on the underlying shared DRBD on those CentOS machines, I get about 700+MB/s, which I watched via iostat. So, I guess it's the Scale-Out File Server causing the issue. Not sure why Microsoft and the Scale-Out File Server are causing it - guess Microsoft really doesn't like non-Microsoft backing disks.
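>>>
>>> (For reference, the local test was roughly a large sequential write on the GFS2 mount while watching iostat - the mount point and device name here are illustrative, not my exact paths:)
>>>
>>> # large sequential write on the GFS2 mount, bypassing the page cache
>>> dd if=/dev/zero of=/mnt/gfs2/testfile bs=1M count=4096 oflag=direct
>>> # in another terminal, watch the DRBD device throughput
>>> iostat -xm 2 drbd0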
>>>
>>>
>>
>>
>> Not with the Microsoft side, but I do have experience with overall iSCSI performance. For the older iSCSI target - IET - I used to use the following settings:
>> InitialR2T=No 
>> ImmediateData=Yes 
>> MaxRecvDataSegmentLength=65536 
>> MaxXmitDataSegmentLength=65536 
>> MaxBurstLength=262144 
>> FirstBurstLength=131072 
>> MaxOutstandingR2T=2 
>> Wthreads=128 
>> QueuedCommands=32
>>
>> Without those settings, iSCSI LUNs were very slow regardless of the backing device speed.
>> LIO probably provides a way to set them as well.
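>>
>> I have not tried it with LIO myself, but the same negotiation parameters look settable per target portal group via targetcli - an untested sketch, with a placeholder IQN:
>>
>> TPG=/iscsi/iqn.2022-11.org.example:disk1/tpg1
>> targetcli $TPG set parameter InitialR2T=No
>> targetcli $TPG set parameter ImmediateData=Yes
>> targetcli $TPG set parameter MaxRecvDataSegmentLength=65536
>> targetcli $TPG set parameter MaxXmitDataSegmentLength=65536
>> targetcli $TPG set parameter MaxBurstLength=262144
>> targetcli $TPG set parameter FirstBurstLength=131072
>> targetcli $TPG set parameter MaxOutstandingR2T=2
>> # IET's Wthreads/QueuedCommands have no direct LIO equivalents;
>> # default_cmdsn_depth is the closest knob for queue depth
>> targetcli $TPG set attribute default_cmdsn_depth=32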
>>
>> Best,
>> Vladislav
>>
>>
>>> Does anyone have any experience with that, perhaps? Thanks!!
>>>
>>> Respectfully,
>>>  Tyler
>>>
>>>
>>>
>>> Nov 14, 2022, 2:30 AM by Ulrich.Windl at rz.uni-regensburg.de:
>>>
>>>> Hi!
>>>>
>>>> If you have plenty of RAM, you could configure an iSCSI disk backed by a RAM disk and see how much I/O you get from there.
>>>> Maybe your issue is not so much DRBD-related. However, when my local MD-RAID1 resyncs at about 120MB/s (spinning disks), the system is also hardly usable.
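>>>>
>>>> For example, with targetcli/LIO something like this should give you a RAM-backed test LUN (untested from my side; the IQN is just a placeholder):
>>>>
>>>> # ~4 GiB kernel RAM disk (rd_size is in KiB)
>>>> modprobe brd rd_nr=1 rd_size=4194304
>>>> # export /dev/ram0 through LIO
>>>> targetcli /backstores/block create name=ramtest dev=/dev/ram0
>>>> targetcli /iscsi create iqn.2022-11.org.example:ramtest
>>>> targetcli /iscsi/iqn.2022-11.org.example:ramtest/tpg1/luns create /backstores/block/ramtest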
>>>>
>>>> Regards,
>>>> Ulrich
>>>>
>>>> Tyler Phillippe via Users <users at clusterlabs.org> wrote on 13.11.2022 at
>>>> 19:26 in message <NGmE_x7--3-9 at tutamail.com>:
>>>>
>>>>> Hello all,
>>>>>
>>>>> I have set up a Linux cluster on 2x CentOS 8 Stream machines - it has
>>>>> resources to manage a dual-primary GFS2 DRBD setup. DRBD and the cluster
>>>>> have a diskless witness. Everything works fine - I have the dual-primary DRBD
>>>>> working and it is able to present an iSCSI LUN out to my LAN. However, the
>>>>> DRBD write speed is terrible. The backing DRBD disks (HDD) are RAID10 using
>>>>> mdadm and they (re)sync at around 150MB/s. DRBD verify has been limited to
>>>>> 100MB/s, but left unthrottled it will get to around 140MB/s. If I write data
>>>>> to the iSCSI LUN, I only get about 10-15MB/s. Here's the DRBD
>>>>> global_common.conf - it is exactly the same on both machines:
>>>>>
>>>>> global {
>>>>>     usage-count no;
>>>>>     udev-always-use-vnr;
>>>>> }
>>>>>
>>>>> common {
>>>>>     handlers {
>>>>>     }
>>>>>
>>>>>     startup {
>>>>>         wfc-timeout 5;
>>>>>         degr-wfc-timeout 5;
>>>>>     }
>>>>>
>>>>>     options {
>>>>>         auto-promote yes;
>>>>>         quorum 1;
>>>>>         on-no-data-accessible suspend-io;
>>>>>         on-no-quorum suspend-io;
>>>>>     }
>>>>>
>>>>>     disk {
>>>>>         al-extents 4096;
>>>>>         al-updates yes;
>>>>>         no-disk-barrier;
>>>>>         disk-flushes;
>>>>>         on-io-error detach;
>>>>>         c-plan-ahead 0;
>>>>>         resync-rate 100M;
>>>>>     }
>>>>>
>>>>>     net {
>>>>>         protocol C;
>>>>>         allow-two-primaries yes;
>>>>>         cram-hmac-alg "sha256";
>>>>>         csums-alg "sha256";
>>>>>         verify-alg "sha256";
>>>>>         shared-secret "secret123";
>>>>>         max-buffers 36864;
>>>>>         rcvbuf-size 5242880;
>>>>>         sndbuf-size 5242880;
>>>>>     }
>>>>> }
>>>>>
>>>>> Respectfully,
>>>>> Tyler
>>>>>
>>>>
>>>
>>>
>>
>>
>>
>
>
