<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
</head>
<body>
<div>Hi Vladislav,<br></div><div dir="auto"><br></div><div dir="auto">If I don't use the Scale-Out File Server, I don't have any issues with iSCSI speeds: if I directly connect the LUN(s) to the individual servers, I get 'full' speed - it just seems the Scale-Out File Server is causing the issue. Very strange of Microsoft (though not totally unexpected!). I can't find too much online about Scale-Out File Server, other than generic setup information.<br></div><div dir="auto"><br></div><div dir="auto">Thanks!<br></div><div style="font-size:16px"><br></div><div style="font-size:16px">Respectfully,<br></div><div style="font-size:16px"> Tyler<br></div><div><br></div><div><br></div><div><br></div><div>Nov 14, 2022, 9:20 AM by bubble@hoster-ok.com:<br></div><blockquote class="tutanota_quote" style="border-left: 1px solid #93A3B8; padding-left: 10px; margin-left: 5px;"><div>Hi<br></div><div><br></div><div>On Mon, 2022-11-14 at 15:00 +0100, Tyler Phillippe via Users wrote:<br></div><blockquote style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex" type="cite"><div>Good idea! I set up a RAM disk on both of those systems, let them sync, added it to the cluster. <br></div><div dir="auto"><br></div><div dir="auto">One thing I left out (which didn't hit me until yesterday as a possibility) is that I have the iSCSI LUN attached to two Windows servers that are acting as a Scale-Out File Server. When I copied a file over to the new RAM disk LUN via the Scale-Out File Server, I am still getting 10-20MB/s; however, when I create a large file on the underlying, shared DRBD on those CentOS machines, I get about 700+MB/s, which I watched via iostat. So, I guess it's the Scale-Out File Server causing the issue. Not sure why the Scale-Out File Server is causing it - guess Microsoft really doesn't like non-Microsoft backing disks.<br></div><div dir="auto"><br></div></blockquote><div><br></div><div><br></div><div>Not with Microsoft, but I do have experience with overall iSCSI performance. 
For the older iSCSI target - IET - I used to use the following settings:<br></div><div>InitialR2T=No <br></div><div>ImmediateData=Yes <br></div><div>MaxRecvDataSegmentLength=65536 <br></div><div>MaxXmitDataSegmentLength=65536 <br></div><div>MaxBurstLength=262144 <br></div><div>FirstBurstLength=131072 <br></div><div>MaxOutstandingR2T=2 <br></div><div>Wthreads=128 <br></div><div>QueuedCommands=32<br></div><div><br></div><div>Without those settings, iSCSI LUNs were very slow regardless of backing device speed.<br></div><div>LIO probably provides a way to set these as well.<br></div><div><br></div><div>Best,<br></div><div>Vladislav<br></div><div><br></div><blockquote style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex" type="cite"><div dir="auto">Does anyone have any experience with that, perhaps? Thanks!!<br></div><div style="font-size:16px"><br></div><div style="font-size:16px">Respectfully,<br></div><div style="font-size:16px"> Tyler<br></div><div><br></div><div><br></div><div><br></div><div>Nov 14, 2022, 2:30 AM by Ulrich.Windl@rz.uni-regensburg.de:<br></div><blockquote style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex" type="cite"><div>Hi!<br></div><div><br></div><div>If you have plenty of RAM, you could configure an iSCSI disk using a RAM disk and see how much I/O you get from there.<br></div><div>Maybe your issue is not so much DRBD-related. 
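As a hedged sketch of the "LIO probably provides a way to set these as well" remark earlier in the thread: LIO exposes the same iSCSI negotiation parameters per target portal group via targetcli. The IQN below is a placeholder, and the two IET-only knobs have no one-to-one LIO equivalent:

```shell
# Hypothetical target IQN/TPG - substitute your own from `targetcli ls`.
TPG=/iscsi/iqn.2003-01.org.linux-iscsi.example:target0/tpg1

# Apply the IET-style negotiation parameters to the LIO portal group.
targetcli "$TPG" set parameter InitialR2T=No
targetcli "$TPG" set parameter ImmediateData=Yes
targetcli "$TPG" set parameter MaxRecvDataSegmentLength=65536
targetcli "$TPG" set parameter MaxXmitDataSegmentLength=65536
targetcli "$TPG" set parameter MaxBurstLength=262144
targetcli "$TPG" set parameter FirstBurstLength=131072
targetcli "$TPG" set parameter MaxOutstandingR2T=2

# Wthreads/QueuedCommands are IET-specific; LIO queues commands
# differently (see e.g. the TPG's default_cmdsn_depth attribute),
# so those two have no direct mapping here.
targetcli saveconfig
```

From a Linux initiator, `iscsiadm -m session -P 3` shows which values were actually negotiated after a re-login.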
However, when my local MD-RAID1 resyncs at about 120MB/s (spinning disks), the system is also hardly usable.<br></div><div><br></div><div>Regards,<br></div><div>Ulrich<br></div><blockquote style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex" type="cite"><blockquote style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex" type="cite"><blockquote style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex" type="cite"><div>Tyler Phillippe via Users &lt;users@clusterlabs.org&gt; wrote on 13.11.2022 at<br></div></blockquote></blockquote></blockquote><div>19:26 in message &lt;NGmE_x7--3-9@tutamail.com&gt;:<br></div><blockquote style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex" type="cite"><div>Hello all,<br></div><div><br></div><div>I have set up a Linux cluster on 2x CentOS 8 Stream machines - it has <br></div><div>resources to manage a dual-primary, GFS2 DRBD setup. DRBD and the cluster <br></div><div>have a diskless witness. Everything works fine - I have the dual-primary DRBD <br></div><div>working and it is able to present an iSCSI LUN out to my LAN. However, the <br></div><div>DRBD write speed is terrible. The backing DRBD disks (HDD) are RAID10 using <br></div><div>mdadm and they (re)sync at around 150MB/s. DRBD verify has been limited to <br></div><div>100MB/s, but left unthrottled, it will get to around 140MB/s. If I write data <br></div><div>to the iSCSI LUN, I only get about 10-15MB/s. 
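The backing-disk-versus-iSCSI comparison described in the thread can be sketched with direct-I/O writes, so the page cache doesn't inflate the numbers. The mount points here are hypothetical:

```shell
# Write 256 MiB with O_DIRECT straight to the DRBD-backed filesystem
# on the target node (hypothetical mount point - substitute your own).
dd if=/dev/zero of=/mnt/drbd-local/testfile bs=1M count=256 oflag=direct

# The same write against the iSCSI-attached LUN, for comparison:
dd if=/dev/zero of=/mnt/iscsi-lun/testfile bs=1M count=256 oflag=direct

# Meanwhile, watch per-device throughput on the target side:
iostat -xm 1
```

If the first number is fast and the second slow, the bottleneck is in the iSCSI path (or, as in this thread, in the layer re-exporting it) rather than in DRBD.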
Here's the DRBD <br></div><div>global_common.conf - these are exactly the same on both machines:<br></div><div><br></div><div>global {<br></div><div>usage-count no;<br></div><div>udev-always-use-vnr;<br></div><div>}<br></div><div><br></div><div>common {<br></div><div>handlers {<br></div><div>}<br></div><div><br></div><div>startup {<br></div><div>wfc-timeout 5;<br></div><div>degr-wfc-timeout 5;<br></div><div>}<br></div><div><br></div><div>options {<br></div><div>auto-promote yes;<br></div><div>quorum 1;<br></div><div>on-no-data-accessible suspend-io;<br></div><div>on-no-quorum suspend-io;<br></div><div>}<br></div><div><br></div><div>disk {<br></div><div>al-extents 4096;<br></div><div>al-updates yes;<br></div><div>no-disk-barrier;<br></div><div>disk-flushes;<br></div><div>on-io-error detach;<br></div><div>c-plan-ahead 0;<br></div><div>resync-rate 100M;<br></div><div>}<br></div><div><br></div><div>net {<br></div><div>protocol C;<br></div><div>allow-two-primaries yes;<br></div><div>cram-hmac-alg "sha256";<br></div><div>csums-alg "sha256";<br></div><div>verify-alg "sha256";<br></div><div>shared-secret "secret123";<br></div><div>max-buffers 36864;<br></div><div>rcvbuf-size 5242880;<br></div><div>sndbuf-size 5242880;<br></div><div>}<br></div><div>}<br></div><div><br></div><div>Respectfully,<br></div><div>Tyler<br></div></blockquote><div><br></div><div><br></div><div><br></div><div><br></div><div>_______________________________________________<br></div><div>Manage your subscription:<br></div><div>https://lists.clusterlabs.org/mailman/listinfo/users<br></div><div><br></div><div>ClusterLabs home: https://www.clusterlabs.org/<br></div></blockquote><div dir="auto"><br></div><div>_______________________________________________<br></div><div>Manage your subscription:<br></div><div><a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noopener noreferrer" 
target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br></div><div><br></div><div>ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noopener noreferrer" target="_blank">https://www.clusterlabs.org/</a><br></div></blockquote><div><br></div><div><span></span><br></div></blockquote><div dir="auto"><br></div> </body>
</html>