[ClusterLabs] Hunting for the bad cLVM Mirror Performance (mirror log)
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Mon Jan 16 10:35:17 EST 2017
Hi!
(I had published some cLVM performance numbers / graphs before)
Being unhappy with cLVM's mirror performance, I drilled down further. A cLVM mirror LV consists of several parts, visible via "dmsetup ls" (see the sketch after this list):
CFS_VMs-xen: The LV (high level)
CFS_VMs-E3 (one leg's PV (on multipathed SAN storage))
CFS_VMs-E4 (the other leg's PV (on a different multipathed SAN storage))
CFS_VMs-xen_mimage_0 (the image of the LV on one leg (I guess))
CFS_VMs-xen_mimage_1 (the image of the LV on the other leg (I guess))
CFS_VMs-xen_mlog_mimage_0 (the mirror log's image on one leg (I guess))
CFS_VMs-xen_mlog_mimage_1 (the image of the mirror log on the other leg (I guess))
CFS_VMs-xen_mlog (the high-level mirror log)
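For anyone wanting to reproduce this, the stacking can be inspected directly with dmsetup; a minimal sketch (standard dmsetup invocations, the exact output shape depends on the setup):

h01:~ # dmsetup ls --tree
h01:~ # dmsetup table CFS_VMs-xen

The table line of the top-level LV should name a "mirror" target and list the log device, the region size and the two image devices.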
So I used the block statistics from sysfs (/sys/block/dm-*/stat) to get usage and performance numbers, in addition to the delay numbers I already had in the past.
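A minimal sketch of how to read those counters for one of the dm devices (plain bash; the dmsetup report column "minor" should exist in this lvm2 version, but verify with "dmsetup info -c -o help"):

h01:~ # minor=$(dmsetup info -c --noheadings -o minor CFS_VMs-xen_mlog)
h01:~ # cat /sys/block/dm-$minor/stat

The file has 11 fields: reads completed, reads merged, sectors read, ms spent reading, writes completed, writes merged, sectors written, ms spent writing, I/Os in flight, ms doing I/O, weighted ms doing I/O.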
According to the attachment, one node (h01) shows the following performance numbers for the cLVM mirror log (see the sampling sketch after this list for how such per-second rates can be derived):
* Average 160 R+W operations per second
* Around 0.5 s of wait time per second
* 0.3 sectors/s read, 1 ms/s read wait
* 14k sectors/s write, 0.5 s/s write wait (about 250 sectors per request)
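These are averages from the attached graphs; one way to derive such a rate by hand is to sample the counters twice (bash, using the $minor from above; zero-based field 6 of the stat file is sectors written):

h01:~ # s1=($(cat /sys/block/dm-$minor/stat)); sleep 10
h01:~ # s2=($(cat /sys/block/dm-$minor/stat))
h01:~ # echo "write sectors/s: $(( (${s2[6]} - ${s1[6]}) / 10 ))"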
For comparison, the activity on the two data images is:
* 20 sectors/s + 0.3 sectors/s read
* 400 sectors/s + 402 sectors/s write
That means roughly four times as much data is written to the mirror log as to the data image. That's quite inefficient, of course. Can someone explain why so much data is written to the mirror log?
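One factor that determines the log traffic is the mirror region size: the on-disk mirror log is a dirty-region bitmap, and every time a region changes between clean and dirty the corresponding part of the bitmap has to be written out, at a minimum granularity of one 512-byte sector, so small regions plus scattered writes multiply the log I/O. The region size can be checked like this (the "regionsize" report field should be available in this lvm2 version, but verify with "lvs -o help"):

h01:~ # lvs -o +regionsize CFS_VMs/xen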
The LV is displayed as follows:
h01:~ # lvs
  LV   VG      Attr      LSize   Pool Origin Data%  Move Log      Copy%  Convert
  xen  CFS_VMs mwi-aom-- 299.99g                         xen_mlog 100.00
h01:~ # lvdisplay -v CFS_VMs/xen
    Using logical volume(s) on command line
  --- Logical volume ---
  LV Name                /dev/CFS_VMs/xen
  VG Name                CFS_VMs
  LV UUID                NZuubw-Avxe-mJ94-DSfD-adc4-z2bc-MmDkEk
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 2
  LV Size                299.99 GiB
  Current LE             76798
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           253:84
Our problem is not just that cLVM is slow; the I/O load it creates on the SAN storage also slows down other systems.
We are using SLES11 SP4 with lvm2-clvm-2.02.98-0.42.3 (the latest available).
Regards,
Ulrich
-------------- next part --------------
A non-text attachment was scrubbed...
Name: cLVM-blockstat-h01.pdf
Type: application/pdf
Size: 353430 bytes
Desc: not available
URL: <http://lists.clusterlabs.org/pipermail/users/attachments/20170116/fbb293d8/attachment-0002.pdf>