[ClusterLabs] Question on sharing data with DRBD
Digimer
lists at alteeve.ca
Wed Mar 20 14:04:29 EDT 2019
On 2019-03-20 1:50 p.m., Valentin Vidic wrote:
> On Wed, Mar 20, 2019 at 01:44:06PM -0400, Digimer wrote:
>> GFS2 notifies the peers of disk changes, and DRBD handles actually
>> copying the changes to the peer.
>>
>> Think of DRBD, in this context, as being like mdadm RAID: writing to
>> /dev/md0 is handled behind the scenes as writes to both /dev/sda3 and
>> /dev/sdb3. DRBD works the same way, except that any write to
>> /dev/drbd0 is written to both node1:/dev/sda3 and node2:/dev/sda3.
>>
>> So DRBD handles replication, and GFS2 handles coordination.
>
> Yes, I was thinking more of GFS2 in the shared storage setup: how
> much overhead is there if the cluster nodes all write to different
> files, like VM images?
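(Aside, to make the mdadm analogy above concrete: a minimal DRBD
resource config for that layout might look something like the sketch
below. The node names, IP addresses, and port are placeholders, and
dual-primary mode is what lets both nodes mount GFS2 on the device at
the same time.)

    resource r0 {
        device    /dev/drbd0;    # the replicated device the nodes write to
        disk      /dev/sda3;     # local backing partition on each node
        meta-disk internal;
        net {
            protocol C;               # synchronous replication
            allow-two-primaries yes;  # needed for dual-primary/GFS2 use
        }
        on node1 {
            address 10.0.0.1:7788;    # placeholder address
        }
        on node2 {
            address 10.0.0.2:7788;    # placeholder address
        }
    }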
Raw write speed isn't really an issue, but the shared locking can be
very expensive under even moderate IOPS.
Basically, GFS2 on any given node has to ask DLM (cluster locking) for a
lock on some range of blocks/inodes. The cluster has to verify that
those blocks aren't held open anywhere else, then grant the lock. GFS2
then switches to internal locking to handle the actual write, then
releases the lock back to the cluster. This informs the other node(s)
that they need to update their view of the data.
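To put the same cycle in rough pseudocode (Python here; every name in
it is a made-up stand-in for illustration, not the real GFS2 or DLM
API), each write from a node that doesn't already hold the lock looks
something like this:

    # Sketch of the per-write lock cycle described above. All of these
    # helpers are hypothetical stand-ins; the point is the number of
    # cluster round-trips wrapped around a single local write.

    def dlm_request_lock(resource, mode):
        # Stand-in for the DLM lock request: in reality a network
        # round-trip while the cluster verifies no other node holds
        # these blocks/inodes.
        return (resource, mode)

    def local_write(resource, data):
        # Stand-in for the actual write, done under GFS2's internal
        # locking; this is the cheap part.
        pass

    def dlm_release_lock(lock):
        # Stand-in for the lock release: another round-trip that tells
        # the other node(s) to refresh their view of the data.
        pass

    def clustered_write(resource, data):
        lock = dlm_request_lock(resource, mode="exclusive")  # round-trip
        local_write(resource, data)                          # cheap
        dlm_release_lock(lock)                               # round-trip

The two round-trips are pure cluster latency, paid on every such write,
no matter how fast the underlying disks are.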
As you can imagine, this overhead adds up very quickly.
You can tune and tweak cluster filesystems (mounting GFS2 with
noatime,nodiratime, for example, avoids taking locks just to update
access times), but in the end they will never be very fast under real
IOPS loads. I advise against using GFS2 to back qcow2 images in almost
all cases.
--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould