[ClusterLabs] two node cluster with clvm and virtual machines

Digimer lists at alteeve.ca
Fri Feb 3 14:38:56 EST 2017


On 03/02/17 10:16 AM, Lentes, Bernd wrote:
> ----- On Feb 2, 2017, at 8:32 PM, Digimer lists at alteeve.ca wrote:
>>> Until now everything is fine. The stonith resources currently have
>>> wrong passwords for the iLO adapters. It's difficult enough to
>>> establish an HA cluster for the first time.
>>> For now I'd rather not have my hosts rebooting all the time because
>>> of errors in my configuration.
>>
>> If stonith is called, DLM blocks and stays blocked until it is told that
>> stonith was successful (by design). So it is possible that a failed
>> stonith has left DLM blocked, which would block clvmd as it uses DLM.
>>
> 
> Hi Digimer,
> thanks for that information. I will keep it in mind.
> 
>>> I created a VG and an LV; both are visible on both nodes.
>>> My plan is to use a dedicated LV for each VM. VMs should run on both
>>> nodes, some on nodeA, some on nodeB.
>>> If the cluster takes care of mounting the FS inside the LV (I'm
>>> planning to use btrfs), I should not need a cluster FS, right?
>>
>> It would be wiser to use the LV as the raw device for the VM, if you are
>> creating an LV per VM anyway. btrfs (and most FSes) are not cluster
>> aware and can only be mounted on one node or the other at a time,
>> preventing live-migration.
>>
> 
> And what if I don't use a non-cluster FS like btrfs, but just a plain LV?
> Would live migration be possible then?
> I'd like to have live migration.

No. For a short period during the migration, data needs to be writable
from both nodes at the same time. To do this, you need to mount the FS
on both nodes at the same time. For that, you need a cluster-aware file
system, like GFS2, which uses DLM to coordinate locks.
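For reference, a minimal GFS2 setup on a shared LV might look like the
following sketch (the cluster name `mycluster`, the FS name `vmfs`, and
the VG/LV names are placeholders; it assumes DLM and clvmd are already
running on both nodes):

```shell
# Create a GFS2 filesystem on the shared LV. The lock table name must be
# <cluster-name>:<fs-name>, and -j allocates one journal per node (2 here).
mkfs.gfs2 -p lock_dlm -t mycluster:vmfs -j 2 /dev/vg_san/lv_vmfs

# Mount it on BOTH nodes; GFS2 coordinates concurrent access through DLM.
mount -t gfs2 /dev/vg_san/lv_vmfs /var/lib/libvirt/images
```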

Alternatively, you can use clustered LVM LVs as the raw backing device,
no FS, and have the LVs active on both nodes at the same time. This is
how we do it.
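As a sketch of that approach (the VG/LV names and the SAN device path
are placeholders; it assumes clvmd is running on both nodes):

```shell
# Create a clustered VG on the SAN LUN; clvmd propagates metadata
# changes to the other node.
vgcreate --clustered y vg_vms /dev/mapper/san_lun

# One LV per VM, used directly as the raw virtual disk -- no filesystem.
lvcreate -L 50G -n srv01_disk0 vg_vms

# Activate the LV (run on each node) so the VM can live-migrate.
lvchange -ay vg_vms/srv01_disk0
```

The VM's libvirt definition then points at `/dev/vg_vms/srv01_disk0` as
a block device; since both nodes see the active LV, `virsh migrate
--live` can move the guest between them.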

> What about Active/Active? Is it possible to have a second VM already
> running and taking over the tasks from the first one if the first one
> stops? How could I achieve that?

No. The server is a single instance; the only question is where it is
running at a given time. If you tried to run a second copy of the same
server, it would cause serious problems, up to and including destroying
the data.

Active/Active is a generic term and the exact meaning depends on the
application using that term.

>>> I stumbled across sfex. It seems to provide an additional layer of
>>> security concerning access to shared storage (my LV?).
>>> Is it sensible, and does anyone have experience with it?
>>>
>>> Btw: SUSE recommends
>>> (https://www.suse.com/documentation/sle_ha/book_sleha/data/sec_ha_clvm_config.html)
>>> creating a mirrored LV.
>>> Is that really necessary/advisable? My LVs reside on a SAN with a
>>> RAID5 configuration. I don't see the benefit of or need for a
>>> mirrored LV, just the disadvantage of wasting disk space. Besides
>>> the RAID we have a backup, and before changes to the VMs I will
>>> create a btrfs snapshot.
>>> Unfortunately I'm not able to create a snapshot inside the VMs
>>> because they are running older versions of SUSE which don't support
>>> btrfs. Of course I could recreate the VMs with an LVM configuration
>>> inside them. Maybe, if I have enough time. Then I could create
>>> snapshots with LVM tools.
>>>
>>> Thanks.
>>>
>>> Thanks.
>>
>> Snapshotting running VMs is not advised, in my opinion. There is no
>> way to be sure that disk writes are flushed, or that apps like DBs
>> are consistent. You might well find that your snapshot doesn't work
>> when you need it most. It is much safer to use a backup
>> program/agents that know how to put the data being backed up into a
>> clean state.
> 
> I was thinking of snapshotting before applying an update or changing configuration.
> For that btrfs is fine. The databases will be dumped with their respective tools.
> 
> Bernd

If the snapshot is made under the VM, there is no way that I know of to
ensure the system is in a crash-consistent state (booting the
snapshotted image would effectively be recovery from a power-loss
event). If you want to use snapshotting inside the VM, and you can
ensure the data is consistent there, then that is up to you.
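If you do snapshot from the host anyway, one way to get closer to a
consistent image is to quiesce the guest first through the QEMU guest
agent. A sketch (the domain name `srv01` and the btrfs paths are
placeholders; it assumes qemu-guest-agent is installed and running
inside the VM):

```shell
# Flush and freeze the guest's filesystems via the guest agent.
virsh domfsfreeze srv01

# Take the snapshot under the VM while writes are quiesced
# (here: a btrfs snapshot of the directory holding the image).
btrfs subvolume snapshot /vms/srv01 /vms/srv01-snap

# Thaw the guest as soon as the snapshot exists.
virsh domfsthaw srv01
```

Note that freezing only flushes filesystem writes; it does not make
applications like databases consistent, which is the caveat above.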

-- 
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould