[ClusterLabs] HA problem: No live migration when setting node on standby

Andrei Borzenkov arvidjaar at gmail.com
Mon Apr 17 07:55:56 EDT 2023


On Mon, Apr 17, 2023 at 10:48 AM Philip Schiller
<p.schiller at plusoptix.de> wrote:
>
> Hello Andrei,
>
> you wrote:
>
> >>As a workaround you could add dummy clone resource colocated with and
> >>ordered after your DRBD masters and order VM after this clone.
>
> Thanks for the idea. This looks like a good option to solve my problem.
>
> I have also researched a little more and came up with an option which seems to work for my case.
> Would you be so kind as to evaluate whether I understand it correctly?
>
> As mentioned in the original thread
> >> Wed Apr 12 05:28:48 EDT 2023
>
> My system looks like this:
>
> >>I am using a simple two-nodes cluster with Zvol -> DRBD -> Virsh in
> >>primary/primary mode (necessary for live migration).
>
>
> Where drbd-resources and zvol are clones.
> So it is basically a chain of resources, first zvol then drbd then vm.
>
> From the documentation I gathered that in those cases order constraints are not even necessary; this can be done with colocation constraints only
> -> https://access.redhat.com/documentation/de-de/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-orderconstraints-haar#s2-resourceorderlist-HAAR
>
> There is stated:
> >> A common situation is for an administrator to create a chain of ordered
> >> resources, where, for example, resource A starts before resource B which
> >> starts before resource C. If your configuration requires that you
> >> create a set of resources that is colocated and started in order, you
> >> can configure a resource group that contains those resources, as
> >> described in Section 6.5, “Resource Groups”.
>
> I can't create a resource group because apparently clone resources are not supported in groups. So I have the following setup now:
>
> >> colocation colocation-mas-drbd-alarmanlage-clo-pri-zfs-drbd_storage-INFINITY inf: mas-drbd-alarmanlage clo-pri-zfs-drbd_storage
> >> colocation colocation-pri-vm-alarmanlage-mas-drbd-alarmanlage-INFINITY inf: pri-vm-alarmanlage:Started mas-drbd-alarmanlage:Master
> >> location location-pri-vm-alarmanlage-s0-200 pri-vm-alarmanlage 200: s0
>
> Migration works flawlessly and the startup order is also correct: zvol -> drbd -> vm
>

To the best of my knowledge there is no implied ordering between
colocated resources, so it may work in your case simply due to
specific timings. I would not rely on that; any software or hardware
change may alter the timings.
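
As a rough sketch of the dummy-clone workaround mentioned earlier, in
crm shell syntax it might look like the following. The ids
pri-drbd-ready and clo-drbd-ready (and the constraint ids) are made
up for illustration; only the DRBD master and VM ids are taken from
your configuration, and I have not tested this:

```
primitive pri-drbd-ready ocf:pacemaker:Dummy
clone clo-drbd-ready pri-drbd-ready
colocation colocation-ready-with-master inf: clo-drbd-ready mas-drbd-alarmanlage:Master
order order-promote-before-ready Mandatory: mas-drbd-alarmanlage:promote clo-drbd-ready:start
order order-ready-before-vm Mandatory: clo-drbd-ready pri-vm-alarmanlage
```

The idea is that the dummy clone can only start once DRBD is promoted
on that node, so ordering the VM after the dummy indirectly orders it
after the promotion.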

> I am a little bit concerned though. Does corosync work like an interpreter and know the correct order when I define <colocation zvol/drbd> before <colocation drbd/vm>?
>

Colocation and ordering are entirely orthogonal: colocation defines
where Pacemaker will attempt to start resources, while ordering
defines in which order it does so. It is a bit more complicated in the
case of promotable clones, because the master is not static and is
determined at run time based on resource agent behavior. So Pacemaker
may delay placement of dependent resources until the masters are
known, which can look like ordering.

> Another thing is the multistate constraint which I implemented -> pri-vm-alarmanlage:Started mas-drbd-alarmanlage:Master
> Is this equivalent to the <order order-mas-drbd-alarmanlage-pri-vm-alarmanlage-mandatory mas-drbd-alarmanlage:promote pri-vm-alarmanlage:start> which I was trying to achieve?
>
> Basically I just want to have the zvol started, then drbd started and promoted to master, and then finally the vm started. All on the same node.
> Can you confirm that my cluster will exhibit this behavior reliably with this configuration?
>

No, I cannot (but I am happy to be proved wrong).
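
If explicit ordering turns out to be needed, a minimal sketch in crm
shell syntax could be the one below. It reuses the resource ids from
your configuration; the constraint ids are arbitrary, and this is
untested:

```
order order-zvol-before-drbd Mandatory: clo-pri-zfs-drbd_storage mas-drbd-alarmanlage
order order-promote-before-vm Mandatory: mas-drbd-alarmanlage:promote pri-vm-alarmanlage:start
```

Together with your existing colocations this spells out the chain
zvol -> drbd (promote) -> vm instead of relying on timing.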

> Note that I would like to avoid any order constraints and dummy resources if possible. But if it is unavoidable, let me know.
>
> Thanks for the replies.
>
> With kind regards
> Philip.
>
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/


More information about the Users mailing list