[ClusterLabs] HA problem: No live migration when setting node on standby

Andrei Borzenkov arvidjaar at gmail.com
Fri Apr 14 13:33:46 EDT 2023


On 14.04.2023 14:35, Andrei Borzenkov wrote:
> On Fri, Apr 14, 2023 at 11:45 AM Philip Schiller
> <p.schiller at plusoptix.de> wrote:
>>
>> I would like to know if the order constraint <order drbd_vm_after_drbd_fs Mandatory: ms-drbd_fs:promote drbd_vm>
>> is equivalent to: "First promote ms-drbd_fs then start drbd_vm".
>>
> 
> No, it is not. It is equivalent to
> 
> order drbd_vm_after_drbd_fs Mandatory: ms-drbd_fs:promote drbd_vm:promote
> 
> which is effectively ignored (as drbd_vm is never promoted).
> 
> As far as I can tell, pacemaker simply does not support migration
> together with demote/promote actions. 
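
For completeness: when the then-action is omitted it defaults to the 
first action, so "first promote ms-drbd_fs, then start drbd_vm" has to 
be spelled out explicitly (using the names from the question):

order drbd_vm_after_drbd_fs Mandatory: ms-drbd_fs:promote drbd_vm:start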

As a workaround you could add a dummy clone resource that is colocated 
with and ordered after your DRBD masters, and order the VM after this 
clone. For example:

primitive drbd_fs ocf:pacemaker:Stateful \
	op monitor role=Master interval=10s \
	op monitor role=Slave interval=11s
primitive drbd_vm ocf:pacemaker:Dummy \
	op monitor interval=10s \
	meta allow-migrate=true
primitive dummy_drbd_fs ocf:pacemaker:Dummy \
	op monitor interval=10s
primitive dummy_stonith stonith:external/_dummy \
	op monitor interval=3600 timeout=20
clone cl-dummy_drbd_fs dummy_drbd_fs \
	meta clone-max=2 clone-node-max=1 interleave=true
clone ms-drbd_fs drbd_fs \
	meta promotable=yes promoted-max=2 clone-max=2 clone-node-max=1 \
	promoted-node-max=1 interleave=true
location drbd_fs_not_on_qnetd ms-drbd_fs -inf: qnetd
order drbd_vm_after_dummy_drbd_fs Mandatory: cl-dummy_drbd_fs drbd_vm
location drbd_vm_not_on_qnetd drbd_vm -inf: qnetd
order dummy_drbd_fs_after_drbd_fs Mandatory: ms-drbd_fs:promote cl-dummy_drbd_fs:start
location dummy_drbd_fs_not_on_qnetd cl-dummy_drbd_fs -inf: qnetd
colocation dummy_drbd_fs_with_drbd_fs inf: cl-dummy_drbd_fs ms-drbd_fs:Master

which results in

Transition Summary:
   * Stop       drbd_fs:1           ( Master ha2 )  due to node availability
   * Migrate    drbd_vm             ( ha2 -> ha1 )
   * Stop       dummy_drbd_fs:1     (        ha2 )  due to node availability
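
To preview such a transition without letting the cluster act on it, one 
option (a sketch; the file path is just an example) is to set standby in 
a CIB snapshot and feed that to crm_simulate:

cibadmin -Q > /tmp/cib.xml                      # snapshot the live CIB
CIB_file=/tmp/cib.xml crm_standby -N ha2 -v on  # standby only in the copy
crm_simulate -x /tmp/cib.xml -S                 # show the planned transition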


