[ClusterLabs] Early VM resource migration

Ken Gaillot kgaillot at redhat.com
Wed Dec 16 17:08:35 UTC 2015


On 12/16/2015 10:30 AM, Klechomir wrote:
> On 16.12.2015 17:52, Ken Gaillot wrote:
>> On 12/16/2015 02:09 AM, Klechomir wrote:
>>> Hi list,
>>> I have a cluster with VM resources on a cloned active-active storage.
>>>
>>> VirtualDomain resource migrates properly during failover (node standby),
>>> but tries to migrate back too early during failback, ignoring the
>>> "order" constraint that tells it to start only after the cloned storage
>>> is up. This causes an unnecessary VM restart.
>>>
>>> Is there any way to make it wait, until its storage resource is up?
>> Hi Klecho,
>>
>> If you have an order constraint, the cluster will not try to start the
>> VM until the storage resource agent returns success for its start. If
>> the storage isn't fully up at that point, then the agent is faulty, and
>> should be modified to wait until the storage is truly available before
>> returning success.
>>
>> If you post all your constraints, I can look for anything that might
>> affect the behavior.
> Thanks for the reply, Ken
> 
> Seems to me that the constraints for cloned resources act a bit
> differently.
> 
> Here is my config:
> 
> primitive p_AA_Filesystem_CDrive1 ocf:heartbeat:Filesystem \
>         params device="/dev/CSD_CDrive1/AA_CDrive1" \
>         directory="/volumes/AA_CDrive1" fstype="ocfs2" options="rw,noatime"
> primitive VM_VM1 ocf:heartbeat:VirtualDomain \
>         params config="/volumes/AA_CDrive1/VM_VM1/VM1.xml" \
>         hypervisor="qemu:///system" migration_transport="tcp" \
>         meta allow-migrate="true" target-role="Started"
> clone AA_Filesystem_CDrive1 p_AA_Filesystem_CDrive1 \
>         meta interleave="true" resource-stickiness="0" target-role="Started"
> order VM_VM1_after_AA_Filesystem_CDrive1 inf: AA_Filesystem_CDrive1 VM_VM1
> 
> Every time a node comes back from standby, the VM tries to live-migrate
> to it long before the filesystem is up.

In most cases (including this one), when you have an order constraint,
you also need a colocation constraint.

colocation = two resources must run on the same node

order = one resource must be started/stopped/whatever before another
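
For your configuration, a colocation along these lines should pin the VM
to a node with an active clone instance (a sketch only; the constraint ID
is arbitrary):

colocation VM_VM1_with_AA_Filesystem_CDrive1 inf: VM_VM1 AA_Filesystem_CDrive1

Together with your existing order constraint, the VM then won't start or
live-migrate to a node until the filesystem clone is running there.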

Or you could use a group, which is essentially a shortcut for specifying
colocation and order constraints for any sequence of resources.
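
As a group example (p_Filesystem and p_VM are hypothetical primitives,
not the names above), something like

group g_VM1 p_Filesystem p_VM

implies both the colocation and the ordering. Note that a group can only
contain primitives, so with a cloned filesystem like yours the separate
constraints are the way to go.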



