[ClusterLabs] VM failure during shutdown
Ken Gaillot
kgaillot at redhat.com
Wed Jun 27 16:58:51 UTC 2018
On Wed, 2018-06-27 at 18:01 +0300, Vaggelis Papastavros wrote:
> Dear friends,
> (I am sending the same message again in order to conform with the list
> text formatting.)
> Many thanks for your brilliant answers.
> Ken, your suggestion:
> "The second problem is that you have an ordering constraint but no
> colocation constraint. With your current setup, windows_VM has to
> start
> after the storage, but it doesn't have to start on the same node. You
> need a colocation constraint as well, to ensure they start on the same
> node."
>
> for the storage i have the following complete steps:
> pcs resource create ProcDRBD_SigmaVMs ocf:linbit:drbd
> drbd_resource=sigma_vms drbdconf=/etc/drbd.conf op monitor
> interval=10s
> pcs resource master clone_ProcDRBD_SigmaVMs ProcDRBD_SigmaVMs master-
> max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
> pcs resource create StorageDRBD_SigmaVMs Filesystem
> device="/dev/drbd1" directory="/opt/sigma_vms/" fstype="ext4"
> pcs constraint location clone_ProcDRBD_SigmaVMs prefers sgw-01
> pcs constraint colocation add StorageDRBD_SigmaVMs with
> clone_ProcDRBD_SigmaVMs INFINITY with-rsc-role=Master
> pcs constraint order promote clone_ProcDRBD_SigmaVMs then start
> StorageDRBD_SigmaVMs
>
> and when i create the VM
> pcs resource create windows_VM_res VirtualDomain
> hypervisor="qemu:///system"
> config="/opt/sigma_vms/xml_definitions/windows_VM.xml"
> pcs constraint colocation add windows_VM_res with
> StorageDRBD_SigmaVMs INFINITY
> pcs constraint order start StorageDRBD_SigmaVMs then start
> windows_VM_res
>
>
> My question is:
>
> are the steps below enough to ensure that the new VM will be placed
> on node 1?
Almost --
>
> (the storage process prefers node 1 (the DRBD primary) with weight
> INFINITY, windows_VM_res must always be placed with
> StorageDRBD_SigmaVMs, and by transitivity windows_VM_res should be
> placed on node 1)
What you intend to say is that the *master role* of the storage process
prefers node 1. Without that, it's meaningless, since storage is a
clone that runs on all nodes.
If you add master role to the storage location preference, and to the
VM colocation, you'll get what you want. Otherwise you're just saying
that the VM has to run where *any* instance (master or slave) of the
storage resource is running.
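As a sketch, using your resource and node names (double-check the exact
syntax against your pcs version), the master-aware constraints could
look like:

    pcs constraint location clone_ProcDRBD_SigmaVMs rule role=master \
        score=INFINITY \#uname eq sgw-01
    pcs constraint colocation add windows_VM_res with master \
        clone_ProcDRBD_SigmaVMs INFINITY

The first says that only the master instance prefers sgw-01; the second
ties the VM to whichever node currently holds the master role.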
BTW, a positive preference of INFINITY means the resource will run
there whenever possible, but if the node is not available, the resource
can run elsewhere (which is likely what you want, but it's useful to
contrast that with the opposite: a -INFINITY preference for the other
node would mean it can never run on that node, even if no other node is
available).
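For example, to contrast the two forms with your node names (don't
combine them in a two-node cluster, or the VM would have nowhere to
fail over):

    pcs constraint location windows_VM_res prefers sgw-01=INFINITY
    pcs constraint location windows_VM_res avoids sgw-02=INFINITY

The first lets the VM fail over away from sgw-01 if that node goes
down; the second bans it from sgw-02 entirely.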
>
> (assume that ---> means "prefers"): storage ---> node1, windows_VM
> ---> storage, thus by transitivity windows_VM ---> node1
>
> pcs constraint location clone_ProcDRBD_SigmaVMs prefers sgw-01
>
> pcs constraint colocation add windows_VM_res with
> StorageDRBD_SigmaVMs INFINITY
>
> pcs constraint order start StorageDRBD_SigmaVMs then start
> windows_VM_res
>
> Sincerely
--
Ken Gaillot <kgaillot at redhat.com>