[Pacemaker] Pacemaker unnecessarily (?) restarts a vm on active node when other node brought out of standby - possible solution?

Ian cl-3627 at jusme.com
Thu May 15 13:41:35 EDT 2014

Doing some experiments and reading TFM, I found this:

5.2.2. Advisory Ordering
When the kind=Optional option is specified for an order constraint, the 
constraint is considered optional and only has an effect when both 
resources are stopping and/or starting. Any change in state of the first 
resource you specified has no effect on the second resource you specified.


This seems to tickle the right area. Adding "kind=Optional" to the gfs2 
-> drbd order constraint makes it all work as desired (start-up and 
shut-down are correctly ordered, and bringing the other node out of 
standby doesn't force a gratuitous restart of the gfs2 filesystem, or of 
the vms that rely on it, on the already-active node).

Is that the correct solution, I wonder? The term "optional" makes me 
nervous, but the description matches the desired behavior, in normal 
cases at least.

FYI, here's the "working" configuration:

# pcs config
Cluster Name: jusme
Corosync Nodes:

Pacemaker Nodes:
  sv06 sv07

  Master: vm_storage_core_dev-master
   Meta Attrs: master-max=2 master-node-max=1 clone-max=2 
clone-node-max=1 notify=true
   Group: vm_storage_core_dev
    Resource: res_drbd_vm1 (class=ocf provider=linbit type=drbd)
     Attributes: drbd_resource=vm1
     Operations: monitor interval=60s (res_drbd_vm1-monitor-interval-60s)
  Clone: vm_storage_core-clone
   Group: vm_storage_core
    Resource: res_fs_vm1 (class=ocf provider=heartbeat type=Filesystem)
     Attributes: device=/dev/drbd/by-res/vm1 directory=/data/vm1 
fstype=gfs2 options=noatime,nodiratime
     Operations: monitor interval=60s (res_fs_vm1-monitor-interval-60s)
  Master: nfs_server_dev-master
   Meta Attrs: master-max=1 master-node-max=1 clone-max=2 
clone-node-max=1 notify=true
   Group: nfs_server_dev
    Resource: res_drbd_live (class=ocf provider=linbit type=drbd)
     Attributes: drbd_resource=live
     Operations: monitor interval=60s 
  Resource: res_vm_nfs_server (class=ocf provider=heartbeat 
   Attributes: config=/etc/libvirt/qemu/vm09.xml
   Meta Attrs: resource-stickiness=100
   Operations: monitor interval=60s 

Stonith Devices:
Fencing Levels:

Location Constraints:
Ordering Constraints:
   promote vm_storage_core_dev-master then start vm_storage_core-clone 
   promote nfs_server_dev-master then start res_vm_nfs_server (Mandatory) 
   start vm_storage_core-clone then start res_vm_nfs_server (Mandatory) 
Colocation Constraints:
   vm_storage_core-clone with vm_storage_core_dev-master (INFINITY) 
(rsc-role:Started) (with-rsc-role:Master) 
   res_vm_nfs_server with nfs_server_dev-master (INFINITY) 
(rsc-role:Started) (with-rsc-role:Master) 
   res_vm_nfs_server with vm_storage_core-clone (INFINITY) 

