[Pacemaker] How to live migrate the kvm vm

邱志刚 qiuzhigang at fronware.com
Mon Dec 12 05:22:51 EST 2011


Hi all,

I have a 2-node Pacemaker cluster, and I want to live-migrate a KVM VM with the
"migrate" command. But I found the VM isn't migrated:
it is actually shut down and then started on the other node. I checked the log
and found the VM is stopped, not migrated.

How can I live-migrate the VM? The configuration:

crm(live)configure# show
node h10_145
node h10_151
primitive test1 ocf:heartbeat:VirtualDomain \
	params config="/etc/libvirt/qemu/test1.xml" hypervisor="qemu:///system" \
	meta allow-migrate="ture" priority="100" target-role="Started" is-managed="true" \
	op start interval="0" timeout="120s" \
	op stop interval="0" timeout="120s" \
	op monitor interval="10s" timeout="30s" depth="0" \
	op migrate_from interval="0" timeout="120s" \
	op migrate_to interval="0" timeout="120"
primitive test2 ocf:heartbeat:VirtualDomain \
	params config="/etc/libvirt/qemu/test2.xml" hypervisor="qemu:///system" \
	meta allow-migrate="ture" priority="100" target-role="Started" is-managed="true" \
	op start interval="0" timeout="120s" \
	op stop interval="0" timeout="120s" \
	op monitor interval="20s" timeout="30s" depth="0" \
	op migrate_from interval="0" timeout="120s" \
	op migrate_to interval="0" timeout="120s"
property $id="cib-bootstrap-options" \
	dc-version="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe" \
	cluster-infrastructure="openais" \
	expected-quorum-votes="2" \
	no-quorum-policy="ignore" \
	last-lrm-refresh="1323683481" \
	symmetric-cluster="true" \
	cluster-recheck-interval="1m" \
	start-failure-is-fatal="false" \
	stonith-enabled="false"
rsc_defaults $id="rsc-options" \
	resource-stickiness="1000"
rsc_defaults $id="rsc_defaults-options" \
	multiple-active="stop_start"
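One thing worth double-checking in the configuration above: both primitives set
allow-migrate="ture", which is not a value Pacemaker recognizes; the
VirtualDomain agent only performs live migration when allow-migrate is exactly
"true", and any other value falls back to stop/start. A hedged sketch of how
that could be corrected and a migration requested with the crm shell (the node
name h10_151 is taken from the node list above; commands not verified on this
cluster):

```shell
# Inspect the current resource definition to confirm the misspelling.
crm configure show test2

# Set the meta attribute to the literal string "true"
# (anything else, including "ture", disables live migration).
crm resource meta test2 set allow-migrate true

# Then ask Pacemaker to move the resource to the other node;
# with allow-migrate=true this should trigger migrate_to/migrate_from
# instead of stop/start.
crm resource migrate test2 h10_151
```

This is a config fragment for the cluster shell, not a standalone script, so it has to be run on a live cluster node.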

The following log was produced when I executed the migrate command.

Dec 12 18:04:10 h10_145 lrmd: [5520]: info: cancel_op: operation monitor[15] on ocf::VirtualDomain::test2 for client 5523, its parameters: hypervisor=[qemu:///system] CRM_meta_depth=[0] config=[/etc/libvirt/qemu/test2.xml] depth=[0] crm_feature_set=[3.0.2] CRM_meta_name=[monitor] CRM_meta_timeout=[30000] CRM_meta_interval=[20000] cancelled
Dec 12 18:04:10 h10_145 crmd: [5523]: info: do_lrm_rsc_op: Performing key=7:41:0:2673a006-012b-44e3-9329-087245782771 op=test2_stop_0 )
Dec 12 18:04:10 h10_145 lrmd: [5520]: info: rsc:test2:16: stop
Dec 12 18:04:10 h10_145 crmd: [5523]: info: process_lrm_event: LRM operation test2_monitor_20000 (call=15, status=1, cib-update=0, confirmed=true) Cancelled
Dec 12 18:04:10 h10_145 cib: [5519]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-50.raw
Dec 12 18:04:11 h10_145 cib: [5519]: info: write_cib_contents: Wrote version 0.858.0 of the CIB to disk (digest: ba9e311049d3a3ff19ad12325cf329f5)
Dec 12 18:04:11 h10_145 VirtualDomain[8238]: INFO: Issuing graceful shutdown request for domain test2.
Dec 12 18:04:11 h10_145 cib: [5519]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xzk7wg (digest: /var/lib/heartbeat/crm/cib.oKKQ9P)
Dec 12 18:04:11 h10_145 lrmd: [5520]: info: RA output: (test2:stop:stdout) Domain test2 is being shutdown
Dec 12 18:04:28 h10_145 kernel: sw1: port 2(t2v1) entering disabled state
Dec 12 18:04:28 h10_145 kernel: device t2v1 left promiscuous mode
Dec 12 18:04:28 h10_145 kernel: sw1: port 2(t2v1) entering disabled state
Dec 12 18:04:28 h10_145 kernel: sw1: port 3(t2v2) entering disabled state
Dec 12 18:04:28 h10_145 kernel: device t2v2 left promiscuous mode
Dec 12 18:04:28 h10_145 kernel: sw1: port 3(t2v2) entering disabled state
Dec 12 18:04:29 h10_145 crmd: [5523]: info: process_lrm_event: LRM operation test2_stop_0 (call=16, rc=0, cib-update=31, confirmed=true) ok
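The log shows a plain stop ("Issuing graceful shutdown request for domain
test2") rather than any migrate_to operation, so Pacemaker never attempted a
migration. Independently of the cluster layer, libvirt-level live migration
between the two hosts can be tested directly with virsh; this is only a sketch,
assuming both nodes run libvirtd, resolve each other by the hostnames shown
above, and allow root ssh between them:

```shell
# On the node currently running the domain (h10_145 here),
# attempt a live migration to the peer node over ssh transport.
# If this fails, the problem is in libvirt/qemu rather than Pacemaker.
virsh -c qemu:///system migrate --live test2 qemu+ssh://h10_151/system

# Verify where the domain is now running.
virsh -c qemu+ssh://h10_151/system list
```

Note that while testing this way, the resource should be unmanaged (or the cluster in maintenance mode), otherwise Pacemaker will react to the domain disappearing from the source node.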


Best Regards,
Jackie
