Hi Ken, thanks for your answer.

Before testing live migration I ran tests to see how Pacemaker manages the virtual machine shutdown. With the command "pcs cluster standby nodoX" there were no errors, but on rebooting or shutting down the node, the virtual machine, gfs2wa and iscsi resources failed and the node became UNCLEAN. After a lot of testing I modified my /usr/lib/systemd/system/corosync.service file and added these entries:

After=iscsid.service
After=remote-fs.target
After=libvirtd.service
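For reference, the same ordering can be kept as a drop-in override instead of editing the packaged unit file, so a corosync update does not overwrite it. This is just a sketch; the file name ordering.conf is arbitrary:

[root@nodo3 ~]# mkdir -p /etc/systemd/system/corosync.service.d
[root@nodo3 ~]# cat > /etc/systemd/system/corosync.service.d/ordering.conf <<'EOF'
[Unit]
# Start corosync only after iSCSI, remote filesystems and libvirtd are up,
# and (more importantly) stop corosync before they are stopped, so Pacemaker
# still has storage and libvirtd available while it shuts down resources.
After=iscsid.service
After=remote-fs.target
After=libvirtd.service
EOF
[root@nodo3 ~]# systemctl daemon-reload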
That solved the shutdown/reboot error, giving Pacemaker enough time to shut down the virtual machine, restart it on another node, and then let the node continue rebooting. But when I test the live migration, it still fails.

I added your modification to /usr/lib/systemd/system/pacemaker.service, but it did not work.

Searching about this error I found out that systemd now includes "systemd-machined.service", a service that monitors, starts and shuts down virtual machines through the machinectl command. I tried to disable it, but libvirt needs it to run a virtual machine.

[root@nodo3 system]# machinectl
MACHINE       CONTAINER SERVICE
qemu-centos2  vm        libvirt-qemu

1 machines listed.
[root@nodo3 system]#
[root@nodo3 system]# systemctl status systemd-machined.service
systemd-machined.service - Virtual Machine and Container Registration Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-machined.service; static)
   Active: active (running) since Fri 2015-03-27 16:13:20 ECT; 22min ago
     Docs: man:systemd-machined.service(8)
           http://www.freedesktop.org/wiki/Software/systemd/machined
 Main PID: 2982 (systemd-machine)
   CGroup: /system.slice/systemd-machined.service
           └─2982 /usr/lib/systemd/systemd-machined

Mar 27 16:13:20 nodo3.redwa.local systemd[1]: Starting Virtual Machine and Container Registration Service...
Mar 27 16:13:20 nodo3.redwa.local systemd[1]: Started Virtual Machine and Container Registration Service.
Mar 27 16:13:20 nodo3.redwa.local systemd-machined[2982]: New machine qemu-centos2.
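As far as I can tell, machined registers the guest as a transient scope unit under machine.slice, so systemd tracks (and, at shutdown, stops) the qemu process itself, independently of the Pacemaker resource. That registration can be inspected with, for example:

[root@nodo3 system]# systemctl status machine.slice
[root@nodo3 system]# machinectl status qemu-centos2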
I guess that service is the culprit, but I don't know how to deal with it.

Thanks a lot.

________________________________
From: rasalax@hotmail.com
To: users@clusterlabs.org
Subject: Error at testing live migration
Date: Fri, 27 Mar 2015 12:46:47 -0500
<div dir="ltr"><div>Hi everybody, </div><div><br></div><div>I have a pacemaker + corosync cluster that manages a virtual machine (kvm) the virtual machine drives are stored in a shared storage (gfs2 + lvm+ iscsi LUN). The resource agent is VirtualDomain. </div><div><br></div><div>When I test the live migration with a command 'pcs resource move vmcentos2 nodo2' or putting the node on standby, the migration works with no problem. </div><div><br></div><div>But when I want to test the live migration rebooting or shutting down the node that runs the virtual machine, migration fails. Is this a expected behaviour or a bug?</div><div><br></div><div>My cluster configuration is:</div><div><br></div><div>OS=Centos 7 </div><div>Pacemaker 1.1.10-32.el7_0.1</div><div>Corosync Cluster Engine, version '2.3.3'</div><div><br></div><div>[root@nodo2 ~]# pcs status</div><div>Cluster name: clusterwa</div><div>Last updated: Fri Mar 27 12:20:04 2015</div><div>Last change: Thu Mar 26 16:11:11 2015 via crm_resource on nodo2</div><div>Stack: corosync</div><div>Current DC: nodo2 (2) - partition with quorum</div><div>Version: 1.1.10-32.el7_0.1-368c726</div><div>5 Nodes configured</div><div>29 Resources configured</div><div><br></div><div>Online: [ nodo2 nodo3 nodo4 ]</div><div>Containers: [ centos1.7:vmcentos3 ]</div><div><br></div><div>Full list of resources:</div><div><br></div><div> wti_wa (stonith:fence_wti): Started nodo3</div><div> Clone Set: dlmwa-clone [dlmwa]</div><div> Started: [ nodo2 nodo3 nodo4 ]</div><div> Stopped: [ centos1.7 centosSC3 ]</div><div> Clone Set: clvmwa-clone [clvmwa]</div><div> Started: [ nodo2 nodo3 nodo4 ]</div><div> Stopped: [ centos1.7 centosSC3 ]</div><div> Clone Set: gfs2wa-clone [gfs2wa]</div><div> Started: [ nodo2 nodo3 nodo4 ]</div><div> Stopped: [ centos1.7 centosSC3 ]</div><div> vmcentos2 (ocf::heartbeat:VirtualDomain): Started nodo2</div><div><br></div><div> Clone Set: iscsiwa-clone [iscsiwa]</div><div> Started: [ nodo2 nodo3 nodo4 ]</div><div> Stopped: [ centos1.7 centosSC3 ]</div><div><br></div><div>PCSD Status:</div><div> nodo2: Online</div><div> nodo3: Online</div><div> nodo4: Online</div><div><br></div><div>Daemon Status:</div><div> corosync: active/enabled</div><div> pacemaker: active/enabled</div><div> pcsd: active/enabled</div><div><br></div><div>Many thanks. </div><div>Many thanks.</div> </div></div> </div></body>