<div dir="ltr">If you don't set your VM to autostart at boot time, you don't need to put libvirtd in the cluster. Maybe that isn't the problem here, but why put OS services in the cluster at all, for example crond... :)<br></div><div class="gmail_extra">
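For example, to make sure libvirt itself never autostarts the guest, so that only the cluster decides where it runs (a sketch; the domain name vm1 is hypothetical):<br>
<pre># disable libvirt's own autostart for the guest (domain name is hypothetical)
virsh autostart --disable vm1
# verify: the "Autostart:" field in the domain info should read "disable"
virsh dominfo vm1</pre>
<br>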
<br><br><div class="gmail_quote">2013/12/19 Bob Haxo <span dir="ltr"><<a href="mailto:bhaxo@sgi.com" target="_blank">bhaxo@sgi.com</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<u></u>
<div>
Hello,<br>
<br>
Earlier emails related to this topic:<br>
[pacemaker] chicken-egg-problem with libvirtd and a VM within cluster<br>
<font color="#000000">[pacemaker] VirtualDomain problem after reboot of one node</font><br>
<br>
<br>
My configuration:<br>
<br>
RHEL6.5/CMAN/gfs2/Pacemaker/crmsh<br>
<br>
pacemaker-libs-1.1.10-14.el6_5.1.x86_64<br>
pacemaker-cli-1.1.10-14.el6_5.1.x86_64<br>
pacemaker-1.1.10-14.el6_5.1.x86_64<br>
pacemaker-cluster-libs-1.1.10-14.el6_5.1.x86_64<br>
<br>
Two-node HA VM cluster using a real shared drive, not DRBD.<br>
<br>
Resources (relevant to this discussion):<br>
primitive p_fs_images ocf:heartbeat:Filesystem \<br>
primitive p_libvirtd lsb:libvirtd \<br>
primitive virt ocf:heartbeat:VirtualDomain \<br>
<br>
services chkconfig on: cman, clvmd, pacemaker<br>
services chkconfig off: corosync, gfs2, libvirtd<br>
<br>
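For reference, a setup like this usually also carries ordering and colocation constraints tying the stack together. They are not shown in this message, but in crmsh they would look roughly like this (constraint IDs are invented):<br>
<pre># sketch only: typical constraints for this stack (IDs are made up)
order o_fs_before_libvirtd inf: p_fs_images p_libvirtd
order o_libvirtd_before_virt inf: p_libvirtd virt
colocation c_virt_with_libvirtd inf: virt p_libvirtd</pre>
<br>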
Observation:<br>
<br>
Rebooting the NON-host system results in a restart of the VM that is merrily running on the host system.<br>
<br>
Apparent cause:<br>
<br>
Upon startup, Pacemaker apparently checks the status of configured resources. However, the status request for the virt (ocf:heartbeat:VirtualDomain) resource fails with:<br>
<br>
<pre>Dec 18 12:19:30 [4147] mici-admin2 lrmd: warning: child_timeout_callback: virt_monitor_0 process (PID 4158) timed out
Dec 18 12:19:30 [4147] mici-admin2 lrmd: warning: operation_finished: virt_monitor_0:4158 - timed out after 200000ms
Dec 18 12:19:30 [4147] mici-admin2 lrmd: notice: operation_finished: virt_monitor_0:4158:stderr [ error: Failed to reconnect to the hypervisor ]
Dec 18 12:19:30 [4147] mici-admin2 lrmd: notice: operation_finished: virt_monitor_0:4158:stderr [ error: no valid connection ]
Dec 18 12:19:30 [4147] mici-admin2 lrmd: notice: operation_finished: virt_monitor_0:4158:stderr [ error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory ]
</pre>
This failure then snowballs into an "orphan" situation in which the running VM is restarted.<br>
<br>
There was a suggestion to chkconfig libvirtd on (and presumably delete the p_libvirtd resource) so that /var/run/libvirt/libvirt-sock has already been created by the libvirtd init script by the time Pacemaker probes the resources. With libvirtd started by the system, there is no unneeded reboot of the VM.<br>
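Concretely, that workaround amounts to something like the following on each node (a sketch, using the resource name from the configuration above):<br>
<pre># let the init system own libvirtd on both nodes
chkconfig libvirtd on
service libvirtd start
# and take it out of cluster control (crmsh)
crm configure delete p_libvirtd</pre>
<br>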
<br>
However, it may be that removing libvirtd from Pacemaker control leaves the VM vdisk filesystem susceptible to corruption during a reboot-induced failover.<br>
<br>
Question:<br>
<br>
Is there an accepted Pacemaker configuration such that the unneeded restart of the VM does not occur when the non-host system reboots?<br>
<br>
Regards,<br>
Bob Haxo<br>
<br>
<br>
</div>
<br>_______________________________________________<br>
Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
<br></blockquote></div><br><br clear="all"><br>-- <br>this is my life and I live it as long as God wills
</div>