<div dir="ltr">Hi Ken, <div><br></div><div>I have checked /var/log/cluster/corosync.log and there is no information about why the system hangs while stopping.</div><div><br></div><div>Can you be more specific about which logs to check?</div><div><br></div><div>Thanks a lot.</div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-02-02 21:10 GMT+01:00 Ken Gaillot <span dir="ltr">&lt;<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 02/02/2017 12:35 PM, Oscar Segarra wrote:<br>
&gt; Hi,<br>
&gt;<br>
&gt; I have a two node cluster... when I try to shutdown the physical host I<br>
&gt; get the following message in console: &quot;a stop job is running for<br>
&gt; pacemaker high availability cluster manager&quot; and never stops...<br>
<br>
</span>That would be a message from systemd. You&#39;ll need to check the pacemaker<br>
status and/or logs to see why pacemaker can&#39;t shut down.<br>
<br>
Without stonith enabled, pacemaker will be unable to recover if a<br>
resource fails to stop. That could lead to a hang.<br>
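A minimal sketch of where to start looking, assuming a systemd-based EL7 host with the default unit and log locations (paths and options may differ on your system):

```shell
# What systemd is actually waiting on during the hung shutdown
systemctl status pacemaker

# Pacemaker's own messages around the time shutdown was requested
journalctl -u pacemaker --since "-1 hour"

# Current cluster view: look for resources stuck in "Stopping"
# or with failed stop actions
crm_mon -1

# Search the detailed log for failed/timed-out stop operations
# (on some installs this is /var/log/pacemaker.log instead)
grep -iE "stop.*(error|timed out|failed)" /var/log/cluster/corosync.log
```

If a stop action timed out and stonith is disabled, pacemaker has no way to recover, which matches the hang you are seeing.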
<div><div class="h5"><br>
&gt; This is my configuration:<br>
&gt;<br>
&gt; [root@vdicnode01 ~]# pcs config<br>
&gt; Cluster Name: vdic-cluster<br>
&gt; Corosync Nodes:<br>
&gt;  vdicnode01-priv vdicnode02-priv<br>
&gt; Pacemaker Nodes:<br>
&gt;  vdicnode01-priv vdicnode02-priv<br>
&gt;<br>
&gt; Resources:<br>
&gt;  Resource: nfs-vdic-mgmt-vm-vip (class=ocf provider=heartbeat type=IPaddr)<br>
&gt;   Attributes: ip=192.168.100.200 cidr_netmask=24<br>
&gt;   Operations: start interval=0s timeout=20s<br>
&gt; (nfs-vdic-mgmt-vm-vip-start-interval-0s)<br>
&gt;               stop interval=0s timeout=20s<br>
&gt; (nfs-vdic-mgmt-vm-vip-stop-interval-0s)<br>
&gt;               monitor interval=10s<br>
&gt; (nfs-vdic-mgmt-vm-vip-monitor-interval-10s)<br>
&gt;  Clone: nfs_setup-clone<br>
&gt;   Resource: nfs_setup (class=ocf provider=heartbeat type=ganesha_nfsd)<br>
&gt;    Attributes: ha_vol_mnt=/var/run/gluster/shared_storage<br>
&gt;    Operations: start interval=0s timeout=5s (nfs_setup-start-interval-0s)<br>
&gt;                stop interval=0s timeout=5s (nfs_setup-stop-interval-0s)<br>
&gt;                monitor interval=0 timeout=5s (nfs_setup-monitor-interval-0)<br>
&gt;  Clone: nfs-mon-clone<br>
&gt;   Resource: nfs-mon (class=ocf provider=heartbeat type=ganesha_mon)<br>
&gt;    Operations: start interval=0s timeout=40s (nfs-mon-start-interval-0s)<br>
&gt;                stop interval=0s timeout=40s (nfs-mon-stop-interval-0s)<br>
&gt;                monitor interval=10s timeout=10s<br>
&gt; (nfs-mon-monitor-interval-10s)<br>
&gt;  Clone: nfs-grace-clone<br>
&gt;   Meta Attrs: notify=true<br>
&gt;   Resource: nfs-grace (class=ocf provider=heartbeat type=ganesha_grace)<br>
&gt;    Meta Attrs: notify=true<br>
&gt;    Operations: start interval=0s timeout=40s (nfs-grace-start-interval-0s)<br>
&gt;                stop interval=0s timeout=40s (nfs-grace-stop-interval-0s)<br>
&gt;                monitor interval=5s timeout=10s<br>
&gt; (nfs-grace-monitor-interval-5s)<br>
&gt;  Resource: vm-vdicone01 (class=ocf provider=heartbeat type=VirtualDomain)<br>
&gt;   Attributes: hypervisor=qemu:///system<br>
&gt; config=/mnt/nfs-vdic-mgmt-vm/vdicone01.xml<br>
&gt; migration_network_suffix=tcp:// migration_transport=ssh<br>
&gt;   Meta Attrs: allow-migrate=true target-role=Stopped<br>
&gt;   Utilization: cpu=1 hv_memory=512<br>
&gt;   Operations: start interval=0s timeout=90 (vm-vdicone01-start-interval-0s)<br>
&gt;               stop interval=0s timeout=90 (vm-vdicone01-stop-interval-0s)<br>
&gt;               monitor interval=20s role=Stopped<br>
&gt; (vm-vdicone01-monitor-interval-20s)<br>
&gt;               monitor interval=30s (vm-vdicone01-monitor-interval-30s)<br>
&gt;  Resource: vm-vdicsunstone01 (class=ocf provider=heartbeat<br>
&gt; type=VirtualDomain)<br>
&gt;   Attributes: hypervisor=qemu:///system<br>
&gt; config=/mnt/nfs-vdic-mgmt-vm/vdicsunstone01.xml<br>
&gt; migration_network_suffix=tcp:// migration_transport=ssh<br>
&gt;   Meta Attrs: allow-migrate=true target-role=Stopped<br>
&gt;   Utilization: cpu=1 hv_memory=1024<br>
&gt;   Operations: start interval=0s timeout=90<br>
&gt; (vm-vdicsunstone01-start-interval-0s)<br>
&gt;               stop interval=0s timeout=90<br>
&gt; (vm-vdicsunstone01-stop-interval-0s)<br>
&gt;               monitor interval=20s role=Stopped<br>
&gt; (vm-vdicsunstone01-monitor-interval-20s)<br>
&gt;               monitor interval=30s (vm-vdicsunstone01-monitor-interval-30s)<br>
&gt;  Resource: vm-vdicdb01 (class=ocf provider=heartbeat type=VirtualDomain)<br>
&gt;   Attributes: hypervisor=qemu:///system<br>
&gt; config=/mnt/nfs-vdic-mgmt-vm/vdicdb01.xml<br>
&gt; migration_network_suffix=tcp:// migration_transport=ssh<br>
&gt;   Meta Attrs: allow-migrate=true target-role=Stopped<br>
&gt;   Utilization: cpu=1 hv_memory=512<br>
&gt;   Operations: start interval=0s timeout=90 (vm-vdicdb01-start-interval-0s)<br>
&gt;               stop interval=0s timeout=90 (vm-vdicdb01-stop-interval-0s)<br>
&gt;               monitor interval=20s role=Stopped<br>
&gt; (vm-vdicdb01-monitor-interval-20s)<br>
&gt;               monitor interval=30s (vm-vdicdb01-monitor-interval-30s)<br>
&gt;  Clone: nfs-vdic-images-vip-clone<br>
&gt;   Resource: nfs-vdic-images-vip (class=ocf provider=heartbeat type=IPaddr)<br>
&gt;    Attributes: ip=192.168.100.201 cidr_netmask=24<br>
&gt;    Operations: start interval=0s timeout=20s<br>
&gt; (nfs-vdic-images-vip-start-interval-0s)<br>
&gt;                stop interval=0s timeout=20s<br>
&gt; (nfs-vdic-images-vip-stop-interval-0s)<br>
&gt;                monitor interval=10s<br>
&gt; (nfs-vdic-images-vip-monitor-interval-10s)<br>
&gt;  Resource: vm-vdicudsserver (class=ocf provider=heartbeat<br>
&gt; type=VirtualDomain)<br>
&gt;   Attributes: hypervisor=qemu:///system<br>
&gt; config=/mnt/nfs-vdic-mgmt-vm/vdicudsserver.xml<br>
&gt; migration_network_suffix=tcp:// migration_transport=ssh<br>
&gt;   Meta Attrs: allow-migrate=true target-role=Stopped<br>
&gt;   Utilization: cpu=1 hv_memory=1024<br>
&gt;   Operations: start interval=0s timeout=90<br>
&gt; (vm-vdicudsserver-start-interval-0s)<br>
&gt;               stop interval=0s timeout=90<br>
&gt; (vm-vdicudsserver-stop-interval-0s)<br>
&gt;               monitor interval=20s role=Stopped<br>
&gt; (vm-vdicudsserver-monitor-interval-20s)<br>
&gt;               monitor interval=30s (vm-vdicudsserver-monitor-interval-30s)<br>
&gt;  Resource: vm-vdicudstuneler (class=ocf provider=heartbeat<br>
&gt; type=VirtualDomain)<br>
&gt;   Attributes: hypervisor=qemu:///system<br>
&gt; config=/mnt/nfs-vdic-mgmt-vm/vdicudstuneler.xml<br>
&gt; migration_network_suffix=tcp:// migration_transport=ssh<br>
&gt;   Meta Attrs: allow-migrate=true target-role=Stopped<br>
&gt;   Utilization: cpu=1 hv_memory=1024<br>
&gt;   Operations: start interval=0s timeout=90<br>
&gt; (vm-vdicudstuneler-start-interval-0s)<br>
&gt;               stop interval=0s timeout=90<br>
&gt; (vm-vdicudstuneler-stop-interval-0s)<br>
&gt;               monitor interval=20s role=Stopped<br>
&gt; (vm-vdicudstuneler-monitor-interval-20s)<br>
&gt;               monitor interval=30s (vm-vdicudstuneler-monitor-interval-30s)<br>
&gt;<br>
&gt; Stonith Devices:<br>
&gt; Fencing Levels:<br>
&gt;<br>
&gt; Location Constraints:<br>
&gt;   Resource: nfs-grace-clone<br>
&gt;     Constraint: location-nfs-grace-clone<br>
&gt;       Rule: score=-INFINITY  (id:location-nfs-grace-clone-rule)<br>
&gt;         Expression: grace-active ne 1<br>
&gt;  (id:location-nfs-grace-clone-rule-expr)<br>
&gt;   Resource: nfs-vdic-images-vip-clone<br>
&gt;     Constraint: location-nfs-vdic-images-vip<br>
&gt;       Rule: score=-INFINITY  (id:location-nfs-vdic-images-vip-rule)<br>
&gt;         Expression: ganesha-active ne 1<br>
&gt;  (id:location-nfs-vdic-images-vip-rule-expr)<br>
&gt;   Resource: nfs-vdic-mgmt-vm-vip<br>
&gt;     Constraint: location-nfs-vdic-mgmt-vm-vip<br>
&gt;       Rule: score=-INFINITY  (id:location-nfs-vdic-mgmt-vm-vip-rule)<br>
&gt;         Expression: ganesha-active ne 1<br>
&gt;  (id:location-nfs-vdic-mgmt-vm-vip-rule-expr)<br>
&gt; Ordering Constraints:<br>
&gt; Colocation Constraints:<br>
&gt;   nfs-vdic-mgmt-vm-vip with nfs-vdic-images-vip-clone (score:-1)<br>
&gt; (id:colocation-nfs-vdic-mgmt-vm-vip-nfs-vdic-images-vip-INFINITY)<br>
&gt;   vm-vdicone01 with vm-vdicdb01 (score:-10)<br>
&gt; (id:colocation-vm-vdicone01-vm-vdicdb01-INFINITY)<br>
&gt;   vm-vdicsunstone01 with vm-vdicone01 (score:-10)<br>
&gt; (id:colocation-vm-vdicsunstone01-vm-vdicone01-INFINITY)<br>
&gt;   vm-vdicsunstone01 with vm-vdicdb01 (score:-10)<br>
&gt; (id:colocation-vm-vdicsunstone01-vm-vdicdb01-INFINITY)<br>
&gt; Ticket Constraints:<br>
&gt;<br>
&gt; Alerts:<br>
&gt;  No alerts defined<br>
&gt;<br>
&gt; Resources Defaults:<br>
&gt;  No defaults set<br>
&gt; Operations Defaults:<br>
&gt;  No defaults set<br>
&gt;<br>
&gt; Cluster Properties:<br>
&gt;  cluster-infrastructure: corosync<br>
&gt;  cluster-name: vdic-cluster<br>
&gt;  dc-version: 1.1.15-11.el7_3.2-e174ec8<br>
&gt;  have-watchdog: false<br>
&gt;  last-lrm-refresh: 1485628578<br>
&gt;  start-failure-is-fatal: false<br>
&gt;  stonith-enabled: false<br>
&gt; Node Attributes:<br>
&gt;  vdicnode01-priv: grace-active=1<br>
&gt;  vdicnode02-priv: grace-active=1<br>
&gt;<br>
&gt; Quorum:<br>
&gt;   Options:<br>
&gt; [root@vdicnode01 ~]#<br>
&gt;<br>
&gt; Any help will be welcome!<br>
&gt;<br>
&gt; Thanks a lot.<br>
<br>
</div></div>_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote></div><br></div>