[ClusterLabs] A stop job is running for pacemaker high availability cluster manager

Oscar Segarra oscar.segarra at gmail.com
Thu Feb 2 21:06:15 UTC 2017


Hi Ken,

I have checked /var/log/cluster/corosync.log and there is no information
about why the system hangs while stopping...

Can you be more specific about which logs to check?

Thanks a lot.
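In case it helps narrow things down, this is roughly where I have been
looking so far (paths and unit names assume the stock CentOS 7 packaging,
so correct me if there are better places):

```shell
# Sketch: where to look for a hung pacemaker stop (CentOS/RHEL 7 defaults;
# log paths and unit names are assumptions and may differ elsewhere).
#
#   pcs status --full                        # resource states + failed actions
#   journalctl -u pacemaker -u corosync -b   # journal entries since boot
#   less /var/log/cluster/corosync.log       # corosync/pacemaker detail log
#
# A stop that times out usually leaves an "error: ... stop ..." line in the
# detail log; a filter like the one below surfaces it. The sample line here
# is illustrative, not real output from this cluster:
sample='Feb 02 21:05:10 vdicnode01 crmd: error: Result of stop operation for vm-vdicdb01: Timed Out'
echo "$sample" | grep -E 'error:.*stop'
```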

2017-02-02 21:10 GMT+01:00 Ken Gaillot <kgaillot at redhat.com>:

> On 02/02/2017 12:35 PM, Oscar Segarra wrote:
> > Hi,
> >
> > I have a two node cluster... when I try to shutdown the physical host I
> > get the following message in console: "a stop job is running for
> > pacemaker high availability cluster manager" and never stops...
>
> That would be a message from systemd. You'll need to check the pacemaker
> status and/or logs to see why pacemaker can't shut down.
>
> Without stonith enabled, pacemaker will be unable to recover if a
> resource fails to stop. That could lead to a hang.
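As a follow-up to the fencing point above: enabling stonith would look
roughly like the sketch below. The fence agent and every parameter value
shown are hypothetical placeholders (fence_ipmilan is just a common
example), not a tested configuration for this cluster:

```shell
# Rough sketch of enabling fencing; agent choice and all parameter values
# below are illustrative placeholders, shown as comments only:
#
#   pcs stonith create fence-vdicnode01 fence_ipmilan \
#       pcmk_host_list="vdicnode01-priv" ipaddr=10.0.0.1 \
#       login=admin passwd=secret
#   pcs property set stonith-enabled=true
```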
>
> > This is my configuration:
> >
> > [root@vdicnode01 ~]# pcs config
> > Cluster Name: vdic-cluster
> > Corosync Nodes:
> >  vdicnode01-priv vdicnode02-priv
> > Pacemaker Nodes:
> >  vdicnode01-priv vdicnode02-priv
> >
> > Resources:
> >  Resource: nfs-vdic-mgmt-vm-vip (class=ocf provider=heartbeat type=IPaddr)
> >   Attributes: ip=192.168.100.200 cidr_netmask=24
> >   Operations: start interval=0s timeout=20s
> > (nfs-vdic-mgmt-vm-vip-start-interval-0s)
> >               stop interval=0s timeout=20s
> > (nfs-vdic-mgmt-vm-vip-stop-interval-0s)
> >               monitor interval=10s
> > (nfs-vdic-mgmt-vm-vip-monitor-interval-10s)
> >  Clone: nfs_setup-clone
> >   Resource: nfs_setup (class=ocf provider=heartbeat type=ganesha_nfsd)
> >    Attributes: ha_vol_mnt=/var/run/gluster/shared_storage
> >    Operations: start interval=0s timeout=5s (nfs_setup-start-interval-0s)
> >                stop interval=0s timeout=5s (nfs_setup-stop-interval-0s)
> >                monitor interval=0 timeout=5s (nfs_setup-monitor-interval-0)
> >  Clone: nfs-mon-clone
> >   Resource: nfs-mon (class=ocf provider=heartbeat type=ganesha_mon)
> >    Operations: start interval=0s timeout=40s (nfs-mon-start-interval-0s)
> >                stop interval=0s timeout=40s (nfs-mon-stop-interval-0s)
> >                monitor interval=10s timeout=10s
> > (nfs-mon-monitor-interval-10s)
> >  Clone: nfs-grace-clone
> >   Meta Attrs: notify=true
> >   Resource: nfs-grace (class=ocf provider=heartbeat type=ganesha_grace)
> >    Meta Attrs: notify=true
> >    Operations: start interval=0s timeout=40s (nfs-grace-start-interval-0s)
> >                stop interval=0s timeout=40s (nfs-grace-stop-interval-0s)
> >                monitor interval=5s timeout=10s
> > (nfs-grace-monitor-interval-5s)
> >  Resource: vm-vdicone01 (class=ocf provider=heartbeat type=VirtualDomain)
> >   Attributes: hypervisor=qemu:///system
> > config=/mnt/nfs-vdic-mgmt-vm/vdicone01.xml
> > migration_network_suffix=tcp:// migration_transport=ssh
> >   Meta Attrs: allow-migrate=true target-role=Stopped
> >   Utilization: cpu=1 hv_memory=512
> >   Operations: start interval=0s timeout=90 (vm-vdicone01-start-interval-0s)
> >               stop interval=0s timeout=90 (vm-vdicone01-stop-interval-0s)
> >               monitor interval=20s role=Stopped
> > (vm-vdicone01-monitor-interval-20s)
> >               monitor interval=30s (vm-vdicone01-monitor-interval-30s)
> >  Resource: vm-vdicsunstone01 (class=ocf provider=heartbeat
> > type=VirtualDomain)
> >   Attributes: hypervisor=qemu:///system
> > config=/mnt/nfs-vdic-mgmt-vm/vdicsunstone01.xml
> > migration_network_suffix=tcp:// migration_transport=ssh
> >   Meta Attrs: allow-migrate=true target-role=Stopped
> >   Utilization: cpu=1 hv_memory=1024
> >   Operations: start interval=0s timeout=90
> > (vm-vdicsunstone01-start-interval-0s)
> >               stop interval=0s timeout=90
> > (vm-vdicsunstone01-stop-interval-0s)
> >               monitor interval=20s role=Stopped
> > (vm-vdicsunstone01-monitor-interval-20s)
> >               monitor interval=30s (vm-vdicsunstone01-monitor-interval-30s)
> >  Resource: vm-vdicdb01 (class=ocf provider=heartbeat type=VirtualDomain)
> >   Attributes: hypervisor=qemu:///system
> > config=/mnt/nfs-vdic-mgmt-vm/vdicdb01.xml
> > migration_network_suffix=tcp:// migration_transport=ssh
> >   Meta Attrs: allow-migrate=true target-role=Stopped
> >   Utilization: cpu=1 hv_memory=512
> >   Operations: start interval=0s timeout=90 (vm-vdicdb01-start-interval-0s)
> >               stop interval=0s timeout=90 (vm-vdicdb01-stop-interval-0s)
> >               monitor interval=20s role=Stopped
> > (vm-vdicdb01-monitor-interval-20s)
> >               monitor interval=30s (vm-vdicdb01-monitor-interval-30s)
> >  Clone: nfs-vdic-images-vip-clone
> >    Resource: nfs-vdic-images-vip (class=ocf provider=heartbeat type=IPaddr)
> >    Attributes: ip=192.168.100.201 cidr_netmask=24
> >    Operations: start interval=0s timeout=20s
> > (nfs-vdic-images-vip-start-interval-0s)
> >                stop interval=0s timeout=20s
> > (nfs-vdic-images-vip-stop-interval-0s)
> >                monitor interval=10s
> > (nfs-vdic-images-vip-monitor-interval-10s)
> >  Resource: vm-vdicudsserver (class=ocf provider=heartbeat
> > type=VirtualDomain)
> >   Attributes: hypervisor=qemu:///system
> > config=/mnt/nfs-vdic-mgmt-vm/vdicudsserver.xml
> > migration_network_suffix=tcp:// migration_transport=ssh
> >   Meta Attrs: allow-migrate=true target-role=Stopped
> >   Utilization: cpu=1 hv_memory=1024
> >   Operations: start interval=0s timeout=90
> > (vm-vdicudsserver-start-interval-0s)
> >               stop interval=0s timeout=90
> > (vm-vdicudsserver-stop-interval-0s)
> >               monitor interval=20s role=Stopped
> > (vm-vdicudsserver-monitor-interval-20s)
> >               monitor interval=30s (vm-vdicudsserver-monitor-interval-30s)
> >  Resource: vm-vdicudstuneler (class=ocf provider=heartbeat
> > type=VirtualDomain)
> >   Attributes: hypervisor=qemu:///system
> > config=/mnt/nfs-vdic-mgmt-vm/vdicudstuneler.xml
> > migration_network_suffix=tcp:// migration_transport=ssh
> >   Meta Attrs: allow-migrate=true target-role=Stopped
> >   Utilization: cpu=1 hv_memory=1024
> >   Operations: start interval=0s timeout=90
> > (vm-vdicudstuneler-start-interval-0s)
> >               stop interval=0s timeout=90
> > (vm-vdicudstuneler-stop-interval-0s)
> >               monitor interval=20s role=Stopped
> > (vm-vdicudstuneler-monitor-interval-20s)
> >               monitor interval=30s (vm-vdicudstuneler-monitor-interval-30s)
> >
> > Stonith Devices:
> > Fencing Levels:
> >
> > Location Constraints:
> >   Resource: nfs-grace-clone
> >     Constraint: location-nfs-grace-clone
> >       Rule: score=-INFINITY  (id:location-nfs-grace-clone-rule)
> >         Expression: grace-active ne 1
> >  (id:location-nfs-grace-clone-rule-expr)
> >   Resource: nfs-vdic-images-vip-clone
> >     Constraint: location-nfs-vdic-images-vip
> >       Rule: score=-INFINITY  (id:location-nfs-vdic-images-vip-rule)
> >         Expression: ganesha-active ne 1
> >  (id:location-nfs-vdic-images-vip-rule-expr)
> >   Resource: nfs-vdic-mgmt-vm-vip
> >     Constraint: location-nfs-vdic-mgmt-vm-vip
> >       Rule: score=-INFINITY  (id:location-nfs-vdic-mgmt-vm-vip-rule)
> >         Expression: ganesha-active ne 1
> >  (id:location-nfs-vdic-mgmt-vm-vip-rule-expr)
> > Ordering Constraints:
> > Colocation Constraints:
> >   nfs-vdic-mgmt-vm-vip with nfs-vdic-images-vip-clone (score:-1)
> > (id:colocation-nfs-vdic-mgmt-vm-vip-nfs-vdic-images-vip-INFINITY)
> >   vm-vdicone01 with vm-vdicdb01 (score:-10)
> > (id:colocation-vm-vdicone01-vm-vdicdb01-INFINITY)
> >   vm-vdicsunstone01 with vm-vdicone01 (score:-10)
> > (id:colocation-vm-vdicsunstone01-vm-vdicone01-INFINITY)
> >   vm-vdicsunstone01 with vm-vdicdb01 (score:-10)
> > (id:colocation-vm-vdicsunstone01-vm-vdicdb01-INFINITY)
> > Ticket Constraints:
> >
> > Alerts:
> >  No alerts defined
> >
> > Resources Defaults:
> >  No defaults set
> > Operations Defaults:
> >  No defaults set
> >
> > Cluster Properties:
> >  cluster-infrastructure: corosync
> >  cluster-name: vdic-cluster
> >  dc-version: 1.1.15-11.el7_3.2-e174ec8
> >  have-watchdog: false
> >  last-lrm-refresh: 1485628578
> >  start-failure-is-fatal: false
> >  stonith-enabled: false
> > Node Attributes:
> >  vdicnode01-priv: grace-active=1
> >  vdicnode02-priv: grace-active=1
> >
> > Quorum:
> >   Options:
> > [root@vdicnode01 ~]#
> >
> > Any help will be welcome!
> >
> > Thanks a lot.
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>