[ClusterLabs] Antw: Re: Disabled resource is hard logging

Oscar Segarra oscar.segarra at gmail.com
Tue Feb 21 14:08:34 CET 2017


Hi,

After applying the last changes, the huge amount of errors related to the
vdicone01 VM does not appear any more.

I apologize if this is perhaps a simple question, but can you explain the
difference between the following commands:

pcs resource op remove vm-vdicone01 monitor role=Stopped
pcs resource op remove vm-vdicone01 stop interval=0s timeout=90
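
If I am reading the pcs syntax right, the first one removes the role=Stopped
monitor operation and the second one removes the explicit stop operation,
i.e. these two entries from the "pcs resource show vm-vdicone01" output
quoted further down:

  monitor interval=20s role=Stopped (vm-vdicone01-monitor-interval-20s)
  stop interval=0s timeout=90 (vm-vdicone01-stop-interval-0s)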

After executing both commands I have noticed that sometimes (not always)
virt-manager shows vdicone01 started on hypervisor1 and stopped on
hypervisor2. I can delete it from hypervisor2 (without deleting the storage),
but it appears again. Could this behaviour be caused by those commands?
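
For what it's worth, one way to double-check this outside virt-manager (a
sketch, assuming virsh is available on both hypervisors and using the same
connection URI as in the resource config) would be:

  # list all domains on each hypervisor, including defined-but-shut-off ones
  virsh -c qemu:///system list --all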

Thanks in advance!

2017-02-17 8:33 GMT+01:00 Ulrich Windl <Ulrich.Windl at rz.uni-regensburg.de>:

> >>> Oscar Segarra <oscar.segarra at gmail.com> wrote on 16.02.2017 at 13:55 in
> message
> <CAJq8taHh1iDd62b-ApKVVZrzerh5cUoNNHGSJ4Z_C-C+waUM_w at mail.gmail.com>:
> > Hi Klaus,
> >
> > Thanks a lot, I will try to delete the stop monitor.
> >
> > Nevertheless, I have 6 domains configured exactly the same... Is there any
> > reason why just this domain has this behaviour?
>
> Some years ago I was playing with NPIV, and it worked perfectly for one and
> for several VMs. However, when multiple VMs were started or stopped at the
> same time (and thus NPIV was added/removed), I had "interesting" failures due
> to concurrency, even a kernel lockup (which has since been fixed). So most
> likely "something is not correct".
> I know it doesn't help you the way you would like, but that's how life is.
>
> Regards,
> Ulrich
>
> >
> > Thanks a lot.
> >
> > 2017-02-16 11:12 GMT+01:00 Klaus Wenninger <kwenning at redhat.com>:
> >
> >> On 02/16/2017 11:02 AM, Oscar Segarra wrote:
> >> > Hi Kaluss
> >> >
> >> > Which is your proposal to fix this behavior?
> >>
> >> First you can try to remove the monitor op for role=Stopped.
> >> The startup probing will probably still fail, but the behaviour
> >> in that case is different.
> >> Startup probing can be disabled globally via the cluster property
> >> enable-startup-probes, which defaults to true.
> >> But be aware that the cluster then wouldn't be able to react
> >> properly if services are already up when Pacemaker starts.
> >> It should be possible to disable probing on a per-resource
> >> or per-node basis as well, IIRC, but I can't tell you off the top
> >> of my head how that worked - there was a discussion about it
> >> on the list a few weeks ago.
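> >>
> >> A minimal sketch of the corresponding pcs commands (resource name taken
> >> from the config quoted below; please double-check the syntax against
> >> your pcs version):
> >>
> >>   # remove the monitor op that runs while the resource is in role=Stopped
> >>   pcs resource op remove vm-vdicone01 monitor role=Stopped
> >>
> >>   # or disable startup probing cluster-wide (with the caveat above)
> >>   pcs property set enable-startup-probes=false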
> >>
> >> Regards,
> >> Klaus
> >>
> >> >
> >> > Thanks a lot!
> >> >
> >> >
> >> > On 16 Feb. 2017 at 10:57 AM, "Klaus Wenninger" <kwenning at redhat.com> wrote:
> >> >
> >> >     On 02/16/2017 09:05 AM, Oscar Segarra wrote:
> >> >     > Hi,
> >> >     >
> >> >     > In my environment I have deployed 5 VirtualDomains as one can
> >> >     see below:
> >> >     > [root at vdicnode01 ~]# pcs status
> >> >     > Cluster name: vdic-cluster
> >> >     > Stack: corosync
> >> >     > Current DC: vdicnode01-priv (version 1.1.15-11.el7_3.2-e174ec8) -
> >> >     > partition with quorum
> >> >     > Last updated: Thu Feb 16 09:02:53 2017          Last change: Thu Feb
> >> >     > 16 08:20:53 2017 by root via crm_attribute on vdicnode02-priv
> >> >     >
> >> >     > 2 nodes and 14 resources configured: 5 resources DISABLED and 0
> >> >     > BLOCKED from being started due to failures
> >> >     >
> >> >     > Online: [ vdicnode01-priv vdicnode02-priv ]
> >> >     >
> >> >     > Full list of resources:
> >> >     >
> >> >     >  nfs-vdic-mgmt-vm-vip   (ocf::heartbeat:IPaddr):        Started
> >> >     > vdicnode01-priv
> >> >     >  Clone Set: nfs_setup-clone [nfs_setup]
> >> >     >      Started: [ vdicnode01-priv vdicnode02-priv ]
> >> >     >  Clone Set: nfs-mon-clone [nfs-mon]
> >> >     >      Started: [ vdicnode01-priv vdicnode02-priv ]
> >> >     >  Clone Set: nfs-grace-clone [nfs-grace]
> >> >     >      Started: [ vdicnode01-priv vdicnode02-priv ]
> >> >     >  vm-vdicone01   (ocf::heartbeat:VirtualDomain): FAILED (disabled)[ vdicnode02-priv vdicnode01-priv ]
> >> >     >  vm-vdicsunstone01      (ocf::heartbeat:VirtualDomain): FAILED vdicnode01-priv (disabled)
> >> >     >  vm-vdicdb01    (ocf::heartbeat:VirtualDomain): FAILED (disabled)[ vdicnode02-priv vdicnode01-priv ]
> >> >     >  vm-vdicudsserver       (ocf::heartbeat:VirtualDomain): FAILED (disabled)[ vdicnode02-priv vdicnode01-priv ]
> >> >     >  vm-vdicudstuneler      (ocf::heartbeat:VirtualDomain): FAILED vdicnode01-priv (disabled)
> >> >     >  Clone Set: nfs-vdic-images-vip-clone [nfs-vdic-images-vip]
> >> >     >      Stopped: [ vdicnode01-priv vdicnode02-priv ]
> >> >     >
> >> >     > Failed Actions:
> >> >     > * vm-vdicone01_monitor_20000 on vdicnode02-priv 'not installed' (5):
> >> >     >     call=2322, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=21ms
> >> >     > * vm-vdicsunstone01_monitor_20000 on vdicnode02-priv 'not installed' (5):
> >> >     >     call=2310, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicsunstone01.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=37ms
> >> >     > * vm-vdicdb01_monitor_20000 on vdicnode02-priv 'not installed' (5):
> >> >     >     call=2320, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicdb01.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=35ms
> >> >     > * vm-vdicudsserver_monitor_20000 on vdicnode02-priv 'not installed' (5):
> >> >     >     call=2321, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicudsserver.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=42ms
> >> >     > * vm-vdicudstuneler_monitor_20000 on vdicnode01-priv 'not installed' (5):
> >> >     >     call=1987183, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicudstuneler.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 04:00:25 2017', queued=0ms, exec=30ms
> >> >     > * vm-vdicdb01_monitor_20000 on vdicnode01-priv 'not installed' (5):
> >> >     >     call=2550049, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicdb01.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 08:13:37 2017', queued=0ms, exec=44ms
> >> >     > * nfs-mon_monitor_10000 on vdicnode01-priv 'unknown error' (1):
> >> >     >     call=1984009, status=Timed Out, exitreason='none',
> >> >     >     last-rc-change='Thu Feb 16 04:24:30 2017', queued=0ms, exec=0ms
> >> >     > * vm-vdicsunstone01_monitor_20000 on vdicnode01-priv 'not installed' (5):
> >> >     >     call=2552050, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicsunstone01.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 08:14:07 2017', queued=0ms, exec=22ms
> >> >     > * vm-vdicone01_monitor_20000 on vdicnode01-priv 'not installed' (5):
> >> >     >     call=2620052, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 09:02:53 2017', queued=0ms, exec=45ms
> >> >     > * vm-vdicudsserver_monitor_20000 on vdicnode01-priv 'not installed' (5):
> >> >     >     call=2550052, status=complete, exitreason='Configuration file
> >> >     >     /mnt/nfs-vdic-mgmt-vm/vdicudsserver.xml does not exist or is not readable.',
> >> >     >     last-rc-change='Thu Feb 16 08:13:37 2017', queued=0ms, exec=48ms
> >> >     >
> >> >     >
> >> >     > All VirtualDomain resources are configured the same:
> >> >     >
> >> >     > [root at vdicnode01 cluster]# pcs resource show vm-vdicone01
> >> >     >  Resource: vm-vdicone01 (class=ocf provider=heartbeat type=VirtualDomain)
> >> >     >   Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicone01.xml
> >> >     >               migration_network_suffix=tcp:// migration_transport=ssh
> >> >     >   Meta Attrs: allow-migrate=true target-role=Stopped
> >> >     >   Utilization: cpu=1 hv_memory=512
> >> >     >   Operations: start interval=0s timeout=90 (vm-vdicone01-start-interval-0s)
> >> >     >               stop interval=0s timeout=90 (vm-vdicone01-stop-interval-0s)
> >> >     >               monitor interval=20s role=Stopped (vm-vdicone01-monitor-interval-20s)
> >> >     >               monitor interval=30s (vm-vdicone01-monitor-interval-30s)
> >> >     > [root at vdicnode01 cluster]# pcs resource show vm-vdicdb01
> >> >     >  Resource: vm-vdicdb01 (class=ocf provider=heartbeat type=VirtualDomain)
> >> >     >   Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicdb01.xml
> >> >     >               migration_network_suffix=tcp:// migration_transport=ssh
> >> >     >   Meta Attrs: allow-migrate=true target-role=Stopped
> >> >     >   Utilization: cpu=1 hv_memory=512
> >> >     >   Operations: start interval=0s timeout=90 (vm-vdicdb01-start-interval-0s)
> >> >     >               stop interval=0s timeout=90 (vm-vdicdb01-stop-interval-0s)
> >> >     >               monitor interval=20s role=Stopped (vm-vdicdb01-monitor-interval-20s)
> >> >     >               monitor interval=30s (vm-vdicdb01-monitor-interval-30s)
> >> >     >
> >> >     >
> >> >     >
> >> >     > Nevertheless, one of the virtual domains is logging heavily and
> >> >     > filling up my hard disk:
> >> >     >
> >> >     > VirtualDomain(vm-vdicone01)[116359]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116401]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > VirtualDomain(vm-vdicone01)[116423]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116444]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > VirtualDomain(vm-vdicone01)[116466]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116487]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > VirtualDomain(vm-vdicone01)[116509]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116530]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > VirtualDomain(vm-vdicone01)[116552]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116573]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > VirtualDomain(vm-vdicone01)[116595]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116616]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > VirtualDomain(vm-vdicone01)[116638]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116659]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > VirtualDomain(vm-vdicone01)[116681]:    2017/02/16_08:52:27 INFO: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable, resource considered stopped.
> >> >     > VirtualDomain(vm-vdicone01)[116702]:    2017/02/16_08:52:27 ERROR: Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not readable.
> >> >     > [root at vdicnode01 cluster]# pcs status
> >> >     >
> >> >     >
> >> >     > Note: the error itself is expected, as I have not mounted the NFS
> >> >     > share providing /mnt/nfs-vdic-mgmt-vm/vdicone01.xml yet.
> >> >
> >> >     Well, that is probably the explanation already:
> >> >     The resource should be stopped, but the config file is not available,
> >> >     and the resource agent needs the config file to verify that the domain
> >> >     is really stopped.
> >> >     So the probe is failing, and because you have a monitor op for
> >> >     role="Stopped", it keeps doing that over and over again.
> >> >
> >> >     >
> >> >     > Is there any explanation for this heavy logging?
> >> >     >
> >> >     > Thanks a lot!
> >> >     >
> >> >     >
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>