[ClusterLabs] "resource cleanup" - but error message does not disappear

Ken Gaillot kgaillot at redhat.com
Tue Jul 30 15:07:50 EDT 2019


On Tue, 2019-07-30 at 19:18 +0200, Lentes, Bernd wrote:
> Hi,
> 
> I always have "crm_mon -nfrALm 3" running in an SSH session on one of
> my cluster nodes, which gives a good, concise overview of the cluster
> status. I just had some problems live-migrating some VirtualDomains.
> These are the errors I see:
> Failed Resource Actions:
> * vm_genetrap_migrate_to_0 on ha-idg-1 'unknown error' (1): call=324,
> status=complete, exitreason='genetrap: live migration to ha-idg-2-
> private failed: 1',
>     last-rc-change='Tue Jul 30 18:38:51 2019', queued=0ms,
> exec=42895ms
> * vm_idcc_devel_migrate_to_0 on ha-idg-1 'unknown error' (1):
> call=321, status=complete, exitreason='idcc_devel: live migration to
> ha-idg-2-private failed: 1',
>     last-rc-change='Tue Jul 30 18:38:51 2019', queued=0ms,
> exec=35885ms
> * vm_mausdb_migrate_to_0 on ha-idg-1 'unknown error' (1): call=312,
> status=complete, exitreason='mausdb: live migration to ha-idg-2-
> private failed: 1',
>     last-rc-change='Tue Jul 30 18:38:51 2019', queued=0ms,
> exec=37254ms
> * vm_geneious_migrate_to_0 on ha-idg-1 'unknown error' (1): call=318,
> status=complete, exitreason='geneious: live migration to ha-idg-2-
> private failed: 1',
>     last-rc-change='Tue Jul 30 18:38:51 2019', queued=1ms,
> exec=36175ms
> * vm_severin_migrate_to_0 on ha-idg-1 'unknown error' (1): call=333,
> status=complete, exitreason='severin: live migration to ha-idg-2-
> private failed: 1',
>     last-rc-change='Tue Jul 30 18:38:51 2019', queued=1ms,
> exec=36265ms
> * vm_sim_migrate_to_0 on ha-idg-1 'unknown error' (1): call=315,
> status=complete, exitreason='sim: live migration to ha-idg-2-private
> failed: 1',
>     last-rc-change='Tue Jul 30 18:38:51 2019', queued=1ms,
> exec=41875ms
> 
> What I usually do is invoke a "resource cleanup" to get rid of these
> messages. But this time it only worked for two of the messages; the
> errors for the other six remained!?
> 
> Any idea?
> 
> 
> Bernd

There was a regression in 1.1.20 and 2.0.0 (fixed in the subsequent
releases) where cleaning up multiple errors would miss some of them.
Any chance you're running one of those versions?
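
If so, you can confirm the installed version with something like this
(exact commands depend on your distribution; this is just a sketch):

    crm_mon --version
    rpm -q pacemaker

As a workaround, cleaning up each failed resource individually should
clear the remaining entries, e.g. for one of the VMs above:

    crm_resource --cleanup --resource vm_genetrap --node ha-idg-1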
-- 
Ken Gaillot <kgaillot at redhat.com>


