[ClusterLabs] stonith in dual HMC environment

Alexander Markov proforg at tic-tac.ru
Thu Mar 30 05:29:09 EDT 2017


Hello, Dejan,


> If the datacenters are completely separate, you might want to take a
> look at booth. With booth, you set up a separate cluster at each
> datacenter, and booth coordinates which one can host resources. Each
> datacenter must have its own self-sufficient cluster with its own
> fencing, but one site does not need to be able to fence the other.

This seems like overkill to me ;) If I choose not to fence, then 
stonith-enabled=false would be a much simpler solution.
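
That is, on a crmsh-based setup something as simple as

    crm configure property stonith-enabled=false

which just disables fencing cluster-wide.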

> Yes, it's just that the name escaped me at the time.  But I'm not
> sure which pacemaker version is used and if it supports the
> fencing topology.

Doesn't help in my case. The problem is that I simply have no way to 
fence the node at all, because it is already offline along with its 
entire datacenter environment.

I actually built a simple cluster and experimented with different 
stonith schemes and solutions. I tried a hostlist analogue of the 
ibmhmc stonith device and changed location constraints (a rough 
sketch of that configuration is at the end of this message) - nothing 
helps. Every time I end up with the following:

Last updated: Thu Mar 30 05:19:48 2017
Last change: Thu Mar 30 05:07:32 2017 by root via cibadmin on test01
Stack: classic openais (with plugin)
Current DC: test01 - partition WITHOUT quorum
Version: 1.1.12-f47ea56
2 Nodes configured, 2 expected votes
3 Resources configured


Node test02: UNCLEAN (offline)
Online: [ test01 ]

Full list of resources:

Resource Group: g_ip
     rsc_ip_TST_HDB00	(ocf::heartbeat:IPaddr2):	Started test02 (UNCLEAN)
st-hq	(stonith:ibmhmc):	Started test01
st-ch	(stonith:ibmhmc):	Started test02 (UNCLEAN)

and logs like

Mar 30 05:10:32 [5112] test01       crmd:   notice: 
too_many_st_failures:       No devices found in cluster to fence test02, 
giving up

and I fully agree with that: there is no device able to fence a node 
that is already offline. I just need to know how to resolve this 
without manual intervention. The ideal solution for me would be for 
the failover to happen anyway.
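
For reference, the kind of configuration I've been testing looks 
roughly like this (crm shell syntax; the HMC addresses are 
placeholders, and which device covers which node is of course 
site-specific):

primitive st-hq stonith:ibmhmc \
        params ipaddr=<hq-hmc-address> pcmk_host_list="test02" \
        op monitor interval=3600s
primitive st-ch stonith:ibmhmc \
        params ipaddr=<ch-hmc-address> pcmk_host_list="test01" \
        op monitor interval=3600s
location loc-st-hq st-hq -inf: test02
location loc-st-ch st-ch -inf: test01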

--
Regards,
Alexander



