[ClusterLabs] stonith in dual HMC environment

Alexander Markov proforg at tic-tac.ru
Tue Mar 28 13:20:12 UTC 2017


Hello, Dejan,

> Why? I don't have a test system right now, but for instance this
> should work:
> 
> $ stonith -t ibmhmc ipaddr=10.1.2.9 -lS
> $ stonith -t ibmhmc ipaddr=10.1.2.9 -T reset {nodename}

Ah, I see. Everything (including stonith methods, fencing, and failover) 
works just fine under normal circumstances. Sorry if I wasn't clear 
about that. The problem occurs only when an entire datacenter (i.e. one 
IBM machine and one HMC) is lost due to a power outage.

For example:
test01:~ # stonith -t ibmhmc ipaddr=10.1.2.8 -lS | wc -l
info: ibmhmc device OK.
39
test01:~ # stonith -t ibmhmc ipaddr=10.1.2.9 -lS | wc -l
info: ibmhmc device OK.
39

As I said, each stonith device can see and manage all the cluster nodes.

> If so, then your configuration does not appear to be correct. If
> both are capable of managing all nodes then you should tell
> pacemaker about it.

Thanks for the hint. But if the stonith device returns a node list, 
isn't it obvious to the cluster that it can manage those nodes? Could 
you please be more specific about what you're referring to? I have now 
changed the configuration to two fencing levels (one per HMC), along 
the lines of the sketch below, but I still don't think I get the idea 
here.
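
To make that concrete, here is roughly what I mean, in crm shell 
syntax. The resource and node names (st-hmc1, st-hmc2, lpar01, lpar02) 
are illustrative placeholders, not my real configuration:

primitive st-hmc1 stonith:ibmhmc \
        params ipaddr=10.1.2.8
primitive st-hmc2 stonith:ibmhmc \
        params ipaddr=10.1.2.9
fencing_topology \
        lpar01: st-hmc1 st-hmc2 \
        lpar02: st-hmc1 st-hmc2

The idea being that fencing each node is first attempted through one 
HMC and, if that fails, through the other.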

> Survived node, running stonith resource for dead node tries to
> contact ipmi device (which is also dead). How does cluster understand
> that lost node is really dead and it's not just a network issue?
> 
> It cannot.

How do people actually solve the problem of a two-node metro cluster, 
then? I know one option, stonith-enabled=false, but it doesn't seem 
right to me.
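
For the record, that would just mean turning fencing off cluster-wide, 
e.g.

test01:~ # crm configure property stonith-enabled=false

after which the cluster has no way at all to tell a dead node from an 
unreachable one, which is exactly the problem here.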

Thank you.

Regards,
Alexander Markov




