[ClusterLabs] DRBD failover in Pacemaker

Digimer lists at alteeve.ca
Thu Sep 8 03:03:21 EDT 2016


> Thank you for the responses. I followed Digimer's instructions, along with some information I had read on the DRBD site, and configured fencing on the DRBD resource. I also configured STONITH using IPMI in Pacemaker. I set up Pacemaker first and verified that it kills the other node. 
> 
> After configuring DRBD fencing, though, I ran into a problem where failover stopped working. With fencing disabled in DRBD, when one node is taken offline Pacemaker kills it and everything fails over to the other node, as I would expect. With fencing enabled, however, the second node doesn't become master in DRBD until the first node completely finishes rebooting. This makes for a lot of downtime, and if one of the nodes had a hardware failure it would never fail over. I think it's something to do with the fencing scripts. 
> 
> I am looking for complete redundancy, including in the event of hardware failure. Is there a way I can prevent split-brain while still allowing DRBD to fail over to the other node? Right now I have only STONITH configured in Pacemaker and fencing turned OFF in DRBD. So far it works as I want it to, but sometimes when communication is lost between the two nodes the wrong one ends up getting killed, and when that happens it results in split-brain on recovery. I hope I described the situation well enough for someone to offer a little help. I'm currently experimenting with the delays before STONITH to see if I can figure something out.
> 
> Thank you,
> Devin

You need to solve the problem with fencing in DRBD. Leaving it off WILL
result in a split-brain eventually, full stop. With working fencing, you
will NOT get a split-brain, full stop.

With working fencing, nodes will block if fencing fails. So as an
example, if the IPMI fencing fails because the IPMI BMC died with the
host, then the surviving node(s) will hang. The logic is that it is
better to hang than to risk split-brain and corruption.

If fencing via IPMI works, then pacemaker should be told as much by
fence_ipmilan and recover as soon as the fence agent exits. If it
doesn't recover until the node returns, fencing is NOT configured
properly (or otherwise not working).
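A minimal sketch of per-node IPMI fence devices using pcs (host names, BMC addresses, and credentials below are all placeholders, not real values):

```shell
# One fence device per node, each pointing at the *other* node's BMC.
pcs stonith create fence_node1 fence_ipmilan \
    pcmk_host_list="node1" ipaddr="10.20.0.1" \
    login="admin" passwd="secret" lanplus="1" \
    op monitor interval=60s

# delay gives this node's fence request a head start, so that on a
# comms loss both nodes don't shoot each other at the same instant
# (this is the "delay before STONITH" idea mentioned above).
pcs stonith create fence_node2 fence_ipmilan \
    pcmk_host_list="node2" ipaddr="10.20.0.2" \
    login="admin" passwd="secret" lanplus="1" delay="15" \
    op monitor interval=60s
```

Putting a delay on only one of the two devices is a common way to make sure the "right" node survives a fence duel.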

If you want to make sure that the cluster will recover no matter what,
then you will need a backup fence method. We do this by using IPMI as
the primary fence method and a pair of switched PDUs as a backup. So
with this setup, if a node fails, first pacemaker will try to shoot the
peer using IPMI. If IPMI fails (say because the host lost all power),
pacemaker gives up and moves on to PDU fencing. In this case, both PDUs
are called to open the circuits feeding the lost node, thus ensuring it
is off.
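That ordering can be expressed with pcs fencing levels; this sketch assumes stonith devices named fence_node1_ipmi, fence_node1_pdu1, and fence_node1_pdu2 already exist (the names are illustrative):

```shell
# Level 1: try IPMI first.
pcs stonith level add 1 node1 fence_node1_ipmi
# Level 2: only attempted if level 1 fails; listing both PDUs means
# both must succeed, opening both power feeds to the lost node.
pcs stonith level add 2 node1 fence_node1_pdu1,fence_node1_pdu2
```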

If for some reason both methods fail, pacemaker goes back to IPMI and
tries that again, then on to the PDUs, and so on; it will loop until one
of the methods succeeds, leaving the cluster (intentionally) hung in the
meantime.

digimer

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?



