[ClusterLabs] DRBD not failing over
Nickle, Richard
rnickle at holycross.edu
Wed Feb 26 07:36:46 EST 2020
I spent many, many hours tackling the two-node problem and I had exactly
the same symptoms (only able to get the resource to move if I moved it
manually) until I did the following:
* Switched to DRBD 9 (added the LINBIT repo, since DRBD 8 is the default
in the Ubuntu repos)
* Built a third, diskless quorum arbitration node.
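For reference, a three-node DRBD 9 resource file along those lines might
look roughly like the sketch below. Only the hostnames match my setup;
the disk paths, addresses, ports and node IDs are placeholders you would
adapt:

```
# /etc/drbd.d/r0.res -- sketch only; paths/addresses are assumptions
resource r0 {
    device    /dev/drbd0;
    meta-disk internal;

    on hatst1 {
        disk    /dev/vg0/r0;
        address 192.168.1.11:7789;
        node-id 0;
    }
    on hatst2 {
        disk    /dev/vg0/r0;
        address 192.168.1.12:7789;
        node-id 1;
    }
    on hatst4 {
        disk    none;          # diskless quorum arbitrator
        address 192.168.1.14:7789;
        node-id 2;
    }

    options {
        quorum majority;       # DRBD 9 quorum: needs 2 of 3 nodes
        on-no-quorum io-error;
    }

    connection-mesh {
        hosts hatst1 hatst2 hatst4;
    }
}
```

The point of the diskless third node is purely quorum: it holds no data,
but it lets the surviving data node keep (or take) Primary cleanly when
its peer goes away, which is exactly the failover case below.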
My DRBD status now looks like this:
hatst2:$ sudo drbdadm status
r0 role:Primary
  disk:UpToDate
  hatst1 role:Secondary
    peer-disk:UpToDate
  hatst4 role:Secondary
    peer-disk:Diskless
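One more thing worth checking, given your PS below: "pcs resource move"
leaves that cli-prefer location constraint in place permanently, and a
location constraint outweighs stickiness until you delete it. Something
like the following should show it and clear it (I'm taking the resource
id drbd-master from the constraint name you quoted):

```shell
# list location constraints with their ids; look for a leftover
# cli-prefer-* entry pinning the master to one node
pcs constraint location show --full

# remove the constraint created by the earlier 'pcs resource move'
pcs constraint remove cli-prefer-drbd-master
```

With that constraint gone, resource-stickiness=100 should again decide
whether the master fails back.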
On Wed, Feb 26, 2020 at 6:59 AM Jaap Winius <jwinius at umrk.nl> wrote:
>
> Hi folks,
>
> My 2-node test system has a DRBD resource that is configured as follows:
>
> ~# pcs resource defaults resource-stickiness=100 ; \
> pcs resource create drbd ocf:linbit:drbd drbd_resource=r0 \
> op monitor interval=60s ; \
> pcs resource master drbd master-max=1 master-node-max=1 \
> clone-max=2 clone-node-max=1 notify=true
>
> The resource-stickiness setting is there to prevent failbacks. I've got
> that to work with NFS and VIP resources, but not with DRBD. Moreover,
> when configured as shown above, the DRBD master does not even want to
> fail over when the node it started on is shut down.
>
> Any idea what I'm missing or doing wrong?
>
> Thanks,
>
> Jaap
>
> PS -- I can only get it to fail over if I first move the DRBD resource
> to the other node, which creates a "cli-prefer-drbd-master" location
> constraint for that node, but then it ignores the resource-stickiness
> setting and always fails back.
>
> PPS -- I'm using CentOS 7.7.1908, DRBD 9.10.0, Corosync 2.4.3,
> Pacemaker 1.1.20 and PCS 0.9.167.
>
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>
>