[Pacemaker] DRBD Recovery Policies

Menno Luiten mluiten at artifix.net
Fri Mar 12 23:05:31 UTC 2010


On 12-3-2010 23:32, Lars Ellenberg wrote:
>
> Of course it does.
>   ;)
>
> It has to.
> It would not survive a local disk failure on a Primary, otherwise.
>
> Though obviously this has performance impact,
> reading from remote is most often less performant than reading locally.
>
> (To be technically correct,
> it does not care about files,
> but only about blocks.)
>
>> I believe that this is handled by DRBD by fencing the Master/Slave
>> resource during resync using Pacemaker. See
>> http://www.drbd.org/users-guide/s-pacemaker-fencing.html. This would
>> prevent Node A from promoting/starting services with outdated data
>> (fence-peer), and it would be forced to delay takeover until the
>> resync has completed (after-resync-target).

So, if I understand correctly, the fencing setup described at the URL I 
linked from the documentation is actually optional in a Pacemaker 
configuration? I assumed DRBD didn't play these clever tricks, and that 
a resource without fencing would end up 'time-warp'-ed, as mentioned on 
the docs page. In that case, wouldn't it be a good idea to explain this 
behavior on that page?
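For anyone finding this in the archives: the fencing behavior discussed above is configured in DRBD itself, via the fence-peer and after-resync-target handlers. A minimal sketch, assuming the stock script paths shipped with drbd 8.3.x (resource name and paths may differ on your installation):

```
resource r0 {
  disk {
    # refuse to run Primary against an outdated peer
    fencing resource-only;
  }
  handlers {
    # place a constraint in the CIB when the peer is outdated
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    # remove that constraint once resync has finished
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```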

>
> For that to work as expected you should
>   * not have location preference constraints on the master role directly,
>     or give them a very low score. The recommendation is to place a
>     location preference, if needed, not on the DRBD Master role but on
>     some dependent service (a Filesystem, for example).
>   * use a recent version of the drbd ocf resource agent and the
>     crm-fence-peer scripts (best to simply use drbd 8.3.7 [or later, in
>     case someone pulls this from the archives in the future...])
>
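A crm shell sketch of the layout suggested above, i.e. the location preference on the Filesystem rather than on the Master role; the resource names, device, mount point, and score here are made up for illustration:

```
primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource="r0" \
    op monitor interval="29s" role="Master" \
    op monitor interval="31s" role="Slave"
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max="1" clone-max="2" notify="true"
primitive p_fs_r0 ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/mnt/data" fstype="ext3"
# filesystem must run where DRBD is Master, and only after promotion
colocation c_fs_on_master inf: p_fs_r0 ms_drbd_r0:Master
order o_fs_after_promote inf: ms_drbd_r0:promote p_fs_r0:start
# low-score preference on the dependent service, not on the master role
location l_prefer_alpha p_fs_r0 50: alpha
```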




More information about the Pacemaker mailing list