[Pacemaker] drbd / libvirt / Pacemaker Cluster?

Heiner Meier linuxforums at erwo.net
Tue Dec 2 05:26:12 EST 2014


Hello Emmanuel,

> export TERM=linux and resend your config

Sorry, here is the readable config:


node $id="1084777473" master \
        attributes standby="off" maintenance="off"
node $id="1084777474" slave \
        attributes maintenance="off" standby="off"
primitive libvirt upstart:libvirt-bin \
        op start timeout="120s" interval="0" \
        op stop timeout="120s" interval="0" \
        op monitor interval="30s" \
        meta target-role="Started"
primitive st-null stonith:null \
        params hostlist="master slave"
primitive vmdata ocf:linbit:drbd \
        params drbd_resource="vmdata" \
        op monitor interval="29s" role="Master" timeout="20" \
        op monitor interval="31s" role="Slave" timeout="20" \
        op start interval="0" timeout="240" \
        op stop interval="0" timeout="100"
primitive vmdata_fs ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/vmdata" fstype="ext4" \
        meta target-role="Started" \
        op monitor interval="20" timeout="40" \
        op start timeout="30" interval="0" \
        op stop timeout="30" interval="0"
ms drbd_master_slave vmdata \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
clone fencing st-null
location PrimaryNode-libvirt libvirt 200: master
location PrimaryNode-vmdata_fs vmdata_fs 200: master
location SecondaryNode-libvirt libvirt 10: slave
location SecondaryNode-vmdata_fs vmdata_fs 10: slave
location drbd-fence-by-handler-vmdata-drbd_master_slave drbd_master_slave \
        rule $id="drbd-fence-by-handler-vmdata-rule-drbd_master_slave" $role="Master" -inf: #uname ne master
colocation libvirt-with-fs inf: libvirt vmdata_fs
colocation services_colo inf: vmdata_fs drbd_master_slave:Master
order fs_after_drbd inf: drbd_master_slave:promote vmdata_fs:start libvirt:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.10-42f2063" \
        cluster-infrastructure="corosync" \
        stonith-enabled="true" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1416390260"
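
(Side note: I know stonith:null is only a dummy agent for testing, so it never really kills a node. For real fencing I would sketch something like the following, assuming IPMI-capable hardware and the external/ipmi plugin from cluster-glue; hostnames, addresses and credentials below are placeholders:)

primitive st-master stonith:external/ipmi \
        params hostname="master" ipaddr="192.0.2.10" userid="admin" passwd="secret" interface="lanplus"
primitive st-slave stonith:external/ipmi \
        params hostname="slave" ipaddr="192.0.2.11" userid="admin" passwd="secret" interface="lanplus"
# keep each fence device off the node it is supposed to fence
location st-master-not-on-master st-master -inf: master
location st-slave-not-on-slave st-slave -inf: slave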


I need a simple failover cluster. As long as the DRBD device and the
filesystem mount come up cleanly and libvirt is started after them,
I don't have any problems.

But from time to time the slave does not notice that the master is gone
when I pull the power on the active/master node.
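
(If I read the quorum logs below correctly, with only two nodes corosync
ends up in a non-quorate partition as soon as the peer disappears. A sketch
of the votequorum settings I would try in corosync.conf, assuming corosync 2.x:)

quorum {
        provider: corosync_votequorum
        # allow the surviving node of a 2-node cluster to stay quorate
        two_node: 1
        # two_node already implies wait_for_all; listed only for clarity
        wait_for_all: 1
}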

And also from time to time (when I test power loss, reboots, etc.)
I have to start drbd / libvirt manually. In all cases it can be
"repaired" by hand, but I need to automate this as much as possible.

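(To cut down on the manual restarts, a sketch of cluster-wide resource
defaults I intend to try; these are the standard resource-stickiness,
migration-threshold and failure-timeout meta attributes, and the values
are only guesses:)

rsc_defaults $id="rsc-options" \
        resource-stickiness="100" \
        migration-threshold="3" \
        failure-timeout="120s"
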
hm

> OK, I have configured it in Pacemaker / crm
>
> Since the config has stonith/fencing it has many problems:
> after a reboot the nodes are unclean and so on. I need an
> automatic hot standby...
>
> When I power off the master box, the slave resources don't come up;
> the slave still says the master is "online", but the machine
> is powered off...
>
> ---
>
> Logs that may be interesting:
> master corosync[1350]:   [QUORUM] This node is within the non-primary
> component and will NOT provide any services.
> master warning: do_state_transition: Only 1 of 2 cluster nodes are
> eligible to run resources - continue 0
> notice: pcmk_quorum_notification: Membership 900: quorum lost (1)
> notice: crm_update_peer_state: pcmk_quorum_notification: Node
> slave[1084777474] - state is now lost (was member)
> notice: stonith_device_register: Added 'st-null:0' to the device list (2
> active devices)
>



