[Pacemaker] Two slave nodes, neither will promote to Master

Lars Ellenberg lars.ellenberg at linbit.com
Tue Jul 3 19:55:09 UTC 2012


On Mon, Jun 25, 2012 at 04:48:50PM +0100, Regendoerp, Achim wrote:
> Hi,
> 
> I'm currently looking at two VMs which are supposed to mount a drive at
> a given directory, depending on which one is the master. That was
> decided above me, so no DRBD (which would have made things easier), but
> we're still using corosync/pacemaker to do the cluster work.
> 
> As it stands, both nodes are online and configured, but neither is
> promoting to Master. Lacking a DRBD resource, I tried using the
> Pacemaker Dummy agent. If that's not the correct RA, please enlighten
> me on this too.


As has already been stated: to simulate a stateful (master/slave)
resource, use the ocf:pacemaker:Stateful agent.
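For example (just a sketch, resource names are placeholders), a minimal
master/slave test setup with it could look like:

  primitive test_stateful ocf:pacemaker:Stateful \
          op monitor interval="15" role="Master" \
          op monitor interval="30" role="Slave"
  ms ms_test test_stateful \
          meta master-max="1" master-node-max="1" \
               clone-max="2" clone-node-max="1" notify="true"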

But... if I understand correctly, you are using a shared disk.

Why would you want that dummy resource at all?
Why not simply this:

> Below's the current config:
> 
> node NODE01 \
>         attributes standby="off"
> node NODE02 \
>         attributes standby="off"
> primitive clusterIP ocf:heartbeat:IPaddr2 \
>         params ip="10.64.96.31" nic="eth1:1" \
>         op monitor on-fail="restart" interval="5s"
> primitive clusterIParp ocf:heartbeat:SendArp \
>         params ip="10.64.96.31" nic="eth1:1"
> primitive fs_nfs ocf:heartbeat:Filesystem \
>         params device="/dev/vg_shared/lv_nfs_01" directory="/shared" fstype="ext4" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="240" on-fail="restart"

delete that:
- primitive ms_dummy ocf:pacemaker:Dummy \
-         op start interval="0" timeout="240" \
-         op stop interval="0" timeout="240" \
-         op monitor interval="15" role="Master" timeout="240" \
-         op monitor interval="30" role="Slave" on-fail="restart" timeout="240"

> primitive nfs_share ocf:heartbeat:nfsserver \
>         params nfs_ip="10.64.96.31" nfs_init_script="/etc/init.d/nfs" \
>                nfs_shared_infodir="/shared/nfs" nfs_notify_cmd="/sbin/rpc.statd" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="240" on-fail="restart"
> group Services clusterIP clusterIParp fs_nfs nfs_share \
>         meta target-role="Started" is-managed="true" multiple-active="stop_start"

and that:
- ms ms_nfs ms_dummy \
-         meta target-role="Master" master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"

and that:
- colocation services_on_master inf: Services ms_nfs:Master
- order fs_before_services inf: ms_nfs:promote Services:start
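
(If you'd rather do that from the crm shell than edit the CIB by hand,
something along these lines should do it; the object IDs are taken from
your config above, and exact behaviour may vary with your crmsh version:)

  # crm configure delete services_on_master fs_before_services ms_nfs ms_dummy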

> property $id="cib-bootstrap-options" \
>         dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         no-quorum-policy="ignore" \
>         stonith-enabled="false"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="200"

That's all you need for a shared disk cluster.

Well. Almost.
Of course you have to configure, enable, test and use stonith.
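
For two VMs that usually means a fence agent that talks to the
hypervisor. Purely as an illustration, assuming vSphere (the agent name,
address, credentials and VM names below are placeholders; use whatever
matches your platform and test it before you rely on it):

  primitive st_node01 stonith:fence_vmware_soap \
          params ipaddr="vcenter.example.com" login="fence_user" passwd="..." port="NODE01" \
          op monitor interval="60s"
  location l_st_node01 st_node01 -inf: NODE01

(plus the same for NODE02), and then flip stonith-enabled to "true" in
the property section above.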

-- 
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com



