[Pacemaker] How to troubleshoot crm_mon not configured and /var/log/messages hard error

Andrew Beekhof andrew at beekhof.net
Sun Sep 25 21:26:39 EDT 2011


On Sat, Sep 24, 2011 at 2:41 AM, Charles Richard
<chachi.richard at gmail.com> wrote:
> Hi,
>
> I'm having an issue with pacemaker, mysql and drbd that I'm not sure how to
> resolve, and I'm hoping for ideas on how to go about it.
>
> When I run crm_mon, I get:
>
> [root@staging1 log]# crm_mon -1
> ============
> Last updated: Fri Sep 23 13:31:05 2011
> Stack: openais
> Current DC: staging1 - partition with quorum
> Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
> 2 Nodes configured, 2 expected votes
> 3 Resources configured.
> ============
>
> Node staging2: UNCLEAN (online)
> Online: [ staging1 ]
>
>
> Failed actions:
>     drbd_mysql_monitor_0 (node=staging1, call=16, rc=6, status=complete): not configured
>     mysqld_monitor_0 (node=staging2, call=5, rc=1, status=complete): unknown error
>     mysqld_stop_0 (node=staging2, call=7, rc=4, status=complete): insufficient privileges
>
>
> In /var/log/messages, I have the following:
>
> Sep 23 13:33:05 staging1 pengine: [14140]: ERROR: unpack_rsc_op: Hard error - drbd_mysql_monitor_0 failed with rc=6: Preventing drbd_mysql from re-starting anywhere in the cluster
>
> How can I further troubleshoot these errors?

The system logs, or drbd's own log if there is one.
Also try reading the drbd resource agent script to see where it might
return 6 (OCF_ERR_CONFIGURED - the "not configured" code crm_mon is reporting).
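
For example (a rough sketch; the agent path assumes a typical linbit
package, so adjust to your install):

  # where can the agent return "not configured" (rc=6)?
  grep -n OCF_ERR_CONFIGURED /usr/lib/ocf/resource.d/linbit/drbd

  # does drbd itself know about the resource pacemaker references?
  drbdadm dump mysqld

  # sanity-check the cluster configuration as a whole
  crm_verify -L -V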

>
> Also, for the secondary node, should there be a mysql/mysqld script in my
> /etc/init.d?

If you want mysql to run there... yes.
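
A quick sketch of what to check (using mysqld to match your lsb:mysqld
primitive):

  # on each node: the lsb:mysqld primitive needs this script present
  ls -l /etc/init.d/mysqld

  # and it must be reasonably LSB-compliant: status should return 0
  # when running and non-zero when stopped
  /etc/init.d/mysqld status; echo $?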

> Articles suggest MySQL data and config should only be on the
> primary node

You still need mysql installed, regardless of where the config and data lives.
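
On RHEL/CentOS that's something like (package names vary by distro):

  yum install mysql-server   # provides /etc/init.d/mysqld
  chkconfig mysqld off       # pacemaker, not init, decides where it runs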

> so I guess I'm not sure I understand what the secondary node
> should have.

Well, you'd want it available from anywhere you plan to run mysql -
presumably on some approximation of shared storage, which is why you're
using drbd, right?
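
You can check which node currently holds the device, for instance:

  drbdadm role mysqld   # e.g. "Primary/Secondary"
  cat /proc/drbd        # connection state and sync progress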

>
> My resource file:
>
> resource mysqld {
>   protocol C;
>
>   startup { wfc-timeout 0; degr-wfc-timeout 120; }
>
>   disk { on-io-error detach; }
>
>   on staging1 {
>     device    /dev/drbd0;
>     disk      /dev/vg_staging1/lv_data;
>     meta-disk internal;
>     address   10.10.20.1:7788;
>   }
>
>   on staging2 {
>     device    /dev/drbd0;
>     disk      /dev/vg_staging2/lv_data;
>     meta-disk internal;
>     address   10.10.20.2:7788;
>   }
> }
>
> My crm configuration:
>
> crm(live)configure# show
> node staging1
> node staging2
> primitive drbd_mysql ocf:linbit:drbd \
>   params drbd_resource="mysqld" \
>   op monitor interval="15s"
> primitive fs_mysql ocf:heartbeat:Filesystem \
>   params device="/dev/drbd/by-res/mysql" \
>   directory="/opt/data/mysql/data/mysql" fstype="ext4"
> primitive ip_mysql ocf:heartbeat:IPaddr2 \
>   params ip="10.10.10.31" nic="eth0"
> primitive ipmi stonith:fence_ipmilan \
>   op monitor interval="120s" \
>   params passwd="xxxxxxxx" \
>   meta target-role="Stopped"
> primitive mysqld lsb:mysqld
> group mysql fs_mysql ip_mysql mysqld
> ms ms_drbd_mysql drbd_mysql \
>   meta master-max="1" master-node-max="1" clone-max="2" \
>   clone-node-max="1" notify="true"
> property $id="cib-bootstrap-options" \
>   dc-version="1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe" \
>   cluster-infrastructure="openais" \
>   expected-quorum-votes="2" \
>   stonith-enabled="true" \
>   last-lrm-refresh="1316788450"
>
> Thanks,
> Charles