[Pacemaker] ocf:linbit:drbd monitor failed with stacked resource

Schmidt, Torsten torsten.schmidt at tecdoc.net
Mon Oct 19 10:32:25 UTC 2009


Hi list,

I configured Pacemaker to take care of a three-node DRBD setup, but the monitor on the stacked DRBD resource fails and the resource never gets started.
Doing the necessary steps by hand, everything works fine (on the node holding the cluster IP: drbdadm --stacked adjust mysqlstack ; drbdadm --stacked primary mysqlstack).

Is it possible that the drbd monitor can't deal properly with a stacked resource, or am I missing something in my configuration?
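
To narrow it down, I will probably try to run the agent's monitor action by hand on the node holding the cluster IP and look at the exit code. This is only a rough, untested sketch: the path and the OCF_* variables are just the usual OCF conventions, and the lrmd normally sets a few more meta variables that I left out.

  # emulate the probe that comes back with rc=6 (OCF_ERR_CONFIGURED)
  export OCF_ROOT=/usr/lib/ocf
  export OCF_RESOURCE_INSTANCE=res.drbd.mysqlstack:0
  export OCF_RESKEY_drbd_resource=mysqlstack
  /usr/lib/ocf/resource.d/linbit/drbd monitor ; echo "exit code: $?"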

ptest
-------------------------------------------------------------------------------
ptest[2395]: 2009/10/19_12:27:47 info: determine_online_status: Node mysqlHA2.cologne.tecdoc.local is online
ptest[2395]: 2009/10/19_12:27:47 info: unpack_rsc_op: res.drbd.mysqlstack:0_monitor_0 on mysqlHA2.cologne.tecdoc.local returned 6 (not configured) instead of the expected value: 7 (not running)
ptest[2395]: 2009/10/19_12:27:47 ERROR: unpack_rsc_op: Hard error - res.drbd.mysqlstack:0_monitor_0 failed with rc=6: Preventing ms.drbd.mysqlstack from re-starting anywhere in the cluster
ptest[2395]: 2009/10/19_12:27:47 info: unpack_rsc_op: res.ip.mysql_monitor_0 on mysqlHA2.cologne.tecdoc.local returned 0 (ok) instead of the expected value: 7 (not running)
ptest[2395]: 2009/10/19_12:27:47 notice: unpack_rsc_op: Operation res.ip.mysql_monitor_0 found resource res.ip.mysql active on mysqlHA2.cologne.tecdoc.local
ptest[2395]: 2009/10/19_12:27:47 info: determine_online_status: Node mysqlHA1.cologne.tecdoc.local is online
ptest[2395]: 2009/10/19_12:27:47 info: unpack_rsc_op: res.drbd.mysqlstack:0_monitor_0 on mysqlHA1.cologne.tecdoc.local returned 6 (not configured) instead of the expected value: 7 (not running)
ptest[2395]: 2009/10/19_12:27:47 ERROR: unpack_rsc_op: Hard error - res.drbd.mysqlstack:0_monitor_0 failed with rc=6: Preventing ms.drbd.mysqlstack from re-starting anywhere in the cluster
ptest[2395]: 2009/10/19_12:27:48 debug: native_assign_node: All nodes for resource res.drbd.mysqlstack:0 are unavailable, unclean or shutting down (mysqlHA2.cologne.tecdoc.local: 1, -1000000)
ptest[2395]: 2009/10/19_12:27:48 WARN: native_color: Resource res.drbd.mysqlstack:0 cannot run anywhere
-------------------------------------------------------------------------------
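
(Side note: since rc=6 is treated as a hard error, I assume I will also have to clear the failed probe once the real cause is fixed, e.g.

  crm resource cleanup ms.drbd.mysqlstack

otherwise the PE keeps preventing the resource from starting anywhere.)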


crm_mon
-------------------------------------------------------------------------------
============
Last updated: Mon Oct 19 12:10:48 2009
Stack: openais
Current DC: mysqlHA1.cologne.tecdoc.local - partition with quorum
Version: 1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7
2 Nodes configured, 2 expected votes
3 Resources configured.
============

Online: [ mysqlHA2.cologne.tecdoc.local mysqlHA1.cologne.tecdoc.local ]

Master/Slave Set: ms.drbd.mysqldb
        Masters: [ mysqlHA2.cologne.tecdoc.local ]
        Slaves: [ mysqlHA1.cologne.tecdoc.local ]
res.ip.mysql    (ocf::heartbeat:IPaddr2):       Started mysqlHA2.cologne.tecdoc.local

Failed actions:
    res.drbd.mysqlstack:0_monitor_0 (node=(null), call=3, rc=6, status=complete): not configured
-------------------------------------------------------------------------------

cat /proc/drbd on node mysqlHA2:
-------------------------------------------------------------------------------
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by mockbuild at v20z-x86-64.home.local, 2009-08-29 14:07:55
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
    ns:4 nr:0 dw:0 dr:4 al:0 bm:1 lo:0 pe:0 ua:0 ap:0 ep:1 wo:b oos:0
-------------------------------------------------------------------------------


here is my Pacemaker config:
-------------------------------------------------------------------------------
node mysqlHA1.cologne.tecdoc.local
node mysqlHA2.cologne.tecdoc.local
primitive res.drbd.mysqldb ocf:linbit:drbd \
        params drbd_resource="mysqldb"
primitive res.drbd.mysqlstack ocf:linbit:drbd \
        params drbd_resource="mysqlstack" \
        meta target-role="Started"
primitive res.ip.mysql ocf:heartbeat:IPaddr2 \
        params ip="172.30.2.10" nic="eth0" netmask="255.255.255.0"
ms ms.drbd.mysqldb res.drbd.mysqldb \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" globally_unique="false"
ms ms.drbd.mysqlstack res.drbd.mysqlstack \
        meta master-max="1" master-node-max="1" clone-max="1" clone-node-max="1" notify="true" globally_unique="false"
colocation co.ms.drbd.mysqlstack_on_ms.drbd.mysqldb inf: ms.drbd.mysqlstack ms.drbd.mysqldb:Master
colocation co.res.ip.mysql_on_ms.drbd.mysqldb_master inf: res.ip.mysql ms.drbd.mysqldb:Master
order o.ip.mysql_before_ms.drbd.mysqlstack inf: res.ip.mysql ms.drbd.mysqlstack:start
order o.ms.drbd.mysqldb_before_ms.drbd.mysqlstack inf: ms.drbd.mysqldb:promote ms.drbd.mysqlstack:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-462f1569a43740667daf7b0f6b521742e9eb8fa7" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        last-lrm-refresh="1255946421"
-------------------------------------------------------------------------------


drbd.conf
-------------------------------------------------------------------------------
global {
  usage-count no;
}
common {
  syncer {
                rate 10M;
                verify-alg sha1;
        }
  protocol C;
}
resource mysqldb {
  handlers {
  fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
  after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  split-brain   "/usr/lib/drbd/notify-split-brain.sh root";
  }
  startup {
    degr-wfc-timeout 60;    # 1 minute
    outdated-wfc-timeout 2;  # 2 seconds.
  }
  disk {
    on-io-error   detach;
  }
  net {
    cram-hmac-alg "sha1";
    shared-secret "supersecret";
    after-sb-0pri disconnect;
    after-sb-1pri disconnect;
    after-sb-2pri disconnect;
    rr-conflict disconnect;
  }
  syncer {
    rate 20M;
    al-extents 257;
  }
    device    /dev/drbd0;
    disk      /dev/sdb;
    meta-disk internal;
  on mysqlHA1.cologne.tecdoc.local {
    address    10.6.0.127:7788;
  }
  on mysqlHA2.cologne.tecdoc.local {
    address   10.6.0.128:7788;
  }
}
resource mysqlstack {
        protocol A;
        device          /dev/drbd10;
        stacked-on-top-of mysqldb {
                address         172.30.2.10:7789;  # Cluster IP
        }
        on mysqlHAoffsite.cologne.tecdoc.local {
                disk                    /dev/sdb;
                address         172.30.2.78:7789;  # public IP of backup-node
                meta-disk       internal;
        }
}
-------------------------------------------------------------------------------
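
What I still want to compare on the node holding the cluster IP (just a quick check; I have not verified that the agent really calls drbdadm without --stacked during monitor):

  drbdadm role mysqlstack            # plain call, as the agent presumably issues it
  drbdadm --stacked role mysqlstack  # stacked call, which works for me by hand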

Mit freundlichen Grüßen / with kind regards

Torsten Schmidt
System Manager
